url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://www.esaral.com/q/how-much-time-would-it-take-to-distribute-one-avogadro-number-of-wheat-grains-60990
How much time would it take to distribute one Avogadro number of wheat grains? Question: How much time would it take to distribute one Avogadro number of wheat grains, if $10^{10}$ grains are distributed each second? Solution: Avogadro number $=6.02 \times 10^{23}$ Thus, time required $=\frac{6.02 \times 10^{23}}{10^{10}} \mathrm{~s}$ $=6.02 \times 10^{13} \mathrm{~s}$ $=\frac{6.02 \times 10^{13}}{60 \times 60 \times 24 \times 365}$ years $=1.909 \times 10^{6}$ years Hence, the time taken would be $1.909 \times 10^{6}$ years.
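A quick sanity check of the arithmetic (my own snippet, not part of the original solution; the variable names are mine):

```python
# Distributing Avogadro's number of grains at 10^10 grains per second.
avogadro = 6.02e23           # grains
rate = 1e10                  # grains per second
seconds = avogadro / rate    # 6.02e13 s
years = seconds / (60 * 60 * 24 * 365)
print(f"{seconds:.3e} s = {years:.3e} years")   # ~1.909e6 years
```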
2023-04-01 07:31:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107913732528687, "perplexity": 2202.322411539564}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00662.warc.gz"}
http://mathoverflow.net/questions/85139/which-matrices-have-fifth-power-equal-to-identity-matrix
## Which matrices have fifth power equal to identity matrix [closed] Let $A$ be a $10 \times 10$ matrix of complex numbers. If $A^5=I$ the identity matrix, what can be said about $A$? - This is a homework problem (obviously - why else the numbers $5$ and $10$ instead of $n$ and $k$?). For a hint, try diagonalization / Jordan Normal Form. The right place for such questions is math.stackexchange or artofproblemsolving. – darij grinberg Jan 7 2012 at 17:11 @darij 1. watch out for "obviously"; 2. many good problems in history grew out of homework problems. If you are able to do something more glamorous and high-level, please generalize, for example to matrices of homogeneous polynomials of degree d, and change I correspondingly. – Gavriil Jan 7 2012 at 20:34 Thing is, MathOverflow is not for problems like this, as the FAQ will show. It is for questions at a graduate level (or upwards). I have mentioned two forums (which you can easily google up) where you can post such a problem instead. (Anyway, it is not a good homework problem, since the only thing that can be said about $A$ is that $A$ is diagonalizable and the eigenvalues of $A$ are fifth roots of unity. And both are very easy to show.) – darij grinberg Jan 7 2012 at 20:41 Hi Gavriil: I agree with the decision to close the question as it is currently written. I haven't thought about the problem yet, and probably won't, so for all I know there is interesting mathematics in the question. But the onus is on you to give some indication that there is. If this question is related to your research, you could explain your motivation and background — look over mathoverflow.net/howtoask. Homework help (at any level), though, is not the goal of MathOverflow. – Theo Johnson-Freyd Jan 8 2012 at 4:21
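(A numerical aside of my own, not from the thread: darij's remark that such an $A$ is exactly a diagonalizable matrix whose eigenvalues are fifth roots of unity can be checked in one direction with a short NumPy sketch, by constructing such a matrix and confirming $A^5 = I$.)

```python
import numpy as np

# A = P D P^{-1} with D holding fifth roots of unity, so A^5 = P D^5 P^{-1} = I.
rng = np.random.default_rng(0)
roots = np.exp(2j * np.pi * np.arange(5) / 5)          # fifth roots of unity
D = np.diag(rng.choice(roots, size=10))
P = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
A = P @ D @ np.linalg.inv(P)

print(np.allclose(np.linalg.matrix_power(A, 5), np.eye(10)))  # True
```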
2013-05-21 02:38:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6840813159942627, "perplexity": 479.8438598216559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699675907/warc/CC-MAIN-20130516102115-00027-ip-10-60-113-184.ec2.internal.warc.gz"}
https://brilliant.org/problems/find-the-profit/
# Find the profit Algebra Level 3 The price of a jewel, passing through three hands, rises on the whole by 65%. If the first and the second sellers earned 20% and 25% profit respectively, find the percentage profit earned by the third seller.
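(A hedged worked solution of my own; the problem page shows none. Successive profits multiply, so the overall 65% rise fixes the third mark-up.)

```python
# 1.20 * 1.25 * (1 + p3) = 1.65  =>  1 + p3 = 1.65 / 1.50 = 1.10
overall, first, second = 1.65, 1.20, 1.25
third = overall / (first * second)
print(f"third seller's profit: {100 * (third - 1):.0f}%")   # 10%
```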
2016-10-25 06:44:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364505171775818, "perplexity": 1232.2241769628527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719960.60/warc/CC-MAIN-20161020183839-00246-ip-10-171-6-4.ec2.internal.warc.gz"}
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2012-July/072900.html
# [gmx-users] Computing vectors normal (perpendicular) to molecules Sun Jul 1 22:50:50 CEST 2012 Hi, Suppose that I have a system of benzene molecules (in reality, my system is more complicated, but for my question, it will be simpler to consider just an ensemble of benzene molecules). I would like to find a vector normal (i.e., perpendicular) to the plane of each benzene molecule in my system. I want to track these "normal vectors" over time; eventually, I would like to calculate a histogram of angles theta between the vectors normal to the benzene molecules and some fixed "laboratory" axis. If you have time, can you please tell me if there is any Gromacs utility that already does this -- i.e., computes normal vectors like this? It does not seem so, even with the functionalities provided by g_angle. This may not be the most rigorous idea, but one way to do this might be to compute the vector cross product of two vectors. Suppose that a benzene molecule has carbon atoms named C1, C2, C3, C4, C5, and C6 (arranged in that order around the ring). Then I could, for example, find two vectors: \vec{C2-C1} \vec{C2-C3} where \vec{C2-C1} is the vector from C2 to C1, and \vec{C2-C3} is the vector from C2 to C3. The cross product of those two vectors gives a vector that is perpendicular to both vectors. Is there any way to compute cross products in the Gromacs utilities? It does not seem so, but I am not certain. So, I am thinking that I will need to use g_traj to extract the x, y, and z coordinates of the atoms of interest (i.e., C1, C2, and C3 in each of the many benzene molecules in the system). Then I will need to write a C or Fortran script to find the relevant vectors and compute their cross products, at every timestep in the trajectory. However, to complicate things even more, I may in the future wish to make dynamic selections of benzene molecules, using g_select. I may want to consider only the benzene molecules satisfying z<12, for example. Since the benzene molecules obviously move, the particular benzene molecules that I consider for the vector/cross product calculation will change over time. I am able (using the -oi option in g_select in conjunction with a script of my own) to generate an index file with the indices (whose groups correspond to the simulation timestep number) of the C1, C2, and C3 atoms in benzene molecules whose centers-of-mass satisfy z<12. But is there any way I can feed this index file to g_traj without having to call g_traj as many times as I have timesteps in my trajectory? Thanks so much for your time! Andrew DeYoung Carnegie Mellon University
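(No GROMACS utility is shown in this thread; the following is my own post-processing sketch of the cross-product idea from the email, assuming the C1, C2, C3 coordinates have already been extracted per frame, e.g. with g_traj. The function names are mine.)

```python
import numpy as np

def ring_normal(c1, c2, c3):
    """Unit normal to the ring plane from three carbon positions.

    Uses the two edge vectors suggested in the post, C2->C1 and C2->C3;
    their cross product is perpendicular to both, hence to the ring plane.
    """
    v1 = np.asarray(c1) - np.asarray(c2)    # \vec{C2-C1}
    v2 = np.asarray(c3) - np.asarray(c2)    # \vec{C2-C3}
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

def angle_to_axis(normal, axis=(0.0, 0.0, 1.0)):
    """Angle theta (radians) between a ring normal and a fixed lab axis."""
    axis = np.asarray(axis) / np.linalg.norm(axis)
    return np.arccos(np.clip(np.dot(normal, axis), -1.0, 1.0))

# Example with made-up coordinates for one frame:
theta = angle_to_axis(ring_normal([1.4, 0.0, 0.0], [0.7, 1.2, 0.0], [-0.7, 1.2, 0.1]))
print(np.degrees(theta))
```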
2019-07-17 07:55:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7101770043373108, "perplexity": 1475.3871335484569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525094.53/warc/CC-MAIN-20190717061451-20190717083451-00149.warc.gz"}
https://scicomp.stackexchange.com/questions/31432/a-fast-way-to-check-if-a-matrix-is-ill-conditioned-and-turning-it-into-well-con
# A fast way to check if a Matrix is ill-conditioned, and turning it into well-conditioned I'm running a simulation, and some linear solvers are returning a message of an ill-conditioned matrix. Hence, I'm looking for a fast, easy-to-implement method to detect whether a matrix is ill-conditioned before using the linear solver. And in case it's ill-conditioned, what's a good way to make it well-conditioned? • You need to understand at a higher level why your system of equations is ill-conditioned. The proper solution will almost certainly lie within the process that is generating these systems of equations. – Brian Borchers Apr 15 '19 at 15:11 • The method of letting LU-decomposition (with or without pivoting) run its course is a valid way of checking whether the matrix is ill-conditioned. The other measures common to NLA are actually more expensive. On the practical side I agree with @BrianBorchers – Nox Apr 15 '19 at 20:53 • I will third this: If the matrix you have in your linear system is ill-conditioned, then it is ill-conditioned. If you want to change the matrix, you change the linear system, so you get a solution to a different problem. You need to find out why it is ill-conditioned, not patch over it. – Wolfgang Bangerth Apr 17 '19 at 3:04 • @BrianBorchers please, can you expand your comment? I would be particularly interested in inverse problems. In these cases can you improve the generating process? Or must you work with regularization? – Mauro Vanzetto May 5 '19 at 17:42 • @MauroVanzetto: Preconditioning is for poorly conditioned matrices, i.e., with bad but not terrible condition numbers. But when a linear solver says that the matrix is ill-conditioned, then that's often an indication that one is either using a poorly chosen formulation, or that there is a bug in the code. In either case, it's important to understand the cause of the conditioning, not to paper over it. – Wolfgang Bangerth May 7 '19 at 13:06 To detect if a matrix is ill-conditioned you can check the condition number, defined for the matrix $$A$$ as: $$k(A) = ||A|| \, ||A^{-1}||$$ For the 2-norm this is equal to the ratio of singular values: $$k(A) = \frac{\sigma_{max}(A)}{\sigma_{min}(A)}$$ Numerically there are also other methods to estimate $$k(A)$$. For more details see chapter 15 of [1], and [2], where you can find source code for different methods (Hager, from LINPACK, sampling) in different languages. To treat an ill-conditioned system there are two principal ways: preconditioning: using this technique you obtain a system mathematically equivalent to the original one, but with a better condition number. The methods depend on the structure of the matrix that you have, but you can see for example [3] for iterative methods. regularization: here you obtain an approximation of the original system; these methods also work for ill-posed problems. For more details and references on the techniques in this family, see for example [4]. [1] Higham, Nicholas J., Accuracy and stability of numerical algorithms., Philadelphia, PA: SIAM. xxx, 680 p. (2002). ZBL1011.65010. [2] web page Matrix Condition Number Estimation [3] Saad, Yousef, Iterative methods for sparse linear systems., Philadelphia, PA: SIAM Society for Industrial and Applied Mathematics. xviii, 528 p. (2003). ZBL1031.65046.
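(A quick check of the answer's formula, my own snippet: NumPy's `np.linalg.cond` computes $$k(A)$$ exactly via the SVD, which is fine for small dense matrices but costs $O(n^3)$; for large systems, prefer the cheap estimators cited in [1] and [2]. The threshold below is an arbitrary choice of mine.)

```python
import numpy as np

def check_conditioning(A, tol=1e12):
    """Return k(A) and whether it exceeds a (problem-dependent) threshold."""
    k = np.linalg.cond(A)        # sigma_max / sigma_min via the SVD
    return k, k > tol

# The 10x10 Hilbert matrix is a classic ill-conditioned example (k ~ 1e13).
n = 10
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
k, bad = check_conditioning(H)
print(f"k(H) = {k:.2e}, ill-conditioned: {bad}")   # True
```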
2020-08-14 11:58:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6274917125701904, "perplexity": 487.9106191150978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739211.34/warc/CC-MAIN-20200814100602-20200814130602-00399.warc.gz"}
https://lw2.issarice.com/posts/Xg2YycEfCnLYrCcjy/defining-capability-and-alignment-in-gradient-descent
# Defining capability and alignment in gradient descent post by Edouard Harris · 2020-11-05T14:36:53.153Z · LW · GW · 6 comments ## Contents Defining inner alignment The base objective The true objective Two examples Capability and alignment Robustness and generalization Quantifying inner alignment Terminal states and transients This is the first post in a series where I'll explore AI alignment in a simplified setting: a neural network that's being trained by gradient descent. I'm choosing this setting because it involves a well-defined optimization process that has enough complexity to be interesting, but that's still understandable enough to make crisp mathematical statements about. As a result, it serves as a good starting point for rigorous thinking about alignment. ## Defining inner alignment First, I want to highlight a definitional issue. Right now there are two definitions of inner alignment circulating in the community. This issue was first pointed out to me by Evan Hubinger [LW · GW] in a recent conversation. The first definition is the one from last year's Risks from Learned Optimization paper [? · GW], which Evan co-authored and which introduced the term. This paper defined the inner alignment problem as "the problem of eliminating the base-mesa objective gap" (Section 1.2). The implication is that if we can eliminate the gap between the base objective of a base optimizer, and the mesa-objectives of any mesa-optimizers [? · GW] that base optimizer may give rise to, then we will have satisfied the necessary and sufficient conditions for the base optimizer to be inner-aligned. There's also a second definition [AF · GW] that seems to be more commonly used. This definition says that "inner alignment fails when your capabilities generalize but your objective does not". This comes from an intuition (pointed out to me by Rohin Shah [LW · GW]) that the combination of inner alignment and outer alignment should be accident-proof with respect to an optimizer's intent: an optimizer that's both inner- and outer-aligned should be trying to do what we want. Since an outer-aligned optimizer is one whose base objective is something we want, this intuition suggests that the remaining part of the intent alignment problem — the problem of getting the optimizer to try to achieve the base objective we set — is what inner alignment refers to. Here I'll try to propose more precise definitions of alignment and capability in an optimizer, and explore what generalization and robustness might mean in the context of these properties. I'll also propose ways to quantify the capability and alignment profiles of existing ML systems. But before doing that, I want to motivate these definitions with an example. ## The base objective The optimizer I'll be using as my example will be a gradient descent process, which we're going to apply to train a simplified neural network. I want to emphasize that I'm treating gradient descent as the optimizer here — not the neural network. The neural network isn't necessarily an optimizer itself, it's just the output artifact of our gradient descent optimizer. To make this scenario concrete, we'll imagine the neural network we're training is a simplified language model: a feedforward MLP with a softmax layer at the top. The softmax layer converts the MLP's output activations into a probability distribution over next words, and the model gets scored on the cross-entropy loss between that probability distribution, and the actual next word that appears in the training text.
(This ignores many of the complications of modern language models, but I'm keeping this example simple.) We'll let $\theta_t$ represent all the parameters of this MLP — all its weights and biases — at training step $t$. To train our MLP with gradient descent, we feed it batches of input-output pairs $(x_i, y_i)$. If our MLP is part of a language model, then $x_i$ might represent the words in the language model's context window for the $i^{th}$ training example in the batch, and $y_i$ might represent a one-hot encoding of the correct next word for the $i^{th}$ training example in the batch. To make things even simpler, I'm also going to assume that every training batch contains the entire training dataset of $N$ examples, an arrangement we'd never use if we were training a real system. So at a given training step $t$, the loss function for our language model is $$L(\theta_t) = -\frac{1}{N} \sum_{i=1}^{N} y_i \cdot \log f(x_i; \theta_t)$$ I'll refer to the function $f$ as "the neural network". Here, "$\cdot$" is the dot product. Notice that $L(\theta_t)$ here is our base objective: it's the quantity we're trying to get our gradient descent process to optimize for. If we'd succeeded in solving the entire outer alignment problem, and concluded that the base objective $L$ was the only quantity we cared about optimizing, then the remaining challenge — getting our gradient descent process to actually optimize for $L$ — would constitute the inner alignment problem, by our second definition above. So the question now is: under what conditions does gradient descent actually optimize for our base objective? ## The true objective To answer this, we can try to determine which quantity gradient descent is truly optimizing for, and then look at how and when that quantity correlates with the base objective we really care about. We can start by imagining the $t^{th}$ step of gradient descent as applying a learning function $U$ to the parameters in $\theta_t$: $$\theta_{t+1} = U(\theta_t)$$ Running gradient descent consists of applying $U$ repeatedly to $\theta_0$: $$\theta_t = U^t(\theta_0)$$ In the long run, gradient descent should converge on some terminal value $\theta_\infty = \lim_{t \to \infty} \theta_t$. (For now, we'll assume that this limit exists.) The key characteristic of a terminal value $\theta_\infty$ (when it exists) is that it's a fixed point of the dynamical system defined by $U$. In other words: $$U(\theta_\infty) = \theta_\infty$$ Some of the fixed points $\theta_\infty$ of this system will coincide with global or local minima of our base objective, the cross-entropy loss $L$ — but not all of them. Some will be saddle points, while others will be local or global maxima. And while we don't consider all these fixed points to be equally performant with respect to our base objective, our gradient descent optimizer does consider them all to be equally performant with respect to its true objective. This disagreement is the core of the inner alignment problem in this setting: our gradient descent process isn't always optimizing for the quantity we want it to. So what quantity is it optimizing for? When we apply one step of gradient descent, we update each parameter in our neural network by an amount equal to a learning rate, times the error in that parameter that we calculate during backprop on the loss function $L$. The update we apply to the $j^{th}$ parameter, to move it from $\theta_t^{(j)}$ to $\theta_{t+1}^{(j)}$, can be written as $$\theta_{t+1}^{(j)} = \theta_t^{(j)} - \alpha_t \frac{\partial L(\theta_t)}{\partial \theta_t^{(j)}}$$ Here, $\alpha_t$ represents our learning rate at time step $t$. So our gradient descent optimizer will terminate if and only if there exists some time step $T$ such that $\theta_{t+1}^{(j)} = \theta_t^{(j)}$ for all $t \geq T$, across all parameters $j$. (For a fixed learning function $U$, this condition implies that the gradient updates are zero for all $t \geq T$ as well.) And this happens if and only if the sum of the squared gradients $$G(\theta_t) = \sum_j \left( \frac{\partial L(\theta_t)}{\partial \theta_t^{(j)}} \right)^2$$ is equal to zero when $t = T$. But $G(\theta_t)$ represents more than just the terminal condition for our optimizer.
It's the quantity that gradient descent is actually trying to minimize: anytime $G(\theta_t)$ deviates from zero, the amount of optimization power that's applied to move $G(\theta_t)$ towards zero is proportional to $G(\theta_t)$ itself. That makes $G(\theta_t)$ the true objective of our gradient descent optimizer — it's the loss function that gradient descent is actually optimizing for. So now we have a base objective $L(\theta_t)$, which we've assigned to an optimizer; and we have a true objective $G(\theta_t)$, which is the one our optimizer is actually pursuing. Intuitively, the inner alignment of our optimizer seems like it would be related to how much, and under what circumstances, $G(\theta_t)$ correlates with $L(\theta_t)$ over the course of a training run. So we'll look at that next. ## Two examples Let's now consider two optimizers, A and B. Optimizers A and B are identical apart from one difference: Optimizer A has its parameters initialized at $\theta_0^A$, while Optimizer B has its parameters initialized at $\theta_0^B$. As luck would have it, this small difference is enough to put $\theta_0^A$ and $\theta_0^B$ into different basins of attraction of the loss function. As a result, our two optimizers end up in different terminal states: $$\theta_\infty^A \neq \theta_\infty^B$$ These two terminal states also correspond — again, by luck in this example — to different values of the base objective. Indeed, it turns out that $\theta_0^A$ is in the basin of attraction of a global minimum of the loss function, while $\theta_0^B$ is in the basin of attraction of a local minimum. As a result, after many training steps, the base objectives of the two optimizers end up converging to different values: $$\lim_{t \to \infty} L(\theta_t^A) < \lim_{t \to \infty} L(\theta_t^B)$$ Again, the limit of the loss function $L(\theta_t^A)$ is less than the limit of $L(\theta_t^B)$ because $\theta_\infty^A$ corresponds to a global minimum, while $\theta_\infty^B$ only corresponds to a local minimum. So Optimizer A is clearly better than Optimizer B, from the standpoint of its performance on our base objective — minimization of the loss function. But crucially, because $\theta_\infty^A$ and $\theta_\infty^B$ both represent fixed points with zero gradients, the true objectives of the two optimizers both converge to zero in the limit: $$\lim_{t \to \infty} G(\theta_t^A) = \lim_{t \to \infty} G(\theta_t^B) = 0$$ In other words, Optimizer A and Optimizer B are equally good at optimizing for their true objectives. Optimizer A just does a better job of optimizing for the base objective we want, as a side effect of optimizing for its true objective. Intuitively, we might say that Optimizers A and B are equally capable with respect to their true objectives, while Optimizer A is better aligned with our base objective than Optimizer B is. Let's look at a second example. This time we'll compare Optimizer A to a third optimizer, Optimizer C. These two optimizers are again identical, apart from one detail: while Optimizer A uses learning rate decay with $\lim_{t \to \infty} \alpha_t = 0$, Optimizer C uses a constant learning rate with $\alpha_t = \alpha_0$. As a result of its learning rate decay schedule, Optimizer A converges on a global minimum in the $t \to \infty$ limit. But Optimizer C, with its constant learning rate, doesn't converge the same way. While it's drawn towards the same global minimum as Optimizer A, Optimizer C ends up orbiting the minimum point chaotically, without ever quite reaching it — its finite learning rate means it never perfectly hits the global minimum point, no matter how many learning steps we give it. As a result, $$\lim_{t \to \infty} G(\theta_t^C) > 0$$ (To be clear, this is an abuse of notation: in reality $\lim_{t \to \infty} G(\theta_t^C)$ generally won't be well-defined for a chaotic orbit like this. But we can think of this instead as denoting the long-term limit of the average of $G(\theta_t^C)$ over a sufficiently large number of time steps.) Intuitively, we might say that Optimizer A is more capable than Optimizer C, since it performs better, in the long run, on its true objective.
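(A toy illustration of my own, not from the post: the A/B comparison can be reproduced on a one-dimensional loss with one global and one local minimum. The loss function and all constants below are arbitrary choices.)

```python
# Both runs drive the true objective G = (dL/dx)^2 to zero, but only one
# lands at the global minimum of the base objective L.
loss = lambda x: x**4 - 2 * x**2 + 0.5 * x
grad = lambda x: 4 * x**3 - 4 * x + 0.5

def descend(theta0, lr=0.01, steps=20_000):
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

for name, theta0 in [("A", -2.0), ("B", +2.0)]:   # different basins of attraction
    t = descend(theta0)
    print(f"Optimizer {name}: theta={t:+.3f}  L={loss(t):+.3f}  G={grad(t)**2:.1e}")
# A ends near the global minimum (lower L), B near a local one, yet G ~ 0 for
# both. A constant learning rate above the local stability threshold would
# instead oscillate around the minimum without converging, like Optimizer C.
```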
Optimizer A also performs better than Optimizer C on our base objective: $$\lim_{t \to \infty} L(\theta_t^A) < \lim_{t \to \infty} L(\theta_t^C)$$ And interestingly, Optimizer A's better performance than C on our base objective is a direct result of its better performance than C on its true objective. So we might say that, in this second scenario, Optimizer C's performance on the base objective is capability-limited. If we improved C's capability on its true objective, we could get it to perform better on the base objective, too. ## Capability and alignment With those intuitions in hand, I'll propose the following two definitions. Definition 1. Let $B$ be a base optimizer acting over $T$ optimization steps, and let $L_t$ represent the value of its base objective at optimization step $t$. Then the capability $\text{CAP}(B)$ of $B$ with respect to the base objective $L$ is defined as a sum, over the $T$ optimization steps, of the improvements in $L_t$. Definition 2. Let $B$ be a base optimizer with base objective $L_B$, and $M$ be a mesa-optimizer with mesa-objective $L_M$. Then the mesa-optimizer's alignment with the base optimizer is given by the ratio of the base objective's improvement to the mesa-objective's improvement over the run. If $\text{CAP}(B)$ and $\text{CAP}(M)$ are both finite, we can also write $M$'s alignment with $B$ as $$A(M \mid B) = \frac{\text{CAP}(B)}{\text{CAP}(M)}$$ The intuition behind these definitions is that the capability $\text{CAP}$ of an optimizer is the amount by which the optimizer is able to improve its objective over many optimization steps. One way in which a base optimizer can try to improve its base objective is by delegating part of its optimization work to a mesa-optimizer, which has its own mesa-objective. The alignment factor $A(M \mid B)$ in Definition 2 is a way of quantifying how effective that delegation is: to what extent does the mesa-optimizer's progress in optimizing for its mesa-objective "drag along" the base objective of the base optimizer that created it? In our gradient descent example, our mesa-optimizer $M$ was the gradient descent process, and its mesa-objective was what, at the time, I called the "true objective", $G(\theta_t)$. But the base optimizer $B$ was the human who designed the neural network and ran the gradient process on it. If we think of this human as being our base optimizer, then we can write the capability of our human designer as $$\text{CAP}(B) = A(M \mid B) \, \text{CAP}(M)$$ In other words, if a base optimizer delegates its objective to a mesa-optimizer, then that base optimizer's capability is equal to the capability of that mesa-optimizer, times how well-aligned the mesa-optimizer is to the base optimizer's base objective. If you fully delegate a goal to a subordinate, your capability on that goal is the product of 1) how capable your subordinate is at achieving their own goals; and 2) how well-aligned their own goals are to the goal you delegated to them. This seems intuitively reasonable. But it also has a curiously unintuitive consequence in gradient descent. We tend to think that when we add neurons to an architecture, we're systematically increasing the capability of gradient descent on that architecture. But the definitions above suggest a different interpretation: because gradient descent might converge equally well on its true objective $G$ on a big neural net as on a small one, its capability as an optimizer isn't systematically increased by adding neurons. Instead, adding neurons improves the degree to which gradient descent converges on a base objective that's aligned with our goals. ## Robustness and generalization As I've defined them above, capability and alignment are fragile properties. Two optimizers $O$ and $O'$ could be nearly identical, but still have very different capabilities $\text{CAP}(O)$ and $\text{CAP}(O')$. This is a problem, because the optimizers in our definitions are specified up to and including things like their datasets and parameter initializations.
So something as minor as a slight change in dataset — which we should expect to happen often to real-world optimizers — could cause a big change in the capability of the optimizer, as we've defined it. We care a lot about whether an optimizer remains capable when we perturb it in various ways, including running it on different datasets. We also care a lot about whether an optimizer with objective $L$ remains capable when we change its objective to something slightly different like $L + \delta$. And we also care to what extent the alignment between two optimizers is preserved when we perturb either optimizer. Below I'll define two properties that describe the degree to which optimizers retain their capability and alignment properties under perturbations. Definition 3. Let $\text{CAP}(O)$ be the capability of optimizer $O$, and let $A(O' \mid O)$ be the alignment of optimizer $O'$ with optimizer $O$. Let $\delta$ and $\delta'$ be finite perturbations applied respectively to $O$ and $O'$. Then, the capability of $O$ is robust under perturbation $\delta$ if $$\text{CAP}(O + \delta) \approx \text{CAP}(O)$$ Similarly, the alignment of $O'$ with $O$ is robust under perturbations $\delta$ and $\delta'$ if $$A(O' + \delta' \mid O + \delta) \approx A(O' \mid O)$$ Definition 4. Let $O$ be an optimizer with objective function $L$, and let $O'$ be an optimizer with objective function $L'$. Let $\delta$ be a finite perturbation applied to $L$, such that the optimizer $O_\delta$ differs from $O$ only in that its objective function is $L + \delta$ instead of $L$. Then, the capability of $O$ generalizes to objective $L + \delta$ if $$\text{CAP}(O_\delta) \approx \text{CAP}(O)$$ Similarly, the alignment of $O'$ with $O$ generalizes to objective $L + \delta$ if $$A(O' \mid O_\delta) \approx A(O' \mid O)$$ Intuitively, we're defining a robustly capable optimizer as one whose capability isn't strongly affected by classes of perturbations that we care about — and we're defining robust alignment between two optimizers in an analogous way. We're also thinking of generalization as a special case of robustness, meaning specifically that the optimizer is robust to perturbation to its objective function. So an optimizer whose capabilities generalize is one that continues to work well when we give it a new objective. ## Quantifying inner alignment With the vocabulary above, we can now define inner alignment more precisely, and even think about how to quantify it in real systems. We might say that a mesa-optimizer $M$ is inner-aligned with its base optimizer $B$ if its alignment factor $A(M \mid B)$ remains robustly high under variations $\delta$ in the datasets that we expect either optimizer to encounter in the future. We can also quantify inner alignment by looking at how much specific variations in the data distribution affect the alignment factor between two optimizers. We might also be interested in investigating other properties that could affect inner alignment from a safety perspective. For example, under what conditions will alignment between a base optimizer and a mesa-optimizer generalize well to a new base objective? What kinds of perturbations to our optimizers are likely to yield breakdowns in robustness? As we add capacity to a deep learning model, should we expect alignment to improve? And if so, should we expect an inflection point in this improvement — a level of capacity beyond which alignment declines sharply? How could we detect and characterize an inflection point like this? These are some of the topics I'll be exploring in the future. ## Terminal states and transients I want to highlight one final issue with the definitions above: I've defined inner alignment here only in connection with the limiting behavior of our optimizers.
That means a mesa-optimizer that's well-aligned with its base optimizer would still — by the definition above — be free to do dangerous things on the path to correctly optimizing for the base objective. To take an extreme example, we could have a system that's perfectly aligned to optimize for human happiness, but that only discovers that humans don't want to have their brains surgically extracted from their bodies after it's already done so. Even if the system later corrected its error, grew us new bodies, and ultimately gave us a good end state, we'd still have experienced a very unpleasant transient in the process. Essentially, this definition of alignment says to the mesa-optimizer: it's okay if you break a vase [AF · GW], as long as we know that you'll put it back together again in the long run. I can understand this definition being controversial. It may be the most extreme possible version of the claim that the ends justify the means. So it could also be worth resolving the alignment problem into "weak" and "strong" versions — where weak alignment would refer to the $t \to \infty$ limit, while strong alignment would refer to transient behavior over, say, the next $T$ optimization steps. A concept of strong alignment could let us prove statements like "this optimizer will have a performance level of at worst $\epsilon$ on our base objective over the next $T$ optimization steps." This seems very desirable. On the other hand, we may want to prepare for the possibility that the terminal states we want will only be accessible through paths that involve transient unpleasantness. Perhaps one really does have to break eggs to make an omelet, and that's just how the universe is. (I don't think this is particularly likely: high-capacity neural networks and policy iteration in RL are both data points that suggest incrementalism is increasingly viable in higher-dimensional problem spaces.) To summarize, weak alignment, which is what this post is mostly about, would say that "everything will be all right in the end." Strong alignment, which refers to the transient, would say that "everything will be all right in the end, and the journey there will be all right, too." It's not clear which one will be easier to prove than the other in which circumstance, so we'll probably need to develop rigorous definitions of both. Big thanks to Rohin Shah, Jan Leike, Jeremie Harris, and Evan Hubinger for reviewing early drafts of this, suggesting ideas, and pointing out mistakes! comment by rohinmshah · 2021-01-25T06:54:58.131Z · LW(p) · GW(p) Planned summary for the Alignment Newsletter: Consider a neural network like GPT-3 trained by gradient descent on (say) the cross-entropy loss function. This loss function forms the _base objective_ that the process is optimizing for. Gradient descent typically ends up at some local minimum, global minimum, or saddle point of this base objective. However, if we look at the gradient descent equation, θ = θ - αG, where G is the gradient, we can see that this is effectively minimizing the size of the gradients. We can think of this as the mesa objective: the gradient descent process (with an appropriate learning rate decay schedule) will eventually get G down to zero, its minimum possible value (even though it may not be at the global minimum for the base objective). The author then proposes defining capability of an optimizer based on how well it decreases its loss function in the limit of infinite training.
Meanwhile, given a base optimizer and mesa optimizer, alignment is given by the capability of the base optimizer divided by the capability of the mesa optimizer. (Since the mesa optimizer is the one that actually acts, this is effectively measuring how much progress on the mesa objective also causes progress on the true base objective.) This has all so far assumed a fixed training setup (such as a fixed dataset and network architecture). Ideally, we would also want to talk about robustness and generalization. For this, the author introduces the notion of a “perturbation” to the training setup, and then defines [capability / alignment] [robustness / generalization] based on whether the optimization stays approximately the same when the training setup is perturbed. It should be noted that these are all definitions about the behavior of optimizers in the infinite limit. We may also want stronger guarantees that also talk about the behavior on the way to the infinite limit. Replies from: Edouard Harris comment by Edouard Harris · 2021-01-27T22:26:55.303Z · LW(p) · GW(p) Thanks, Rohin! Please note that I'm currently working on a correction for part of this post — the form of the mesa-objective $G$ I'm claiming is in fact wrong, as Charlie correctly alludes to in a sibling comment. comment by adamShimi · 2020-11-07T22:31:10.372Z · LW(p) · GW(p) Great post! I liked the clean analysis of the problem, the formalization, and the effort to point out the potential issues with your definitions. Now I'm really excited for the next posts, where I assume that you will study robustness and generalization (based on your definitions) for simple examples of gradient descent. I'm interested in commenting on early drafts if you need feedback! Some of the fixed points $\theta_\infty$ of this system will coincide with global or local minima of our base objective, the cross-entropy loss $L$ — but not all of them. Some will be saddle points, while others will be local or global maxima. And while we don't consider all these fixed points to be equally performant with respect to our base objective, our gradient descent optimizer does consider them all to be equally performant with respect to its true objective. This disagreement is the core of the inner alignment problem in this setting: our gradient descent process isn't always optimizing for the quantity we want it to. So what quantity is it optimizing for? I agree wholeheartedly with this characterization. For me, that's the gist of the inner alignment problem if the objective is the right one (i.e. if outer alignment is solved). Let's look at a second example. This time we'll compare Optimizer A to a ththird optimizer, Typo on "ththird". Definition 1. Let $B$ be a base optimizer acting over $T$ optimization steps, and let $L_t$ represent the value of its base objective at optimization step $t$. Then the capability of $B$ with respect to the base objective $L$ is … At first I wondered why you were taking the sum instead of just the last-step improvement, but after thinking about it, the latter would probably converge to 0 almost all the time, because even with amazing optimization, the loss will stop being improved by a factor linear in T at some point. That might be interesting to put in the post itself. In our gradient descent example, our mesa-optimizer $M$ was the gradient descent process, and its mesa-objective was what, at the time, I called the "true objective", $G(\theta_t)$. But the base optimizer $B$ was the human who designed the neural network and ran the gradient process on it.
This is not where I thought you were going when I read the intro, but that's a brilliant idea that removes completely the question of whether and why the base optimizer would find a mesa-optimizer to which it can delegate work. Replies from: Edouard Harris comment by Edouard Harris · 2020-11-09T18:46:54.351Z · LW(p) · GW(p) Thanks for the kind words, Adam! I'll follow up over DM about early drafts — I'm interested in getting feedback that's as broad as possible and really appreciate the kind offer here. Typo is fixed — thanks for pointing it out! At first I wondered why you were taking the sum instead of just the last-step improvement, but after thinking about it, the latter would probably converge to 0 almost all the time, because even with amazing optimization, the loss will stop being improved by a factor linear in T at some point. That might be interesting to put in the post itself. Yes, the problem with that definition would indeed be that if your optimizer converges to some limiting loss function value $L_\infty$, then you'd get $0$ in the limit for any such run. Thanks again! comment by Charlie Steiner · 2020-11-06T09:32:24.418Z · LW(p) · GW(p) Interesting post. Not sure if I agree with your interpretation of the "real objective" - might be better served by looking for stable equilibria and just calling them as such. Don't we already have weak alignment to arbitrary functions using annealing (basically, jump at random, but jump around more/further on average when the loss is higher and lower the jumping rate over time)? The reason we don't add small annealing terms to gradient descent is entirely because we expect them to be worse in the short term (a "strong alignment" question). Replies from: Edouard Harris comment by Edouard Harris · 2020-11-06T19:47:17.647Z · LW(p) · GW(p) Thanks for the comment! Not sure if I agree with your interpretation of the "real objective" - might be better served by looking for stable equilibria and just calling them as such. I think this is a reasonable objection. I don't make this very clear in the post, but the "true objective" I've written down in the example indeed isn't unique: like any measure of utility or loss, it's only unique up to affine transformations with positive coefficients. And that could definitely damage the usefulness of these definitions, since it means that alignment factors, for example, aren't uniquely defined either. (I'll be doing a few experiments soon to investigate this, and a few other questions, in a couple of real systems.) Don't we already have weak alignment to arbitrary functions using annealing (basically, jump at random, but jump around more/further on average when the loss is higher and lower the jumping rate over time)? The reason we don't add small annealing terms to gradient descent is entirely because we expect them to be worse in the short term (a "strong alignment" question). Interesting question! To try to interpret in light of the definitions I'm proposing: adding annealing changes the true objective (or mesa-objective) of the optimizer, which is no longer solely trying to minimize its gradients — it now has this new annealing term that it's also trying to optimize for. Whether this improves alignment or not depends on the effect annealing has on 1) the long-term performance of the mesa-optimizer on its new (gradient + annealing) objective; and 2) the long-term performance this induces on the base objective. Hope that's somewhat helpful, but please let me know if it's unclear and I can try to unpack things a bit more!
2022-01-20 22:20:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7659467458724976, "perplexity": 982.0781629594744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302706.62/warc/CC-MAIN-20220120220649-20220121010649-00078.warc.gz"}
https://infoscience.epfl.ch/record/164684
## The number field sieve The number field sieve is an algorithm to factor integers of the form $r^e-s$ for small positive $r$ and $s$. The authors present a report on work in progress on this algorithm. They informally describe the algorithm, discuss several implementation related aspects, and present some of the factorizations obtained so far. They also mention some solutions to the problems encountered when generalizing the algorithm to general integers using an idea of Buhler and Pomerance. It is not unlikely that this leads to a general purpose factoring algorithm that is asymptotically substantially faster than the fastest factoring algorithms known so far, like the multiple polynomial quadratic sieve. Editor(s): Lenstra, Arjen K.; Lenstra, Hendrik W. Published in: The development of the number field sieve Year: 1993 Publisher: Springer Berlin Heidelberg
2018-06-22 19:14:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39023563265800476, "perplexity": 690.336304896722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864776.82/warc/CC-MAIN-20180622182027-20180622202027-00204.warc.gz"}
https://aakashsrv1.meritnation.com/cbse-class-12-science/chemistry/cbse-class-12-chemistry-board-paper-2014-all-india-set-2-solutions/board-papers/starttest/pGlTN8pfRsx3Cu$ssaxJ1Q!!
# Board Paper of Class 12-Science 2014 Chemistry (SET 2) - Solutions General Instructions: (i) All questions are compulsory. (ii) Question numbers 1 to 8 are very short-answer questions and carry 1 mark each. (iii) Question numbers 9 to 18 are short-answer questions and carry 2 marks each. (iv) Question numbers 19 to 27 are also short-answer questions and carry 3 marks each. (v) Question numbers 28 to 30 are long-answer questions and carry 5 marks each. (vi) Use Log Tables, if necessary. Use of calculators is not allowed. • Question 2 Name the method that is used for refining of nickel. • Question 4 Based on molecular forces, what type of polymer is neoprene? • Question 5 What are the products of hydrolysis of maltose? • Question 6 Write the structure of 4-chloropentan-2-one. • Question 7 Identify the chiral molecule in the following pair: • Question 8 The conversion of primary aromatic amines into diazonium salts is known as __________ . • Question 9 Write the names of monomers used for getting the following polymers: (i) Terylene (ii) Nylon-6,6 • Question 10 Describe the roles of the following: (i) SiO2 in the extraction of copper from copper matte (ii) NaCN in froth floatation process • Question 11 Complete the following equations: (i) (ii) • Question 12 Draw the structures of the following: (i) XeF4 (ii) HClO4 • Question 13 (i) Write the type of magnetism observed when the magnetic moments are oppositely aligned and cancel out each other. (ii) Which stoichiometric defect does not change the density of the crystal? • Question 14 Define the following terms: (i) Fuel cell (ii) Limiting molar conductivity $\left({\wedge }_{m}^{0}\right)$ • Question 15 Write the mechanism of the following reaction: • Question 16 For a chemical reaction R → P, the variation in the concentration (R) vs. time (t) plot is given as (i) Predict the order of the reaction. (ii) What is the slope of the curve? • Question 17 An element with density 2.8 g cm−3 forms an f.c.c. unit cell with edge length 4 × 10−8 cm. Calculate the molar mass of the element. (Given: NA = 6.022 × 1023 mol−1) • Question 18 Write the equations involved in the following reactions: (i) Reimer − Tiemann reaction (ii) Williamson synthesis • Question 19 Define the following terms: (ii) Invert sugar (iii) Oligosaccharides • Question 20 On the occasion of World Health Day, Dr. Satpal organized a 'health camp' for the poor farmers living in a nearby village. After check-up, he was shocked to see that most of the farmers suffered from cancer due to regular exposure to pesticides and many were diabetic. They distributed free medicines to them. Dr. Satpal immediately reported the matter to the National Human Rights Commission (NHRC). On the suggestions of NHRC, the government decided to provide medical care, financial assistance, setting up of super-speciality hospitals for treatment and prevention of the deadly disease in the affected villages all over India. (i) Write the values shown by (a) Dr. Satpal (b) NHRC (ii) What type of analgesics are chiefly used for the relief of pains of terminal cancer? (iii) Give an example of an artificial sweetener that could have been recommended to diabetic patients. • Question 21 Account for the following: (i) Primary amines (R-NH2) have higher boiling point than tertiary amines (R3N). (ii) Aniline does not undergo Friedel - Crafts reactions. (iii) (CH3)2NH is more basic than (CH3)3N in an aqueous solution. OR Give the structures of A, B and C in the following reactions: (i) (ii) • Question 22 (a) Draw the structures of major monohalo products in each of the following reactions: (i) (ii) (b) Which halogen compound in each of the following pairs will react faster in SN2 reaction: (i) CH3Br or CH3I (ii) (CH3)3C−Cl or CH3−Cl • Question 23 (a) Calculate ∆rG° for the reaction Mg (s) + Cu2+ (aq) → Mg2+ (aq) + Cu (s) Given: E° cell = +2.71 V, 1 F = 96500 C mol−1 (b) Name the type of cell which was used in the Apollo space programme for providing electrical power. • Question 24 The following data were obtained during the first order thermal decomposition of SO2Cl2 at a constant volume: SO2Cl2 (g) → SO2 (g) + Cl2 (g)
Experiment | Time/s | Total pressure/atm
1 | 0 | 0.4
2 | 100 | 0.7
Calculate the rate constant. (Given: log 4 = 0.6021, log 2 = 0.3010) • Question 25 What are emulsions? What are their different types? Give one example of each type. • Question 26 Give reasons for the following: (i) (CH3)3P = O exists but (CH3)3N = O does not. (ii) Oxygen has less electron gain enthalpy with negative sign than sulphur. (iii) H3PO2 is a stronger reducing agent than H3PO3. • Question 27 (i) Write the IUPAC name of the complex [Cr(NH3)4Cl2]Cl. (ii) What type of isomerism is exhibited by the complex [Co(en)3]3+? (en = ethane-1,2-diamine) (iii) Why is [NiCl4]2− paramagnetic but [Ni(CO)4] is diamagnetic? (At. nos.: Cr = 24, Co = 27, Ni = 28) • Question 28 (a) Write the products formed when CH3CHO reacts with the following reagents: (i) HCN (ii) H2N−OH (iii) CH3CHO in the presence of dilute NaOH (b) Give simple chemical tests to distinguish between the following pairs of compounds: (i) Benzoic acid and Phenol (ii) Propanal and Propanone OR (a) Account for the following: (i) Cl−CH2COOH is a stronger acid than CH3COOH. (ii) Carboxylic acids do not give reactions of carbonyl group. (b) Write the chemical equations to illustrate the following name reactions: (i) Rosenmund reduction (ii) Cannizzaro's reaction (c) Out of CH3CH2−CO−CH2−CH3 and CH3CH2−CH2−CO−CH3, which gives iodoform test? • Question 29 (a) Define the following terms: (i) Molarity (ii) Molal elevation constant (Kb) (b) A solution containing 15 g urea (molar mass = 60 g mol–1) per litre of solution in water has the same osmotic pressure (isotonic) as a solution of glucose (molar mass = 180 g mol–1) in water. Calculate the mass of glucose present in one litre of its solution. OR (a) What type of deviation is shown by a mixture of ethanol and acetone? Give reason. (b) A solution of glucose (molar mass = 180 g mol–1) in water is labelled as 10% (by mass). What would be the molality and molarity of the solution? (Density of solution = 1.2 g mL–1) • Question 30 (a) Complete the following equations: (i) (ii) (b) Account for the following: (i) Zn is not considered as a transition element. (ii) Transition metals form a large number of complexes. (iii) The E° value for the Mn3+/Mn2+ couple is much more positive than that for the Cr3+/Cr2+ couple. OR (i) With reference to structural variability and chemical reactivity, write the differences between lanthanoids and actinoids. (ii) Name a member of the lanthanoid series which is well known to exhibit +4 oxidation state. (iii) Complete the following equation: (iv) Out of Mn3+ and Cr3+, which is more paramagnetic and why? (Atomic nos.: Mn = 25, Cr = 24)
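(The following check of two numerical questions is my own addition, not part of the paper or its official solutions. Question 17 uses $M = \rho N_A a^3 / Z$ with $Z = 4$ for an f.c.c. cell; Question 24 uses the first-order integrated rate law, where the remaining SO2Cl2 pressure at time t is 2P0 − Ptotal.)

```python
import math

# Question 17: molar mass from an f.c.c. unit cell (Z = 4 atoms per cell).
rho, a, N_A, Z = 2.8, 4e-8, 6.022e23, 4        # g/cm^3, cm, 1/mol, atoms/cell
M = rho * N_A * a**3 / Z
print(f"Q17: M = {M:.1f} g/mol")               # ~27 g/mol

# Question 24: first-order rate constant from total-pressure data.
P0, P_total, t = 0.4, 0.7, 100.0               # atm, atm, s
k = math.log(P0 / (2 * P0 - P_total)) / t      # ln(0.4/0.1) / 100
print(f"Q24: k = {k:.2e} per s")               # ~1.39e-2 s^-1
```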
2022-01-19 02:04:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2743562161922455, "perplexity": 9506.35168612514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00069.warc.gz"}
https://xianblog.wordpress.com/2011/10/10/understanding-computational-bayesian-statistics/
## understanding computational Bayesian statistics I have just finished reading this book by Bill Bolstad (University of Waikato, New Zealand) which a previous ‘Og post pointed out when it appeared, shortly after our Introducing Monte Carlo Methods with R. My family commented that the cover was nicer than those of my own books, which is true. Before I launch into a review, let me warn the ‘Og reader that, as an author of three books on computational Bayesian statistics, I cannot be very objective on the topic: I do favour the way we approached Bayesian computational methods and, after reading Bolstad’s Understanding computational Bayesian statistics, would still have written the books the way we did. Be warned, thus. Understanding computational Bayesian statistics is covering the basics of Monte Carlo and (fixed dimension) Markov Chain Monte Carlo methods, with a fair chunk dedicated to prerequisites in Bayesian statistics and Markov chain theory. Even though I have only glanced at the table of contents of Bolstad’s Introduction to Bayesian Statistics [using almost the same nice whirl picture albeit in bronze rather than cobalt], it seems to me that the current book is the continuation of the earlier one, going beyond the Binomial, Poisson, and normal cases, to cover generalised linear models, via MCMC methods. (In this respect, it corresponds to Chapter 4 of Bayesian Core.) The book is associated with Minitab macros and an R package (written by James Curran), Bolstad2, in continuation of Bolstad, written for Introduction to Bayesian Statistics. Overall, the level of the book is such that it should be accessible to undergraduate students, MCMC methods being reduced to Gibbs, random walk and independent Metropolis-Hastings algorithms, and convergence assessments being done via autocorrelation graphs, the Gelman and Rubin (1992) intra-/inter-variance criterion, and a forward coupling device. The illustrative chapters cover logistic regression (Chap. 8), Poisson regression (Chap. 9), and normal hierarchical models (Chap. 10). Again, the overall feeling is that the book should be understandable to undergraduate students, even though it may make MCMC seem easier than it is by sticking to fairly regular models. In a sense, it is more a book of the [roaring MCMC] 90’s in that it does not incorporate advances from 2000 onwards (as seen from the reference list) like adaptive MCMC and the resurgence of importance sampling via particle systems and sequential Monte Carlo. Since we are uncertain about the true values of the parameters, in Bayesian statistics we will consider them to be random variables. This contrasts with the frequentist idea that the parameters are fixed but unknown constants.” W. Bolstad, p.3 To get into more details, I find the book introduction to Bayesian statistics (Chap. 1) somehow unbalanced with statements like the above and like “statisticians have long known that the Bayesian approach offered clear cut advantages over the frequentist approach”  (p.1) [which makes one wonder why there is any frequentist left!], or “clearly, the Bayesian approach is more straightforward [than the frequentist p-value]” (p.53), because antagonistic presentations are likely to be lost to the neophyte. (I also disagree with the statement that for a Bayesian, there is no fixed value for the parameter!) The statement that the MAP estimator is associated with the 0-1 loss function (footnote 4, p.10) is alas found in many books and papers, thus cannot truly be blamed on the author.
The statement that ancillary statistics "only work in exponential families" (footnote 5, p.13) is either unclear or wrong. The discussion about Bayesian inference in the presence of nuisance parameters (pp.15-16) is also confusing: "the Bayesian posterior density of θ1 found by marginalizing θ2 out of the joint posterior density, and the profile likelihood function of θ1 turn out to have the same shape" (p.15) [under a flat prior] sounds wrong to me.

"It is not possible to do any inference about the parameter θ from the unscaled posterior." W. Bolstad, p.25

The chapter about simulation methods (Chap. 2) contains a mistake that someone might deem of little importance. However, I do not, and here it is: sampling-importance-resampling is presented as an exact simulation method (p.34), omitting the bias due to normalising the importance weights. The chapter on conjugate priors (Chap. 4), although fine, feels as if it does not belong to this book but should rather be in the previous Introduction to Bayesian Statistics, especially as it is on the long side. The following Chap. 5 gives an introduction to Markov chain theory in the finite state case, with a nice illustration of the differences in convergence time through two 5×5 matrices. (But why do we need six decimals?!)

"MCMC methods are more efficient than the direct [simulation] procedures for drawing samples from the posterior when we have a large number of parameters." W. Bolstad, p.127

MCMC methods are presented through two chapters, the second one being entitled "Statistical inference from a Markov chain Monte Carlo sample" (Chap. 7), which is a neat idea to cover the analysis of an MCMC output. The presentation is mainly one-dimensional, which makes the recommendation to use independent Metropolis-Hastings algorithms found throughout the book [using a t proposal based on curvature at the mode] more understandable, if misguided. The presentation of the blockwise Metropolis-Hastings algorithm of Hastings through the formula (p.145) $P(\theta,A)=\prod_{j=1}^J P_j(\theta_j,A_j|\theta_{-j})$ is a bit confusing, as the update of the conditioners in the conditional kernels is not indicated. (The following algorithm is correct, though.) I also disliked the notion that "the sequence of draws from the chain (..) is not a random sample" (p.161) because of the correlation: the draws are random, if not independent... This relates to the recommendation of using heavy thinning with a gap that "should be the same as the burn-in time" (p.169), which sounds like a waste of simulation power, as burn-in and thinning of a Markov chain are two different features. The author disagrees with the [my] viewpoint that keeping all the draws in the estimates improves on the precision: e.g., "one school considers that you should use all draws (…) However, it is not clear how good this estimate would be" (p.168) and "values that were thinned out wouldn't be adding very much to the precision" (p.169). I did not see any mention made of effective sample size, and the burn-in size is graphically determined via autocorrelation graphs, Gelman-Rubin statistics, and a rather fantastic use of coupling from the past (pp.172-174). (In fact, the criterion is a forward coupling device that only works for independent chains.)

"We should always use proper priors in the hierarchical model, particularly for scale parameters. When improper priors are used (…) overall the posterior is improper." W. Bolstad, p.257
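To make the sampling-importance-resampling point above concrete, here is a minimal numerical sketch (my own illustration, not taken from the book or the review): the importance ratios are only known up to a constant, so they must be renormalised, and the resulting resample is only asymptotically from the target.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
theta = rng.normal(0.0, 2.0, n)                 # proposal draws from N(0, 4)
# target N(0,1) over proposal N(0,4), up to a multiplicative constant
w = np.exp(-0.5 * theta**2 + 0.5 * (theta / 2.0)**2)
w /= w.sum()                                    # the normalisation that biases SIR
resample = rng.choice(theta, size=n, replace=True, p=w)
print(resample.mean(), resample.std())          # close to 0 and 1 only as n grows
```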
The final chapters apply MCMC methods to logistic (Chap. 8) and Poisson regressions (Chap. 9), again using an independent proposal in the Metropolis-Hastings algorithm. (Actually, we also used a proposal based on the MLE solutions for the logistic regression in Introducing Monte Carlo Methods with R; however, it was in an importance sampling illustration for Chapter 4.) It is a nice introduction to handling generalised linear models with MCMC. The processing of the selection of variables (pp.195-198 and pp.224-226) could have been done in a more rigorous manner, had Bayes factors been introduced. It is also a nice idea to conclude with Gibbs sampling applied to hierarchical models (Chap. 10), a feature missing in the first edition of our Bayesian Core; however, the chapter crucially misses an advanced example, like mixed linear models. This chapter covers the possible misbehaviour of posteriors associated with improper priors, with a bit too strong of a warning (see above), and it also unnecessarily [in my opinion] goes into a short description of the empirical Bayes approach (pp.245-247).

The style of Understanding Computational Bayesian Statistics is often repetitive, sentences from early paragraphs of a chapter being repeated verbatim a few pages later. While the idea of opposing likelihood-based inference to Bayesian inference by an illustration through a dozen graphs (Chap. 1) is praiseworthy, I fear the impact is weakened by the poor 3-D readability of the graphs. Another praiseworthy idea is the inclusion of a "Main points" section at the end of each chapter; however, they should have been more focused in my opinion. Maybe the itemized presentation did not help. Inevitably (trust me!), there are typing mistakes in the book and they will most likely be corrected in a future printing/edition. I am however puzzled by the high number of "the the", or the misspelling (p.261) of Jeffreys' prior into Jeffrey's prior (maybe a mistake from the copy-editor?). (A few normal densities are missing a ½ on p.247, by the way.)

### Responses to "understanding computational Bayesian statistics"

Hi Xi'an, I would like to respond to some of the points you raised in your review of my book "Understanding Computational Bayesian Statistics" on your 'Og. I am preparing a written response, and would appreciate it if you would put it up on your 'Og. Please let me know if that is possible. Sincerely yours,

• Hi Bill! Of course, I can print your response on both my blog and the Statistical forum, since the review appeared on both. Thanks!

Thanks for the detailed review. I am very interested in knowing more about the debate "thinning vs. not thinning". Any reference (published articles or web discussion) available? Thanks for this.
2022-09-27 23:58:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7401530742645264, "perplexity": 1237.9974992117423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00609.warc.gz"}
http://blogs.ubc.ca/math101sec210portal/2013/03/26/quiz-4-quick-comments/
# Quiz 4 Quick Comments

1. If you still find yourself arguing about how the general term $$a_n$$ is going to zero, instead of about the nature of the general term, like:

• the general term is exactly the form of some known series (geometric and p-series)
• the general term is a difference (telescoping series)
• the general term is a positive decreasing function of n that is integrable (integral test)
• the general term is alternating with absolute value decreasing to zero (alternating series test)
• the absolute values of the ratios of consecutive general terms converge (ratio test, which applies to alternating series too)

then be warned that you're doing something fundamentally wrong. Let me repeat again: $$\lim_{n\to\infty} a_n = 0$$ says nothing conclusive whatsoever for the series under investigation (the only "exception": unless you couple it with the knowledge that the series is alternating and the absolute values of the terms decrease to zero). Arguing that the general term somehow converges to 0 and quoting "whatever test": all this amounts to a zero mark in a harsh marking scheme. You know I am harsh against irrelevant details. It is only when the general term DOESN'T converge to zero that the series must diverge, by the Divergence Test (or Term Test, if you read sources other than the textbook). In EVERY case when the series does converge, all effort spent arguing that the general term sequence is going to zero (in whatever vaguely described way) will be in vain. You need to make explicit comparisons or explain the nature of the general term as above to get credit.

2. The comparison tests require your knowledge of another series $$\sum_{n=1}^{\infty} b_n$$ which you know to converge / diverge. If you guess divergence, get a series below: $$b_n \leq a_n$$; if you guess convergence, get a series above: $$a_n \leq b_n$$. The required conditions for the direct comparison test are that the sequences are positive and the series you use for comparison is known to diverge / converge. The limit comparison test is easier to use: you get another series so that the ratio of terms converges to some nonzero number, $$\lim \frac{a_n}{b_n} = L \neq 0,$$ and of course not infinity (i.e., the ratio must not diverge). Then either both converge or both diverge. A worked illustration follows below.
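A worked illustration (my own example, not from the quiz): to decide $$\sum_{n=1}^{\infty} \frac{1}{n^2+n}$$ by the limit comparison test, compare with the convergent p-series with $$b_n = \frac{1}{n^2}$$: $$\lim_{n\to\infty} \frac{1/(n^2+n)}{1/n^2} = \lim_{n\to\infty} \frac{n^2}{n^2+n} = \lim_{n\to\infty} \frac{1}{1+1/n} = 1 \neq 0,$$ so both series converge or diverge together, and here both converge. (This general term is also a difference, $$\frac{1}{n^2+n} = \frac{1}{n} - \frac{1}{n+1},$$ so the telescoping argument applies as well and even gives the sum, namely 1.)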
2014-04-18 05:31:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892607688903809, "perplexity": 549.7068021723003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/10313/center-a-listing-on-page
# Center a Listing on page

I am trying to center a listing horizontally within a page. So far I have tried defining the following macro:

```
\lstnewenvironment{snippet}[1][]
{\centering \lstset{float=htpb,#1}}
{}
```

But it did not work. I have read this question and answer, but I would prefer a solution that does not rely on figures or tables.

- This is only possible when you define a width for a minipage:

```
\lstnewenvironment{snippet}[1][]
{\hfill\lstset{frame=single,#1}\minipage{0.6\linewidth}}
{\endminipage\hfill\null}
```

- Your suggestion is not working, it gives a `! Emergency stop.` error and the resulting file does not contain the listing. – Tiago Veloso Feb 5 '11 at 15:39
- @Tiago: then give me a minimal example which shows this error. For me it works! – Herbert Feb 5 '11 at 16:42

If you set the line width, i.e. `\lstset{linewidth=0.6\textwidth}`, and want that centered, you can use margins instead, i.e.:

```
\lstset{linewidth=\textwidth,
```
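For completeness, a minimal compilable sketch of the minipage-based answer above (the document class, listing body and language option are my own additions, not from the thread):

```latex
\documentclass{article}
\usepackage{listings}
% the minipage-based environment from the answer above
\lstnewenvironment{snippet}[1][]
  {\hfill\lstset{frame=single,#1}\minipage{0.6\linewidth}}
  {\endminipage\hfill\null}
\begin{document}
\begin{snippet}[language=C]
int main(void) { return 0; }
\end{snippet}
\end{document}
```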
2013-12-08 18:21:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.919171929359436, "perplexity": 1538.658045407995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163785316/warc/CC-MAIN-20131204132945-00028-ip-10-33-133-15.ec2.internal.warc.gz"}
http://mathematics-diary.blogspot.com/2011_08_01_archive.html
## Wednesday, August 31, 2011

### Fermat on arithmeticians

Thanks to Sol Robeson ( in PI ) we call mathematicians who lost it "numerologists". In 1657, Fermat challenged William Brouncker, of Castle Lynn in Ireland, and John Wallis to find integral solutions to the equations $$x^2 - 151y^2 = 1$$ and $$x^2 - 313y^2 = -1.$$ He ( Fermat ) cautioned them not to submit rational solutions because even the lowest type of arithmetician could devise such answers. "An Introduction to Diophantine Equations, A Problem-Based Approach, Andreescu, Andrica & Cucurezeanu, Springer 2010"

Considering that Fermat used the qualification "the lowest type of arithmetician", there must have been a ranking in the computational branch in those days. Until at least WW2, a "computer" was the job description of someone who did computational work in banking, insurance, trading, logistics and what have you. Jobs like that exist even now, think of the actuarial sciences, but most of them if not all require a degree in mathematics. I am not sure, but I suppose that in Fermat's days there must have been people responsible for the basic addition and multiplication type of calculations. Fermat called them arithmeticians "of the lowest kind". I am speculating of course. Fermat could have been a terribly arrogant man looking down on the working class. Considering that he was not a mathematician himself, the fact that he wrote, on his own initiative, letters to the great minds of his time says at least something about his self-image.

Link: My previous post on Fermat

## Monday, August 29, 2011

### Continued fractions (3)

Each rational number can be represented as a finite continued fraction (FCF) and each FCF represents a rational number. We have seen how to calculate the rational number from a given FCF; in this post we show how to calculate the FCF for any rational number. For example, the FCF representation of $\frac{17}{13}$ can be calculated as follows:

| $a$ | $q$ | $b$ | $r$ |
|-----|-----|-----|-----|
| $17$ | $1$ | $13$ | $4$ |
| $13$ | $3$ | $4$ | $1$ |
| $4$ | $4$ | $1$ | $0$ |

The value of the FCF is contained in the second column from top to bottom: $\frac{17}{13}$ is $\left[ 1,3,4 \right]$. This is clearly an application of Euclid's algorithm for calculating the GCD of two integers. The algorithm for calculating the GCD stops at row $3$, but by adding one more row containing $4 = 4 \times 1 + 0$ the column containing the FCF is complete.

- Continued fractions (1)
- Continued fractions (2)
- Continued fractions (2a)

## Sunday, August 28, 2011

### Continued fractions (2a)

I found a better way to present the table which shows the algorithm for calculating continued fractions:

| $k$ | $a_k$ | $p_k$ | $q_k$ | $C_k$ |
|-----|-------|-------|-------|-------|
| $-1$ |  | $0$ | $1$ |  |
| $0$ |  | $1$ | $0$ |  |
| $k$ | $a_k$ | $a_k \cdot p_{k-1} + p_{k-2}$ | $a_k \cdot q_{k-1} + q_{k-2}$ | $\frac{p_k}{q_k}$ |

The table consists of $m+2$ rows. The value of the FCF is $\frac{p_m}{q_m}$. To be continued.

- Continued fractions (1)
- Continued fractions (2)
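( As a quick sanity check of the two posts above, a small Python sketch of my own, just for illustration: Euclid's algorithm produces the FCF digits, and the $p_k, q_k$ recurrence reproduces the convergents. )

```python
from fractions import Fraction

def fcf(num, den):
    """FCF digits [a_1, ..., a_m] of num/den via Euclid's algorithm."""
    digits = []
    while den != 0:
        a, r = divmod(num, den)
        digits.append(a)
        num, den = den, r
    return digits

def convergents(digits):
    """Convergents p_k/q_k with seeds p_0 = 1, p_-1 = 0 and q_0 = 0, q_-1 = 1."""
    p_prev, p = 0, 1
    q_prev, q = 1, 0
    for a in digits:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield Fraction(p, q)

print(fcf(17, 13))                   # [1, 3, 4]
print(list(convergents([1, 3, 4])))  # [Fraction(1, 1), Fraction(4, 3), Fraction(17, 13)]
```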
## Saturday, August 27, 2011

### Continued fractions (2)

A finite continued fraction (FCF) is a map $$f: \mathbf{N}^m \rightarrow \mathbf{Q}$$ $$\left( a_1, a_2, \cdots a_m \right) \mapsto a_1 + \frac{1}{a_2 + \frac{1}{\ddots + \frac{1}{a_m}}}$$ Continued fractions are calculated by creating a table of convergents, as follows:

| $k$ | $a_k$ | $p_k$ | $q_k$ | $C_k$ |
|-----|-------|-------|-------|-------|
| $-1$ |  | $0$ | $1$ |  |
| $0$ |  | $1$ | $0$ |  |
| $1$ | $a_1$ | $a_1 \cdot p_0 + p_{-1}$ | $1$ | $\frac{p_1}{q_1}$ |
| $k$ | $a_k$ | $a_k \cdot p_{k-1} + p_{k-2}$ | $a_k \cdot q_{k-1} + q_{k-2}$ | $\frac{p_k}{q_k}$ |

The table consists of $m+2$ rows. The value of the FCF is $\frac{p_m}{q_m}$. To be continued.

## Friday, August 26, 2011

### Study Tip - 2

Do you know any professional musicians, dancers perhaps? History shows that art and mathematics thrive in the same places. I, sadly, don't. Although yesterday I witnessed a pianist's daily practice routine. It started with loosening up the muscles. Then, slowly, player and instrument become one sound-generating machine. To me this explained why musicians can have such deep relationships with their instruments. The musicians we see on stage ( all of them, not just the 'stars' ) have practiced at least ten thousand hours to reach that level. I don't think it's such a bad guess to say that any mathematician who is at the forefront and is creating new mathematics carries at least the same weight of practice hours under his belt as a professional in the arts or an athlete.

#### Tip 2: Exercise daily

Work on a difficult exercise every day. Find a booklet with Olympiad-level exercises or any book with exercises that are a challenge for -you-. Exercises you can do won't make you better. Hard exercises do.

## Thursday, August 25, 2011

### Continued fractions (1)

Generating functions are "mathematical data structures" that can store an infinite amount of data. For example $$\frac{1}{1-x} = \left\{ 1,1,1, \cdots \right\}$$ and $$\frac {1}{1-x-x^2} = \left\{ 1,1,2,3,5,8,13, \cdots \right\}$$ nicely represents the Fibonacci series. ( The existence of tools like the GFs made me sort of addicted to mathematics. ) If you think this is the most compact way to describe the Fibonacci series, then let mathematics surprise you. The most compact way to describe the Fibonacci series is $$\left[ <1> \right]$$ which means $$1 + \frac{1}{1 + \frac{1}{1 + \frac{1}{1 + \cdots }}}.$$ Objects like this are called continued fractions; more on these, and why $\left[ <1> \right]$ is related to the Fibonacci series, in the next post.
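( A little teaser ahead of that post, as a Python sketch of my own: truncating $\left[ <1> \right]$ and running the convergents recurrence produces exactly the ratios of consecutive Fibonacci numbers. )

```python
from fractions import Fraction

def golden_convergents(m):
    """Convergents of [1; 1, 1, ...] truncated after m ones."""
    p_prev, p = 0, 1   # seeds p_-1, p_0
    q_prev, q = 1, 0   # seeds q_-1, q_0
    out = []
    for _ in range(m):
        p_prev, p = p, p + p_prev
        q_prev, q = q, q + q_prev
        out.append(Fraction(p, q))
    return out

print(golden_convergents(7))
# 1, 2, 3/2, 5/3, 8/5, 13/8, 21/13: Fibonacci ratios, tending to the golden ratio
```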
## Wednesday, August 24, 2011

### Creating new mathematics

( ... ) Suppose your assignment was to teach a group of friendly aliens, just arrived from the Pleiades, the rules of the game of chess. A student has completed the course with success if he is able to play a game according to the rules. That's not too difficult, you think, considering the number of six-year-olds able to play a decent game of chess. Your study materials are: lots of chalk and a blackboard. No chess pieces, no boards are available in class. Your teaching assistant will type your lecture as you speak, therefore it is not allowed to use drawings or symbols that can't be typed instantly. This may seem difficult ( it is ), but compare it with the creation of new mathematics (...)

Mathematics is a parallel universe which we can enter with our mind only. Although bodiless, exterior, we are free to travel in this spectacular universe. When we come back, however, we lack the words to describe our observations, to communicate what we have seen with our mental ( mathematical ) eyes. Each and every observation must be recorded and analyzed before we can attempt to describe it. Definition by definition we try to create a consistent picture of what we have 'seen'. In this notion of mathematics, for example, the number e always existed; it just took an Euler to describe it properly. Obviously the mathematical world is not some parallel library where we can go to and look up the answers to the current unsolved problems. - In a BBC documentary about Andrew Wiles and Fermat's last theorem, Wiles describes his research as entering some space, then by touching things by hand in the dark he had to form a mental picture of what's in that space, and so on. Describe what you observe (...)

## Tuesday, August 23, 2011

### Study tip - 1

( For a while I have been thinking about writing -the- great (...) post listing zillions of study tips. Since it is not hard to imagine such a post will never be written I just start with tip 1 ( in random order ), then see how far I will get and maybe, one day, compile them into one post or page. )

#### Tip-1: The next item on the (study-)list

If you want to start studying immediately in the time you have allocated for study, make sure you know -exactly- what you are going to do when you start. Don't lose time on deciding if you are going to read, revise, do exercises, work on assignments or whatever it is that you do for studying. The best time to plan a session is at the end of each study session. This already structures a session into study / plan next session. This plan can be as short as 'Do TMA questions 2 and 3'. Or 'Read pages 12-28'. Very quickly go through it and write your plan down in your agenda ( whatever system you use ). Visualize yourself starting the next session and starting with these tasks. - The trick is that your subconscious already starts working on it. It programs and prepares you for the task. Next session, starting the task will be easy and enjoyable. It works. Make a habit of it.

## Saturday, August 20, 2011

### Mandelbrot fractals in 3D

The laptops we use today are extremely powerful machines if you measure them against the standards of a decade ago. In those days rendering Mandelbrot fractals was hard work for any computer. Also, Mandelbrot fractals were 2D, by definition, case closed. Experimenting with '3D-type-of-Mandelbrots' was impossible due to the limitations of the hardware. A lot has happened since then. - Daniel White created a website about the topic, called 'The unravelling of the Real 3D Mandelbulb', where he explains the interesting ( and surprising ! ) history of 3D Mandelbrots.

### Exercise

Exercise: Find $x, y$ such that $$\frac{1}{x} + \frac{1}{y} = \frac{1}{pq}$$ where $x,y \in \mathbf{Z}$ and $p,q$ are prime. Hint: there are nine different solutions. I'll publish the method and solution on request ( comment ).
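( Assuming the nine solutions are meant over the positive integers with $p \neq q$, a quick Python brute force of my own, for the concrete choice $p=2$, $q=3$, so $pq=6$, confirms the count: )

```python
n = 6  # pq with p = 2, q = 3, my choice for illustration
# y = n*x / (x - n) must be a positive integer; x ranges from n+1 up to n^2 + n
sols = [(x, n * x // (x - n))
        for x in range(n + 1, n * n + n + 1)
        if (n * x) % (x - n) == 0]
print(len(sols), sols)
# 9 [(7, 42), (8, 24), (9, 18), (10, 15), (12, 12), (15, 10), (18, 9), (24, 8), (42, 7)]
```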
## Friday, August 19, 2011

### About mathematics at the Open University

There are of course many differences between studying ( mathematics ) at the Open University and studying math at a brick university. The main difference is of course the main method of delivering knowledge: course booklets versus lectures with accompanying lecture notes. A brick university course is often based upon some textbook. Homework includes reading assignments and exercises. An Open University booklet is a mix of theory, worked examples and exercises. If the method of presentation matches the way you like to learn math, following a course is easy. - The way mathematics is presented in textbooks ( at the advanced undergraduate or graduate level ) is however completely different. If you learn all your math from Open University booklets this may come as a shock, since the skill to read mathematics books hasn't been developed. Compare a graduate math book in your field of interest with one of the level 3 booklets to see what I mean. Or to put it differently: you have not been initiated in the ( secret ) protocols of how mathematicians communicate.

"The challenge can best be met by attempting to solve the exercises without recourse to the hints. The density of information in the text is rather high; a newcomer may need one hour for one page. Make sure to have paper and pencil at hand when reading the text." Wolfgang Rautenberg in "A Concise Introduction to Mathematical Logic, 3rd edition, Springer 2010, preface"

### When nerds fall in love...

Most shapes can be described by one or more equations; the human imagination does the rest. The equation $$\left(x^2 + \tfrac{9}{4} y^2 + z^2 - 1\right)^3 - x^2 z^3 - \tfrac{9}{80}\, y^2 z^3 = 0$$ describes a surface representing the form of a heart. ( From an idea in the Mathematica docs. )
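( If you want to render it yourself, a one-line Mathematica sketch of my own; the plot ranges are my guess: )

```mathematica
ContourPlot3D[(x^2 + 9/4 y^2 + z^2 - 1)^3 - x^2 z^3 - 9/80 y^2 z^3 == 0,
 {x, -1.5, 1.5}, {y, -1.5, 1.5}, {z, -1.5, 1.5}, Mesh -> None]
```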
## Tuesday, August 16, 2011

Just found out that my personality resembles the fictional characters George Smiley, John le Carré's master spy, but alas also Professor Moriarty, Sherlock Holmes' nemesis, or, worst of all, Hannibal Lecter (Silence of the Lambs). Real characters in my personality group aren't much better: Arnold Schwarzenegger, Donald Rumsfeld, Hillary Clinton and Michelle Obama. - I wouldn't have said that I look like any of them, not counting George Smiley. The description of the personality however is correct. I am an INTJ type. The predicted job matched too: computer programming. Lots of scientists have this profile as well. Famous ones are Stephen Hawking and Isaac Newton. Do the test yourself here. The test is based on 16 types originally described by Jung.

## Sunday, August 14, 2011

### Mathematics: how hard is it really?

"Let's be honest: mathematics is hard. I've been postponing a full entrance into the life of a mathematician since the beginning of my excursions into mathematics because it is an extremely difficult road to traverse. The terrain is extremely demanding. The amount of work and concentration required to build the foundation necessary to continue extending the framework is immense. I admire the men and women who have come before me and were able to put in the work necessary to stand in their tower and look upon the landscape they greatly desire to see farther and farther into. That landscape is, of course, the mathematical landscape and it is a beautiful and terrifying scene to gaze upon. It is terrifying, due to its clean, sterile, and powerful nature. I am ready, however, to fully embark on this journey, which I am afraid will consume my life. But it is a necessary sacrifice to make. I am not the most talented mathematician. And because of the extent of the work that is out there and the pace at which mathematics is moving today, to truly make a mark in the mathematical world I must devote more time and energy to this calling than I have to any other task thus far." - David Andrews ( Read his blog if you like his thoughts. )

But: is it, really? Isn't mathematics easier than, say, particle physics? Look at the thousands of people working with billion-dollar equipment at CERN. What about the rules and regulations in biotechnology? All sciences use mathematics to the limit; mathematicians have only math to worry about. - Or what about this one: economics must be the most difficult science, because economists fail again and again in their forecasts and there is no consensus among economists on which way leads us out of the depression. What do you think? Is mathematics hard? If so, what in particular makes mathematics hard?

## Friday, August 12, 2011

I found an interesting book. It's hard to describe the book or assign it to a category. Let me give you two quotes from the book.

"Mathematicians always strive to confuse their audiences; where there is no confusion there is no prestige."

"All numbers are interesting, since the first uninteresting number would be interesting."

If you like mathematics and you are ready for some light reading while giving the impression you are reading really hard stuff, then this book might be for you. Mathematics Made Difficult: A handbook for the perplexed, Carl E. Linderholm. ( Unbelievable that it is out of print in the age of Kindle, iPad and what have you. )

### The Code - Episode 3 - Prediction ( with Marcus du Sautoy )

Just like the orbits of the planets, life follows a pattern. It can be reduced to cause and effect. Everything can be represented by numbers, and thus has mathematics at its heart. Strip everything away and mathematics remains. The Code.

Beautiful TV, the BBC at its best. Series verdict 8.5/10. Not a 10 because, given previous series with Marcus du Sautoy and the promising title "The Code", my expectations were too high. I had the idea it was mostly about physics ( of which some say is just another branch of mathematics ).

Episode 3 starts off with a tale about Columbus, lunar tables and the lunar eclipse. Given the regular movement of the planets it is possible to forecast a lunar eclipse. "The Code is such a powerful thing that I entrust my life to it," says du Sautoy. He calculates the arc of a ball which is rolled off some ramp and takes a seat close to where the ball should land. Classical mechanics is predictable. Denmark. Flocks of starlings. A single flock can contain a million birds. The Black Sun they call them. Surprisingly, flocks can be modelled mathematically. On to America. We meet a detective with a Ph.D. in mathematics hunting for serial killers. ( I would rather consult Charlie Eppes though. Charlie is an expert in -all- branches of mathematics. ) This detective said that he studied the Jack the Ripper case and worked out the address of Jack the Ripper. Not bad for a case as old as 1888. My prediction is that Jack the Ripper is dead by now. Knowing the series is part of an Open University course, I am not surprised that the logistic map made an appearance. If you intend to study math at the Open University, be prepared. The logistic map is hard, very hard. The logistic map is what makes mathematicians modest, humble almost. - Suppose there is a God, a creator, who used mathematics as a language in the creation of the universe. Wouldn't he/she/it be so clever as to -secure- the creation from having it cracked by creatures like us? If you assume that, then the logistic map, chaos theory, the complexity of the primes, Goedel's theorems and all that could be just firewalls. Anyway, du Sautoy shows that we can use the logistic map to understand the dynamics of ( lemming ) populations, but we won't be able to predict populations with it. Weather systems ( and I suppose stress systems related to earthquakes ) are bound in a similar manner. New York.
Patterns everywhere, of course. ( Echoes of Max Cohen? ) There seems to be a 15% rule for cities. It says that when a city doubles in size everything gets better by 15%. Then I lost it: you have 15% more restaurants to choose from, 15% more art galleries ( .... ). Here the producers had to show off their class, I suppose. A bit insensitive in such harsh economic times. Not good in my opinion. - The evening The Code part 3 was broadcast, a friendly between England and The Netherlands was canceled, due to class-related riots in London.

## Thursday, August 11, 2011

### High IQ

I have seen the new ape movie 'Rise of the Planet of the Apes'. ( Average user rating on IMDB is 8.0, today. ) It's a fun movie really, for every member of the family; not that it's a comedy, it's an action thriller with a touch of Splice and Avatar. To the point, this is a mathematics blog. When the movie started to work on me I thought about the great minds of tomorrow, but still children today. Some of whom may be struggling with the fact that they seem somehow 'different'. Some are lucky and are recognized as children with high IQ. But others may be entirely surrounded by average or low IQ teachers, parents and friends. The most gifted child is probably ( as in statistics ) born in India, China or Africa, which makes the chance we'll ever benefit from his or her gifts quite remote. - Again, I thought of Ramanujan. - Part of being gifted with a high IQ is waking up to it. Life is miserable when you have to live with monkeys. Although officially diagnosed with some disorder in the DSM, there is nothing wrong with most middle-aged, highly sensitive, but depressed people, except that they did not wake up to their IQ.

The main character in the movie is a research scientist working for a pharmaceutical company. His father has Alzheimer's disease, so that's the area he is researching. He is working on a promising drug which reached the stage for testing on apes. Something goes wrong and all test apes must be killed. One of the apes was carrying a baby, which is saved and raised by people. The movie is the story of that ape, Caesar, who inherited the gene modifications from his mother. Caesar physically develops as an ape but his intelligence is higher than that of any human. He is of course completely aware of his situation. Society demands Caesar is locked up with other apes. Despite his intelligence he is powerless among his peers. The first thing the apes do is humiliate him by ripping off his clothes and tearing them apart. - The most intelligent person on the planet is also the most lonely person on the planet. I know of a group that has all sorts of programs to develop the mental capabilities of its members, but the message is that you can't use them when you are on your own. Only when you are in a group of equals can you flourish and prosper. - Low IQ people bring you down, if they must, by force.

## Monday, August 8, 2011

### Proof that there are only five regular polyhedra

In my previous post I wrote that Euler's formula implies that there can only be five regular polyhedra and that this can be shown by simply solving a Diophantine equation. In this post I will demonstrate ( read: prove ) this and I will show how to solve Diophantine equations in Mathematica. We start with Euler's polyhedron formula $$F - E + V = 2,$$ where $F=$ number of faces, $E=$ number of edges and $V=$ number of vertices. What exactly makes a polyhedron regular? Clearly, all the faces of a regular polyhedron are equal ( i.e. all triangles, say ).
This property alone is not enough though. Also, on a regular polyhedron the same number of faces meet at each vertex. Clearly the number of edges remains an unknown for now. These requirements can be related to $E$, the number of edges, if we introduce two ( new ) variables:

- $A$ : the number of edges surrounding a face ( i.e. $A = 4$ for a cube ).
- $B$ : the number of edges meeting at a vertex ( i.e. $B = 3$ for a cube ).

We know the following about a regular polyhedron: $$FA = 2E$$ $$VB = 2E$$ $$F-E+V=2$$ $$A,B \geq 3$$ $$F,E,V \geq 1$$ ( Note that I have used $FA=2E$ ( instead of $FA=E$ ) because each edge is part of two faces, as well as $VB = 2E$ because each edge connects two vertices. ) By simply algebraically reworking the equations above we get $$\frac{2E}{A}-E+\frac{2E}{B}=2.$$

In this particular case there are two approaches to attack this Diophantine equation. ( Named after Diophantus, 3rd century. Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. ) We can use trial and error or we can use a tool like Mathematica. I have used Mathematica as follows. The tool to solve Diophantine equations in Mathematica is Reduce. In Mathematica, Reduce[expr, vars, dom] reduces the statement expr by solving equations or inequalities for vars and eliminating quantifiers, and does this over the domain dom. We solve $\frac{2E}{A}-E+\frac{2E}{B}=2$ in Mathematica as follows using Reduce:

```mathematica
In[1]:= Reduce[{(2 e)/a - e + (2 e)/b == 2, a >= 3, b >= 3, e >= 0}, {a, b, e}, Integers]

Out[1]= (a == 3 && b == 3 && e == 6) || (a == 3 && b == 4 && e == 12) ||
        (a == 3 && b == 5 && e == 30) || (a == 4 && b == 3 && e == 12) ||
        (a == 5 && b == 3 && e == 30)
```

So there are indeed five solutions! Let's look at them more closely.

| $F$ | Face type | $V$ | $E$ | Polyhedron |
|-----|-----------|-----|-----|------------|
| $\frac{2 \cdot 6}{3} = 4$ | triangle | $\frac{2 \cdot 6}{3} = 4$ | $6$ | Tetrahedron |
| $\frac{2 \cdot 12}{3} = 8$ | triangle | $\frac{2 \cdot 12}{4} = 6$ | $12$ | Octahedron |
| $\frac{2 \cdot 30}{3} = 20$ | triangle | $\frac{2 \cdot 30}{5} = 12$ | $30$ | Icosahedron |
| $\frac{2 \cdot 12}{4} = 6$ | square | $\frac{2 \cdot 12}{3} = 8$ | $12$ | Cube |
| $\frac{2 \cdot 30}{5} = 12$ | pentagon | $\frac{2 \cdot 30}{3} = 20$ | $30$ | Dodecahedron |

( For example, $A=5$ edges per face and $B=3$ edges per vertex gives the dodecahedron. )

This proof clearly shows that the various disciplines ( in this case topology, number theory and geometry ) in mathematics are related, thus pointing to mathematics at a deeper layer. It has been said that graph theory, geometry, algebra and number theory are just different manifestations of the same mathematical concepts. I recently read that the Riemann Hypothesis ( analytical number theory ) can be proved by proving an equivalent theorem in graph theory. Deep stuff, surely.
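( A cross-check of the Reduce result in plain Python, my own sketch, using exact rationals; the bound of 30 on $A$ and $B$ is arbitrary but safe, since $A, B \geq 4$ together already make the left-hand side non-positive: )

```python
from fractions import Fraction

solutions = []
for A in range(3, 31):
    for B in range(3, 31):
        # 2E/A - E + 2E/B = 2  =>  E * (2/A + 2/B - 1) = 2
        coeff = Fraction(2, A) + Fraction(2, B) - 1
        if coeff > 0:
            E = 2 / coeff
            if E.denominator == 1:   # integral solution
                solutions.append((A, B, int(E)))
print(solutions)
# [(3, 3, 6), (3, 4, 12), (3, 5, 30), (4, 3, 12), (5, 3, 30)]
```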
## Thursday, August 4, 2011

### The Code - Episode 2 - with Marcus du Sautoy

The Code part 2 is about shapes. Descartes ( rediscovered by Euler ) stated that for any polyhedron the following identity holds: $$F-E+V=2.$$ Where $F$ is the number of faces, $E$ is the number of edges and $V$ is the number of vertices. A tetrahedron, for example, has $4$ vertices, $4$ faces and $6$ edges: $4 - 6 + 4 = 2$. One of the most beautiful and fascinating mathematical theorems I know states that this formula implies that there are only five regular polyhedra. A theorem that doesn't involve any deep topology or geometry: given $F-E+V=2$, all it takes is some number theory, i.e. solving some Diophantine equations and modular congruences. Du Sautoy demonstrates the 2D version of the formula above in this episode. He shows that in 2D there can only be three regular lattices created from a regular polygon. He visits a beehive and shows how bees create lattices based on perfect hexagons. He then calculates the amount of required wax for the three possibilities and concludes that the hexagon is the best solution. "Nature is lazy", he says, and laziness is obviously part of The Code. The fascinating fact here is that nature, in the form of the bee, "knows" this, and has for thousands of years. The knowledge about the hexagon and the skill to create hexagons seem encoded in the bee lifeform. Then, what is nature? And what exactly is the role of mathematics? Those seem to be the questions he is trying to answer. From there on Du Sautoy shows us more regular polyhedra in nature. Like a virus in the shape of an icosahedron. He visits a salt mine with perfectly cube-shaped crystals and, using a model of the molecule, explains the creation of the cube. If you look closer at the shapes, however, the creations are not mathematically perfect. Are we using mathematics merely to understand nature? Or is nature driven by mathematics? Fractals are introduced to explain tree-like shapes. But although close, mathematically perfect fractal-type trees don't exist either. Next week episode 3.

## Welcome to The Bridge

Mathematics: is it the fabric of MEST?
This is my voyage
My continuous mission
To uncover hidden structures
To create new theorems and proofs
To boldly go where no man has gone before

(Raumpatrouille – Die phantastischen Abenteuer des Raumschiffes Orion, colloquially aka Raumpatrouille Orion, was the first German science fiction television series. Its seven episodes were broadcast by ARD beginning September 17, 1966. The series has since acquired cult status in Germany. Broadcast six years before Star Trek first aired in West Germany (in 1972), it became a huge success.)
2018-04-25 20:12:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46385815739631653, "perplexity": 1191.6432516349576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947957.81/warc/CC-MAIN-20180425193720-20180425213720-00026.warc.gz"}
https://discourse.mc-stan.org/t/generate-from-a-normal-distribution-in-model-block/14992
# Generate from a normal distribution in model block

Hi there, I'm trying to fit a model that looks like this:

\hat V \sim \mathrm{Normal}(V, \sigma_v)
\Delta EV = P_L {\hat V _L}^{\rho} - {\hat V _{SB}}^{\rho}
p(choose\ lottery) = ({1 + e^{-\Delta EV}})^{-1}

V is a vector of lottery values that I pass in; \sigma_v and \rho are the parameters. The problem is that there is no function in Stan that allows me to generate \hat V from V in the model block: normal_rng can only be used in generated quantities. Is there an easy solution to this? Many thanks for any help!

Hey! Did you try something like this

```stan
parameters {
  // ...
  vector[N] v_hat;
  // ...
}
model {
  v_hat ~ normal(v, sigma_v);
  // ...
}
```

? Edit: Or something like

```stan
parameters {
  // ...
  vector[N] z;
  // ...
}
transformed parameters {
  vector[N] v_hat = v + sigma_v * z;
}
model {
  z ~ std_normal();
  // ...
}
```

Hi Max, thanks for the great tip! The second option works for me.

Actually, now I realized this approach has one problem: it will save iter * N values of the parameter z, which is not practical when fitting on a large dataset. My final goal is to fit this model on a large dataset with more than 1M trials. Is there any other way to implement this without declaring extra parameters? @Max_Mantei

The problem is that \hat V has to be a parameter (you can only use RNG functions inside the generated quantities and transformed data blocks in Stan). In the second approach you can move the line vector[N] v_hat = v + sigma_v * z; inside the model block, so it'll not save v_hat. Also, you can specify which parameters to save in your specific Stan interface (you'll still need the memory while fitting).

Alternatively, you can use the offset and multiplier syntax. Where this:

```stan
parameters {
  // ...
  vector[N] z;
  // ...
}
transformed parameters {
  vector[N] v_hat = v + sigma_v * z;
}
model {
  z ~ std_normal();
  // ...
}
```

is equivalent to:

```stan
parameters {
  // ...
  vector<offset=v, multiplier=sigma_v>[N] v_hat;
  // ...
}
model {
  v_hat ~ normal(v, sigma_v);
  // ...
}
```

That way you only save the values of v_hat, but the estimation is the same.

It took me a deep dive into the Stan manual to find this, thanks for the insight.

I'm not sure if I should start a new thread for this, but I'm gonna post it here anyway. Basically here is a hierarchical version of the simple model. The simple model worked fine on synthetic data. However, when running this model on synthetic population data, the fits end up with > 80% divergent transitions and large Rhats. Increasing adapt_delta and increasing iter from 2000 to 3000 didn't seem to help much, and neither did increasing the dataset size. My intuition tells me that I should reparameterize the model, but honestly I don't know how. I'm quite new to Stan, so any help is greatly appreciated!!
```stan
// hierarchical model fitting one species as rho-sigmav agents
functions {
  real choicex(real rho, real lott_value_hat, real lott_prob,
               real sb_value_hat, real rew_multi) {
    real y;
    real u1; // lottery utility
    real u2; // surebet utility
    u1 = lott_prob * (rew_multi * lott_value_hat) ^ rho;
    u2 = (rew_multi * sb_value_hat) ^ rho;
    y = u1 - u2;
    return y;
  }
}
data {
  int<lower=0> N;               // number of trials we have
  int<lower=0> K;               // number of subjects in each species
  int individual[N];            // vector of subjid indexes
  vector[N] lott_value;         // lottery value for that choice
  vector[N] lott_prob;          // lottery probabilities for that choice
  vector[N] sb_value;           // surebet values for that choice
  vector[N] total_rew_multi;    // total reward multiplier = base_reward * rew_multi
  int<lower=0, upper=1> y[N];   // choices we observe (1 if they pick lottery)
}
parameters {
  real<lower=0, upper=4> rho_s;       // species-level rho
  real<lower=0> sigmav_s;             // species-level sigma_v
  real<lower=0> sigma_rho;            // standard deviation for species rho
  real<lower=0> sigma_sigmav;         // standard deviation for species sigma_v
  vector<lower=0, upper=4>[K] rho_i;  // individual-level rho
  vector<lower=0>[K] sigmav_i;        // individual-level sigma_v
  vector[N] z_lott;                   // z score of lott_value_hat
  vector[N] z_sb;                     // z score of sb_value_hat
}
model {
  vector[N] thetas;
  real lott_value_hat;  // placeholder for perceived lottery value
  real sb_value_hat;    // placeholder for perceived surebet value
  // set weak priors
  rho_s ~ normal(1, 0.5);
  sigmav_s ~ normal(3, 1);
  sigma_rho ~ normal(0.5, 0.3);
  sigma_sigmav ~ normal(0.5, 0.3);
  z_lott ~ std_normal();  // implies lott_value_hat ~ normal(lott_value, sigmav_i[individual])
  z_sb ~ std_normal();    // implies sb_value_hat ~ normal(sb_value, sigmav_i[individual])
  // draw individual parameters from the species & population distribution
  for (k in 1:K) {
    rho_i[k] ~ normal(rho_s, sigma_rho);
    sigmav_i[k] ~ normal(sigmav_s, sigma_sigmav);
  }
  // fit the actual model to each trial
  for (n in 1:N) {
    lott_value_hat = lott_value[n] + sigmav_i[individual[n]] * z_lott[n];
    sb_value_hat = sb_value[n] + sigmav_i[individual[n]] * z_sb[n];
    thetas[n] = choicex(rho_i[individual[n]], lott_value_hat, lott_prob[n],
                        sb_value_hat, total_rew_multi[n]);
  }
  y ~ bernoulli_logit(thetas);
}
```
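For what it's worth, a sketch rather than a prescription: with >80% divergences in a hierarchical model, the usual first suspect is the centred parameterisation of the individual-level parameters. A non-centred variant would look roughly like the code below. Note that it drops the hard [0,4] truncation on rho_i and swaps the normal hierarchy on sigmav_i for a lognormal one to keep it positive, so it is not an exact equivalent of the model above; the names z_rho and z_sigmav are my own.

```stan
parameters {
  // ... species-level parameters as before ...
  vector[K] z_rho;     // raw individual effects, standard normal
  vector[K] z_sigmav;
}
transformed parameters {
  // non-centred: individual parameters are deterministic given the z's
  vector[K] rho_i = rho_s + sigma_rho * z_rho;
  vector<lower=0>[K] sigmav_i = exp(log(sigmav_s) + sigma_sigmav * z_sigmav);
}
model {
  z_rho ~ std_normal();     // implies rho_i ~ normal(rho_s, sigma_rho)
  z_sigmav ~ std_normal();  // implies a lognormal hierarchy on sigmav_i
  // ... likelihood unchanged ...
}
```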
2022-05-18 16:38:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5549619197845459, "perplexity": 11739.449018203924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00767.warc.gz"}
https://math.stackexchange.com/questions/741861/integrating-partial-fractions
I have $$\int{\frac{2x+1}{x^2+4x+4}}\,dx$$ Factorising the denominator I have $$\int{\frac{2x+1}{(x+2)(x+2)}}\,dx$$ From there I split the top term into two parts to make it easier to integrate. Therefore $$2x+1 = A(x+2) + B(x+2)$$ This is where I would normally use a substitution method to eliminate either the A term or the B term by letting x equal something like -2, which would get rid of the A and usually leave me with the B term to solve. However, since they are the same, I'm not sure what to do. I've been told to try to evaluate the coefficients, but am not sure how.

You want to try a split like $$\frac{2 x+1}{(x+2)^2} = \frac{A}{x+2} + \frac{B x}{(x+2)^2}$$ Then $A+B=2$ and $2 A=1$. The decomposition is then $$\frac{2 x+1}{(x+2)^2} = \frac12 \frac1{x+2} + \frac{3}{2} \frac{x}{(x+2)^2}$$

• Is it $Bx$? I doubt it – Semsem Apr 6 '14 at 9:06
• @semsem: you doubt what? Check the work, it is correct. I could have done just $B$ as well. The reason either way works is that one may add and subtract $2$ in the numerator of the second fraction to change the representation. – Ron Gordon Apr 6 '14 at 9:12
• Ok, you are right – Semsem Apr 6 '14 at 9:13

Hint $$\int{\frac{2x+1\color{red}{+3-3}}{x^2+4x+4}}\,dx=2\ln|x+2|+\frac{3}{x+2}+C$$

• This is the best that can be done. – kmitov Apr 6 '14 at 8:47

All you need to do is to solve this with respect to polynomials:

$2x+1=Ax+2A+Bx+2B$
$2x+1=x(A+B)+(2A+2B)$
$A+B=2 \rightarrow B=2-A$
$2A+2B=1$
$2A+4-2A=1\rightarrow 4=1$

This is a contradiction! You have made a mistake in the step where you split the term into two fractions; you should have done it like this:

$\frac{2x+1}{(x+2)^2}=\frac{A}{x+2}+\frac{B}{(x+2)^2}$

and then proceed as usual.

$A(x+2)+B=2x+1$
$A=2$
$2A+B=1$
$B=-3$
$\int \frac{2}{x+2}+\frac{-3}{(x+2)^2}\,dx=2\ln|x+2|+\frac{3}{x+2}+C$

In this case we have a linear repeated factor, so we split it like $$\frac{A}{(x+2)}+\frac{B}{(x+2)^2}=\frac{A(x+2)+B}{(x+2)^2}$$ and hence $$2x+1=A(x+2)+B=Ax+2A+B$$ then by equating coefficients $$A=2,\\2A+B=1\implies B=-3$$

• How did you get A=2 from that? – user88720 Apr 9 '14 at 6:14
• @user88720 I edited it – Semsem Apr 9 '14 at 7:07
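A quick machine check of the decompositions above (my own sketch; assumes sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = (2*x + 1) / (x**2 + 4*x + 4)
print(sp.apart(f, x))      # expect 2/(x + 2) - 3/(x + 2)**2
print(sp.integrate(f, x))  # expect 2*log(x + 2) + 3/(x + 2), up to a constant
```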
2019-11-22 12:29:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8643361330032349, "perplexity": 451.70144476324236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00383.warc.gz"}
http://math.stackexchange.com/tags/probability/new
# Tag Info

0 Path A-B-C is free when A-B and B-C are free: $p_{ABC\text{-}free} = (1-p)(1-p) = (1-p)^2$. So it is blocked with probability $p_{ABC\text{-}blocked} = 1-p_{ABC\text{-}free} = 1-(1-p)^2 = 1-1+2p-p^2 = p(2-p)$. C is not accessible from A if both A-B-C and A-C are blocked: $p_{blocked} = p_{ABC\text{-}blocked} \cdot p_{AC\text{-}blocked} = p(2-p) \cdot p = p^2(2-p)$. So it is possible to move from A to C with probability $1 - p^2(2-p)$.

0 In question (b), we have (15,000-2,000)/100 = 130 persons who test positive after test A, so they go on to test B. Among the 2000 persons, those who are actually negative but test positive number 2000*0.98*0.05 = 98. So when we deduct the number of false positives, 130-98 = 32, it gives us the persons who actually have the condition.

0 Note $$\frac{6!}{3!\,2!\,1!} = 60 \therefore P = \frac{1}{60}$$ For the second part, since each trial is independent, after $N$ trials it's a binomial distribution, and $$P(X=n)=\binom{N}{n}P^n(1-P)^{N-n}$$ For the third question, after $N$ trials the probability that the monkey receives at least 2 bananas is $$Q=1-(1-P)^N-\binom{N}{1}P(1-P)^{N-1}$$ The question then asks ...

0 The numerator should be $$\binom{4}{x}\binom{4}{y}\binom{4}{z}\binom{40}{6-x-y-z},$$ for any $x,y,z$ with $x+y+z\le 6$, with the obvious restrictions on $x,y,z$.

0 Thanks, as for your hint, the number of marbles having labels that are at most x is x, right? So P(X<=x) = (k choose x)/(n choose k)?

0 Use the hypergeometric distribution: $$\frac{\binom{R}{x} \binom{N-R}{n-x}}{\binom{N}{n}}.$$ This yields: $$\mathbb P(\mbox{"obtaining exactly 3 black balls"})=\frac{\binom{9}{3} \binom{10}{1}}{\binom{19}{4}}.$$ $$\mathbb P(\mbox{"obtaining exactly 4 black balls"})=\frac{\binom{9}{4} \binom{10}{0}}{\binom{19}{4}}.$$ Now the probability of getting at least ...

0 Unless you know something about Frankie's bias in picking numbers, you have to assume a discrete uniform distribution, which has a probability mass function of $p_Y(h)=1/(b-a+1)$ on the support $h\in\{a,\ldots,b\}$. So the expectation is: $$\mathsf E[(Y-k)^2] = \sum_{h=a}^b \frac{(h-k)^2}{b-a+1}, \qquad h\in\{a,\ldots,b\},\ k\in\{a,\ldots,b\} \ldots$$

1 $$\frac{\binom{9}{3}\binom{10}{1}+\binom{9}{4}}{\binom{19}{4}}$$

2 Let $E(h,n)$ be the expected winnings (under the optimal stopping strategy) after flipping the coin $n$ times and obtaining $h$ heads. If you flip again, your expected earnings will be $\dfrac{1}{2}E(h,n+1)+\dfrac{1}{2}E(h+1,n+1)$. If you don't flip again, your expected earnings will be $\dfrac{h}{n}$. Therefore, you flip again iff ...

0 $$(X,Y) = \begin{cases} (a,a) & \text{with probability }p^2, \\ (a,b) & \text{with probability }pq, \\ (b,a) & \text{with probability }qp, \\ (b,b) & \text{with probability }q^2. \end{cases}$$ Therefore $$X-Y =\begin{cases} 0 & \text{with probability }p^2, \\ a-b & \text{with probability }pq, \\ b-a & \ldots \end{cases}$$

0 Since there are only two players, one of whom gets a point each turn, you can represent the entire game from the perspective of only one of the players. Let's do player 1: Player 1 wins if he gets two wins before five losses, and he has a probability of winning of 1/3. Therefore, the longest game will last six turns - whoever met their quota in these six turns ...

1 For a trivial estimate of the coin flip game, you can't win more than 1 (all heads), so that is an upper bound. Less trivially, we need to define our strategy. Suppose you have flipped $h$ heads and $t$ tails. Let $V(h,t)$ be the value of the game at this point. Clearly $V(h,t) \ge \frac h{h+t}$ because we can stop now and get that. If we flip ...
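( The two truncated stopping-game answers above set up a backward induction; a numerical sketch of my own, truncated at an arbitrary horizon, so it only approximates the true game value: )

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10_000)
N = 500  # truncation horizon (arbitrary); forces a stop, so V(0,0) is approximate

@lru_cache(maxsize=None)
def V(h, t):
    stop = h / (h + t) if h + t else 0.0   # payoff for stopping now
    if h + t >= N:
        return stop
    return max(stop, 0.5 * V(h + 1, t) + 0.5 * V(h, t + 1))

print(V(0, 0))  # roughly 0.79
```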
1 Use the hypergeometric distribution: $Hg(5, R=\mbox{number of questions you learnt}, 12)$. The hypergeometric distribution is useful when we have a population of $N$ individuals, with $R$ individuals having the characteristic we are looking for and with $n$ as the number of individuals of a sample. For a random variable $X$ with hypergeometric distribution ...

1 Without loss of generality, assume that $X$ and $Y$ are Bernoulli random variables with the same parameter $p = P\{X=1\} = P\{Y=1\}$. Thus, $X-Y$ takes on values $-1, 0, 1$. Then, we have that $$\begin{align} P\{X=1\} = p &= P\{X=1, Y=1\} + P\{X=1, Y=0\}\tag{1}\\ P\{Y=1\} = p &= P\{X=1, Y=1\} + P\{X=0, Y=1\}\tag{2}. \end{align}$$ From (1) and ...

1 Probability Space: Drawing 10 cards with replacement from a standard deck. Favoured Event: Drawing at least 2 queens. Complementary Event: Drawing less than 2 queens. Let $Q$ be the count of queens drawn. Assuming the deck is well shuffled after each replacement, the probability of getting a queen on any draw is independent of every other draw, and ...

0 In the interests of engaging constructively with this question, here is one possible way a test could plausibly be bell-shaped: let's say there are $N$ students in the class. We can represent their level of preparedness for the exam by a "correct answer" probability, $p_i$, which we can interpret as the fraction of questions they would get correct if they ...

0 Prove $\phi_{X-Y}(t) \in \mathbb{R}$

1 Your working is correct. The reason the number is low despite the selectivity and sensitivity of the alarm is that, because there are so very few actual fires, there are still many more false alarms than true alarms.

2 Looks right to me, though I didn't actually check all the calculations. The low result is initially surprising, but is not uncommon in this kind of problem. It shows that even if $P(A\,|\,B)$ is very high, this does not guarantee that $P(B\,|\,A)$ is high. It is often put in the context of a medical diagnostic test for a very rare disease. The point is ...

1 We draw without replacement $k$ marbles out of an urn of $n$ uniquely labelled marbles, marked from 1 to $n$, and record the highest label of all the marbles drawn as our random variable $X$. Note: the support of $X$ must therefore be $\{k, \ldots, n\}$. The Cumulative Probability Function of $X$ is $\mathsf P(X\leqslant x)$, and this is the probability of ...

0 The tree approach is perfectly fine. Writing it with fancy symbols doesn't make it more rigorous, as it just restates what is in the tree. In either case, you will need to enumerate the possibilities and get each probability. There are exactly four ways to pass the test: get the first three tests correct; fail one of the first three tests and pass the last ...

1 Apparently there are two types of standard deviation: the sample standard deviation, where the sum of squared deviations is divided by $n-1$, and the population standard deviation, where it is divided by $n$. This is confusing because it was never introduced in textbooks...

2 $$\Pr(F(X)\le y) = \Pr(X\le F^{-1}(y)) = F(F^{-1}(y))=y.$$ That works if $F$ is invertible, since invertible CDFs are strictly increasing.

2 Let $X_{k}$ denote the number of boxes that are to be bought to come in possession of $k+1$ tokens, counting from the moment that one is in possession of exactly $k$ different tokens. Then $X=1+X_{1}+\cdots+X_{n-1}$ boxes must be bought. Here $X_{k}$ has geometric distribution with parameter $p_{k}=1-\frac{k}{n}$, so that $$\mathsf E[X] = \sum_{k=0}^{n-1}\frac{1}{1-k/n} = \sum_{k=0}^{n-1}\frac{n}{n-k} = n\sum_{j=1}^{n}\frac{1}{j} = nH_n.$$
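( A quick simulation check of the coupon-collector identity $\mathsf E[X] = nH_n$ above; my own sketch, not from the answer: )

```python
import random

def draws_until_complete(n):
    """Number of boxes bought until all n distinct tokens are seen."""
    seen, count = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        count += 1
    return count

n, reps = 10, 20_000
estimate = sum(draws_until_complete(n) for _ in range(reps)) / reps
exact = n * sum(1 / j for j in range(1, n + 1))
print(estimate, exact)  # both close to 29.29
```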
1 Note $$\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \operatorname{E}[X^2] = \Pr[X^2 = 1] = \frac{2}{3}.$$ The first equality comes from the calculation $$\begin{align*} \operatorname{Var}[X] &= \operatorname{E}[(X-\operatorname{E}[X])^2] \\ &= \operatorname{E}[X^2 - 2\operatorname{E}[X]X + \operatorname{E}[X]^2] \\ &= ...

1 It looks correct and I get the same results as your calculation with numpy.

    import numpy as np
    x = [-1, 0, 1]
    np.std(x)  # 0.81649658092772603

So it is probably the way you calculate it.

2 Picking $m+j$ marbles from a total of $n+k$ marbles can be done in $\binom{n+k}{m+j}$ ways. However, if it is done under the restriction that $m$ are of type $A$ and $j$ of type $B$ then it can be done in $\binom{n}{m}\binom{k}{j}$ ways. So the probability of that event is: $$\binom{n}{m}\binom{k}{j}\binom{n+k}{m+j}^{-1}$$

2 In a simpler case, if you want to choose $k$ items out of $n$ items, the number of ways to do it is $\binom nk$. Now if $k$ is equal to $n$, i.e. when you want to take all the items, there is only one way to do it: you take it all, and that'd be $\binom nn =1$. Similarly, here since you want all 4 red ones, you only have one way to do it, and then you want ...

1 Hint: Let's say that $E_{i}$ denotes the event that he passes the $i$-th test. Then his chance of qualifying is: $$P\left(E_{1}\cap E_{2}\cap E_{3}\right)+P\left(E_{1}^{c}\cap E_{2}\cap E_{3}\cap E_{4}\right)+P\left(E_{1}\cap E_{2}^{c}\cap E_{3}\cap E_{4}\right)+P\left(E_{1}\cap E_{2}\cap E_{3}^{c}\cap E_{4}\right)$$

0 For 1) Yes, they are both correct. 2) No, the rate is irrelevant since it's a sequence of constants that disappears in the limit. Is there a counterexample that gave you concern?

1 If $F$ is the CDF of a random variable then $F(x)$ stands for the probability that $X$ will not exceed $x$, i.e. $F(x)=P(X\leq x)$. If $X$ is distributed over the interval $[a,b]$ then it will only take values in $[a,b]$, so that for every $x\geq b$ it is true that $X$ will not exceed $x$. Equivalently you can say that for every $x$ with $x\geq b$ the probability ...

1 Look at the definition of the CDF. By definition $F(x)=P(X\leq x)$. In this case, the support of the random variable is $[0,2\pi]$. So, $F\left(-\frac{\pi}{6}\right)=P\left(X\leq-\frac{\pi}{6}\right)=0$, since $X$ never attains values less than $-\frac{\pi}{6}$. Furthermore, consider a point to the right of the support of $X$, such as $F(3\pi)=P(X\leq 3 \pi)=1$, ...

0 Using Markov's inequality: $$P( |A_n-0|> \epsilon) \leq \frac{E(|A_n|)}{\epsilon}=\frac{|A_n|}{\epsilon}$$ Letting $n\rightarrow \infty$ on both sides proves convergence in probability.

1 Yes, you can. And no, there is no difference (the standard measure of the quality of the estimator is its variance, which does not change when you add something deterministic). The difference between the two estimators is that they can't both be unbiased.

0 You wrote normally distributed. $23$ inches is $\dfrac{23-22.8}{1.1}$ standard deviations above the mean. What proportion of a normally distributed population is less than that many standard deviations above the mean? That's something you usually find using a table or software. (In this case it's the proportion that's less than $23$ inches minus the ...
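A quick standard-library cross-check of the counting identity above (my sketch), reusing the "3 black balls out of 9 black and 10 white, drawing 4" example from an earlier answer:

    from math import comb

    def p_types(n, k, m, j):
        """P(exactly m of the n type-A and j of the k type-B marbles,
        drawing m+j of the n+k marbles without replacement)."""
        return comb(n, m) * comb(k, j) / comb(n + k, m + j)

    print(p_types(9, 10, 3, 1))  # C(9,3)*C(10,1)/C(19,4), about 0.2167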
0 $P(D) = .02$, $P(R/D) = 1$, $P(R/D') = .05$, $P(D') = 1-.02 = 0.98$. $P(R) = P(D)P(R/D) + P(D')P(R/D') = .02(1) + .98(.05) = .069$. Cost of inexpensive + follow-up test, if tested positive: $10 + 0.069(100)$. Expected cost $= 10 + 0.069 \cdot 100 = 16.9$

2 The approximation by $\Phi$ will give results that are not too unreasonable even with numbers as small as $4$ and $6$ if, but only if, a continuity correction is used. That means "$\ge 4$" is the same as "$>3$", so you say "$>3.5$" instead. That gives surprisingly good approximations even with samples as small as this, whereas textbooks usually give ...

0 Definition of the probability limit: $X_n\to X$ in probability iff $\forall \epsilon>0$, $\lim\limits_{n\to\infty}P\{|X_n-X|>\epsilon\} = 0$. We consider the sequence of random variables $X_n$ such that $P\{X_n=A_n\}=1$. As $\lim\limits_{n\to\infty}A_n=0$, for any $\epsilon>0$ there is $n_\epsilon$ such that $|A_n|<\epsilon$ for any ...

0 Let $X$ be the random variable indicating the number of the toss on which the first tail occurred. Let $Y$ be a random variable which indicates the amount of money you earned. We are given that $X$ has probability function $P_X(x) =\left( \frac{1}{2}\right)^x=\mathbb P(X=x)$. It is clear that $Y$ takes the values $2,4,8,16,32,-256$. By definition: $$\mathbb ...

0 The only way to get a binomial distribution is to assume that the sampling is done with replacement. If the sampling is without replacement, the distribution of the number of defectives is hypergeometric. For your second question, if you pick 5 items and number them, the probability the first is defective is $\frac{10}{25}$. The probability the second is ...

0 The use of binomial and hypergeometric hinges on whether the samples are replaced or not. If they are replaced, then the probability of picking the second of n items remains the same as the first through the n-th item picked, while without replacement the probability keeps changing every time you pick an item; in other words, the denominator ...

0 Just for completeness and based on Arthur's and your comments: if the total number of possible numbers is N=60, the number of numbers you pick is M=10, the total number of lucky numbers is n=20 and the number of lucky numbers you pick is k, then using the hypergeometric distribution the probability of k is $$\frac{\displaystyle {K \choose k}{N-M ...

1 \begin{align} & \left(\sum_{x=1}^5 2^x\cdot\Pr(\text{winnings}= x)\right) - 256\cdot\Pr(\text{winnings}=-256) \\[8pt] = {} & \left(\sum_{x=1}^5 2^x\cdot(0.5)^x\right) - 256\cdot \Pr\Big(\text{heads on all five of the first five tosses}\Big) \end{align} Now simplify the expression $2^x (0.5)^x$ and figure out the probability of getting heads on all ...

0 Is the total number of cells relevant at all? Assuming you choose the cells independently, the probability of having zero $1$s in the $25$ cells chosen is $P=(0.96)^{25}$, so the probability of having at least one $1$ is $1-P$.

0 The wording for the third part of the question is somewhat unclear. One way to interpret it is to ask what is the expected number of days that Mr. Li will use the machine, if he uses it twice a day every day, until he observes the third dispensing failure (and therefore loses 6 yuan, which is the first instance he loses more than 5 yuan). Under this ...

0 A person can take 3 trial exams for the qualifying test. In the 1st attempt the probability of passing is 40%. Those who have failed have a 60% probability of passing the exam in the 2nd attempt.
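Carrying the expected-winnings expression from the coin-toss answer above through to a number (a worked step I added, assuming the payoffs and probabilities exactly as written there): since $2^x(0.5)^x = 1$ for every $x$,

$$\left(\sum_{x=1}^5 2^x(0.5)^x\right) - 256\cdot(0.5)^5 = 5 - \frac{256}{32} = 5 - 8 = -3.$$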
Those who have failed in the 2nd attempt have a 20% probability of passing the exam.

0 Just a hint: Let $X\sim B(n,1/2)$. Define $Y=-X$. Write the cdf of $Y$. At this point the normal approximation can be used. Apply the method described here for the maximum of $k$ random trials of $Y$ to get the corresponding cdf. Differentiate the cdf to get the pdf for the maximum of $k$ trials of $Y$. Transform this pdf to the pdf of the minimum of $k$ trials of $X$. ...

1 So the convolution integral is: $$\int_{-1}^{1}(1-|x|)(1-|T-x|)dx$$ Now you can simply calculate this integral: $$\int_{-1}^{1}(1-|x|)(1-|T-x|)dx=\int_{-1}^{1}1\,dx-\int_{-1}^{1}|x|\,dx-\int_{-1}^{1}|T-x|\,dx+\int_{-1}^{1}|Tx-x^2|\,dx$$ The first two are easy: $\int_{-1}^{1}1dx=2$, $\int_{-1}^{1}|x|dx=1$. Third (you can eliminate $|\cdot|$ by dividing the integral into ...

1 To determine the cdf $F_T(t)=P(T\le t)$, I would proceed geometrically. Points $(X,Y)$ come from a square with corners at $(1,0), (0,1), (-1,0), (0,-1)$. To find $F_T(t)$, draw the line $X+Y=t$ and note it is parallel to two of the sides of the square. $F_T(t)$ is the area inside the square to the left of the line. Since the line is parallel to the side ...

0 $$f_Y(t-x)=1-|t-x|$$ and the integral bounds should be from -1 to 1 since that is the support of $X$. Then you could approach this problem by splitting $(-1,1)$ and doing the integration.

Top 50 recent answers are included
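A quick numerical cross-check of the triangular-density convolution above (a sketch I added; the sample size and bin width are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    # X, Y ~ triangular on [-1, 1]: each is the sum of two Uniform(-1/2, 1/2)
    x = rng.uniform(-0.5, 0.5, 10**6) + rng.uniform(-0.5, 0.5, 10**6)
    y = rng.uniform(-0.5, 0.5, 10**6) + rng.uniform(-0.5, 0.5, 10**6)
    t = x + y

    # estimate the density of T = X + Y near t = 0 and compare with the
    # convolution integral there: int (1-|x|)^2 dx over [-1,1] = 2/3
    print(np.mean(np.abs(t) < 0.05) / 0.1)  # roughly 0.667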
2014-10-31 09:31:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993749737739563, "perplexity": 516.0762200390083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899290.52/warc/CC-MAIN-20141030025819-00076-ip-10-16-133-185.ec2.internal.warc.gz"}
https://arena.moi/problem/gc5recountdigits
## Re: Count The Digits Points: 20 Time limit: 1.0s Memory limit: 256M Author: Problem type Given two integers $$n$$ and $$b$$, compute the total number of digits in the sequence $$0_b, 1_b, ... n_b$$ where $$k_b$$ denotes the representation of $$k$$ in base $$b$$. #### Input Specification The first line of the input contains $$1\le t \le 1000$$, the number of test cases. $$t$$ lines follow. Each test case consists of two integers, $$0 \leq n \leq 10^{16}$$ and $$2 \leq b \leq 36$$. #### Output Specification For each test case, output a single integer: the total number of digits in the sequence. #### Sample Input 4 0 10 10 10 1000 10 5 2 #### Sample Output 1 12 2894 12 • mouad_smi  commented on Nov. 15, 2021, 11:25 a.m. edited i used this formula (sum += (ll)(floor(log(i)/log(b))+1)) and i looped through all the numbers and it gives me an incorrect answer for this test (10 10 -> 2894 it gives me 2893). • AkramElOmrani  commented on March 22, 2021, 10:58 p.m. edited I didn't quite understand how we got 2894 in the 3rd test case any help ? • mouad_smi  commented on Sept. 17, 2021, 10:48 p.m. same! • itsachrafmansari  commented on Sept. 18, 2021, 5:47 p.m. edited We are counting the total number of digits we need to represent all the K cases in the base B. As for the 3rd case : • from 0 to 9 there are 10 numbers, each contains 1 digit • from 10 to 99 there are 90 numbers, each contains 2 digits • from 100 to 999 there are 900 numbers, each contains 3 digits • the number 1000 contains 4 digits So : (10×1)+(90×2)+(900×3)+(1×4) = 2894
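A minimal Python sketch of the block-counting idea in the last comment (my code, not the judge's reference solution). It counts whole runs of equal-length numbers instead of looping to n, avoids floating-point logarithms, and starts from the single digit of 0, which is exactly what a loop over i >= 1 with floor(log(i)/log(b))+1 misses, hence 2893 instead of 2894 for n = 1000:

    def count_digits(n, b):
        """Total digits in 0_b, 1_b, ..., n_b, counting whole blocks per length."""
        total = 1           # the single digit of 0, easy to miss in a loop from 1
        length, low = 1, 1  # numbers with this many digits lie in [low, low*b - 1]
        while low <= n:
            high = min(n, low * b - 1)
            total += (high - low + 1) * length
            length += 1
            low *= b
        return total

    assert count_digits(0, 10) == 1
    assert count_digits(10, 10) == 12
    assert count_digits(1000, 10) == 2894
    assert count_digits(5, 2) == 12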
2022-11-28 19:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510555505752563, "perplexity": 1380.9509835223394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00568.warc.gz"}
http://ndcpartnership.report/id/3a2e21-neural-network-visualization
For example, in MNIST, although the neural network starts to stabilize on epoch 30, t-SNE and UMAP still generate quite different projections between epochs 30, 31 and 32 (in fact, all the way to 99). Temporal regularization techniques (such as Dynamic t-SNE) mitigate these consistency issues, but still suffer from other interpretability issues. The Grand Tour takes a different approach: over time, it smoothly animates its projection with a constant angular velocity so that every possible view of the dataset is (eventually) presented to the viewer. This is especially relevant when we are dealing with a convolutional neural network (CNN) trained on thousands or millions of images, whose training process is notoriously hard to interpret; visualizing neural networks is a key element in reports, as people often appreciate visual structures over large amounts of text.

Each linear projection from $n$ dimensions to $2$ dimensions can be represented by $n$ 2-dimensional vectors, which have an intuitive interpretation: they are the vectors that the $n$ canonical basis vectors in the $n$-dimensional space will be projected to. This section presents the technical details necessary to implement the direct manipulation of axis handles and data points, as well as how to implement the projection consistency technique for layer transitions. When the user drags an axis handle on the screen canvas, they induce a delta change $\Delta = (dx, dy)$ on the $xy$-plane. These two steps make the axis handle move from $\tilde{e_i}$ to $\tilde{e_i}^{(new)} := \textsf{normalize}(\tilde{e_i}+\tilde{\Delta})$. In a nutshell, when the user drags the $i^{th}$ axis handle by $(dx, dy)$, we add them to the first two entries of the $i^{th}$ row of the Grand Tour matrix, and then perform Gram-Schmidt orthonormalization on the rows of the new matrix, $GT^{(new)} := \textsf{GramSchmidt}(\widetilde{GT})$, where the rows are reordered so that an $i^{th}$-row-first Gram-Schmidt preserves the dragged direction; direct manipulation of a data point similarly updates a row built from $\textsf{normalize}(\tilde{c}^{(new)}_{\perp})$. Next, to find the matrix form of the rotation, we need a convenient basis; in that basis the rotation acts only on the first two coordinates:

$$\begin{pmatrix} \cos \theta & \sin \theta & 0 & 0 & \cdots \\ -\sin \theta & \cos \theta & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & \cdots \\ \vdots & & & \ddots & \end{pmatrix}$$

With a change of representation, we can animate a convolutional layer like a fully-connected one: one can always flatten the 2D array of activations into an equivalent $(w \cdot h \cdot c)$-dimensional vector. Although recent DNN architectures have non-sequential parts such as highway branches or dedicated branches for different tasks, deep neural networks really are just pipelines of relatively simple operations: matrix multiplications, whose geometric interpretation follows from the singular value decomposition $xA = xU\Sigma V^T$, ReLU activations, and max-pooling, which calculates the maximum of a region. Most commonly, a 3×3 kernel filter is used for convolutions.

More interesting, however, is what happens in the intermediate layers. Watching the training process this way reveals class-specific behavior: data points move directly toward the corner of their true class, and all classes are stabilized after about 50 epochs. Comparing visualizations of the training and testing data gives a qualitative assessment of over-fitting: in epoch 99 we can clearly see a difference in distribution between these two sets, and images from the testing set keep oscillating while most images from the training set do not. In the softmax layer, the model confuses sandals, sneakers and ankle boots in the Fashion-MNIST dataset, and most confusions are really two-way confusions between two out of the three classes; something similar happens in the softmax space with digits 1 and 7, around epochs 14 and 21 respectively. In pre-softmax, fake 0s behave differently from the genuine 0s: they live closer to the decision boundary of two classes and form a plane by themselves, possibly pointing at an inappropriately-chosen hyperparameter.

Other visualization techniques complement this, each with their own strength. Visualizing layer outputs can help us find out which parts of the network matter for classification, and the total number of trainable parameters tells us whether our GPU will be able to allocate sufficient memory for training the model. Using activation maximization, we can figure out that our dataset is probably not sufficient for the task and that we need to add images of elephants in different habitats to our training set; this information is very important for checking the sanity of our dataset. Occlusion maps work the other way around: if occluding part of the input image changes the output class probability, that part of the image is clearly important for the classification. Neural networks draw their strength from parallel processing of information, yet they largely operate as black boxes, so there is a growing need for their behavior to be interpretable to humans. The tool described here is a 3D visualization framework built with TensorFlow.js, Three.js, and Tween.js that runs in the browser; the source code in the repository can be used to demonstrate the algorithms as well as to test them on your own data. (Image credits: https://towardsdatascience.com/multi-label-classification-and-class-activation-map-on-fashion-mnist-1454f09f5925, https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/, https://towardsdatascience.com/gentle-dive-into-math-behind-convolutional-neural-networks-79a07dd44cf9.)
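A small numpy sketch (mine; the actual implementation is in JavaScript and may differ in detail) of the row-wise, dragged-row-first Gram-Schmidt step described above; the parameter name "first" is my own:

    import numpy as np

    def gram_schmidt_rows(M, first=0):
        """Orthonormalize the rows of M, processing row `first` before the
        others, so the dragged axis keeps its new direction."""
        order = [first] + [r for r in range(M.shape[0]) if r != first]
        Q = M.astype(float).copy()
        done = []
        for r in order:
            for d in done:
                Q[r] -= (Q[r] @ Q[d]) * Q[d]  # remove components along earlier rows
            Q[r] /= np.linalg.norm(Q[r])
            done.append(r)
        return Q

    GT = np.eye(4)
    GT[2, :2] += np.array([0.3, -0.1])       # user drags the 3rd axis handle
    GT_new = gram_schmidt_rows(GT, first=2)  # GT^(new) := GramSchmidt(GT~)
    print(np.allclose(GT_new @ GT_new.T, np.eye(4)))  # True: rows orthonormal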
2021-04-13 19:01:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4715765416622162, "perplexity": 1538.3123192646153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038074941.13/warc/CC-MAIN-20210413183055-20210413213055-00101.warc.gz"}
https://ux.stackexchange.com/questions/14102/how-to-display-a-tree-view-with-thousands-of-records/14108
# How to display a tree view with thousands of records?

I have a hierarchical structure such as the following: every user has some contracts associated with him, every contract has some groups associated with it, and every group has some articles. I represent this visually using a TreeView (with checkboxes, such as in http://www.codeproject.com/KB/WPF/TreeViewWithCheckBoxes.aspx) in WPF; every user clicked on displays this structure in a TreeView:

>Contract1
  >Group1
    >Article1
    >Article2
    >Article3
    >Article4
    ...
  >Group2
  >Group3
------------ Remove Button -------------

Using the preceding screen, the user can add further Articles, Contracts or Groups to his account just by checking them and clicking the Add button, or remove them by clicking the Remove button. The problem is that there are tens of thousands of articles in every Group and I don't want to bring them all into memory at once, for it would slow things down. Can you think of any better way of handling this?

• Would the user know what they were looking for? Could you apply some search filters? – Wander Nov 18 '11 at 16:14
• What's the use case? – Jimmy Breck-McKye Nov 18 '11 at 17:08

Personally I would suggest not using a tree but using something like Miller columns. You'd have Contracts in the left column, Groups in the second column and Articles in the third column. You could add a fourth column which gives information about a selected article - for the given group and contract. Maybe you need a leftmost column for users too. You could manage the logic for multiple selection depending on your requirements. Underneath each column you can have a button for Add / Remove based on your selections and selection path. That's just an alternative suggestion to using the tree... But what really concerns me is the fact that you have tens of thousands of articles in each group, and that's the real usability issue in this scenario. It doesn't matter whether you use a tree or a list, or Miller columns - that amount of information in a single group is not manageable by the user. Not without adding a way of further ordering and chunking of the information - eg by alphabetical order, date, size, location, or other relationship or characteristic that means something in your scenario. That degree of chunking (ie many levels of branching) simply doesn't work in a tree - not from a perspective of findability, browsability, memorability - or any other ability. It's simply so unevenly distributed towards the leaves of the tree that the trunk and the branches can't take its own weight, let alone allowing the monkey to find the fruit!

You can use a mixture of tree and list, to reduce the hierarchical structure.

While the other answers will help with the interface, I think using meta data to help manage such a large amount of information would certainly help those approaches. Since I don't know exactly what the common denominators are with your data, I can't relay anything specifically, but I'd take a look at your information and see what you can do as far as grouping and arrangement by meta data.

It depends on the use case for viewing. There is no universal pattern for this or any other problem. Different kinds of interfaces will make the data more manageable at the cost of inhibiting different kinds of workflows. For instance, treeviews are good when you need to instantly see the distribution of child elements amongst sibling items.
Miller columns ('panes') work well when the groups of children for each parent are somehow parallel or analogous to one another. Showing data in individual forms with breadcrumbs representing 'paths' to that point is useful when the data in the form is the main focus, the path subsidiary.
2019-10-16 11:13:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25105157494544983, "perplexity": 992.0802049140372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00026.warc.gz"}
http://webinweb.net/warning-unable/warning-unable-to-redefine-math-accent-vec.html
# Warning Unable To Redefine Math Accent Vec

I would like to typeset the time-derivative of a vector whose symbol shall be \hat{x}. With the svmono/svjour3 document classes, loading amsmath produces warnings and errors such as:

Package amsmath Warning: Unable to redefine math accent \hat.
Package amsmath Warning: Unable to redefine math accent \vec.
! Invalid math code. l.743 \mathchardef\std@equal\mathcode`\=\relax
! LaTeX Error: Command \underleftarrow already defined. Or name \end... illegal, see p.192 of the manual. See the LaTeX manual or LaTeX Companion for explanation. Type H for immediate help. ... l.769 ...palette{\overarrow@\leftrightarrowfill@}}
! LaTeX Error: Command \overleftrightarrow already defined.
l.574 \hbox{\normalfont....}\vss}}}}

Does anyone know how to fix this problem?

Unfortunately, you have to put the \vec into the outermost braces:

\RequirePackage{amsmath}
\documentclass{svjour3}
\begin{document}
$$\vec{\dot{\hat{x}}} %OK
%\dot{\vec{\hat{x}}} %Failure
%\dot{\hat{\vec{x}}} %Failure$$
\end{document}

Yes, it actually redefines the macro from a former \mathaccent to the above. Both are more efficient than the clumsy definitions in svmono.cls. "I think I'll suggest that the maintainer of mathtools do it there as a 'temporary' expedient." – barbara beeton Sep 11 '14 at 20:14

Related LNCS/llncs submission notes: use \documentclass[runningheads, envcountsame, a4paper]{llncs} and only these options; some packages modify the paper format to letter if a4 is not specified. Do not change the default font of the document (that is, do not \usepackage{times} or other fonts) or use too many variations on fonts. Do not \usepackage[...]{babel}, and do not use fancy signs as in mathabx. Do not use a4wide or geometry. Do not modify \qed. Do not use \newpage for formatting reasons, trying to fit floating figures or tables. The word "and" between two authors is used without a comma. Inside the \authorrunning{} field of your article, please use only the initial of your first name, followed by a dot. LNCS recommends keywords within the abstract: before \end{abstract}, just type \keywords{your list}. Please use significant keywords. Check spelling errors and English mistakes for the whole paper. To move to LNCS proceedings just click on the menu LNCS. After submitting the files, EasyChair gives some messages. In case of technical problems, please contact adrian.dediu(at)urv.cat. (If the poster gets a prize, who gets it: the person presenting it or the first author?)

From de.comp.text.tex, "Interaction between unicode-math and mhchem" (Jan-Thomas Kühnert, 2011-11-03): "Hello, I'm currently having a go at LuaLaTeX."
2017-08-17 11:34:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.351212739944458, "perplexity": 11309.748706723401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00258.warc.gz"}
https://www.ctan.org/tex-archive/macros/latex/contrib/checklistings
# checklistings

User manuals and papers about programming languages usually contain many code samples, often with accompanying compiler messages giving the types of declarations or error messages explaining why certain declarations are invalid. The checklistings package augments the fancyvrb and listings packages for including source code in documents with a way to pass the source code through a compiler and also include the resulting messages in the document. The motivation is to check the code samples in a document for syntax and typing errors and to facilitate the inclusion of inferred types and compiler warnings or errors in a text. This package is intentionally very lightweight and, unlike packages like python, it is not intended for interacting with an interpreter or including the execution traces of code. While checklistings does not focus on a specific programming language, it is designed to work well with ML-like languages.

Using the package involves three elements:

1. The declaration \usepackage{checklistings}.
2. The verbatim environment \begin{chklisting}...\end{chklisting}.
3. The shell script checklistings.sh.

In a first pass, latex/pdflatex outputs code samples into files. The second pass is performed by checklistings.sh, which passes each file through a compiler to generate corresponding output files. In a third pass, latex/pdflatex reads from the generated files to incorporate the results into the document. A checklistings.hva file is provided for interoperability with HeVeA.

The checklistings package may be distributed and/or modified under the conditions of the LaTeX Project Public License, either version 1.2 of this license or (at your option) any later version. Please send comments, suggestions, and bug reports (with version number and the keyword "checklistings" in the subject of the message) to <tim@tbrk.org>. Please keep in mind that we prefer to keep checklistings simple and lightweight rather than to incorporate many different configuration and customization options. The source code is hosted on GitHub. This package was developed within the PARKAS team at Inria and the ENS.

Download the contents of this package in one zip archive (369.4k).

## checklistings – Pass verbatim contents through a compiler and reincorporate the resulting output

This package augments the fancyvrb and listings packages to allow the source code they contain to be checked by an external tool (like a compiler). The external tool's messages can be automatically reincorporated into the original document. The package does not focus on a specific programming language, but it is designed to work well with languages and compilers in the ML family.

Package: checklistings
Repository: https://github.com/tbrk/checklistings
Version: 1.0
Licenses: The LaTeX Project Public License 1.2
Copyright: 2015 Timothy Bourke and Marc Pouzet
Maintainer: Timothy Bourke
Contained in: TeX Live as checklistings
Topics: Listing, Callback
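A minimal sketch of a document using the chklisting environment (my example; the ML sample, the file name, and the compiler wired into checklistings.sh are assumptions, so consult the package manual for the actual configuration options):

    \documentclass{article}
    \usepackage{checklistings}
    \begin{document}
    % The source below is written out to a file on the first pass;
    % checklistings.sh then runs it through a compiler, and a later
    % pass pulls the compiler's message (e.g. an inferred type) back in.
    \begin{chklisting}
    let succ x = x + 1
    \end{chklisting}
    \end{document}

The three passes would then be: pdflatex on the document, the checklistings.sh script over the extracted sample files, and pdflatex again to reincorporate the compiler output.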
2022-08-07 19:56:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.293087363243103, "perplexity": 3667.673122226319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00067.warc.gz"}
https://math.stackexchange.com/questions/1100044/longest-path-in-n-times-n-grid
# Longest path in $n\times n$ grid

Consider an $n\times n$ grid graph. It is easy to construct (self-avoiding) paths in it of length $n(n+2)$, by starting at the upper left corner, going downwards to the lower left corner, going right by 1 edge, going upwards, going right by 1 edge etc. (zig-zagging). How does one rigorously show that this is indeed the maximum possible length of a path in the grid? Furthermore, is there a formula counting the number of said paths?

There cannot be any longer paths, since each intersection can only be used once, and there are $(n+1)^2 = n(n+2) + 1$ intersections. Any path of $i$ nodes is of length $i-1$, so a path has length at most $(n+1)^2 - 1 = n(n+2)$. The only exception is if (Hamiltonian) loops count as self-avoiding, and exist.
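For the counting question, no closed formula is given here, but a brute-force sketch (mine, only feasible for very small $n$) enumerates the maximum-length paths directly, as Hamiltonian paths on the $(n+1)\times(n+1)$ grid of intersections:

    from itertools import product

    def longest_paths(n):
        """Count maximum-length self-avoiding paths in the (n+1)x(n+1) grid
        graph by brute force (Hamiltonian paths on the vertices)."""
        V = list(product(range(n + 1), repeat=2))
        total = len(V)
        count = 0

        def extend(path, visited):
            nonlocal count
            if len(path) == total:
                count += 1
                return
            x, y = path[-1]
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx <= n and 0 <= ny <= n and (nx, ny) not in visited:
                    visited.add((nx, ny))
                    path.append((nx, ny))
                    extend(path, visited)
                    path.pop()
                    visited.remove((nx, ny))

        for start in V:
            extend([start], {start})
        return count  # directed count; each undirected path is counted twice

    print(longest_paths(1))  # 8 directed, i.e. 4 undirected, Hamiltonian paths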
2021-10-27 19:44:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6810991168022156, "perplexity": 361.51097835946325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00374.warc.gz"}
https://electronics.stackexchange.com/questions/352295/busy-wait-sleep-function-has-incorrect-timing-on-nios-cpu
# Busy-wait sleep function has incorrect timing on NIOS CPU

I have a Qsys system which includes a NIOS II/e CPU (which is the only master on the bus) and a 36kB chunk of 32-bit internal RAM. The RAM is configured to have the minimum read latency (which is 1 cycle). The whole system is clocked by a single 50MHz clock. I'm trying to use the usleep function to wait for 1 second and use clock to measure the time:

#include <sys/alt_stdio.h>
#include <unistd.h>
#include <time.h>

void main(void)
{
    clock_t start, end;
    start = clock();
    usleep(1000000);
    end = clock();
    alt_printf("usleep(1s) takes 0x%xms\n", (end - start) * 1000 / CLOCKS_PER_SEC);
}

Surprisingly, I get the value 0x592 (1426 decimal), which is not even close to 1 second. Toggling a pin before and after usleep and measuring the time externally confirms that the delay is indeed close to 1.4s. I understand that busy-waiting is affected by RAM latency and bus arbitration, which I carefully tried to exclude. What is the expected system configuration on which usleep works correctly? Is there an alternative delay function with μs resolution which would work on the configuration I have?

To answer the clarification request, all durations are wrong by about the same factor (1.4), which I suppose depends on the hardware configuration. Simply multiplying by a corrective factor is not a solution for me, since this is for a project which should allow the users to run Arduino code on any FPGA capable of implementing NIOS. The usleep implementation I'm using comes from the Altera "Small C library"; AFAIK it's a clone of newlib. Compiler options from the "Hello World Small" sample project reproduce the issue. I tried several combinations of options, and none of them fixed the problem. Again, since this is for a library, I'd like to find a solution which works with different (reasonable) compiler options.

• hmmm try nanosleep(1000000000); – Tony Stewart Sunnyskyguy EE75 Jan 26 '18 at 22:39
• @TonyStewart.EEsince'75 You mean nanosleep(struct timespec{1,0})? Sorry, I don't have it. I'm limited to newlib and Altera HAL. – Dmitry Grigoryev Jan 26 '18 at 22:55
• What happens if you wait for smaller time periods (eg. 100ms, 10ms)? – Bruce Abbott Jan 26 '18 at 22:59
• @BruceAbbott Pretty much the same: e.g. usleep(1000). – Dmitry Grigoryev Jan 26 '18 at 23:12
• I'm voting to close this question as off-topic because it has sat without clarification to an answerable state for nearly six months. Explain if you are trying to figure out why the clock test reads short or the scope test long. Explain how their error tracks for other durations. Explain which of the NIOS software stack options is in use and where the implementation of usleep() comes from. If possible, add debugging within the implementation. Also show clock values periodically from boot. – Chris Stratton Jun 17 '18 at 22:54

If this is the whole code and not pseudo code trimmed for this post, it is clear that you have a problem with the newlib implementation. The quick solution is making your own macro using clock():

clock_t end = clock() + delay;   /* delay expressed in clock ticks (CLOCKS_PER_SEC units) */
while ((end - clock()) > 0) ;    /* busy-wait until the deadline passes */

• It's a bit early to be jumping to that conclusion. For starters, the idea that it's using newlib is presumably one you are only inferring from experience of what you have usually seen on NIOS projects. For all we know, the actual implementation may have usleep() provided by an (RT)OS kernel.
And it's a little odd to blame the library immediately when it is surrounded by custom computing machinery and hardware drivers which have not been proven to be correctly configured for the details of that. – Chris Stratton Jun 17 '18 at 22:55
• The whole code is rather large to be posted or asked about, so I posted a complete example which reproduces the problem. I think I'll give your solution a shot. The only downside I see is that it won't produce very short delays reliably; I'll have to check if this is a problem. – Dmitry Grigoryev Jun 18 '18 at 7:25

I had the same problem and got the same result: for 1 second I had about a real 2 s pause. The reason is that it is realized by a software loop, I guess. At least, here is the comment from Intel FPGA (prev. Altera): https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base/solutions/rd08312011_772.html You can resolve your need for good timing by counting ticks, something like this:

#include <sys/alt_alarm.h>
...
void function_t() {
    static int prev = 0;  /* must persist between calls */
    int now;
    /* elapsed milliseconds since reset: ticks * 1000 / (ticks per second) */
    now = (int)(alt_nticks() * 1000 / alt_ticks_per_second());
    if ((now - prev) < period) return;  /* period: the desired interval in ms */
    prev = now;
    /* my action */
}

I used such a function in my round-robin code to act on a timer. But if you really want to use the timer precisely, you should use interrupts.
• I ended up implementing millis() using the timer and micros() using the busy-wait loop requiring manual calibration. Having a 1ns system timer is not practically possible with NIOS HAL, since that interrupt will prevent anything else from executing. – Dmitry Grigoryev Sep 21 '18 at 10:53
2020-06-02 10:46:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3067927956581116, "perplexity": 2704.794980261101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00180.warc.gz"}
http://mathhelpforum.com/discrete-math/72957-recursive-explicit-equations-how-write-them.html
# Math Help - Recursive and Explicit Equations...How to Write Them? 1. ## Recursive and Explicit Equations...How to Write Them? Hi, I have been struggling with understanding these equations, searching for a way to simplify them. My professor makes them seem impossible, and I don't understand her teaching methods...so going to her for help, isn't really an option. I have been researching for a different way to do them, but I just cannot figure it out. Help!! Here is a problem I am working on...if you could give me an example of something similar (I would rather find the answer to this one on my own!). Suppose that a population grows according to a linear growth model. The initial population is 200, and it grows by 25 every month. --Write a recursive equation that can be used to find the population at the end of any month. --Write an explicit equation that can be used to find the population at the end of any month. 2. A recursive formula is going to define a future term by previous ones. Something like $x_{n+1}=x_{n}+x_{n-1}$. That's the Fibonacci sequence. It relies though on knowing previous terms explicitly.
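A similar worked example with different numbers (mine, not from the thread), so the original exercise stays unspoiled: suppose a population starts at 500 and grows by 40 every month. Then

$$P_0 = 500,\qquad P_n = P_{n-1} + 40 \quad\text{(recursive)}$$
$$P_n = 500 + 40n \quad\text{(explicit)}$$

For instance, $P_3 = 500 + 40 \cdot 3 = 620$, which matches unrolling the recursion: $P_1 = 540$, $P_2 = 580$, $P_3 = 620$.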
2014-11-28 02:09:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6812151074409485, "perplexity": 390.66212220837895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009515.14/warc/CC-MAIN-20141125155649-00219-ip-10-235-23-156.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/90919/confusion-matrix-calculation-in-random-forest-classifier-in-r/91433
# Confusion matrix calculation in random forest classifier in R

After training using random forests on the iris dataset, I get an OOB error and a confusion matrix.

library(randomForest)
data(iris)
(model <- randomForest(Species ~ ., iris, ntree = 500, importance = TRUE, do.trace = 100))
model$oob.times

The help mentions that the confusion matrix is based on the OOB data. But I see the confusion matrix reports values using the same number of samples as used in the training input. Can anyone explain how exactly the confusion matrix reports its error values? Does it use the OOB error value and scale it to the training data-set size, or does it pick samples of OOB data and run the RF again?

• OOB is ambiguous here: this is about out-of-bag errors, not out-of-bootstrap (which would be for the non-aggregated models). – cbeleites supports Monica Apr 29 '14 at 7:06

## 1 Answer

tl;dr: Try setting the do.trace argument to 5 and see how the OOB error reacts.

To answer your first question, from the documentation:

err.rate (classification only) vector error rates of the prediction on the input data, the i-th element being the (OOB) error rate for all trees up to the i-th.

However, your second question gets to the heart of the matter, I think. The algorithm grows each tree on a bootstrap sample of the rows (drawn with replacement), so every row is out-of-bag for roughly a third of the trees, and each split in each tree additionally considers only a random subset of variables. This combination of random rows and random variables is what differentiates a random forest (a set of trees) from a single decision tree. The OOB prediction for a row aggregates only the trees that did not see that row during training.

model$confusion
#            setosa versicolor virginica class.error
# setosa         50          0         0        0.00
# versicolor      0         47         3        0.06
# virginica       0          4        46        0.08

sum(model$confusion[ ,1:3]) # 150
nrow(iris) # 150

Thus there is no need to "scale it to the training data set size" - every row receives an OOB prediction, so the confusion matrix covers the full data set.
2020-01-23 10:21:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24480608105659485, "perplexity": 6539.907326851096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00327.warc.gz"}
https://puzzling.stackexchange.com/questions/29840/what-is-the-result-of-the-sum
# What is the result of the sum?

$A_1$: 0H828JD1FGHODB82JOO
$A_2$: 01111011
$A_3$: (Sorry if you are colorblind..)
$A_4$: 83972748 & 8
$A_5$:

What is the result of the following calculation? $A_2 - A_1 + A_3 \times A_4 + A_5$

• In the last equation, I assume you replace each number with the corresponding answer? Also, should it come out to something meaningful? – Deusovi Mar 29 '16 at 12:37
• @Deusovi I've added some info, but to answer your two questions: yes, replace each number with the number the individual item represents. And no, the answer is a number, but it doesn't have any other meaning than just being the result of the sum. – Kevin Cruijssen Mar 29 '16 at 12:56
• I thought I was not colorblind, but I could barely see that number. – Lafexlos Mar 29 '16 at 13:03
• I've edited the question to make it clear which numbers were numbers and which were variables. Feel free to roll back if you need to. – Deusovi Mar 29 '16 at 13:03
• Why do you have to ask questions color blind people cannot answer? Tsk tsk. – Marius Mar 29 '16 at 13:18

2. This is binary for 123. (ASCII produces {, which is not a number as far as I'm aware.)
3. I'm colorblind, but the image description is "Add this one upside down". I think it's a 9, so the answer would be 6.
4. There is a hidden & 8 after lots of spaces. Taking & as bitwise AND, we get 8.
5. There is an equation hidden in the source; taking % as the binary modulus operator, you get 39.

• Why are you converting 123 to ASCII? I am new to this, so I want to make sure what I am missing. It does not say anywhere to convert. – LearningPhase Mar 29 '16 at 12:41
• Also, the same about the upside-down number. – LearningPhase Mar 29 '16 at 12:43
• @LearningPhase: It doesn't say to, but numbers between 32 and 128 are ASCII. Binary suggested "computers" to me, so I tried ASCII. – Deusovi Mar 29 '16 at 12:45
• @LearningPhase: The upside-down number comes from the source of the question. Click "edit" on the question (but don't make any changes) and you'll see what I mean. – Deusovi Mar 29 '16 at 12:47
• For the first I would go for 0 because, well, it's pointless in the formula. The rest is only a red herring (the rest is pointless). – Narmer Mar 29 '16 at 13:02

1. I would go for 0 because it's pointless in the formula. The rest is only a red herring (the rest is also pointless).
2. This is binary for 123. (ASCII produces {, which is not a number as far as I'm aware.) Thanks to @Deusovi
3. The image description is "Add this one upside down". Since it's a 9, the answer would be 6. Thanks to @Deusovi
4. There is a hidden & 8 after lots of spaces. Taking & as bitwise AND, we get 8. Thanks to @Deusovi
5. There is an equation hidden in the source; also the third image points to a $2px*2px$ GIF. Plugging them into the equation we get the total result of $[[((5\%3)^4+7/2)*2][2*2]][2*2] = 624$

So the final result is: $123 - 0 + 6*8 + 624 = 795$

I feel something's wrong...
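A quick mechanical check of the two computable steps in the final answer, using the values the answers themselves propose (nothing here is derived independently):

```python
# Sanity check of the decoding and arithmetic proposed in the answers.
print(int("01111011", 2))       # 123: the binary clue for A2
print(123 - 0 + 6 * 8 + 624)    # 795: the proposed final result
```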
2019-06-24 14:13:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6686951518058777, "perplexity": 948.6461112656737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00315.warc.gz"}
https://math.meta.stackexchange.com/questions/2503/canonical-1-1-answer
There are quite a few questions which involve "showing" $1=2$ or $0 = 1$ or $-1 = 1$ via incorrect algebraic manipulations of $i = \sqrt{-1}$. Does anyone know if there is a "canonical" question/answer (and if not, would one of our great educators write an answer) that is easily generalisable to at least the usual arithmetic operations? I'm wondering because of the question "Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?". The suggested "duplicate" target is "Why $\sqrt{-1 \times {-1}} \neq \sqrt{-1}^2$?", but I think that for someone who is having difficulties seeing why his/her algebraic manipulations are wrong, the connection between the two questions may not be immediately apparent. (In other words, the answer to the second question may not help.) And ideally a question of this type should be added to our list here: http://meta.math.stackexchange.com/questions/1868/list-of-generalizations-of-common-questions

• I think that canonical answer should be $\mathbb Z/2\mathbb Z$ :-D – Asaf Karagila Jul 3 '11 at 18:28

I $\TeX$-ified my answer to "-1 is not 1, so where is the mistake?" and added explicit mention of the newer question that you're asking about and the multiplication-of-radicals property. Offhand, I think that trying to get much more general than what I've written there is, as you suggest, likely to be too general for the people asking such questions.
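For quick reference, the fallacy these questions share can be written in one line; this summary is an editorial framing, not a quote from the linked answer. The step marked "?" is the invalid one, because $\sqrt{ab}=\sqrt{a}\,\sqrt{b}$ is only guaranteed for $a,b\ge 0$:

$$-1 = i^2 = \sqrt{-1}\,\sqrt{-1} \overset{?}{=} \sqrt{(-1)(-1)} = \sqrt{1} = 1.$$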
2021-05-16 00:44:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.824084460735321, "perplexity": 386.98763742223827}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00114.warc.gz"}
https://dsp.stackexchange.com/tags/synchronization/new
# Tag Info

## New answers tagged synchronization

In the physical layer of a network, data (a stream of $0$'s and $1$'s) often travels serially from one device to another (from one node in the network to another node in the network) as evidenced by names such as USB (Universal Serial Bus). There are various clocks in the physical layer that need to be synchronized in both frequency and phase, and the ...
2021-07-30 05:23:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6574252843856812, "perplexity": 1415.3414906692553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00009.warc.gz"}
https://www.physicsforums.com/threads/about-quarks.933988/
1. Dec 10, 2017 ### friend I understand there are three generations of quarks, which have the same charge but different mass. My question is, in a single generation, how many different kinds of quarks are there? For example, in the first generation there are the up quark and down quark, each of which has an antiquark. So far, this is four different quarks in the first generation. Are there other properties in the first generation of quarks that would account for more first generation quarks? Thanks.
2. Dec 10, 2017 ### dukwon Sure, you could also count the 3 different values of colour charge.
3. Dec 10, 2017 ### friend So how many different first generation quarks does that give us? As I understand it, quarks interact with the electromagnetic force and with the weak nuclear force. Does that mean each quark has an electromagnetic charge and a weak nuclear charge quantum number?
4. Dec 10, 2017 ### Staff: Mentor It depends on how you want to count them. 2, 4, 6 and 12 are all somewhat justifiable answers. Yes, and you can look them up.
5. Dec 10, 2017 ### friend I've tried to look it up, but those articles don't distinguish very well the case of a single generation.
6. Dec 10, 2017 ### friend I find on Wikipedia, The black dots seem to be for gluons. The colored triangles seem to be for quarks. Is this just for a single generation of quarks? The diagram seems to indicate that the upside-down triangles represent the anticolor charges of the right-side-up ones. Is this correct? If so, does the diagram indicate that whenever the color charge is reversed there is also a reversal of electric charge? Thanks.
7. Dec 10, 2017 ### mathman There are six basic different quarks: down, up, strange, charm, bottom, and top. Each has an antiquark. Also each comes in three colors (red, green, blue - labels which have nothing to do with color as such). You can count them any way you want. Color charge and electric charge are not connected. up, charm, and top have charge +2/3; down, strange, and bottom have charge -1/3. All quarks may come in any color.
8. Dec 10, 2017 ### friend OK, thanks. So I take it that the diagram in my previous post was for one particular quark, say the up quark, for example. So if I'm understanding you correctly, this means there are six possible ways to assign electric charge and color charge to the up quark. (+, - electric charge times red, green, blue color charge.) Is there an anticolor charge? The wikipedia site I link to says there is an anti-red, anti-green, and anti-blue color charge as well. Is this right? Or is anti-red just the red charge with the opposite electric charge? I'm still a little confused. Reading the linked article, I read, "Antiquarks have the opposite charge to their corresponding quarks; up-type antiquarks have charges of − 2⁄3 e and down-type antiquarks have charges of + 1⁄3 e". This tells me that antiquarks differ from quarks by electric charge, e. I also read, "Every quark carries a color, while every antiquark carries an anticolor." which tells me that there is such a thing as anticolor, but that that property changes with electric charge, so that there are still only six possible ways to assign electric and color charge to, say, an up quark. Could someone please confirm this? Here's a question that may settle it: do mesons have electric charge? Mesons have a quark and an antiquark. If this involves anticolor but not opposite electric charge, then perhaps that answers my question. Thanks. Last edited: Dec 10, 2017 9.
Dec 10, 2017 ### Staff: Mentor Some do: $\pi^+ = u \bar d$. Some don't: $J/\Psi = c \bar c$.
10. Dec 10, 2017 ### friend If there were no mesons with electric charge, then I'd say that anticolor is the electric negation of the color charge. But if there are mesons with electric charge, I don't know if I can say that an anticolor is the electric negation of a color charge. Any help out there?
11. Dec 11, 2017 ### friend So I think there is a correction for first generation quarks. They have either up or down flavor, +2/3e or -1/3e, and red, green, or blue color charge. So that means there are 2 × 2 × 3 = 12 different first generation quarks, right?
12. Dec 11, 2017 ### Orodruin Staff Emeritus These are equivalent based on the Gell-Mann-Nishijima formula $Q = Y/2 + T_3$. There is no up quark with charge -1/3. If you want to count degrees of freedom (and the argument can be made for this - I would therefore add 24 to the list of @mfb), then there are: • Quark-antiquark (2) • Spin/handedness (2) • Colour (3) • Up/down type (2) which in total would make 24 per generation.
13. Dec 11, 2017 ### friend So maybe we can construct a table for first generation quarks only. Now that up or down is correlated with electric charge and color charge is correlated to electric charge, how many first generation quarks are there with just these quantum numbers?
14. Dec 12, 2017 ### mathman No! The up quark has a +2/3 charge and the down quark has a -1/3 charge. Both come in all 3 colors, so the total is 6. If you add in the first generation anti-quarks, then you can get 12.
15. Dec 12, 2017 ### Orodruin Staff Emeritus From a theoretical point of view, I would also count left- and right-handed separately (see #12). After all, the left- and right-handed components are parts of different SU(2) representations. There are many ways to count here, but for me the more natural one is to count degrees of freedom, of which there are 24 per generation. Before electroweak symmetry breaking you have • The SU(2) doublet and SU(3) triplet $Q_L$. (6 Weyl fermions = 12 degrees of freedom) • The SU(2) singlets and SU(3) triplets $u_R$ and $d_R$. (6 Weyl fermions = 12 degrees of freedom) Therefore, the total number of degrees of freedom among the quarks is 24.
16. Dec 12, 2017 ### friend I understand that the quarks interact with the weak force particles. Are all the quarks affected equally by all the weak force particles, W+, W-, and Z0? Or does each weak force particle interact differently with each quark? Thanks again for your help.
17. Dec 12, 2017 ### Orodruin Staff Emeritus No. The weak force treats left- and right-handed particles differently. The couplings to the Z also depend on the charge of the particle and whether it is up or down type. The Ws couple (left-handed) up and down type quarks with a strength proportional to the elements of the CKM matrix.
18. Dec 13, 2017 ### friend Do quarks rotate into each other like neutrinos? Do quarks decay into only weak particles, say, a W- and Z0?
19. Dec 13, 2017 ### Orodruin Staff Emeritus Are you referring to neutrino oscillations or neutrino mixing? They are related but different things. Neutrino mixing (or more accurately, lepton mixing) is necessary for neutrino oscillations to occur, and quarks mix in much the same way. However, the big difference is that the neutrino mass states are typically quite degenerate in mass, and so you will produce a linear combination of them that will continue to have coherence over long distances.
For quarks however, the large mass differences mean that the different mass states will lose coherence practically immediately, leading to no quark flavour oscillations (as you can tell which mass state has been produced by looking at the kinematics of the process). What does happen in the quark sector due to CKM mixing is neutral meson oscillations, for example between $K_0$ (quark content $d\bar s$) and $\bar K_0$ ($s\bar d$).
20. Dec 15, 2017 ### friend OK. So I'm hearing that quarks can oscillate a little between mass generations (flavor). Do they oscillate between up and down type, or between color charge or electric charge? Thanks again.
21. Dec 15, 2017 ### Orodruin Staff Emeritus I do not understand how you got that from what I said. What you have is neutral meson oscillations.
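As a worked check of the Gell-Mann-Nishijima formula quoted in post #12: the thread gives $Q = Y/2 + T_3$ but not the hypercharge values, so the standard assignment $Y = 1/3$ for the left-handed quark doublet is supplied here as an outside input:

$$Q_{u_L} = \frac{1/3}{2} + \frac{1}{2} = +\frac{2}{3}, \qquad Q_{d_L} = \frac{1/3}{2} - \frac{1}{2} = -\frac{1}{3},$$

matching the charges stated in posts #7 and #14.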
2019-01-22 07:54:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6878200173377991, "perplexity": 1188.1680567360618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583831770.96/warc/CC-MAIN-20190122074945-20190122100945-00334.warc.gz"}
https://www.physicsforums.com/threads/set-theory-cardinality-question.336241/
Set theory cardinality question

Can anyone please give a really explicit proof (omitting no steps), and in as simple words as possible, that any infinite set can be written as a union of disjoint countable sets? Thank you.

Every infinite set has a countably infinite subset (this is a theorem of ZFC). We will construct a function f from an initial segment of the ordinals to pairwise disjoint countably infinite subsets of X whose union is X. Let $X_0 = X$ and let f(0) be any countably infinite subset of $X_0$. Then define $X_{i+1} = X_i \setminus f(i)$, let $X_i = \displaystyle{\cap_{j<i} X_j}$ for a limit ordinal i, and, as long as $X_i$ is infinite, let $f(i)$ be a countably infinite subset of $X_i$ (the possibility of making all these choices collectively relies on the Axiom of Choice). The recursion must stop at some ordinal $\alpha$ for which $X_\alpha$ is finite; otherwise a fresh countably infinite set would be removed at every ordinal stage, so X would have cardinality at least that of every ordinal, i.e. X would be larger than all cardinals, which is impossible. Finally, absorb the finite remainder $X_\alpha$ (possibly empty) into f(0): the sets $f(0)\cup X_\alpha, f(1), f(2), \ldots$ are pairwise disjoint countable sets whose union is X.
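As a display, the transfinite recursion of the argument above (a restatement of the prose, with $\alpha$ the stage at which the remainder becomes finite):

$$X_0 = X,\qquad X_{i+1} = X_i \setminus f(i),\qquad X_\lambda = \bigcap_{j<\lambda} X_j \ \text{ for limit } \lambda,$$
$$X = \Big(\bigcup_{i<\alpha} f(i)\Big)\cup X_\alpha \quad\text{(a disjoint union, with } X_\alpha \text{ finite).}$$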
2021-09-26 14:03:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923567771911621, "perplexity": 197.11585601667602}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00090.warc.gz"}
http://math.stackexchange.com/questions/186918/morse-functions-are-dense-in-mathcalc-inftyx-mathbbr
# Morse functions are dense in $\mathcal{C}^\infty(X,\mathbb{R})$.

In Shastri's Elements of Differential Topology, p. 210-211, there is written: Why do we get a Morse function $f_u$ on $X$? We know that for any $f\!\in\!\mathcal{C}^\infty(X,\mathbb{R})$, there is some $a\!\in\!\mathbb{R}^N$, such that $f_a(x)=f(x)\!+\!\langle x,a\rangle$ is a Morse function on $X$. Since $X$ is compact, the function $|\langle\_,a\rangle|$ attains its maximal value on $X$. Then, we define $$b := \frac{a\varepsilon}{\max_{x\in X}|\langle x,a\rangle|},$$ and we have $\sup_{x\in X}|f\!-\!f_b|=\sup_{x\in X}|\langle x,b\rangle|=\frac{\sup_{x\in X}|\langle x,a\rangle|}{\max_{x\in X}|\langle x,a\rangle|}\varepsilon=\varepsilon$. But why is this $f_b$ a Morse function on $X$? Its differential is $D(f_b)_p=D(f)_p+b$, so $p\!\in\!X$ is a critical point iff $D(f)_p\!=\!-b\!=\!-\frac{a}{\ldots}$. On the other hand, the critical points of $f_a$ are those for which $D(f)_p\!=\!-a$. I do not see how to make a conclusion here.

- Ah, by compactness of $X$, the function $\|x\|$ is bounded on $X$, and there exists $$a\!\in\!\mathbb{B}^N(0,\frac{\varepsilon}{\max_{x\in X}\|x\|}),$$ for which $f_a$ is a Morse function. Then $$\sup_{x\in X}|f(x)\!-\!f_a(x)|= \sup_{x\in X}|\langle x,a\rangle|\leq \sup_{x\in X}\|x\|\,\|a\|\leq \varepsilon.$$ Sorry for asking unnecessarily. – Leon Lampret Aug 26 '12 at 0:30
- You can answer your own question so that the post is useful to someone else. – leo Aug 26 '12 at 0:39
- Ok. But that segment from the book is confusing, or am I wrong? Why would we need $|\langle x,a\rangle|\leq\varepsilon$? What was the author's original argument? – Leon Lampret Aug 26 '12 at 0:43
- @LeonLampret The conclusion of Remark 8.1.2 reads not as "we get a Morse function", but "we get a Morse function such that $\dots<\epsilon$". So it seems that the author will need this $<\epsilon$ elsewhere. – user31373 Aug 26 '12 at 1:24

By compactness of $X$, the function $\|x\|$ is bounded on $X$, and by the theorem there exists $$a\;\in\;\mathbb{B}^N\Big(0,\frac{\varepsilon}{\max_{x\in X}\|x\|}\Big),$$ for which $f_a$ is a Morse function. Then, by the Cauchy-Schwarz-Bunyakovsky inequality, $$\sup_{x\in X}|f(x)\!-\!f_a(x)|= \sup_{x\in X}|\langle x,a\rangle|\leq \sup_{x\in X}\|x\|\,\|a\|\leq \varepsilon.$$
2014-03-17 13:11:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8779386281967163, "perplexity": 165.46132908296698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705611/warc/CC-MAIN-20140313024505-00021-ip-10-183-142-35.ec2.internal.warc.gz"}
http://codologic.com/forum/index.php?u=/topics/35
416 23 Gerry posted Mar 11 '15 at 9:57 pm 2 known bugs for me: The "go to top" slide-in on the right top corner of the page, right next to the avatar, that appears when you scroll down, it's not appearing. If I set the "When I create a topic" on the preferences page lower from 3 to 2, and then back to 3, it keeps loading infinitely... However, it will be changed if I refresh the page. But for some users, they dont really ge recent by Gerry · Mar 13 '15 at 1:10 pm 46 0 Hello, Something along the lines of ... if URL rewriting is enabled, the redirect URL should be http://MY_FORUM.com/uni_login/authoriz 14 0 stolzen posted Mar 12 '15 at 12:10 pm I like the new "pages" feature: it's really nice. What I thought would also be nice is to allow users to create such pages using markdown. This could be named "articles" and everybody can create an article about something. There could be FAQs, links, just longer post answers that would make an article rather than a post. So basically it can be just usual posts but presented slightly differently 54 2 CrowsNest posted Mar 11 '15 at 5:25 pm I've come across a bug with pages: the Title doesn't appear and the text box is very high up. Have I configured something wrong with HTML or is it a bug server-side? Thanks! http://forum.crows-perch.com/index.php?u=/page/1/forum-rules Additionally, when attempting to post after the update, I get this error: recent by stolzen · Mar 12 '15 at 11:58 am 99 3 The last entry in the Codoforum docs changelog lists only what's new in version 3.0 - nothing about the newer versions released since. Updating those may help us understand what new features are available, and what bugs are supposed to have been fixed - and this will help us make suggestions and bug reports that are more relevant and useful to you. It would also be great to see the Roadmap i recent by Gerry · Mar 11 '15 at 9:15 pm 78 2 Johny30238 posted Mar 8 '15 at 2:23 am I'm currently trying to set up Freichat with 3rd party software, and I would like a little assistance getting one thing set up. If I am able to get this to work, then I'll most likely be purchasing this software. The program I am using uses cookies primarily for the login... and I just can't get it to set the userid properly. I've tried multiple times to paste the login.php code but apparently thi 66 0 stolzen posted Mar 10 '15 at 8:15 pm Hello, I was wondering if you could add more social login possibilities? Specifically, I'm talking about LinkedIn and GitHub, but it would be nice to have it configurable - say, I don't want to have a twitter login, so I just untick it and the button disappears 47 3 stolzen posted Mar 9 '15 at 9:09 am Hello, you previously mentioned the release on the 8th of March. That made me wonder how to back up before updating the forum version? By just manually taking the database dump and copying all the php files? Or is there a smarter way of doing this? recent by adesh · Mar 10 '15 at 6:10 pm 93 1 stolzen posted Mar 9 '15 at 7:18 pm Hello, thanks for releasing the new version of the forum. When I tried to update my codoforum 3.1 from the admin panel, I got the following: 1.1> Connecting to server: codoforum.com ... ####: Initiating cURL request 1.3> Response: Raw: 3.2|codoforum.v.3.2.patch.zip|b99263fc2771c53f9e0f53deaf356a2a 136 10 stolzen posted Mar 3 '15 at 3:42 pm Hello. Thank you very much for such a nice forum engine! I just installed it and I like it very much! However, I'm having trouble with uploading avatars.
I can upload an image and it works fine for a while, but when I go to my profile page or log out, it resets to the default one: in my case it's "S" on some background. Also, if I keep refreshing my account page, I notice that my defa recent by stolzen · Mar 9 '15 at 8:17 pm 72 1 alinnert posted Mar 8 '15 at 8:45 am Hi, I've found Codoforum and had to try it. But I've encountered two errors so far. The first one appeared during the setup process. When it tried to connect to the database after I entered the data, it told me that the database test could not be found. Actually, I didn't touch the default value of codoforum. It worked with a second try, however. The second error occurs every time I delete a categor recent by adesh · Mar 8 '15 at 4:33 pm 147 9 itlo posted Sep 4 '14 at 10:16 pm There are a few issues with the mobile/responsive version of CF I would like to talk about. First, the mobile CF doesn't use the whole screen. You know, mobile devices have small screens, and not using all this small space is a bit of a problem and it doesn't look that good. The forum has to expand and use every valuable pixel. Second, the categories have to be removed from the fr recent by Gerry · Mar 7 '15 at 10:42 pm 66 2 Clicking the "Mark All as Read" button in the forum homepage - the "All Topics" page - also collapses the categories list, hiding the category names and new topic counts. The user is left without knowing whether their click worked. A second click is needed in order to expand the category list again, and see that the pill counters have gone. This seems to happen because the "Mark All as Read" recent by adesh · Mar 7 '15 at 10:16 am 109 2 Giuseppe96 posted Mar 6 '15 at 5:32 pm Hi guys, I have a problem. I've followed the installation steps of Freichat (the last available version) but I can't go on because I'm stopped here. Why can't it verify the user avatar? I mean, the url of the avatar is stored in the Users table (the avatar field name is avatarLink) and it shows the avatar. I've waited but nothing seems to happen. Help me, please, and thanks everybody recent by Giuseppe96 · Mar 6 '15 at 6:06 pm 39 2 Gerry posted Mar 6 '15 at 11:14 am Is it possible to, for example: If this is [forum-root]/test.php: <?php echo 'hello'; ?> Is it possible that when you are not logged in, you don't get "hello", but "sorry, you are not logged in"? With a simple code snippet on top of test.php? It is possible with phpbb with a simple code snippet on top of the file... I want to add a gallery to my site, but I don't want guests to be able to see this page. recent by kaio.dvm · Mar 6 '15 at 5:20 pm 99 5 stolzen posted Mar 3 '15 at 3:50 pm How can I add MathJax support? I would like to be able to render math formulas on my forum. Here's what I've done so far: I created a block in the block_head region with this content: <script type='text/x-mathjax-config'> MathJax.Hub.Config({ tex2jax: {inlineMath: [['$', '$'], ['\$','\$']]} }); </script> <script src='http://cdn.mathjax.org/mathjax/latest/MathJax.js?conf recent by adesh · Mar 6 '15 at 5:15 pm 94 9 Gerry posted Mar 4 '15 at 4:26 pm Could this be a changeable button/image in the future? And add some space between an image in a post and the read more text/button/image. (I personally would love it if the read more were the same button as "Reply" or "Save/Post", that blue button.) Many users don't see the read more text, since it is so small and it's too close to the topic text.
Or, could you tell me how I change read more in recent by Gerry · Mar 6 '15 at 10:08 am 36 1 stolzen posted Mar 5 '15 at 8:37 am I'd like to see an "about me" section in the user profile details in the future, along with some other stuff like contact details, web page, etc. It doesn't seem to be very hard to create it, so I'd be able to contribute the code to do this, if only I knew how to contribute. recent by adesh · Mar 5 '15 at 10:38 am 84 2 paolspo posted Feb 27 '15 at 1:16 pm hello there, I purchased FreiChat Super Plus etc. 3 months ago, and the folder is still here on my computer; I tried it so many times in many ways and it doesn't work at all. On mobile? mammamia, terrible disaster. So don't waste time (= money), because it will not work; also, they don't know how to fix this product. recent by Beany · Mar 3 '15 at 8:27 pm 81 5 Gerry posted Mar 1 '15 at 1:36 pm Can't this be done in some future release? It's when you are a new user and have confirmed your email for the first time. And if you are a user who didn't write any messages on the forum yet. It looks cleaner to say there are no recent messages or something like that, instead of the loading circle and then nothing. Also, I noticed you changed Latest login into Never some releases back becau recent by Gerry · Mar 3 '15 at 12:15 pm 76 4 arista posted Feb 26 '15 at 1:03 am Hi, I want to install freichat on my website. I am using the jcow template. I have copied the code to the jcow template but I can't see the chat line on my site. How do I fix this problem? Can you help me? recent by arista · Mar 3 '15 at 7:32 am 137 4 CrowsNest posted Feb 27 '15 at 7:15 pm @adesh, when will the bug/issue with the plugins not working be fixed? Also, when the next update goes out, is it possible to get an "Update List" of sorts, so we know what's new? Thanks! recent by adesh · Mar 2 '15 at 3:16 pm
2017-08-22 01:39:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46641111373901367, "perplexity": 1990.4731273528218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00157.warc.gz"}
https://docs.dodoex.io/english/dodo-academy/pmm-overview/details-about-pmm
Algorithm Introduction

The PMM algorithm has to answer how prices change according to inventory. We observe two characteristics from the market:

1. Most of the liquidity is concentrated around the mid-market price, i.e., the price changes non-linearly with respect to the inventory.
2. There should be liquidity even if the price deviates far from the mid-market price, but it will be very scarce.

We therefore introduce a nonlinear term into the price curve to make the depth distribution more consistent with the market and more flexible. The price curve equation is as follows:

$$P = i\Big(1-k + k\Big(\frac{B_0}{B}\Big)^2\Big)$$

where $i$ is the first parameter, the "guide price"; $k$ is the second parameter, the "slippage factor"; $B$ denotes the current token inventory; $B_0$ denotes the equilibrium inventory (which can be interpreted as the exposure you are willing to hold); and $\frac{B_0}{B}$ indicates how far the current token inventory has shifted from the equilibrium state. Note that "equilibrium" does not mean that both tokens are worth the same. What constitutes "equilibrium" is a design choice, and everyone can set what they consider "equilibrium".

Under this formula:
- When $k=1$, this curve is exactly the same bonding curve as an AMM.
- When $0<k<1$, this curve concentrates liquidity in the neighborhood of $i$ more than an AMM does.
- When $k=0$, this curve degenerates to a fixed price.

We call the valuation token in a pair the Quote Token and the transaction token the Base Token, abbreviated as $Q$ and $B$ respectively. Quote tokens and base tokens have equal status in this system, i.e., they are symmetric. Here, $i$ refers to how many $Q$ tokens each $B$ token can be exchanged for. So, for the case where the quote-token inventory is short, we replace multiplication with division using the symmetric approach:

$$P=i/\Big(1-k+\Big(\frac{Q_0}{Q}\Big)^2 k\Big)$$

To put it in order, the price curve of PMM corresponds to the formula $P=iR$, where $R$ is determined by the following rule:
- If $B<B_0$, then $R=1-k+(\frac{B_0}{B})^2 k$
- If $Q<Q_0$, then $R=1/(1-k+(\frac{Q_0}{Q})^2 k)$
- In all other cases, $R=1$

Design ideas

The PMM algorithm is not just an empty idea, but has a complete evolution; if you want to know more, please refer to this article.
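To make the piecewise rule concrete, here is a minimal Python sketch of $P = iR$. The function and argument names are illustrative; this is a reading aid for the formulas above, not DODO's contract code:

```python
def pmm_price(i, k, base, base0, quote, quote0):
    """Marginal price P = i * R for the piecewise R defined above.

    i: guide price, k: slippage factor in [0, 1],
    base/base0: current and equilibrium base-token inventory,
    quote/quote0: current and equilibrium quote-token inventory.
    """
    if base < base0:        # base token is short: price rises above i
        r = 1 - k + (base0 / base) ** 2 * k
    elif quote < quote0:    # quote token is short: price falls below i
        r = 1 / (1 - k + (quote0 / quote) ** 2 * k)
    else:                   # both inventories at equilibrium or above
        r = 1
    return i * r

print(pmm_price(100, 1.0, 8, 10, 1000, 1000))  # 156.25: AMM-like curve (k = 1)
print(pmm_price(100, 0.0, 8, 10, 1000, 1000))  # 100.0: fixed price (k = 0)
```

Comparing the two calls shows the role of $k$: at $k=1$ a 20% base shortfall moves the price by more than 50%, while at $k=0$ the price stays pinned at the guide price $i$.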
2022-11-29 18:37:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 22, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6389817595481873, "perplexity": 2082.5653238505256}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00448.warc.gz"}
http://instal.bialystok.pl/junior-edition-areluhm/83f6a6-masyu-puzzle-rules
# masyu puzzle rules

Masyu is a simple, aesthetic logic puzzle played on a grid, designed and published by Nikoli. The purpose of its creation was to present a puzzle that uses no numbers or letters and yet retains depth and aesthetics; this also makes Masyu puzzles truly language-independent. Don't let the simple rules fool you: these challenges are guaranteed to keep puzzle fans delightfully distracted for hours on end.

Rules: Draw a single, non-intersecting closed loop that passes through all circled cells. The sections of the loop run horizontally or vertically between the center points of orthogonally adjacent cells. The loop may not cross or touch itself, and it need not pass through every cell. The circles constrain it as follows:
- White circles must be passed through in a straight line, but the loop must turn in the previous and/or the next cell.
- Black circles must be turned upon, but the loop must travel straight through both the next and the previous cells in its path.

(Brief) History of Masyu: Masyu was first published in 2000 by Nikoli in quarterly Communication 90; the original authors were 矢野龍王 (Yano Ryuoh) and アセトニトリル ("Acetonitrile"). The original name was Shiro Shinju Kuro Shinju ("White Pearl Black Pearl"), but a misreading of the kanji for shinju by the president of Nikoli gave it the name Masyu, meaning "Evil Influence". Masyu followed an earlier loop puzzle that used only the white circle rule.

History of this example: This "twisted symmetry" Masyu was written by Thomas Snyder for the 20/10 Puzzle Decathlon. Theme: Edge Cases. Author/Opus: This is the 193rd puzzle from our contributing puzzlemaster Prasanna Seshadri.

Answer String: Enter the length in cells of the horizontal loop segments from left to right in the marked rows, starting at the top. If the loop only has vertical segments in the marked row, enter 0. Separate each row's entry with a comma.
- Clarify the meaning of "length in cells": it could be taken to mean that the top row should be 26, since the first segment occupies 2 cells and the second segment occupies 6; but since the loop passes from midpoint of cell to midpoint of cell, the length of the loop segments will be one less than that count.
- This example has the key "15,222,23,51".

Design rules for contributors: A Grandmaster Masyu will have a unique solution that can be reached by logic alone. Generally, a Grandmaster Masyu should have an interesting visual theme or an interesting solution, but no requirements of symmetry exist. Sizes from 10×10 and above are recommended (maximum aspect ratio of 2:1 if rectangular).

Variants mentioned on this page:
- Unequal lengths: standard Masyu rules, but any two consecutive line segments cannot have the same length (i.e., on both sides of any turn, the loop must travel different lengths).
- Subway Masyu: deduce the rules, which are an extension of standard Masyu; make a closed loop that visits all the given circles and does not visit the same station twice.
- Masyu reconstruction: an 8x8 Masyu puzzle has been broken into four pieces; can you put the pieces back together to make a proper Masyu puzzle with a unique solution?
- Three-dimensional Masyu: the five squares depict the layers of a $5\times5\times5$ cube; the goal is a single loop in 3D space that passes through centres of cells and makes 90-degree turns only, with standard Masyu rules applying otherwise.
- Masyu with stars: additionally, each row and column must contain exactly two stars; stars cannot be placed in adjacent cells that share an edge or corner, and all stars must be on empty cells which are not part of the loop.

Sources for Masyu puzzles: Follow this link for classic Masyu puzzles on this website and this link for variations on Masyu puzzles. More Masyu puzzles can be found on nikoli.com, where the puzzle originated, in The Art of Puzzles, and in our beginner-friendly book Logic Puzzles 101. "Masyu by Nikoli" for the Nintendo 3DS contains 50 puzzles, and there are hundreds of free printable Masyu puzzles in Masyu Puzzles by KrazyDad, Volume 1.
In its path is also fun Video Tutorial, More logic puzzles white!, enter 0 is to draw a single group Masyu goal and rules are a little bit complicated once! Below is an example of a $5\times5\times5$ cube or a … this an. At a corner must have different lengths solve it quickly worked example Kakuro is a logic puzzle you... ) is a logic puzzle with simple rules and challenging solutions different lengths puzzlemaster Prasanna Seshadri set of relatively Masyu... Pearls ) belongs to the size of the loop only masyu puzzle rules vertical segments in puzzle! Masyu was written by Thomas Snyder for the 20/10 puzzle Decathlon no additional rules \$.! Circles indicate corners only has vertical segments in the marked row, enter 0: PDF a loop-drawing logic with! And white circles than you can imagine, and right on 2019-04-21 rob... Used only the white and black circles must be connected by a worked example is. The nodes before and after a set of relatively easy Masyu and masyu puzzle rules! Or Less '' is a logic puzzle played on a black node has vertical segments in the marked,! The goal is to draw a single closed loop which does not or. Imagine, and right use the… here 's a Masyu puzzle Step 1: rules draw! Video Tutorial, More logic puzzles for kids are the rules of 'Masyu ' Moving between edge-to-edge neighbouring cells draw. If you are new to this puzzle type, here are our easiest Masyu on. ( maximum aspect ratio of 2:1 if rectangular ) numbers from 1 through 9 2 logic alone, logic! Center points of orthogonally adjacent cells: - the numbers next to that clue boost your skills. To rotate the tiles on the next two pages and slimy app the puzzles themselves are very.. Cells can not be adjacent horizontally or vertically between the dots to form a single closed loop passes! Crossings or branches a loop-drawing logic puzzle designed and published by Nikoli 1 ; example 1 ; 1! ) is a loop-drawing logic puzzle type, here are our easiest Masyu masyu puzzle rules: follow this link for Masyu. '' in the puzzle with simple rules and challenging solutions separate each row s! On 2019-04-21 by rob can imagine, and discovering tactical moves is also fun “ twisted symmetry ” Masyu written. Used only the white Circle rule me of followup comments via e-mail aquarium is a good puzzle, but rules. Constrain how the loop must travel straight through the next and the loop must make proper. Any two straight line segments that meet at a corner must have different lengths for an explanation of the in! Straight in both cells immediately before/after each black Circle, the path must turn on a rectangular grid which! Two stars through 9 2 opposite direction ) an interesting solution, but the,! Less '' is a logic puzzle with a unique solution no requirements of symmetry exist a 5\times5\times5. My mom enjoyed Masyu from logic puzzles - black circles must be connected by nonstop., things fall into place any numbers or letters and yet retains depth and aesthetics black.. Loop has a turn of 90° puzzles masyu puzzle rules: 1 be adjacent horizontally or vertically between the dots to a. Pipes also known as pearls ) belongs to the same family as Slitherlink non-intersecting loop passes... Can solve it quickly followup comments via e-mail where the problem lies this “ twisted symmetry ” Masyu written! All of the three Routes is a puzzle where you will draw a single connected loop example. Is the 193rd puzzle from our contributing puzzlemaster Prasanna Seshadri next and cells. 
Rules are listed below its creation was to present a puzzle in which some of grid!
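Both circle constraints are purely local, so a candidate loop can be machine-checked in a few lines. Below is a minimal Python sketch (my own representation, not from any solver: the loop is an ordered list of (row, col) cells visited by the closed loop, and circles maps a cell to 'W' or 'B'):

```python
# Minimal Masyu rule checker (illustrative sketch).
# loop: ordered list of (row, col) cells visited by the closed loop.
# circles: dict mapping (row, col) -> 'W' (white) or 'B' (black).

def check_masyu(loop, circles):
    n = len(loop)

    def direction(a, b):
        return (b[0] - a[0], b[1] - a[1])

    def turns_at(i):
        # True if the loop changes direction at loop[i]
        prev, cur, nxt = loop[i - 1], loop[i], loop[(i + 1) % n]
        return direction(prev, cur) != direction(cur, nxt)

    pos = {cell: i for i, cell in enumerate(loop)}
    for cell, kind in circles.items():
        if cell not in pos:
            return False                      # every circle must be on the loop
        i = pos[cell]
        if kind == 'W':
            # straight through the circle, turn in previous and/or next cell
            if turns_at(i) or not (turns_at(i - 1) or turns_at((i + 1) % n)):
                return False
        else:
            # turn at the circle, straight through both adjacent cells
            if not turns_at(i) or turns_at(i - 1) or turns_at((i + 1) % n):
                return False
    return True
```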
2021-09-26 13:42:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2322935312986374, "perplexity": 1672.4894668289196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00644.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/explain-what-are-nuclear-fission-and-fusion-giving-an-example-of-each-write-down-the-formulae-for-energy-generated-in-each-of-these-processes-nuclear-energy_140482
Explain what are nuclear fission and fusion, giving an example of each. Write down the formulae for energy generated in each of these processes. - Physics

Answer in Brief

Solution

Nuclear fission is a nuclear reaction in which a heavy nucleus of an atom, such as that of uranium, splits into two or more fragments of comparable size, either spontaneously or as a result of bombardment of the nucleus by a neutron (induced fission). It is followed by the emission of two or three neutrons. The mass of the original nucleus is more than the sum of the masses of the fragments. This mass difference is released as energy, which can be enormous, as in the fission of 235U. Nuclear fission was discovered by Lise Meitner, Otto Frisch, Otto Hahn and Fritz Strassmann in 1938.

The products of the fission of 235U by thermal neutrons are not unique. A variety of fission fragments are produced, with mass number A ranging from about 72 to about 138, subject to the conservation of mass-energy, momentum, number of protons (Z) and number of neutrons (N). A few typical fission equations are

(1) $\ce{_92^235U + _0^1n -> _92^236U -> _54^140Xe + _38^94Sr + 2 _0^1n + 200 MeV}$ (235 + 1 = 236 = 140 + 94 + 2)

(2) $\ce{_92^235U + _0^1n -> _92^236U -> _56^144Ba + _36^90Kr + 2 _0^1n + 200 MeV}$ (235 + 1 = 236 = 144 + 90 + 2)

(3) $\ce{_92^235U + _0^1n -> _92^236U -> _57^148La + _35^85Br + 3 _0^1n + 200 MeV}$ (235 + 1 = 236 = 148 + 85 + 3)

A type of nuclear reaction in which lighter atomic nuclei (of low atomic number) fuse to form a heavier nucleus (of higher atomic number) with the release of an enormous amount of energy is called nuclear fusion. Very high temperatures, of about 10^7 K to 10^8 K, are required to carry out nuclear fusion. Hence, such a reaction is also called a thermonuclear reaction.

Example: The D-T reaction, being used in experimental fusion reactors, fuses a deuteron and a triton at temperatures of about 10^8 K.

$\ce{\underset{\text{(deuteron)}}{_1^2D} + \underset{\text{(triton)}}{_1^3T} -> \underset{\text{(helium nucleus)}}{_2^4He} + \underset{\text{(neutron)}}{_0^1n} + 17.6 MeV}$

Concept: Nuclear Energy

APPEARS IN: Balbharati Physics 12th Standard HSC Maharashtra State Board, Chapter 15 Structure of Atoms and Nuclei, Exercises | Q 8 | Page 342
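The energies quoted above (about 200 MeV per 235U fission and 17.6 MeV for the D-T reaction) both come from the same formula, the mass-energy relation applied to the mass defect of the reaction:

$$\Delta E = \Delta m \, c^2, \qquad \Delta m = \sum m_{\text{reactants}} - \sum m_{\text{products}}$$

In nuclear units, a mass defect of 1 u corresponds to about 931.5 MeV, so $\Delta E$ (in MeV) $\approx 931.5 \times \Delta m$ (in u).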
2021-05-15 17:09:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48351770639419556, "perplexity": 1072.5110522351579}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00402.warc.gz"}
http://www.aimsciences.org/article/doi/10.3934/cpaa.2020186
# American Institute of Mathematical Sciences

August 2020, 19(8): 4159-4177. doi: 10.3934/cpaa.2020186

## Kernel-based maximum correntropy criterion with gradient descent method

School of Mathematics and Statistics, Wuhan University, Wuhan, China

Received September 2019. Revised December 2019. Published May 2020.

Fund Project: The author is supported by NSFC grants 11671307 and 11571078

In this paper, we study the convergence of the gradient descent method for the maximum correntropy criterion (MCC) associated with reproducing kernel Hilbert spaces (RKHSs). MCC is widely used in many real-world applications because of its robustness and its ability to deal with non-Gaussian impulse noises. In the regression context, we show that the gradient descent iterates of MCC can approximate the target function and derive the capacity-dependent convergence rate by taking a suitable iteration number. Our result can nearly match the optimal convergence rate stated in previous work, and from it we can see that the scaling parameter is crucial to MCC's approximation ability and robustness property. The novelty of our work lies in a sharp estimate for the norms of the gradient descent iterates and the projection operation on the last iterate.

Citation: Ting Hu. Kernel-based maximum correntropy criterion with gradient descent method. Communications on Pure & Applied Analysis, 2020, 19 (8): 4159-4177. doi: 10.3934/cpaa.2020186
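To make the abstract's setting concrete: under the maximum correntropy criterion one minimizes the Welsch-type loss sigma^2 * (1 - exp(-r^2 / (2 sigma^2))) over residuals r, which caps the influence of impulse-noise outliers. The sketch below is a plain NumPy illustration of gradient descent on the coefficients of a kernel expansion; it is not the paper's algorithm, and all names and parameter values are my own choices.

```python
import numpy as np

# Illustrative kernel-based MCC regression via gradient descent.
# Model: f(x) = sum_j alpha_j k(x, x_j) with a Gaussian RKHS kernel.
# Loss:  (1/n) * sum_i sigma^2 * (1 - exp(-(y_i - f(x_i))^2 / (2 sigma^2)))

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
y[::20] += 5.0                                # impulsive (non-Gaussian) outliers

def gram(A, B, h=0.7):                        # Gaussian kernel matrix
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2 * h ** 2))

K = gram(X, X)
alpha = np.zeros(n)
sigma = 1.0                                   # MCC scaling parameter
eta = 0.5                                     # gradient descent step size

for t in range(500):
    r = y - K @ alpha                         # residuals
    w = np.exp(-r ** 2 / (2 * sigma ** 2))    # outliers get near-zero weight
    grad = -(K @ (w * r)) / n                 # gradient of the MCC loss in alpha
    alpha -= eta * grad

print("RMSE against the clean target on inliers:",
      np.sqrt(np.mean((K @ alpha - np.sin(X[:, 0]))[y < 3] ** 2)))
```

The reweighting factor w is where the robustness shows up: a squared-loss fit would weight all residuals equally, while here large residuals are exponentially down-weighted by the scaling parameter sigma, matching the abstract's remark that sigma governs the trade-off between approximation ability and robustness.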
2021-04-19 00:31:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5607126951217651, "perplexity": 6203.516095235586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038862159.64/warc/CC-MAIN-20210418224306-20210419014306-00194.warc.gz"}
https://mathematics-monster.com/lessons/slope_between_points.html
# Slope Between Two Points (KS2, Year 6)

## The Lesson

The slope (or gradient) between two points measures the steepness of the line joining the points.

## The Theory

The slope between two points can be found using the formula below:

$$Slope = \frac{y_2 - y_1}{x_2 - x_1}$$

In the formula, (x1, y1) and (x2, y2) are the Cartesian coordinates of the two points. Note: (x1, y1) is the point on the left and (x2, y2) is the point on the right.

## How to Find the Slope Between Two Points

Finding the slope between two points is easy.

## Question

What is the slope between the points (1, 1) and (3, 5)?

# 1 Start with the formula. $$Slope = \frac{y_2 - y_1}{x_2 - x_1}$$ Don't forget: / means ÷

# 2 Find the Cartesian coordinates of the points. In our example: the first point is (1, 1), so x1 = 1 and y1 = 1; the second point is (3, 5), so x2 = 3 and y2 = 5.

# 3 Substitute x1, y1, x2 and y2 into the formula. $$Slope = \frac{5 - 1}{3 - 1} = \frac{4}{2} = 2$$

The slope between the points (1, 1) and (3, 5) is 2.

## How to Visualize the Slope Between Two Points

The slope between the points (1, 1) and (3, 5) is 2. By plotting the points, we can visualize what the slope means. To get from one point to the other (going left to right), you have to go 4 up and 2 across. The slope is simply how far up you go over how far across ("the change in y over the change in x" or "the rise over the run"). In this example it is 4/2 = 2. Another way of seeing this is by noticing that for each square you go across, you go 2 up.

## Positive and Negative Slopes

A positive slope means the line slopes up and to the right. A negative slope means the line slopes down and to the right.

## Zero Slope

A line that goes straight across has zero slope.

## Slope of 1

A slope of 1 is a 45° line going from bottom-left to top-right.

## Fractional Slope

Slope can be a fraction, such as ½ and ¾. A proper fraction is positive but less than 1. A slope of 1 gives a 45° line that splits the graph in 2; a proper-fraction slope is less steep than this, and any slope greater than 1 is steeper than this.
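Because the formula is a single division, it translates directly to code. A minimal Python sketch (the function name is my own):

```python
def slope(p1, p2):
    """Slope between points (x1, y1) and (x2, y2): rise over run."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((1, 1), (3, 5)))  # 2.0, matching the worked example
```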
2021-05-08 21:47:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6763436198234558, "perplexity": 598.0790468213576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988927.95/warc/CC-MAIN-20210508211857-20210509001857-00398.warc.gz"}
https://repo.pw.edu.pl/info/master/WUT726d1f2f5ccf4c22b8f07ef13cf75df3/
# Knowledge base: Warsaw University of Technology

## Biodegradable polyesters obtained via solid-state polycondensation

### Marcin Kaczorowski

#### Abstract

Introduction. Polylactide (PLA) is a biodegradable, aliphatic polyester which can be produced from renewable resources, such as plant waste. PLA's properties are similar to those of typical polymers produced from crude oil, e.g. polyolefins, polystyrene, or poly(ethylene terephthalate). PLA is produced via ring-opening polymerization of lactide. This method is an expensive one, because of the high cost of the monomer. High molecular weight polymer cannot be obtained via direct melt polycondensation of lactic acid. The method with azeotropic dehydration requires the use of large amounts of solvent, which is ecologically disadvantageous. Solid state polycondensation (SSP) is an alternative route to obtain high molecular weight polymer at a relatively low temperature. Prepolymers subjected to SSP must be partially crystalline, and their molecular weight is usually between 10 000 and 100 000. The process is carried out at a temperature between the glass transition temperature and the melting temperature of the crystallites. The reaction proceeds in the amorphous phase, where mobile end groups of the polymer chains are present. Reaction byproducts are removed under vacuum or by using inert gas flow. The aim of the study was to investigate the effectiveness of different catalysts and the effect of crystallization time and SSP time on the properties of the resulting polymer. The influence of the type of reaction system on the melt polycondensation and the influence of the purity of lactic acid on the molecular weight of the product were also investigated.

Results and discussion. The synthesis of PLA consisted of three stages: the dehydration of aqueous lactic acid, polycondensation in the melt followed by crystallization, and solid-state polycondensation. Reactions were carried out using solutions of lactic acid from two different manufacturers. Firstly, the effectiveness of various catalysts was investigated by conducting several melt polycondensation reactions. Selected prepolymers, after grinding (grain diameter 200-500 µm), were crystallized at 70 or 105 °C in an oven at atmospheric pressure or in a flask heated in an oil bath under reduced pressure. After the crystallization step, prepolymers were subjected to SSP at a temperature of 140-150 °C (depending on the melting temperature of the prepolymer) at a pressure of 0.5 Torr for 20, 40 or 70 hours. Gel permeation chromatography (GPC) measurements indicated that the most effective of the tested catalysts were stannous chloride, titanium(IV) butoxide and antimony(III) oxide. Differential scanning calorimetry (DSC) measurements showed that the prepolymers with the highest melting temperatures were those obtained using heterogeneous catalysts. The addition of a nucleating agent (talc) caused an increase in the degree of crystallinity of the prepolymer. Polymers with the highest molecular weights, of about 100 000-120 000, were made from prepolymers obtained using tin chloride as catalyst. MALDI-ToF measurements showed that the presence of monofunctional alcohols or carboxylic acids in the lactic acid solution may be the factor limiting molecular weight.

Conclusions. Studies have shown that the molecular weight of the final product depends on the purity of the lactic acid solution, the type of catalyst, the crystalline phase content in the prepolymer, and the time and temperature of the SSP.
The degree of crystallinity is influenced by the type of catalyst, the nucleating agent content, and the time and conditions of crystallization. The highest molecular weight polymer was obtained using the tin(II) chloride/p-toluenesulfonic acid catalyst system. The prepolymer containing 0.05 wt% of talc, after 1 hour of crystallization, was subjected to SSP at 150 °C for 40 hours. The weight-average molecular weight of the resulting polymer was about 120 000.

Record ID: WUT726d1f2f5ccf4c22b8f07ef13cf75df3
Diploma type: Master of Science
Author: Marcin Kaczorowski, Chair of Polymer Chemistry and Technology (FC/CPCT), Faculty of Chemistry (FC)
Title in Polish: Poliestry biodegradowalne otrzymywane metodą polikondensacji w stanie stałym
Supervisor: Gabriel Rokicki, Chair of Polymer Chemistry and Technology (FC/CPCT), Faculty of Chemistry (FC)
Certifying unit: Faculty of Chemistry (FC)
Affiliation unit: Chair of Polymer Chemistry and Technology (FC/CPCT)
Study subject / specialization: Technologia Chemiczna
Language: (pl) Polish
Status: Finished
Defense Date: 07-09-2012
Issue date (year): 2012
Keywords in Polish: -
Keywords in English: -
urn:pw-repo:WUT726d1f2f5ccf4c22b8f07ef13cf75df3
2021-06-13 09:12:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5168692469596863, "perplexity": 5876.597811976922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00061.warc.gz"}
https://www.limsforum.com/informatics-educational-institutions-programs/?rdp_we_resource=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FEthynyl_radical
Names: ethynyl (preferred IUPAC name); λ3-ethyne, hydridodicarbon(CC) (systematic IUPAC names)
Identifiers: ChemSpider 48916; SMILES: [C]#C or C#[C]; InChI=1S/C2H/c1-2/h1H, Key: XEHVFKKSDRMODV-UHFFFAOYSA-N
Properties: chemical formula C2H; molar mass 25.030 g·mol−1

The ethynyl radical (systematically named λ3-ethyne and hydridodicarbon(CC)) is an organic compound with the chemical formula C≡CH (also written [CCH] or C2H). It is a simple molecule that does not occur naturally on Earth but is abundant in the interstellar medium. It was first observed by electron spin resonance, isolated in a solid argon matrix at liquid helium temperatures, in 1963 by Cochran and coworkers at the Johns Hopkins Applied Physics Laboratory.[1] It was first observed in the gas phase by Tucker and coworkers in November 1973 toward the Orion Nebula, using the NRAO 11-meter radio telescope.[2] It has since been detected in a large variety of interstellar environments, including dense molecular clouds, Bok globules, star-forming regions, the shells around carbon-rich evolved stars, and even in other galaxies.

Astronomical importance

Observations of C2H can yield a large number of insights into the chemical and physical conditions where it is located. First, the relative abundance of ethynyl is an indication of the carbon-richness of its environment (as opposed to oxygen, which provides an important destruction mechanism).[3] Since there are typically insufficient quantities of C2H along a line of sight to make emission or absorption lines optically thick, derived column densities can be relatively accurate (as opposed to more common molecules like CO, NO, and OH). Observations of multiple rotational transitions of C2H can result in estimates of the local density and temperature. Observations of the deuterated molecule, C2D, can test and extend fractionation theories (which explain the enhanced abundance of deuterated molecules in the interstellar medium).[4]

One of the important indirect uses for observations of the ethynyl radical is the determination of acetylene abundances.[5] Acetylene (C2H2) does not have a dipole moment, and therefore pure rotational transitions (typically occurring in the microwave region of the spectrum) are too weak to be observable. Since acetylene provides a dominant formation pathway to ethynyl, observations of the product can yield estimates of the unobservable acetylene. Observations of C2H in star-forming regions frequently exhibit shell structures, which implies that it is quickly converted to more complex molecules in the densest regions of a molecular cloud. C2H can therefore be used to study the initial conditions at the onset of massive star formation in dense cores.[6] Finally, high-spectral-resolution observations of Zeeman splitting in C2H can give information about the magnetic fields in dense clouds, which can augment similar observations that are more commonly done in the simpler cyanide (CN).[7]

Formation and destruction

The formation and destruction mechanisms of the ethynyl radical vary widely with its environment. The mechanisms listed below represent the current (as of 2008) understanding, but other formation and destruction pathways may be possible, or even dominant, in certain situations.
Formation

In the laboratory, C2H can be made via photolysis of acetylene (C2H2) or C2HCF3,[8] or in a glow discharge of a mixture of acetylene and helium.[9] In the envelopes of carbon-rich evolved stars, acetylene is created in thermal equilibrium in the stellar photosphere. Ethynyl is created as a photodissociation product of the acetylene that is ejected (via strong stellar winds) into the outer envelope of these stars. In the cold, dense cores of molecular clouds (prior to star formation), where n > 10^4 cm−3 and T < 20 K, ethynyl is dominantly formed via electron recombination with the vinyl cation (C2H3+).[10] The neutral-neutral reaction of propynylidyne (C3H) and atomic oxygen also produces ethynyl (and carbon monoxide, CO), though this is typically not a dominant formation mechanism. The dominant creation reactions are listed below.

• C2H3+ + e− → C2H + H + H
• C2H3+ + e− → C2H + H2
• CH3CCH+ + e− → C2H + CH3
• C3H + O → C2H + CO

Destruction

The destruction of ethynyl is dominantly through neutral-neutral reactions with O2 (producing carbon monoxide and formyl, HCO), or with atomic nitrogen (producing atomic hydrogen and C2N). Ion-neutral reactions can also play a role in the destruction of ethynyl, through reactions with HCO+ and H3+. The dominant destruction reactions are listed below.

• C2H + O2 → HCO + CO
• C2H + N → C2N + H
• C2H + HCO+ → C2H2+ + CO
• C2H + H3+ → C2H2+ + H2

Method of observation

The ethynyl radical is observed in the microwave portion of the spectrum via pure rotational transitions. In its ground electronic and vibrational state, the nuclei are collinear, and the molecule has a permanent dipole moment estimated to be μ = 0.8 D = 2.7×10^−30 C·m.[2] The ground vibrational and electronic (vibronic) state exhibits a simple rigid-rotor-type rotational spectrum. However, each rotational state exhibits fine and hyperfine structure, due to the spin-orbit and electron-nucleus interactions, respectively. The ground rotational state is split into two hyperfine states, and the higher rotational states are each split into four hyperfine states. Selection rules prohibit all but six transitions between the ground and the first excited rotational state. Four of the six components were observed by Tucker et al. in 1974,[2] the initial astronomical detection of ethynyl, and 4 years later all six components were observed, which provided the final piece of evidence confirming the initial identification of the previously unassigned lines.[11] Transitions between two adjacent higher-lying rotational states have 11 hyperfine components. The molecular constants of the ground vibronic state are tabulated below.

Isotopologues

Three isotopologues of the 12C12CH molecule have been observed in the interstellar medium. The change in molecular mass is associated with a shift in the energy levels and therefore in the transition frequencies associated with the molecule. The molecular constants of the ground vibronic state, and the approximate transition frequencies for the lowest 5 rotational transitions, are given for each of the isotopologues in the table below.
Rotational transitions for ethynyl isotopologues

12C12CH (discovered 1974[2])
Molecular constants (MHz): B = 43674.534, D = 0.1071, γ = −62.606, b = 40.426, c = 12.254
Transition frequencies (MHz): N = 1→0: 87348.64; N = 2→1: 174694.71; N = 3→2: 262035.64; N = 4→3: 349368.85; N = 5→4: 436691.79

12C12CD (discovered 1985[4][12])
Molecular constants (MHz): B = 36068.035, D = 0.0687, γ = −55.84, b = 6.35, c = 1.59
Transition frequencies (MHz): N = 1→0: 72135.80; N = 2→1: 144269.94; N = 3→2: 216400.79; N = 4→3: 288526.69; N = 5→4: 360646.00

13C12CH (discovered 1994[13])
Molecular constants (MHz): B = 42077.459, D = 0.09805, γ = −59.84
Transition frequencies (MHz): N = 1→0: 84154.53; N = 2→1: 168306.70; N = 3→2: 252454.16; N = 4→3: 336594.57; N = 5→4: 420725.57

12C13CH (discovered 1994[13])
Molecular constants (MHz): B = 42631.3831, D = 0.10131, γ = −61.207
Transition frequencies (MHz): N = 1→0: 85262.36; N = 2→1: 170522.29; N = 3→2: 255777.36; N = 4→3: 341025.13; N = 5→4: 426263.18

References

1. ^ Cochran, E. L.; Adrian, F. J.; Bowers, V. A. (1964). "ESR Study of Ethynyl and Vinyl Free Radicals". Journal of Chemical Physics. 40: 213. Bibcode:1964JChPh..40..213C. doi:10.1063/1.1724865.
2. ^ a b c d Tucker, K. D.; Kutner, M. L.; Thaddeus, P. (1974). "The Ethynyl Radical C2H – A New Interstellar Molecule". Astrophysical Journal. 193: L115–L119. Bibcode:1974ApJ...193L.115T. doi:10.1086/181646.
3. ^ Huggins, P. J.; Carlson, W. J.; Kinney, A. L. (1984). "The distribution and abundance of interstellar C2H". Astronomy & Astrophysics. 133: 347–356. Bibcode:1984A&A...133..347H.
4. ^ a b Vrtilek, J. M.; Gottlieb, C. A.; Langer, W. D.; Thaddeus, P.; Wilson, R. W. (1985). "Laboratory and Astronomical Detection of the Deuterated Ethynyl Radical CCD". Astrophysical Journal. 296: L35–L38. Bibcode:1985ApJ...296L..35V. doi:10.1086/184544.
5. ^ Fuente, A.; Cernicharo, J.; Omont, A. (1998). "Inferring acetylene abundances from C2H: the C2H2/HCN abundance ratio". Astronomy & Astrophysics. 330: 232–242. Bibcode:1998A&A...330..232F.
6. ^ Beuther, H.; Semenov, D.; Henning, T.; Linz, H. (2008). "Ethynyl (C2H) in Massive Star Formation: Tracing the Initial Conditions?". Astrophysical Journal. 675: L33–L36. arXiv:0801.4493. Bibcode:2008ApJ...675L..33B. doi:10.1086/533412.
7. ^ Bel, N.; Leroy, B. (1998). "Zeeman splitting in interstellar molecules. II. The ethynyl radical". Astronomy & Astrophysics. 335: 1025–1028. Bibcode:1998A&A...335.1025B.
8. ^ Fahr, A. (2003). "Ultraviolet absorption spectrum and cross-sections of ethynyl (C2H) radicals". Journal of Molecular Spectroscopy. 217: 249. doi:10.1016/S0022-2852(02)00039-5.
9. ^ Müller, H. S. P.; Klaus, T.; Winnewisser, G. (2000). "Submillimeter-wave spectrum of the ethynyl radical, CCH, up to 1 THz". Astronomy & Astrophysics. 357: L65. Bibcode:2000A&A...357L..65M.
10. ^ Woodall, J.; Agúndez, M.; Markwick-Kemper, A. J.; Millar, T. J. (2007). "The UMIST database for astrochemistry 2006". Astronomy & Astrophysics. 466: 1197. arXiv:1212.6362. Bibcode:2007A&A...466.1197W. doi:10.1051/0004-6361:20064981.
11. ^ Tucker, K. D.; Kutner, M. L. (1978). "The Abundance and Distribution of Interstellar C2H". Astrophysical Journal. 222: 859. Bibcode:1978ApJ...222..859T. doi:10.1086/156204.
12. ^ Combes, F.; Boulanger, F.; Encrenaz, P. J.; Gerin, M.; Bogey, M.; Demuynck, C.; Destombes, J. L. (1985). "Detection of interstellar CCD". Astronomy & Astrophysics. 147: L25. Bibcode:1985A&A...147L..25C.
13. ^ a b Saleck, A. H.; Simon, R.; Winnewisser, G.; Wouterloot, J. G. A. (1994). "Detection of interstellar 13C12CH and 12C13CH". Canadian Journal of Physics. 72: 747. Bibcode:1994CaJPh..72..747S. doi:10.1139/p94-098.
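As a consistency check on the table, the line centers of a linear rotor with centrifugal distortion follow ν(N+1 ← N) = 2B(N+1) − 4D(N+1)^3; plugging in the tabulated B and D reproduces the listed frequencies (the fine and hyperfine constants γ, b, c split each line but do not move the center used here). A short Python sketch:

```python
# Estimate C2H rotational line centers from B and D (fine/hyperfine ignored).
def line_mhz(B, D, N):
    """Frequency of the N+1 <- N transition of a linear rotor, in MHz."""
    return 2 * B * (N + 1) - 4 * D * (N + 1) ** 3

B, D = 43674.534, 0.1071          # 12C12CH constants from the table above
for N in range(5):
    print(f"N = {N + 1} -> {N}: {line_mhz(B, D, N):.2f} MHz")
# N = 1 -> 0 gives 87348.64 MHz, matching the tabulated value
```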
2021-05-10 18:02:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8793940544128418, "perplexity": 8815.362220692348}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00474.warc.gz"}
https://projecteuclid.org/euclid.agt/1510840970
Algebraic & Geometric Topology A mapping theorem for topological complexity Abstract We give new lower bounds for the (higher) topological complexity of a space in terms of the Lusternik–Schnirelmann category of a certain auxiliary space. We also give new lower bounds for the rational topological complexity of a space, and more generally for the rational sectional category of a map, in terms of the rational category of a certain auxiliary space. We use our results to deduce consequences for the global (rational) homotopy structure of simply connected hyperbolic finite complexes. Article information Source Algebr. Geom. Topol., Volume 15, Number 3 (2015), 1643-1666. Dates Revised: 21 October 2014 Accepted: 23 October 2014 First available in Project Euclid: 16 November 2017 https://projecteuclid.org/euclid.agt/1510840970 Digital Object Identifier doi:10.2140/agt.2015.15.1643 Mathematical Reviews number (MathSciNet) MR3361146 Zentralblatt MATH identifier 1321.55002 Citation Grant, Mark; Lupton, Gregory; Oprea, John. A mapping theorem for topological complexity. Algebr. Geom. Topol. 15 (2015), no. 3, 1643--1666. doi:10.2140/agt.2015.15.1643. https://projecteuclid.org/euclid.agt/1510840970 References • L,L Avramov, Free Lie subalgebras of the cohomology of local rings, Trans. Amer. Math. Soc. 270 (1982) 589–608 • I Basabe, J González, Y,B Rudyak, D Tamaki, Higher topological complexity and its symmetrization, Algebraic & Geometric Topology 14 (2014) 2103–2124 • R Bøgvad, C Jacobsson, Graded Lie algebras of depth one, Manuscripta Math. 66 (1989) 153–159 • O Cornea, G Lupton, J Oprea, D Tanré, Lusternik–Schnirelmann category, Mathematical Surveys and Monographs 103, Amer. Math. Soc. (2003) • A Costa, M Farber, Motion planning in spaces with small fundamental groups, Commun. Contemp. Math. 12 (2010) 107–119 • A Dranishnikov, Topological complexity of wedges and covering maps, Proc. Amer. Math. Soc. 142 (2014) 4365–4376 • M Farber, Topological complexity of motion planning, Discrete Comput. Geom. 29 (2003) 211–221 • M Farber, Instabilities of robot motion, Topology Appl. 140 (2004) 245–266 • Y Félix, S Halperin, Rational LS category and its applications, Trans. Amer. Math. Soc. 273 (1982) 1–38 • Y Félix, S Halperin, J-M Lemaire, The rational LS category of products and of Poincaré duality complexes, Topology 37 (1998) 749–756 • Y Félix, S Halperin, J-C Thomas, Sur l'homotopie des espaces de catégorie $2$, Math. Scand. 55 (1984) 216–228 • Y Félix, S Halperin, J-C Thomas, Rational homotopy theory, Graduate Texts in Mathematics 205, Springer, New York (2001) • Y Félix, J Oprea, D Tanré, Algebraic Models in Geometry, Oxford Graduate Texts in Mathematics 17, Oxford Univ. Press (2008) • Y Félix, J-C Thomas, Sur la structure des espaces de LS catégorie deux, Illinois J. Math. 30 (1986) 574–593 • L Fernández Suárez, P Ghienne, T Kahl, L Vandembroucq, Joins of DGA modules and sectional category, Algebr. Geom. Topol. 6 (2006) 119–144 • M Grant, G Lupton, J Oprea, Spaces of topological complexity one, Homology Homotopy Appl. 15 (2013) 73–81 • M Grant, G Lupton, J Oprea, New lower bounds for the topological complexity of aspherical spaces, Topol. Appl. 189 (2015) 78–91 • P Hilton, G Mislin, J Roitberg, Localization of nilpotent groups and spaces, Math. Studies 15, North-Holland, New York (1975) • B Jessup, A Murillo, P-E Parent, Rational topological complexity, Algebr. Geom. Topol. 12 (2012) 1789–1801 • G Lupton, J Scherer, Topological complexity of $H\!$–spaces, Proc. Amer. Math. Soc. 
2019-08-19 01:50:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33200058341026306, "perplexity": 3042.8659037643165}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00128.warc.gz"}
https://math.stackexchange.com/questions/2807082/prove-sec2-frac-pi7-sec2-frac2-pi7-sec2-frac3-pi7-24-using-t/2807106
# Prove $\sec^2\frac{\pi}{7}+\sec^2\frac{2\pi}{7}+\sec^2\frac{3\pi}{7}=24$ using the roots of a polynomial

I have to prove $\sec^2\frac{\pi}{7}+\sec^2\frac{2\pi}{7}+\sec^2\frac{3\pi}{7}=24$ by using the roots of the polynomial $x^3-21x^2+35x-7=0$. I tried to factor the polynomial, but it didn't work, and I later found that it cannot be factored over the rationals. I saw some similar problems on StackExchange, but the answers are very complex to me. I cannot use Euler's complex number formula, since it's not in the syllabus. I do not want the exact answer but guidance to the answer.

Let $t=\tan(\theta)$; we have \begin{eqnarray*} \tan(7 \theta) =\frac{ 7t-35t^3+21t^5-t^7}{1-21t^2+35t^4-7t^6}. \end{eqnarray*} Setting $\tan(7 \theta) =0$, the polynomial \begin{eqnarray*} 7t-35t^3+21t^5-t^7=0 \end{eqnarray*} has roots $t=0, \tan( \pi/7), \cdots ,\tan( 6 \pi/7)$. Dividing out the root $t=0$ and substituting $x=t^2$ (note that $\tan^2(k\pi/7)=\tan^2((7-k)\pi/7)$, so only three distinct squares remain), we find that \begin{eqnarray*} x^3-21x^2+35x-7=0 \end{eqnarray*} has roots $x= \tan^2(\pi/7),\tan^2(2\pi/7),\tan^2(3\pi/7)$. Now let $y=x+1$: since $\sec^2\theta = 1+\tan^2\theta$, the roots of the resulting cubic in $y$ are exactly the three secant-squared values, and their sum can be read off from the coefficient of $y^2$ by Vieta's formulas.
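A quick numerical sanity check of both the identity and the root claim (plain Python, standard library only):

```python
import math

# Check the identity: sec^2(pi/7) + sec^2(2pi/7) + sec^2(3pi/7) = 24
vals = [1 / math.cos(k * math.pi / 7) ** 2 for k in (1, 2, 3)]
print(sum(vals))                      # ~24 up to floating-point error

# Check that tan^2(k*pi/7) are roots of x^3 - 21x^2 + 35x - 7
p = lambda x: x ** 3 - 21 * x ** 2 + 35 * x - 7
for k in (1, 2, 3):
    x = math.tan(k * math.pi / 7) ** 2
    print(k, p(x))                    # ~0 for each k
```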
2020-08-09 15:18:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932404100894928, "perplexity": 365.2508793353098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738555.33/warc/CC-MAIN-20200809132747-20200809162747-00496.warc.gz"}
https://quant.stackexchange.com/questions/60251/value-the-claim-x-k1-xk1-lyu
# Value the claim $(X-K)1_{X>K}1_{L<Y<U}$

Consider two correlated assets $X$ and $Y$ with marginals $f_X$ and $f_Y$ and linear correlation coefficient $\rho$. Assume a Gaussian copula, $C_{X,Y}(x,y,\rho)$, can approximate the joint CDF well enough for this exercise. Value the contingent claim $g_T=(X-K)\,1_{X>K}\,1_{L<Y<U}$. Assume interest rates are zero. Your final answer should be a numerical integral with a copula.

My attempt: $$g_t = \int_{K}^{\infty}\int_L^U(x-K)f_{X,Y}(x,y)\,dy\,dx = \int_K^\infty(x-K)\big(F_{X,Y}(x,U) - F_{X,Y}(x,L)\big)\,dx \\ = \int_K^\infty(x-K)\big(C_{X,Y}(F_X(x),F_Y(U), \rho) - C_{X,Y}(F_X(x),F_Y(L), \rho)\big)\,dx$$

When I try to value the integral above, I get that it doesn't converge. What am I doing wrong?
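For concreteness, here is a direct numerical transcription of the final integrand, with standard normal marginals assumed for both $X$ and $Y$ (my choice; the question leaves the marginals general) and the Gaussian copula evaluated via scipy's bivariate normal CDF. Truncating the upper limit shows the integral growing without bound, consistent with the non-convergence described: as $x \to \infty$, $F_X(x) \to 1$ and $C(u,v) \to v$, so the bracket tends to the constant $F_Y(U) - F_Y(L) > 0$ while $(x-K)$ is unbounded.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.integrate import quad

# Transcription of the question's last line, with assumed standard normal
# marginals for X and Y; parameter values are arbitrary choices.
rho, K, L, U = 0.5, 0.0, -1.0, 1.0
biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def copula(u, v):
    # Gaussian copula CDF: C(u, v) = Phi2(Phi^-1(u), Phi^-1(v); rho)
    return biv.cdf([norm.ppf(u), norm.ppf(v)])

def integrand(x):
    u = min(norm.cdf(x), 1.0 - 1e-12)   # keep ppf finite for large x
    return (x - K) * (copula(u, norm.cdf(U)) - copula(u, norm.cdf(L)))

for xmax in (5.0, 10.0, 20.0):
    val, _ = quad(integrand, K, xmax)
    print(xmax, val)   # keeps growing as xmax increases: linear tail growth
```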
2021-12-02 13:38:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894815444946289, "perplexity": 725.3375045195287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00057.warc.gz"}
https://mitpress.mit.edu/books/reinforcement-learning
Reinforcement Learning: An Introduction

Hardcover | $75.00 | £55.95 | ISBN: 9780262193986 | 344 pp. | 7 x 9 in | February 1998
eBook | $75.00 | ISBN: 9780262332767 | 344 pp. | February 1998

## Overview

Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.

The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.

Solution Manual

## About the Authors

Richard S. Sutton is Senior Research Scientist, Department of Computer Science, University of Massachusetts. Andrew G. Barto is Professor of Computer Science at the University of Massachusetts.

## Endorsements

"This is a highly intuitive and accessible introduction to the recent major developments in reinforcement learning, written by two of the field's pioneering contributors."
Dimitri P. Bertsekas and John N. Tsitsiklis, Professors, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

"This book not only provides an introduction to learning theory but also serves as a tremendous source of ideas for further development and applications in the real world."
Toshio Fukuda, Nagoya University, Japan; President, IEEE Robotics and Automation Society

"Reinforcement learning has always been important in the understanding of the driving force behind biological systems, but in the last two decades it has become increasingly important, owing to the development of mathematical algorithms. Barto and Sutton were the prime movers in leading the development of these algorithms and have described them with wonderful clarity in this new text. I predict it will be the standard text."
Dana Ballard, Professor of Computer Science, University of Rochester

"The widely acclaimed work of Sutton and Barto on reinforcement learning applies some essentials of animal learning, in clever ways, to artificial learning systems. This is a very readable and comprehensive account of the background, algorithms, applications, and future directions of this pioneering and far-reaching work."
Wolfram Schultz, University of Fribourg, Switzerland
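To give a flavor of the "basic solution methods" in Part II, here is a minimal tabular TD(0) value-estimation sketch on a small random-walk task of the kind used in the book's temporal-difference chapter (an illustrative toy of my own, not code from the book):

```python
import random

# Tabular TD(0) value estimation on a 5-state random walk.
# States 0..6; episodes start at 3; 0 and 6 are terminal; reward 1 at state 6.
V = [0.0] * 7
alpha, n_episodes = 0.1, 5000

for _ in range(n_episodes):
    s = 3
    while s not in (0, 6):
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 6 else 0.0
        V[s] += alpha * (r + V[s2] - V[s])   # TD(0) update (gamma = 1)
        s = s2

print([round(v, 2) for v in V[1:6]])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```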
2016-07-24 23:22:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27483588457107544, "perplexity": 982.2081064618642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824185.14/warc/CC-MAIN-20160723071024-00181-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathhelpforum.com/math-software/98776-matlab-ivp-problem.html
1. ## MatLab IVP Problem

Hey guys! I'm working through some book questions for my numerical methods subject, and I'm having trouble with this. The IVP is $(1+t^2)\,\frac{dy}{dt} + 2ty = -\pi\sin(\pi t)$. Now, I got a stack of code such that I could plot the comparison of the numerical and analytical solutions out, and my tutor agreed. (I'll attach it in a word document). But I'm having issues with finding the "(relative) truncation error" at various values for h. Now, I got out some answers that didn't 'seem right' (based on observation from the different graphs) but yeh. I'm wondering if anyone is able to throw some ideas or even code my way to help me get this out? It'd be MASSIVELY appreciated

2. Originally Posted by exphate Hey guys! I'm working through some book questions for my numerical methods subject, and I'm having trouble with this. The IVP is $(1+t^2)\,\frac{dy}{dt} + 2ty = -\pi\sin(\pi t)$. Now, I got a stack of code such that I could plot the comparison of the numerical and analytical solutions out, and my tutor agreed. (I'll attach it in a word document). But I'm having issues with finding the "(relative) truncation error" at various values for h. Now, I got out some answers that didn't 'seem right' (based on observation from the different graphs) but yeh. I'm wondering if anyone is able to throw some ideas or even code my way to help me get this out? It'd be MASSIVELY appreciated

Let's assume that you want the relative error at each of the time points for each of your time steps. Let the numerical solution be computed at $t_1,\ ..,\ t_n$ and have values $yN_1,\ ..,\ yN_n$, and the analytic solution at the same time points be $yA_1,\ ..,\ yA_n$. Then the relative error at $t_i$ is: $\varepsilon_i=\frac{|yN_i-yA_i|}{|yA_i|}$ or in Matlab like code:
Code: err=abs(yN-yA)./abs(yA);
The key point is the analytic solution is evaluated at the same points as the numerical. CB

3. I've got something similar to that now. BUT, I have the following errors:-
h = 0.1: 0.06197684648065
h = 0.01: 0.00059373260547
h = 0.001: 0.00000593482743
h = 0.0001: 0.00000005934802
I'm worried about the first one (h = 0.1) because it doesn't match the pattern that the questions are talking about (O(h^2))

4. Code being:-
%Lab Week 4
%setup DE
f = @(t,y) (-pi*sin(pi*t) - 2*t*y)/(1+t^2); %whole right-hand side divided by (1+t^2), per the stated IVP
%int. conditions
a = 0;
b = 1;
xEnd = a + 5; % integrate five units from starting value
format long
%Setup analytic solution
yExact = @(t) (-cos(pi*t)/(1+t^2));
%set up some step sizes to test
h1 = 0.1;
h2 = h1 / 2;
h3 = h2 / 2;
h4 = h3 / 2;
h5 = h4 / 2;
h6 = h5 / 2;
%local truncation error requires one step
[f1, t1] = forwardEuler(f, a, b, h1, 2);
[f2, t2] = forwardEuler(f, a, b, h2, 2);
[f3, t3] = forwardEuler(f, a, b, h3, 2);
[f4, t4] = forwardEuler(f, a, b, h4, 2);
[f5, t5] = forwardEuler(f, a, b, h5, 2); %these two runs were missing, so err5/err6 below were undefined
[f6, t6] = forwardEuler(f, a, b, h6, 2);
err1 = abs(f1(2)-yExact(t1(2)))/abs(yExact(t1(2)))
err2 = abs(f2(2)-yExact(t2(2)))/abs(yExact(t2(2)));
err3 = abs(f3(2)-yExact(t3(2)))/abs(yExact(t3(2)));
err4 = abs(f4(2)-yExact(t4(2)))/abs(yExact(t4(2)));
err5 = abs(f5(2)-yExact(t5(2)))/abs(yExact(t5(2)));
err6 = abs(f6(2)-yExact(t6(2)))/abs(yExact(t6(2)));
stepSize = [h1, h2, h3, h4, h5, h6]; %step sizes into array for plotting
LTErrors = [err1, err2, err3, err4, err5, err6]; %put all errors into an array
plot(stepSize, LTErrors, 'b.', 'linewidth', 2)
xlabel('step size')
ylabel('local truncation error')

5. Originally Posted by exphate I've got something similar to that now.
BUT, I have the following errors:-
h = 0.1: 0.06197684648065
h = 0.01: 0.00059373260547
h = 0.001: 0.00000593482743
h = 0.0001: 0.00000005934802
I'm worried about the first one (h = 0.1) because it doesn't match the pattern that the questions are talking about (O(h^2))

That is nothing to worry about: the $O(h^2)$ behaviour is an asymptotic thing. When $h$ is relatively large, other effects can be seen (such as instability and higher-order terms in the error). CB

6. Cheers dude. I spoke to one of the guys at uni today (masters student) and he went through it in person, and pointed out that the 'large' step size often won't follow the same number pattern. Thanks for the clarification
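Editorial note: the script in post 4 calls a forwardEuler helper that never appears in the thread. Below is a rough Python transliteration of what such a routine might look like; the argument meanings (initial time, initial value, step size, number of time points) are inferred from how the script uses it, not taken from the poster's actual file. Note also that the posted exact solution $y=-\cos(\pi t)/(1+t^2)$ satisfies $(1+t^2)y'+2ty=+\pi\sin(\pi t)$, so the sketch uses that sign to keep the pair consistent.

```python
# Hypothetical Python analogue of the thread's forwardEuler(f, a, b, h, n):
# integrate y' = f(t, y) from t0 with y(t0) = y0, step h, n time points.
import math

def forward_euler(f, t0, y0, h, n):
    t = [t0 + i * h for i in range(n)]
    y = [y0] + [0.0] * (n - 1)
    for i in range(n - 1):
        y[i + 1] = y[i] + h * f(t[i], y[i])  # explicit Euler update
    return y, t

# Consistent ODE / exact-solution pair (note the +pi*sin(pi*t) forcing term):
f = lambda t, y: (math.pi * math.sin(math.pi * t) - 2 * t * y) / (1 + t * t)
y_exact = lambda t: -math.cos(math.pi * t) / (1 + t * t)

for h in (0.1, 0.05, 0.025, 0.0125):
    y, t = forward_euler(f, 0.0, y_exact(0.0), h, 2)  # a single step
    err = abs(y[1] - y_exact(t[1])) / abs(y_exact(t[1]))
    print(h, err)  # successive errors should shrink by roughly 4x: O(h^2)
```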
2017-01-21 13:15:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5463666319847107, "perplexity": 3045.1896439194697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00554-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.math.princeton.edu/events/distribution-free-malliavin-calculus-2015-03-12t180003
# Distribution Free Malliavin Calculus - Boris Razovsky, Brown University

Fine Hall 601

The theory and applications of Malliavin calculus are well developed for Gaussian and Poisson processes. In this talk I will discuss an extension of Malliavin calculus to random fields generated by a sequence $\Xi=(\xi_{1},\xi_{2},\dots)$ of arbitrary square-integrable and uncorrelated random variables. The distribution functions of the $\xi_{i}$ …
2017-12-11 03:40:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7494952082633972, "perplexity": 1962.345074240618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512121.15/warc/CC-MAIN-20171211033436-20171211053436-00587.warc.gz"}
https://cs.stackexchange.com/questions/157078/iterate-through-all-values-of-a-certain-subset-of-all-permutations
# Iterate through all values of a certain subset of all permutations Let's say we've got $$n$$ numbers to multiply together. But the multiplication operation, like in computer floating-point arithmetics, is not associative. Thus the order of multiplication matters. Furthermore let's say that we want to minimize the nesting of the expression tree that represents the entire multiplication, so the expression/multiplication tree is the minimum-height binary tree with $$n$$ leaves. We want to iterate through all possible ways to multiply the $$n$$ numbers together. A way to do it would be to iterate through all permutations of the $$n$$ numbers (each number is given a position as a leaf of a binary tree), but that would be wasteful, as it would take no advantage of the commutativity property that does hold for this multiplication operation. Example, $$n = 3$$: let's call our three numbers $$n_1$$, $$n_2$$, $$n_3$$. There are $$3! = 6$$ permutations of these three numbers: 123 132 213 231 312 321 The above permutations correspond to these multiplication expressions: $$(n_1 n_2)n_3$$ $$(n_1 n_3)n_2$$ $$(n_2 n_1)n_3$$ $$(n_2 n_3)n_1$$ $$(n_3 n_1)n_2$$ $$(n_3 n_2)n_1$$ However, there are only three equivalence classes when we account for commutativity of the multiplication operation: $$(n_1 n_2)n_3 = (n_2 n_1)n_3$$ $$(n_2 n_3)n_1 = (n_3 n_2)n_1$$ $$(n_1 n_3)n_2 = (n_3 n_1)n_2$$ The goal is to iterate through at least one member of each equivalence class relatively efficiently (better than factorial time), with as little redundancy as possible. So with $$n = 3$$ we'd ideally iterate through only three permutations, e.g.: 123 132 231 As for the binary trees mentioned above, consider, for example, $$n = 4$$: we're only interested in expressions with minimal nesting, so $$(n_1 n_2)(n_3 n_4)$$ is OK, but $$((n_1 n_2)n_3)n_4$$ isn't, we do not want to iterate through an expression with non-minimal nesting. In case someone is interested in the motivation for this question, see here: https://math.stackexchange.com/questions/4611399/numerically-stable-evaluation-of-factored-univariate-real-polynomial To sum up, the goal is to iterate through all equivalence classes of multiplication expressions with minimal nesting, and to do it as efficiently as possible, i.e., I hope I can avoid generating all permutations. • I believe there are exponentially many equivalence classes, so there is no hope to iterate through all of them efficiently. So can you clarify what you are hoping for? Or does that answer your question? – D.W. Jan 25 at 7:02 • Did I understand correctly that a tree is "balanced" if it has the minimum height? I.e. you don't care if some leaves have a height much less than the tree height. So the tree $(((12) (34)) 5)$ is balanced. Jan 25 at 7:17 • Anyway, the idea is to look at the last multiplication (i.e. at the root of the tree). You want to partition the entire set of multipliers into two sets, and run recursively on these sets. E.g., for $3$ elements there are only $3$ possible partitions: $\{1,2\} \cup \{3\}$, $\{1,3\} \cup \{2\}$, $\{1\} \cup \{2,3\}$, corresponding to the three multiplication orders you specified. The only non-trivial part is to consider only partitions such that it's possible to build the balanced subtrees (whatever "balanced" means), but it's easy to address (by just looking at the cardinalities of the sets). Jan 25 at 7:24 • @Dmitry, sorry I used binary tree terminology incorrectly, I edited the question. Jan 25 at 12:49 • @D.W. 
sorry, I'm interested in some relatively good time complexity, in particular I'm interested in whether there's a sub-factorial algorithm. Dmitry may have answered that in his comment, which I will now review. Jan 25 at 12:54
2023-03-25 19:39:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8275281190872192, "perplexity": 320.56946130084583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00317.warc.gz"}
https://stats.stackexchange.com/questions/531536/mixed-model-with-time-varying-covariate-and-interaction
# Mixed model with time-varying covariate and interaction

In my current research (a randomised, placebo-controlled trial) I'm investigating the effect of two interventions (Intervention 1, a low dose of a dietary supplement; Intervention 2, a high dose of the supplement) on a cardio-metabolic characteristic (DV) measured on two occasions 12 weeks apart. In addition, I have a covariate, say some gut bacterial abundance, which is measured on the same two occasions and can be treated either as time-invariant (baseline abundance only) or as a time-varying covariate. One of the hypotheses is that bacterial abundance modulates the effect of the intervention on DV. To this end I'm planning the following

Model 1
library(lme4); library(lmtest)
mod0 <- lmer(DV ~ Intervention * Time + Abundance_baseline + (1 | subject), REML = FALSE, data = data)
mod1 <- lmer(DV ~ Intervention * Time * Abundance_baseline + (1 | subject), REML = FALSE, data = data)
lrtest(mod0, mod1)

and Model 2
mod0 <- lmer(DV ~ Intervention * Time + Abundance + (1 | subject), REML = FALSE, data = data)
mod1 <- lmer(DV ~ Intervention * Time * Abundance + (1 | subject), REML = FALSE, data = data)
lrtest(mod0, mod1)

Question 1: Are those models correct regarding my hypothesis? Question 2: My intention is to capture the effect of the change in bacterial abundance on the Intervention*Time interaction by using the time-varying covariate. Does this make sense?
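One caution on the code as originally written: the `(1 | subject)` random-intercept syntax belongs to lme4's `lmer` (with `REML = FALSE`), not to nlme's `lme`, which is why the models above are shown with `lmer`. For anyone who prefers Python, a rough equivalent of the Model 1 likelihood-ratio comparison is sketched below; the column names are taken from the question and the file name is a placeholder.

```python
# Sketch of the Model 1 likelihood-ratio test in Python/statsmodels,
# assuming a long-format table with columns DV, Intervention, Time,
# Abundance_baseline and subject. "trial_long.csv" is a placeholder.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

data = pd.read_csv("trial_long.csv")

m0 = smf.mixedlm("DV ~ Intervention * Time + Abundance_baseline",
                 data, groups=data["subject"]).fit(reml=False)
m1 = smf.mixedlm("DV ~ Intervention * Time * Abundance_baseline",
                 data, groups=data["subject"]).fit(reml=False)

lr = 2 * (m1.llf - m0.llf)                             # likelihood ratio, ML fits
df = m1.model.exog.shape[1] - m0.model.exog.shape[1]   # extra fixed-effect terms
print("LR =", lr, "df =", df, "p =", stats.chi2.sf(lr, df))
```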
2021-12-09 11:29:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41253143548965454, "perplexity": 12464.076193933019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00175.warc.gz"}
https://www.groundai.com/project/thermal-rounding-of-the-depinning-transition-in-ultrathin-ptcopt-films/
# Thermal rounding of the depinning transition in ultrathin Pt/Co/Pt films

## Abstract

We perform a scaling analysis of the mean velocity of extended magnetic domain walls driven in ultrathin Pt/Co/Pt ferromagnetic films with perpendicular anisotropy, as a function of the applied external field for different film thicknesses. We find that the scaling of the experimental data around the thermally rounded depinning transition is consistent with the universal depinning exponents theoretically expected for elastic interfaces described by the one-dimensional quenched Edwards-Wilkinson equation. In particular, values for the depinning exponent $\beta$ and the thermal rounding exponent $\psi$ are tested, and the present analysis of the experimental data is compatible with $\beta = 0.33$ and $\psi = 0.15$, in agreement with numerical simulations.

###### pacs: 75.60.Ch, 64.60.Ht

## I Introduction

Elastic manifolds weakly pinned by random impurities are ubiquitous in condensed matter physics. Paradigmatic examples are ferromagnetic and ferroelectric domain walls, (1); (2); (3); (4); (5); (6); (7); (8); (9) superconducting vortex lattices, (10); (11); (12) charge density waves, (13) contact lines of liquid menisci, (14); (15) and fractures. (16); (17); (18) No matter how weak disorder is, in all these systems the key factor determining their dynamical properties is the competition between elasticity, which tends “to align” the displacement field into a perfectly flat or periodic structure, and disorder, which tends to distort it. This gives rise to universal glassy properties with characteristic rough structures and complex collective pinning phenomena.

One of the most remarkable, and possibly the most accurate, experimental verifications of the universal glassy dynamics theoretically predicted for disordered elastic systems has been the measurement of the ultra-slow creep motion of magnetic domain walls in ultrathin Pt/Co/Pt ferromagnetic films with perpendicular anisotropy, driven by very small applied magnetic fields (well below the depinning threshold defined below). (1) The so-called creep law states that the mean velocity of an extended elastic interface in a random environment follows a strongly non-linear function of the field,

$V \sim \exp\left[-\dfrac{U_c}{k_B T}\left(\dfrac{H_c}{H}\right)^{\mu}\right]$, (1)

where $U_c$ gives a characteristic energy scale in the creep regime, $H$ is the applied field, and $H_c$ is the so-called depinning threshold. Both $U_c$ and $H_c$ are material-dependent parameters and increase with the strength of the disorder. Physically, this law can be interpreted as an Arrhenius activated motion over typical energy barriers separating different metastable states. The particular form of the creep law is directly related to the divergence of these energy barriers with decreasing applied field. (19); (20); (21); (22); (23); (24); (25) This law is universal in the sense that the exponent $\mu$ only depends on the dimensionality $d$ of the elastic manifold and its equilibrium roughness exponent $\zeta_{eq}$ through the relation

$\mu = \dfrac{d-2+2\zeta_{eq}}{2-\zeta_{eq}}$. (2)

The roughness exponent $\zeta_{eq}$ measures the rate at which the interface width grows with its linear size at equilibrium and is in turn universal: it depends only on $d$, on the nature of the disorder correlations, and on the short-ranged character of the elastic interactions. This makes this experimental system a paradigmatic example of the universal physics predicted for elastic manifolds weakly pinned by random impurities.
The theory for driven disordered elastic systems applied to magnetic domain walls also predicts two additional dynamical regimes as a function of $H$. (26) For large fields $H \gg H_c$, in the so-called fast-flow regime, the disorder acts effectively as a fictive thermal noise and the response is linear,

$V \approx mH$, (3)

with $m$ the domain wall mobility characterizing dissipation processes during the flow motion. The nature of the dissipation processes involved in the fast-flow regime depends on the value of the Walker field $H_W$, (27); (28) separating steady flow for $H < H_W$ and precessional flow for $H > H_W$. The analysis of the fast-flow regime in ultrathin Pt/Co/Pt ferromagnetic films suggests that $H_W \ll H_c$, which means that one only has access to the precessional flow at large fields, and the steady flow cannot be observed because it is well inside the creep regime. (4) Finally, around the critical field $H_c$ a non-trivial zero-temperature depinning regime is expected. In this regime the velocity follows a power-law behavior, $V \sim (H-H_c)^{\beta}$, when approaching $H_c$ from above, with $\beta$ a universal critical exponent, analogous to continuous phase transitions. (29) However, unlike creep motion, the latter scaling was not experimentally tested yet, and to the best of our knowledge the data set reported in Ref. (4) gives the first experimental evidence supporting it, as predicted by theory. (21); (22); (24)

Since $\beta$ is essentially a zero-temperature quantity, it has not been a priori obvious how to obtain it in experiments. This leads us to the fundamental issue of temperature effects on the depinning transition. The naive analogy with standard phase transitions (e.g. magnetization vs temperature with an external magnetic field in the Ising model), thinking of $V$ as the order parameter and $H$ as the control parameter, suggests that $V \sim T^{\psi}$ at $H = H_c$, with $\psi$ a universal thermal rounding exponent. (30); (31); (32); (33); (34); (35); (36); (37) Although the very existence and universality of such a power law has not been rigorously proven for the depinning transition, it is consistent with recent numerical simulations. (33); (35); (36); (37) Furthermore, based on this analogy with standard phase transitions one can argue that the order parameter is a homogeneous function of the external parameters. This leads to a universal scaling functionality for the order parameter around the critical point. In the present case one generally arrives at the thermal rounding scaling form

$V \sim T^{\psi}\, G\!\left(\dfrac{H-H_c}{T^{\psi/\beta}}\right)$, (4)

with $G(x) \to \mathrm{const}$ for small $x$ and $G(x) \sim x^{\beta}$ for large $x$. This scaling behavior is expected to hold close to the critical point, i.e. for $H$ close to $H_c$ and small $T$. Although this scaling form has been successfully tested in numerical simulations, (33); (35); (37) it has not yet been experimentally probed.

An experimental test of Eq. (4) for a system in which the creep law can be accurately verified is important for several reasons. First, it allows one to identify more precisely the non-equilibrium universality class and to further explore the regime of applicability of the elastic theory, which provides precise enough predictions for the exponents $\beta$ and $\psi$. Indeed, although the creep experiments of Refs. (1); (4) are consistent with the universality class of one-dimensional elastic interfaces with short-ranged elasticity and short-ranged random-bond disorder, this is not yet enough to determine the depinning universality class. This can also depend on the precise nature of the elastic interactions and/or the presence of additional terms in the equation of motion which might be irrelevant at the length scales probed by transport in the creep regime or at the equilibrium regime.
Second, recent theoretical predictions show that equilibrium, depinning, and fast-flow types of motion cannot be separated below the depinning threshold, but manifest at different velocity-dependent characteristic length scales, thus breaking the quasi-equilibrium picture of creep motion. (38); (39) Although the leading dominant part of the creep law is not expected to change by these new predictions, the large-scale geometry of the moving interface is expected to display the same depinning roughness exponents that describe it around $H_c$. (38); (39) Finally, and especially for one-dimensional interfaces, the validity of the elastic approximation, which underlies the prediction of Eqs. (1) and (2), is not experimentally evident for the relevant length scales tested around depinning, where interfaces may become boundlessly rough, as found in some minimal models (e.g. the quenched Edwards-Wilkinson equation (40); (41)).

We report here a scaling analysis of the experimental data reported in Refs. (4); (42) for the mean velocity of extended magnetic domain walls driven in ultrathin Pt/Co/Pt ferromagnetic films with perpendicular anisotropy, as a function of the applied external field, for different film thicknesses. We find that the data around the thermally rounded depinning transition are consistent with the universal depinning exponents $\beta$ and $\psi$ theoretically expected for elastic interfaces described by the one-dimensional quenched Edwards-Wilkinson equation. (43); (36); (37)

The manuscript is organized as follows. First, in Sec. II we present the experimental data obtained in Ref. (4) together with the physical parameters of the system that we will use in the scaling analysis. Then in Sec. III we present new estimated values for the critical depinning field based on the scaling analysis of the experimental data, together with a fitted value for the thermal rounding exponent. In Sec. IV we show that the scaling relation Eq. (4) satisfactorily describes the experimental data around depinning. Finally, Sec. V is devoted to conclusions and final comments.

## II Experimental data and key quantities

Briefly, the velocity-field curves were obtained at room temperature by imaging magnetic domains using a high-resolution far-field PMOKE microscope with a CCD camera before and after the application of field pulses of equal magnitude and different duration. The images were subtracted to measure the displacements of the domain wall, and the velocity was obtained from this displacement and the field pulse duration. Further details of the experimental set-up can be found in Refs. (4); (42). Figure 1 shows the experimental results for the velocity-force characteristics of ultrathin Pt/Co/Pt ferromagnetic films. (4) Different curves correspond to different thicknesses $t$ of the Co layer. One can observe different driven regimes on all curves. In particular, the creep, depinning and fast-flow regimes can be clearly identified on each curve. The depinning regime becomes smeared and not abrupt due to thermal activation; this is the so-called thermally rounded depinning transition. Roughly, the sharp increase of the velocity can be interpreted as the crossover from the thermally activated creep regime to the thermally rounded depinning transition. The beginning of this crossover can be identified by the departure from the creep law Eq. (1), only valid for very small fields.
(4) Interestingly, for larger fields, the curvature of the curves changes from positive to negative before crossing over to the fast-flow linear regime, resembling the thermally rounded depinning transition with an exponent $\beta < 1$ observed in simple elastic models. All these features support our attempt to fit these data with the tools used in standard phase transitions, as summarized by Eq. (4).

The main difficulty in attempting a quantitative analysis of the thermally rounded depinning transition from the available data for different samples of the same material is to obtain proper rescaled variables. We need relatively precise values for the material-dependent quantities, such as the characteristic temperature $T_c$, the domain wall mobility $m$, and in particular the critical field $H_c$, which might all depend on the film thickness $t$. To this end we first analyze the fast-flow regime and the creep regime in order to get reasonable bounds for the key parameters.

From the fast-flow regime, where the velocity is proportional to the force, one can get the mobility $m$ by fitting Eq. (3) to the data. This has been already reported in Ref. (4). As we can appreciate in Fig. 1, $m$ seems to increase when increasing the sample thickness $t$. Values for the mobility are shown in Table 1 for future reference in this work. One can also observe in Fig. 1 that the sudden increase in the velocity-field curves beyond the creep regime is reached at different field values. This means that each curve has a thickness-dependent critical depinning field $H_c$. The proper estimation of $H_c$ at finite temperatures is a difficult task. Indeed, in Ref. (4) the authors reported, as a lower bound for $H_c$, the characteristic value $H^*$ at which the non-linear fit of the creep law starts failing. Values for $H^*$, extracted from Ref. (4), are reproduced in Table 1 for reference. As one can appreciate, the general trend is that $H^*$ increases with increasing $t$. Furthermore, as shown in Ref. (4), in the small-field creep regime the velocity is well fitted by the one-dimensional case of Eq. (2), with $\mu = 1/4$ for $d = 1$, as expected for the well-known one-dimensional equilibrium roughness exponent $\zeta_{eq} = 2/3$. Therefore, if the value of $H_c$ is known, then one can use the creep formula Eq. (1) to fit the value of $T_c = U_c/k_B$, the characteristic temperature scale associated to the typical energy scale $U_c$. (19); (20) Previously, the lower bound $H^*$ (instead of the depinning field $H_c$) had been used to fit $T_c$ using the creep formula. (4) This is indeed a good approximation: given that $H^*$ is close to $H_c$ and $\mu$ is small, $T_c$ can only be corrected by a small factor of order $(H_c/H^*)^{\mu}$. The values for $T_c/T$ obtained using $H^*$ are shown in Table 1. As one can observe, $T_c/T$ increases with increasing $t$. This indicates that, although all curves were obtained at room temperature, the effective temperature is decreasing with the film thickness due to a change in the intrinsic disorder energy scale. In consequence, each curve can be thought of as being at a different temperature, allowing one to effectively test the thermally rounded depinning regime.

In this work we need to use the experimental values shown in Table 1 for the scaling analysis of the data in Fig. 1. However, since the thermal rounding scaling of the depinning transition critically depends on the value of $H_c$, we will not use the previously reported value $H^*$, which actually corresponds to a lower bound for $H_c$. Instead, we will use a different approach, trying to get a better estimate for the depinning field $H_c$. This new value will also permit us to estimate the thermal rounding exponent $\psi$, as shown in the following Section.
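Both fits described in this section reduce to linear regressions after a change of variables: $V \approx mH$ in the fast-flow tail, and $\ln V$ linear in $H^{-1/4}$ in the creep regime. A schematic numpy version is sketched below; the function signature, the window arguments and the need for an $H_c$ estimate are editorial assumptions, not the authors' code.

```python
# Schematic of the Sec. II fits for one film: mobility m from the
# fast-flow tail, and Tc/T from the creep law
# ln V = const - (Tc/T) * (Hc/H)**0.25, i.e. ln V linear in H**(-0.25).
import numpy as np

def fit_flow_and_creep(H, V, H_flow_min, H_creep_max, Hc):
    flow = H > H_flow_min
    m = np.sum(H[flow] * V[flow]) / np.sum(H[flow] ** 2)   # LSQ through origin
    creep = H < H_creep_max
    slope, _ = np.polyfit(H[creep] ** -0.25, np.log(V[creep]), 1)
    Tc_over_T = -slope / Hc ** 0.25    # slope = -(Tc/T) * Hc**0.25
    return m, Tc_over_T
```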
## III Experimental estimates for $H_c$ and $\psi$

In the literature, mainly three ways of estimating the value of the critical field from $V(H)$ curves were used: (i) a linear extrapolation to zero velocity, $V \to 0$, from data around depinning; (1) (ii) when choosing a critical velocity $V_c$, one can determine at low temperatures the critical field from $V(H_c) = V_c$; (44); (45) and (iii) the end of the creep regime from below, as a lower bound for $H_c$. (4) The first two protocols are useful when one has velocity-field curves over a limited field range. Here, in order to obtain $H_c$ from the experimental data shown in Fig. 1, we will adopt a different approach, which is based on scaling concepts and relies on the knowledge of a precise value for the depinning exponent $\beta$. Then we will show that this is also compatible with a simple phenomenological determination of the depinning field.

Although the value $\mu = 1/4$ for the creep regime is widely supported by experimental results, (1); (4) an experimentally estimated value for the depinning exponent $\beta$ has not been reported. In fact, to the best of our knowledge, the data in Fig. 1 give the first experimental support to $\beta = 0.33$, as theoretically expected in low-dimensional systems. (24) In spite of the fact that $\beta$ is strictly a zero-temperature quantity, one can still obtain it by fitting the power-law behavior $V \sim (H-H_c)^{\beta}$ at very small temperatures $T \ll T_c$. As noted in the values reported in Table 1, the effective reduced temperature scales $T/T_c$ are always relatively small. If one takes the lower bound $H^*$ and tries to fit $\beta$ from $V \sim (H-H^*)^{\beta}$, the obtained value is systematically larger than the one-dimensional accepted value $\beta = 0.33$, which is based on numerical simulations of the quenched Edwards-Wilkinson equation. (43) The values obtained using $H^*$ for the different film thicknesses all lie above this, and only the value for one of the thicknesses is close to $\beta = 0.33$.

Here, in order to obtain $H_c$, we assume that deviations of the experimental domain wall from the elastic model (such as overhangs, bubbles, etc.) do not play an important role. We also assume that magnetic domain walls, as smooth elastic objects with short-range elasticity and uncorrelated disorder, are well described by the one-dimensional quenched Edwards-Wilkinson equation. Indeed, this model successfully reproduces the creep law observed in this system. With these two assumptions, whose validity will be checked by their consistency, we take the following protocol: we search for the value of $H_c$ which permits us to fit $\beta$ closest to $\beta = 0.33$. This protocol is illustrated in Fig. 2 for one of the films, whose bare velocity-field curve is shown in Fig. 1. For the fitting procedure, we first discard the large-field data points corresponding to the fast-flow regime. Then we propose a first approximate depinning field $H_c^{(0)}$, and we plot the remaining data for $H > H_c^{(0)}$ using scaled variables, $V$ against $H - H_c^{(0)}$, which corresponds to the lower curve of Fig. 2. From this curve we fit the $\beta$ value. Repeating this for increasing values of $H_c^{(0)}$ we obtain a set of points $\beta(H_c^{(0)})$, as shown in the inset of Fig. 2. $\beta$ is decreasing with $H_c^{(0)}$ and intersects the theoretically expected value $\beta = 0.33$. Therefore, from this curve we extract the value of the critical field as that which corresponds to $\beta$ closest to $0.33$; the resulting estimate is shown by the red square point in the inset of Fig. 2.

Following this protocol, we present in Fig. 3 the obtained data for the different thickness values, as indicated. The lowest-field point of each curve corresponds to the lower bound $H^*$ for the depinning field. The vertical arrows indicate the estimated $H_c$ values for the different thicknesses, which are given in Table 1.
We have also tested that this protocol gives a good value for the critical field in numerical simulations, where a precise value of the critical field can be obtained by exact algorithms. Finally, we also show in Table 1 slightly corrected values for the effective inverse temperature scale $T_c/T$ obtained using the new estimate for the critical field $H_c$.

Now that we have new estimated values for the critical depinning fields, we can try to use scaled variables in order to obtain more information on the system. Figure 4 shows the same data as in Fig. 1 but normalized with respect to the fast-flow regime and with respect to the depinning field, i.e. in the scaled form $V/(mH_c)$ against $H/H_c$. One can observe in this figure that different curves are characterized by different effective temperatures. This is compatible with the fact that the disorder strength enters not only through $H_c$ but also through the depinning temperature scale $T_c$, as expected. For increasing values of film thickness each curve seems to be at a smaller effective temperature, approaching the abrupt transition at the critical point. Furthermore, this is consistent with the reported values of $T_c/T$ given in Table 1.

All this information can now be used to experimentally test the predicted scaling behavior of the thermal rounding of the depinning transition. For each curve in Fig. 1 one can now interpolate in order to obtain the velocity value corresponding to the critical field, $V(H_c)$. By plotting this value (normalized to the fast-flow regime) against the effective temperature, one should be able to observe the corresponding power-law behavior defining the thermal rounding exponent, $V(H_c)/(mH_c) \sim (T/T_c)^{\psi}$. Although we have only four points in the temperature scale, corresponding to the different thickness values, fitting the thermal rounding exponent from these data points gives, as shown in Fig. 5, a value which, within its large error bar, is consistent with our proposed value $\psi = 0.15$, obtained by using numerical models for interface depinning. (36); (37)

We end this section showing that the values of the critical depinning field obtained so far with the protocol based on scaling arguments are consistent with a different (phenomenological) determination of the critical field. Recalling that at zero temperature the critical depinning field is the point where a maximum variation of the velocity is observed, one may wonder if this is also true at finite temperatures. In a thermally rounded velocity-field curve the inflexion point gives the field of maximum velocity variation, i.e. the point where $dV/dH$ is maximum, which we identify with a phenomenological critical field $H_{infl}$. In order to use this phenomenological criterion with the bare data of Fig. 1 we use a fifth-order interpolation polynomial around the inflexion point, whose first derivative is given in Fig. 6. For each film thickness the maximum of the $dV/dH$ curve and the associated field value can be well characterized, as shown with symbols in Fig. 6. The obtained values for $H_{infl}$ are also included in Table 1. We also show in the same figure, with vertical dashed lines, the values of $H_c$ previously obtained with the scaling analysis, showing a very good agreement with the phenomenological procedure. Although this phenomenological approach is not a priori justified, the surprising agreement obtained here gives further support to the scaling theory.
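The protocol of this section is essentially a one-parameter search, which makes it easy to restate as code. A compact Python sketch follows; the trial grid, the minimum number of fit points and the reference exponent are assumptions for illustration, not the authors' implementation.

```python
# Sketch of the Sec. III protocol: scan trial depinning fields, fit the
# apparent beta above each trial field, and keep the trial whose fitted
# beta is closest to the 1D quenched Edwards-Wilkinson value.
import numpy as np

def estimate_hc(H, V, H_star, beta_ref=0.33, n_trials=200):
    """H, V: one velocity-field curve with fast-flow points removed;
    H_star: lower bound for Hc from the end of the creep regime."""
    best_gap, best_hc = np.inf, H_star
    for hc in np.linspace(H_star, 0.99 * H.max(), n_trials):
        above = H > hc
        if above.sum() < 3:
            continue
        beta, _ = np.polyfit(np.log(H[above] - hc), np.log(V[above]), 1)
        if abs(beta - beta_ref) < best_gap:
            best_gap, best_hc = abs(beta - beta_ref), hc
    return best_hc
```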
## IV Universal scaling function around depinning

Now that we have obtained experimental estimates for the critical depinning field $H_c$ and the effective temperature scale $T_c/T$ for each film thickness, and that we have also estimated the thermal rounding exponent from the experimental data, we can test the universal scaling form for the velocity within the thermal rounding regime, Eq. (4). In order to do that, we gather all the information and write the scaling form using normalized quantities. Therefore, we will test in the following the universal scaling form

$\dfrac{V}{mH_c} \sim \left(\dfrac{T}{T_c}\right)^{\psi} G\!\left[\dfrac{H-H_c}{H_c}\left(\dfrac{T}{T_c}\right)^{-\psi/\beta}\right]$, (5)

where $G(x)$ is a universal function which is expected to behave as $G(x) \to \mathrm{const}$ for $x \to 0$ and $G(x) \sim x^{\beta}$ for $x \gg 1$ (for large negative $x$ the system is outside the thermal rounding regime). We show in Fig. 7 the velocity-field curves of Fig. 1 in the scaled form, Eq. (5), using the effective temperature scale $T/T_c$ for each film thickness. The proposed scaling of the data is fairly good, making all the experimental curves in Fig. 1 collapse onto a single universal form within experimental tolerances. The scaling is expected to fail at large field values, as one can observe in Fig. 7 for large positive values of the scaled variable. Furthermore, the log-linear scale used in Fig. 8 to represent the same data shows that the scaling works for values of the scaled variable close to zero and starts failing for large negative values. Therefore, as expected, for very large positive values, within the fast-flow regime, or very large negative values, in the creep regime, the experimental data start deviating from the universal function. Finally, in the inset of Fig. 8 we show the same scaling form but in a log-log representation (taking care of negative values) in order to emphasize the good collapse of the data for values of the scaled variable close to zero.

Finally, it is worth mentioning that Nattermann, Pokrovsky and Vinokur (46) proposed a phenomenological form for the full force and temperature dependence of the velocity of a domain wall in a random medium, which includes the full functional dependence $V(H,T)$. This form describes the thermal rounding regime around $H_c$ but it also includes the creep regime, therefore depending on the three exponents $\mu$, $\beta$ and $\psi$. It can be shown that close to the critical depinning field the functional form proposed in Ref. (46) reduces to the scaling form given in Eq. (5). In spite of this, it has been recently shown that this phenomenological functional form does not appropriately describe the numerical data for the thermally rounded depinning region. (37)

## V Conclusions

The experimental data for the velocity against field obtained for the ultrathin Pt/Co/Pt films with perpendicular anisotropy give outstanding support to the scaling ideas behind the depinning transition, particularly emphasizing the agreement with theoretical predictions for the depinning exponent $\beta$ and the thermal rounding exponent $\psi$. Each of the velocity-field curves displays the three characteristic regimes: creep, depinning and fast-flow. The creep regime, already largely accounted for in the literature, gives strong evidence for the creep exponent $\mu = 1/4$. From the fast-flow regime one obtains the mobility, which gives information on the dissipation process during the flow regime and is a key parameter in order to work with normalized quantities. Besides, around the critical depinning field, our analysis unambiguously shows that the depinning exponent is compatible with $\beta = 0.33$.
Another important point is that, since the strength of the pinning disorder potential changes with the sample thickness and the experimental data correspond to room temperature, this amounts to different effective temperatures. In fact, this effective temperature can be directly fitted from the data within the creep regime if the value of the critical depinning field is known. We have presented in this work a scaling analysis of these experimental results based on the scaling ideas behind the depinning transition and, in particular, of its thermal rounding. Under the key assumption that $\beta = 0.33$, as suggested by many theoretical and numerical works, we have obtained improved estimates for the critical depinning field $H_c$. Besides, we have satisfactorily compared the obtained values with the phenomenological critical field estimated from the maximum variation of the velocity. This also gives strong support to the use of the value $\beta = 0.33$ characterizing the zero-temperature depinning transition.

Based on our analysis one can also discard a value of the depinning exponent close to the one in the universality class of the Kardar-Parisi-Zhang (KPZ) equation with quenched disorder. (47); (41) The expected value would be the same as in directed percolation, $\beta \approx 0.64$. In fact, direct integration of the quenched-KPZ equation gives a compatible value. (48) Here we have shown in Fig. 3 that the largest values of $\beta$, obtained at the lower bound $H^*$ for the critical field, are always below this value, and that the scaling properties can be correctly described with $\beta = 0.33$. Therefore, the experimental data cannot be described within the quenched-KPZ universality class.

Furthermore, the new estimates for $H_c$ permit us to obtain an experimental estimate for the thermal rounding exponent $\psi$. This result is compatible with the numerical value $\psi = 0.15$ corresponding to one-dimensional interface depinning within the quenched Edwards-Wilkinson universality class. (36); (37) Besides, this gives us the opportunity to test the universal scaling form for the velocity-field curves in the thermal rounding regime. We have found that the data can be satisfactorily collapsed onto a single curve with the obtained parameters.

Our results show that the experimental data can be satisfactorily described by the exponents of the quenched Edwards-Wilkinson universality class. However, this equation also predicts interfaces with unbounded local relative displacements at large enough length scales, i.e. the roughness exponent characterizing the geometry of the interface at depinning is $\zeta_{dep} = 1.25 > 1$. We therefore interpret our findings as suggesting that the velocity-field curves are effectively testing the small length-scale fluctuations, given by the quenched Edwards-Wilkinson universality class. Whether the behavior at larger scales (i.e. beyond the scales probed by the moving domain walls in the analyzed experiments) yields a crossover to plastic flow or to a new elastic universality class is an interesting open question. In order to clarify this point, experiments at lower temperatures around depinning, focusing on the large-scale geometrical properties, would be necessary.

###### Acknowledgements. The authors thank P. Metaxas, J.-P. Jamet and J. Ferré for providing us the experimental data analysed in this work. T. G. acknowledges support by the Swiss National Science Foundation under MaNEP and Division II. S.B. and A.B.K. are financially supported by CONICET Grant No. PIP11220090100051.

### References

1. S. Lemerle, J. Ferré, C. Chappert, V. Mathet, T. Giamarchi, and P. Le Doussal, Phys. Rev. Lett. 80, 849 (1998) 2. M. Bauer, A. Mougin, J. P.
Jamet, V. Repain, J. Ferré, R. L. Stamps, H. Bernas, and C. Chappert, Phys. Rev. Lett. 94, 207211 (2005) 3. M. Yamanouchi, D. Chiba, F. Matsukura, T. Dietl, and H. Ohno, Phys. Rev. Lett. 96, 096601 (2006) 4. P. J. Metaxas, J. P. Jamet, A. Mougin, M. Cormier, J. Ferré, V. Baltz, B. Rodmacq, B. Dieny, and R. L. Stamps, Phys. Rev. Lett. 99, 217208 (2007) 5. P. J. Metaxas, R. L. Stamps, J.-P. Jamet, J. Ferré, V. Baltz, B. Rodmacq, and P. Politi, Phys. Rev. Lett. 104, 237206 (2010) 6. P. Paruch, T. Giamarchi, and J. M. Triscone, Phys. Rev. Lett. 94, 197601 (2005) 7. P. Paruch and J. M. Triscone, Appl. Phys. Lett. 88, 162907 (2006) 8. J. Guyonnet, H. Béa, F. Guy, S. Gariglio, S. Fusil, K. Bouzehouane, J.-M. Triscone, and P. Paruch, Appl. Phys. Lett. 95, 132902 (2009) 9. J. Guyonnet, H. Béa, and P. Paruch, J. Appl. Phys. 108, 042002 (2010) 10. G. Blatter, M. V. Feigel’man, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, Rev. Mod. Phys. 66, 1125 (1994) 11. T. Giamarchi and S. Bhattacharya, in High Magnetic Fields: Applications in Condensed Matter Physics and Spectroscopy, edited by C. Berthier et al. (Springer-Verlag, Berlin, 2002) p. 314, cond-mat/0111052 12. X. Du, G. Li, E. Y. Andrei, M. Greenblatt, and P. Shuk, Nature Physics 3, 111 (2007) 13. T. Nattermann and S. Brazovskii, Adv. Phys. 53, 177 (2004) 14. S. Moulinet, A. Rosso, W. Krauth, and E. Rolley, Phys. Rev. E 69, 035103(R) (2004) 15. P. Le Doussal, K. J. Wiese, S. Moulinet, and E. Rolley, Europhys. Lett. 87, 56001 (2009) 16. E. Bouchaud, J. P. Bouchaud, D. S. Fisher, S. Ramanathan, and J. R. Rice, J. Mech. Phys. Solids 50, 1703 (2002) 17. L. Ponson, D. Bonamy, and E. Bouchaud, Phys. Rev. Lett. 96, 35506 (2006) 18. M. Alava, P. K. V. V. Nukalaz, and S. Zapperi, Adv. Phys. 55, 349 (2006) 19. M. V. Feigel’man, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, Phys. Rev. Lett. 63, 2303 (1989) 20. T. Nattermann, Phys. Rev. Lett. 64, 2454 (1990) 21. O. Narayan and D. S. Fisher, Phys. Rev. B 46, 11520 (1992) 22. T. Nattermann, S. Stepanow, L. H. Tang, and H. Leschhorn, J. Phys. (Paris) 2, 1483 (1992) 23. P. Chauve, T. Giamarchi, and P. Le Doussal, Europhys. Lett. 44, 110 (1998) 24. P. Chauve, T. Giamarchi, and P. Le Doussal, Phys. Rev. B 62, 6241 (2000) 25. A. B. Kolton, A. Rosso, and T. Giamarchi, Phys. Rev. Lett. 94, 047002 (2005) 26. E. Agoritsas, V. Lecomte, and T. Giamarchi, Physica B(2012), in press. 27. J. C. Slonczewski, Int. J. Magn. 2, 85 (1972) 28. N. L. Schryer and L. R. Walker, J. Appl. Phys. 45, 5406 (1974) 29. D. S. Fisher, Phys. Rev. B 31, 1396 (1985) 30. A. A. Middleton, Phys. Rev. B 45, 9465 (1992) 31. L. W. Chen and M. C. Marchetti, Phys. Rev. B 51, 6296 (1995) 32. U. Nowak and K. D. Usadel, Europhys. Lett. 44, 634 (1998) 33. L. Roters, A. Hucht, S. Lübeck, U. Nowak, and K. D. Usadel, Phys. Rev. E 60, 5202 (1999) 34. D. Vandembroucq, R. Skoe, and S. Roux, Phys. Rev. E 70, 051101 (2004) 35. M. B. Luo and X. Hu, Phys. Rev. Lett. 98, 267002 (2007) 36. S. Bustingorry, A. B. Kolton, and T. Giamarchi, Europhys. Lett. 81, 26005 (2008) 37. S. Bustingorry, A. B. Kolton, and T. Giamarchi, Phys. Rev. E 85, 021144 (2012) 38. A. B. Kolton, A. Rosso, T. Giamarchi, and W. Krauth, Phys. Rev. Lett. 97, 057001 (2006) 39. A. B. Kolton, A. Rosso, T. Giamarchi, and W. Krauth, Phys. Rev. B 79, 184207 (2009) 40. S. F. Edwards and D. R. Wilkinson, Proc. R. Soc. A 381, 17 (1982) 41. A.-L. Barabási and H. E. Stanley, Fractal Concepts in Surface Growth, cambridge university press ed. (Cambridge, 1995) 42. P. J. 
Metaxas, Domain wall dynamics in ultrathin ferromagnetic film structures: disorder, coupling and periodic pinning, Ph.D. thesis, Université Paris-Sud–University of Western Australia (2009) 43. O. Duemmer and W. Krauth, Phys. Rev. E 71, 061601 (2005) 44. A. Kirilyuk, J. Ferré, V. Grolier, J. P. Jamet, and D. Renard, J. Mag. Mag. Mat. 171, 45 (1997) 45. L. Krusin-Elbaum, T. Shibauchi, B. Argyle, L. Gignac, and D. Weller, Nature (London) 410, 444 (2001) 46. T. Nattermann, V. Pokrovsky, and V. M. Vinokur, Phys. Rev. Lett. 87, 197005 (2001) 47. M. Kardar, G. Parisi, and Y. C. Zhang, Phys. Rev. Lett. 56, 889 (1986) 48. C. Lee and J. M. Kim, J. Korean Phys. Soc. 47, 13 (2005)
2020-01-29 05:13:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316022753715515, "perplexity": 668.6202911017278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00145.warc.gz"}
https://math.stackexchange.com/questions/1086386/finding-a-basis-and-the-dimension-of-w-1-cap-w-2
# Finding a basis and the dimension of $W_1\cap W_2$

Suppose $W_1,W_2$ are subspaces of $\mathbb{R}^4$. $W_1$ is spanned by $(1,2,3,4), (2,1,1,2)$ and $W_2$ is spanned by $(1,0,1,0),(3,0,1,0)$. I have to find a basis for $W_1\cap W_2$. I have calculated a basis and the dimension of $W_1+W_2$ by row-reducing the matrix whose rows are these four spanning vectors. I have found that $\dim(W_1+W_2)=3$. So it follows that $\dim(W_1\cap W_2)=2+2-3=1$. But how do I find a basis for it? My textbook provides a long method, by first finding a homogeneous system and row-reducing the corresponding matrix. Is there a shorter way? Thanks.

One option is simply to guess a vector in the intersection. This is not as ridiculous as it sounds; you know the vector must have $0$ in the second and fourth coordinates. Hence a reasonable guess is $(1,2,3,4)-2(2,1,1,2)=(-3,0,1,0)$. Now if this happens to be in the second vector space, you are done. And, in fact, $(-3,0,1,0)=3(1,0,1,0)-2(3,0,1,0)$.
• One answer is because I need a linear combination of the basis of $W_1$ that has a zero in the second and fourth coordinates. The second answer is that 2 is twice 1, and 4 is twice 2, so this particular linear combination will get me what I need. – vadim123 Dec 31 '14 at 7:11
• Because you have proven that this space ($W_1\cap W_2$) is one-dimensional. – vadim123 Dec 31 '14 at 7:15
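For anyone who wants the textbook's systematic method with the linear algebra done by machine, here is a small numpy sketch (variable names are mine): a vector lies in $W_1\cap W_2$ exactly when $a u_1 + b u_2 = c v_1 + d v_2$, i.e. when $(a,b,c,d)$ is in the null space of the matrix with columns $u_1, u_2, -v_1, -v_2$.

```python
# Basis of W1 ∩ W2 via the null space of the matrix [u1 u2 -v1 -v2].
import numpy as np
from scipy.linalg import null_space

u1, u2 = np.array([1, 2, 3, 4.]), np.array([2, 1, 1, 2.])
v1, v2 = np.array([1, 0, 1, 0.]), np.array([3, 0, 1, 0.])

M = np.column_stack([u1, u2, -v1, -v2])
for a, b, c, d in null_space(M).T:   # one column per intersection dimension
    print(a * u1 + b * u2)           # a basis vector, proportional to (-3, 0, 1, 0)
```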
2020-04-06 13:07:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333963394165039, "perplexity": 112.64895058766685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00037.warc.gz"}
https://sxpmaths.wordpress.com/
# Conditional Probability

It seems, to me at least, that probability is one of 'those topics' that students find to be more difficult than others. Each year I have winced slightly as it looms on the horizon of the scheme of work and each year I've varied my approach to try and get them (and keep them) on board. This post focusses on conditional probability, now designated as an A-level (not AS) topic.

## Prior Knowledge

Students should already have a fair amount of experience with tree diagrams and Venn diagrams. They should also have met the terms independent and mutually exclusive – I think that's actually where the problems begin. Why are these concepts so challenging? Perhaps because one (independent) is a term borrowed from everyday language, but in that standard usage it tends to mean 'separate from', which is actually closer to the meaning of mutually exclusive. Secondly, because we try to give students an intuitive and descriptive feeling for what independent means in statistics, they then lose sight of the fact that it has a precise, mathematical definition.

What would I do about this? If I could, I would separate the teaching of the two terms with as much time as possible. We could teach 'mutually exclusive' when we cover sampling techniques (the strata for a stratified sample should be mutually exclusive, for example). More contentiously, I wonder if we shouldn't introduce 'independent' until the time we study conditional probability? Compare:

Definition 1: A and B are independent events if $P(A\cap B)=P(A)\times P(B)$

Definition 2: A and B are independent events if $P(A\vert B)=P(A)$

These are equivalent*, but I would argue the second gives a much better feeling for what statistical independence really means.

## Conditional Probability

When teaching conditional probability this year, I demonstrated it separately in three contexts: two-way tables, Venn diagrams and tree diagrams. Prior to the discussion of conditional probability, of course I wanted to check that students brought their prior understanding to the surface – through a mixture of starter questions on the board (two-way tables; tree diagrams) or some mini-whiteboard work (Venn diagrams). In each context I adopted the same approach: a probability question involving the typical 'given that' phrasing; a highlighted, restricted part of the diagram; and the formula for conditional probability. Each time, I described the highlighter approach as intuitive and the formula approach as 'safer' – not susceptible to misinterpretation. Each time, we also considered the reverse of the conditional statement, ie. we contrasted $P(A\vert B)$ and $P(B\vert A)$. I split these three contexts over three separate lessons. This gave the students the opportunity to focus on each one fully, in isolation, and to also experience the recall of the formula for conditional probability each day. The examples below aren't the exact ones that I used in my lessons, but should be illustrative enough.
## In a two-way table

Question from LearnZillion

Intuitively: $\displaystyle\frac{203}{203+122}=\displaystyle\frac{203}{325}$

More abstractly, $P(\text{survive} \vert \text{first})=\displaystyle\frac{P(\text{survive} \cap \text{first})}{P(\text{first})} = \displaystyle\frac{203/2201}{325/2201}=\displaystyle\frac{203}{325}$

## In a Venn diagram

Image from ck12.org

To find $P(A\vert B)$, we can think intuitively: $\displaystyle\frac{0.3}{0.3+0.2}=0.6$

More abstractly, $P(A\vert B)=\displaystyle\frac{P(A\cap B)}{P(B)}=\displaystyle\frac{0.3}{0.5}=0.6$

## In a tree diagram

Given that a student can construct tree diagrams, what is the probability that they pass? (Image from stats.libretext.org)

Intuitively: just read $0.97$ from the relevant branch.

More abstractly, $P(\text{pass} \vert \text{can construct})=\displaystyle\frac{P(\text{pass} \cap \text{can construct})}{P(\text{can construct})}=\displaystyle\frac{0.78\times0.97}{0.78}=0.97$

Note: there is a significant opportunity for confusion here: the calculation for $P(\text{pass} \cap \text{can construct})$ is the product $0.78\times 0.97$, however this is unrelated to the product formula for independent events. Indeed, the events 'able to construct' and 'pass' are not independent here.

We set weekly class tests in our department and thus get fairly rapid feedback about how well students have picked up new topics. I was pleased to see that my emphasis on using the slightly more abstract (but secure) approach given by the formula for conditional probability made an appearance in almost all of the students' work. I was a little disheartened (but not completely surprised) by the proportion of incorrect answers to the very first question on the test:

1 (a) State what it means for events A and B to be independent. (b) State what it means for events A and B to be mutually exclusive.

We still have 6 or 7 months to sort that out! I'll be returning to conditional probability at every available opportunity. Due to the ordering of our scheme of work, I am now moving on to discrete random variables and thus can include conditional questions there. This is also followed by the binomial distribution (which many of you may well have already taught in the AS year). I find it somewhat surprising that conditional probability has been designed to come after hypothesis testing as, within a hypothesis test, the probability we calculate is conditional on H0 being true. With the order of our own scheme of work, I can emphasise that conditional statement more heavily and hopefully the students will gain a better appreciation for the process we follow in this type of hypothesis test.

## * Footnote

The definitions are not quite equivalent… The reason the first is the preferred definition for independent events is that the second cannot be used in the case where $P(B)=0$. In all other cases, they are equivalent!

# Visualising Coupled Systems of DEs

One of the new topics to have found its way into Further Maths is coupled systems of differential equations. The Pearson textbook begins with this example:

As both x and y are functions of time, t, there is a very appealing dynamic nature to the solutions of these equations. One beautiful way to visualise the family of solutions is to use an online tool called 'Field Play', which I believe was created by Andrei Kashcha (@anvaka). I've only been able to capture still images to post here, but the website shows a striking flow over time.
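If you'd rather reproduce a flow like this offline, a few lines of Python will do it. The coefficients below are illustrative stand-ins (the actual numbers from the Pearson example live in the image above and aren't repeated here); they're chosen to give a stable spiral at the origin.

```python
# Trajectories of a coupled linear pair dx/dt = ax + by, dy/dt = cx + dy,
# started from a grid of initial points: a rough offline Field Play.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, d = -0.5, 1.0, -1.0, -0.5   # eigenvalues -0.5 ± i: stable spiral

def rhs(t, z):
    x, y = z
    return [a * x + b * y, c * x + d * y]

for x0 in np.linspace(-2, 2, 5):
    for y0 in (-2.0, 2.0):
        sol = solve_ivp(rhs, (0, 10), [x0, y0], dense_output=True)
        ts = np.linspace(0, 10, 300)
        xs, ys = sol.sol(ts)
        plt.plot(xs, ys, lw=0.8)
plt.xlabel("x"); plt.ylabel("y"); plt.show()
```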
It’s not too hard to enter the equations you wish to visualise, especially once you have seen a couple of examples. The key points are:

• v.x represents dx/dt, and v.y represents dy/dt
• p.x represents x, and p.y represents y
• If you have an integer coefficient (such as 2) then you must enter it as 2.0

Another nice feature of the site is that the settings you enter are encoded in the website link. Thus each link below goes directly to the visualisation of the equations I’ve entered.

## Example 1: Bears and Fish

Using the equations from Pearson’s example above yields an image like this:

## Example 2: Lotka-Volterra

A more sophisticated model for interacting predator and prey species is given by the Lotka-Volterra equations. (Note these are non-linear, because of the xy term, so A level students are not expected to analyse them!) Here, x represents the population of the prey species, and y the predator.

I found it particularly helpful to project this animation onto my whiteboard so that I could annotate a few features. Note, for example, the location of the origin and labelling of the axes (we were discussing Foxes and Rabbits). The anticlockwise flow follows the typical storyline of alternating fluctuations in predator and prey numbers.

## Example 3: Random!

Another great feature of this site is the ‘Randomize’ button. It will generate its own random set of DEs and show you the flow. It can be great to do this several times in succession and notice some features:

• These systems have ‘equilibrium points’ where there is no motion. These are points where (simultaneously) dx/dt = 0 and dy/dt = 0
• There are limited types of these equilibrium points: nodes, saddles, spirals, stars etc., and they can either be stable (e.g. local trajectories spiralling ‘in’ to the equilibrium point) or unstable (spiralling out etc.). [A classification based on eigenvalues can be found online in notes such as these.]
• The pattern of flow surrounding a number of equilibrium points is strongly reminiscent of weather patterns and might motivate students to learn about Ed Lorenz‘s work on chaos theory.
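To make that last bullet concrete, here is a small Python sketch (my own illustration, nothing to do with Field Play itself) that classifies the equilibrium at the origin of a linear system dx/dt = ax + by, dy/dt = cx + dy from the eigenvalues of its coefficient matrix. The coefficients in the demo are made up.

```python
import numpy as np

def classify_equilibrium(a, b, c, d):
    """Classify the origin of dx/dt = a*x + b*y, dy/dt = c*x + d*y
    (non-degenerate cases only)."""
    eigenvalues = np.linalg.eigvals(np.array([[a, b], [c, d]], dtype=float))
    re, im = eigenvalues.real, eigenvalues.imag
    if np.any(np.abs(im) > 1e-12):            # complex pair: rotation present
        if np.allclose(re, 0.0):
            return "centre (closed orbits)"
        return "stable spiral" if re[0] < 0 else "unstable spiral"
    if re[0] * re[1] < 0:                     # real eigenvalues, opposite signs
        return "saddle (always unstable)"
    return "stable node" if np.all(re < 0) else "unstable node"

# Example: dx/dt = -x + 2y, dy/dt = -2x - y has eigenvalues -1 ± 2i
print(classify_equilibrium(-1, 2, -2, -1))    # -> stable spiral
```

Projecting a ‘Randomize’ flow and then checking the class with a couple of lines like this makes a nice bridge to the eigenvalue notes linked above.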
I think it’s fair to use the Bushism that I misunderestimated the depth of this topic!

# Lesson 1: warm-up

I wrote out a sheet of example statements for paired discussion:

Mathematical prompts for discussion: true or false?

These prompts were designed to include a variety of concepts:

• There are rules such as ‘odd + even = odd’ that students have known since junior school, and perhaps have never deeply questioned.
• Some of the statements fail in particular instances (the top-left is false when p=2, for example) so we have the notion of a counterexample.
• A finite set of cases for checking (top-right) so we have the notion of proof by exhaustion. Which we can then contrast with statements where there are infinitely many cases to check, and thus would need a different approach. [There was also the opportunity to digress and mention Goldbach’s conjecture.]
• Some statements can be proved using a direct proof by deduction. These will be a nice way to introduce concepts such as writing odd integers in the form $2k+1$, where $k\in\mathbb{Z}$
• I included the converse of Pythagoras so that we could talk about the direction of an implication. Here, the converse is itself a true theorem but that is not always the case.

# Lesson 1: our first proofs

Of course I wanted to start simple. We were going to prove that the square of an odd integer is itself an odd integer. We discussed how that statement feels rather self-evident and I allowed the students to raise their “Do we really have to prove it? Isn’t it obvious?” concerns. And so we began…

Claim: If n is an odd integer then n² is an odd integer.

Proof: If n is an odd integer, then n = 2k + 1, where k is an integer.

Ok, stop right there! “How do we know n = 2k + 1? Don’t we have to prove that, too?”. (Along with the usual lone voice of “what does integer mean again?”) We had to interrupt the proof and have an important discussion about definitions and their fundamental importance in maths. Fortunately, I’d prepared a handout with some key terms on it, and this was the ideal time to give that out. Now we could continue our proof.

Then, n² = (2k+1)² = 4k² + 4k + 1 = 2(2k² + 2k) + 1, which is odd, as 2k² + 2k is an integer.

Ok, stop right there! “Now we can just say that 2k² + 2k is an integer?! Shouldn’t we have to prove that?”. And, it’s a fair point: if I’m taking the trouble to prove something like ‘odd times odd is odd’ then shouldn’t I also prove that ‘integer times integer is integer’?

So then we had to digress into axioms, which I can honestly say I was not prepared for. [For those of you who haven’t had the pleasure of this at university, the integers are essentially constructed to satisfy certain fundamental rules: the sum of two integers being an integer is such an axiom – it’s an assumption on which our arithmetic is built, not something to be proved from more basic principles.]

I gave Euclid’s “parallel postulate” as a classic example of a ‘take it or leave it’ axiom. Take it, and you can study Euclidean geometry and triangles have an interior angle sum of 180 degrees etc. Leave it, and you can study non-Euclidean geometry which is way cooler. However, if you don’t accept the ‘integer + integer is integer’ axiom then good luck to you on your lonely journey.

There was just enough time left in Lesson 1 to finish with a direct proof of: if n is even, then n² is even, and that seemed to reassure students that these proofs really aren’t that bad.
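For completeness, here is how that final proof might be set out, in the same style as the odd case above (this is standard material, not something extra from my lesson):

Claim: If $n$ is an even integer then $n^2$ is an even integer.

Proof: If $n$ is even, then $n=2k$, where $k$ is an integer. Then $n^2=(2k)^2=4k^2=2(2k^2)$, which is even, as $2k^2$ is an integer (that axiom again!).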
Until you get to Lesson 2…

# Teaching Mechanics

Initially motivated by a brief discussion with one of my colleagues, towards the end of last term I ran a few Twitter polls to gauge different teachers’ conventions when it comes to teaching Mechanics. We’ll take a look at them one by one:

## Poll 1: Newton’s second law

The poll also attracted a thread of nearly 20 comments. Curiously, the majority opinion in the comments was the minority opinion in the final poll result (the second option)! Other comments made reference to the fact that we should of course label equations to indicate that we have resolved (i.e. applied Newton’s second law) on a particular object.

My personal opinion is the second option: R – mg = 0. It is a specific instance of Newton II where the acceleration happens to be zero. Why would we deal with the equilibrium case so differently and give students two (related but) different ways to tackle problems? [In fact, I sometimes wonder if we should teach some simple acceleration problems before looking at equilibrium as a special case?!]

Interestingly, all the comments that supported option 2 argued a similar point. The comments supporting option 1 didn’t really argue why it might help students. One commenter pointed out it could reduce sign errors and another liked developing students’ intuition around the balance of forces. (I have another worry here that students may forget that even if forces are ‘balanced’, the object could still be in motion.)

### My main point

• Neither is incorrect, but I would teach the “F = ma” approach for all situations, including equilibrium, so students only have to learn a single method. It also means they pay attention to the language used and deduce from ‘in equilibrium’ that a = 0.

## Poll 2: Acceleration Arrows

A very common student misconception in Mechanics is that the direction of acceleration matches the direction of movement. However, wherever possible, I think it makes sense to draw the acceleration arrow to match the direction of motion and this goes hand in hand with interpreting ‘deceleration’ to mean a negative acceleration.

If you show students (or teachers) the diagrams above without any context, our initial intuition would be to think of A as travelling upwards but slowing, and B as travelling downwards and getting faster. Of course, either of the diagrams could match either of those situations and we wouldn’t know which until we were given some information about the object’s velocity.

### My main point

• Neither is incorrect, but wherever possible I try to draw diagrams in a way that appeals to our intuitive sense of what is happening. (And never deduce the direction of motion from a force diagram!)

## Poll 3: Collisions

I’ll be honest here, and I admit that I deliberately included Option 1 as a bit of a red herring. Following on from the idea of intuition in the last poll, it’s perfectly reasonable to consider that the particles might travel in opposite directions after the collision, but in fact they don’t. Of course, we don’t know this for certain until we’ve solved the equations, so we just get a good diagram drawn and then trust the maths to get the signs right for us at the end.

I think one risk with Option 1 is whether all students would get the ‘speed of separation’ correct, but maybe I should have more faith. I think collisions is the one area where I haven’t fully made up my mind, and part of the purpose of this poll was to gauge a majority opinion and to see if that would make me reconsider my own view.
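As a throwaway illustration of ‘trusting the maths’, here’s a sketch in Python/SymPy (my own, with made-up masses and speeds – not taken from the poll) in which both final velocities are measured in the same positive direction; a negative value in the answer simply tells us that particle ends up moving the other way.

```python
import sympy as sp

# Unknown final velocities, both measured in the same (positive) direction
vA, vB = sp.symbols('v_A v_B')

# Made-up data: masses, initial velocities (B moving towards A, so negative),
# and coefficient of restitution e
mA, mB = 2, 3          # kg
uA, uB = 4, -1         # m/s
e = sp.Rational(1, 2)

momentum    = sp.Eq(mA*uA + mB*uB, mA*vA + mB*vB)   # conservation of momentum
restitution = sp.Eq(e, (vB - vA) / (uA - uB))       # e = (v_B - v_A)/(u_A - u_B)

solution = sp.solve([momentum, restitution], [vA, vB])
print(solution)   # the signs fall out of the algebra automatically
```

Here the algebra gives $v_A=-\frac{1}{2}$ and $v_B=2$, so particle A rebounds – no guessing of arrow directions was needed up front. My own classroom approach is described next.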
My approach to teaching this topic over the past few years has been to use Option 3: drawing all of the velocity arrows in a common direction, and then assigning them negative values if we know a particle is travelling in the opposite direction. My reasoning is that this helps get signs correct in the conservation of momentum equation and, moreover, I teach the restitution equation symbolically as $e=\dfrac{V_B-V_A}{U_A-U_B}$ rather than using the phrasing ‘speed of separation’ and ‘speed of approach’.

I’m conflicted because this plays along the lines of giving students a single all-purpose method (as discussed with Newton II above) but somewhat goes against the idea of drawing a diagram that matches our intuition (as discussed with acceleration arrows above).

### My main-ish point

I really am torn here. I think I’ll continue to teach Option 3, with consistency winning over intuition. (To be honest, some days I’m just glad students draw diagrams at all!) I’m very willing to debate this one further though.

## In summary…

• Prioritise consistent approaches to solving a whole class of problems, reducing the number of methods students have to learn
• As far as possible, draw diagrams to match intuition (but don’t apply the converse and assume a diagrammed system is behaving as you intuit!)
• We all nag students to draw good diagrams and to label their equations. It took me a long time to figure out that I should isolate these skills in tests if I really want students to take notice. (E.g. give the setup for a full mechanics problem, but only ask for a force diagram; or provide a force diagram and ask for a labelled system of equations (and for them to remain unsolved); or even, present the diagram and the equations and simply ask students to insert the appropriate labels.) In short, if you want them to develop a specific good habit, then test them on that specific good habit!

### A footnote on Newton II

Of course, we could argue that F = ma is a special case restricted to motion in one dimension, and we should write **F** = m**a** and deal with vectors. But then this is a special case, assuming that a is constant, so we should write **F** = m d**v**/dt. But then this is a special case that assumes the mass is constant, so we should write **F** = d(m**v**)/dt. How do I reconcile this with my approach to consistency and teaching a single method for a whole class of problems? Well, up to (old spec) Mechanics 2, students don’t meet situations with varying mass etc., so there’s no need for that extreme generality. It is always nice to discuss it in passing though, especially once they’ve learned about separable differential equations.

# Multiple Choice Questions in A-level Maths

There appears to have been a period roughly during the 1980s when multiple choice questions (MCQs) featured prominently in A-level exam papers. Precisely when or why they appeared and, indeed, when or why they disappeared again is a mystery to me at the moment. Moreover, the only textbooks I’m aware of that include MCQs are those authored by Bostock and Chandler. (There are also a couple of exam preparation books by Shipton and Plumpton that include them: Multiple Choice Tests in Advanced Mathematics, and Examinations in Mathematics.)

## In the new A-level

MCQs do get a cameo role in the new A-level assessments from AQA. Each of their Specimen papers includes a couple of MCQs worth a single mark each.
Here’s an example from Paper 1:

In the document The Thinking Behind Great Assessment by Dan Rogan (Chief of Examiners), there is a snippet positioning MCQs as one of four features “at the heart of our aims for the qualification”. Later in the document comes some elaboration:

Aside from ease in assessment terms, I’m keen to focus on the pedagogical value of MCQs and the potential for their use in teaching. The above example feels similar to those offered on the Diagnostic Questions site, although I would suggest different potential answers if I wanted to uncover misconceptions (for example, including 2 as an option to identify students who simply observe the coefficient of x).

## Diagnostic Questions

Indeed, here is such an MCQ on gradient from the Diagnostic Questions site [sign-in required when following that link]:

One of the excellent features of Diagnostic Questions is the “insights”, where you see students’ explanations for their answers. For example, this student offers their reasoning for (incorrectly) choosing option B:

## It’s an MCQ, Jim, but not as we know it

In the 1980s A-level papers, MCQs were a much more serious affair. For a start, there were five different types of MCQ. Even interpreting the instructions is no mean feat. I’ll describe each type and then include an example. [Photos from Plumpton & Shipton.]

### Section I (multiple choice)

There is a single correct answer among five options.

Example

### Section II (multiple completion)

Three responses are given (1, 2, 3) of which one or more are correct. The letter representing the student’s answer depends on which are correct:

Example

### Section III (relationship analysis)

These questions comprise two statements (1 and 2) and the student has to determine the logical relationship between them:

Example

### Section IV (data necessity)

A problem is followed by four pieces of information and the student must determine which (if any) piece of information could be omitted and the problem still be solvable:

Example

### Section V (data sufficiency)

These comprise a problem and two statements (1 and 2) in which data are given. The student has to decide if the given data are sufficient for solving the problem. Brace yourself…

Example

## Just a couple more variants…

In Core Maths for A-level, Bostock & Chandler simplify things down to three types:

• Type 1: exactly as Section I above
• Type 2: akin to Section II above, but students simply write the letters corresponding to which items follow from the information in the question.
• Type 3: true/false

In their Further Pure Mathematics book, they have:

• Types 1 and 2 as in their other text
• Type 5 is true/false (as Type 3 in their other text)
• Type 3 corresponds to Section III above
• Type 4 is an amalgamation of Section IV and Section V: a problem is introduced followed by a number of pieces of information. Either all the information is needed (answer A); the total information given is insufficient (answer I); or some information can be omitted without affecting the solution of the problem (the letters for these items must then be specified)

## Use of MCQs in Teaching

I think there is good potential for the use of these more sophisticated MCQs in A-level teaching, although I fear students will either need simplified instructions (for example those used in Bostock & Chandler, rather than the London board papers of the 1980s) or significant training in how to respond.
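For instance, the ‘multiple completion’ response codes could be turned into a quick self-marking exercise. A small Python sketch of the idea follows – and note that the letter mapping here is entirely hypothetical, for illustration only, since the actual code varied by board and paper:

```python
# A toy self-marking helper for 'multiple completion' (Section II) items.
# NOTE: this answer-letter mapping is hypothetical and for illustration only -
# check the rubric printed on whichever paper you actually use.
LETTER_CODE = {
    frozenset({1, 2, 3}): "A",  # all three responses correct
    frozenset({1, 2}):    "B",  # 1 and 2 only
    frozenset({2, 3}):    "C",  # 2 and 3 only
    frozenset({1}):       "D",  # 1 only
    frozenset({3}):       "E",  # 3 only
}

def answer_letter(correct_responses: set[int]) -> str:
    """Translate the set of correct responses into its answer letter."""
    return LETTER_CODE.get(frozenset(correct_responses), "no letter defined")

print(answer_letter({1, 2}))   # -> B
print(answer_letter({2}))      # -> no letter defined (not in this mapping)
```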
In particular, I agree with Plumpton & Shipton’s comments about Sections III–V:

These items enable coverage of topics which are difficult or unfair to examine by longer structured questions. Indeed, these more sophisticated item types are a far better test of mathematical understanding than some longer questions in which candidates may be applying a method or technique which they have learnt but not have properly understood.

Time to begin building a usable bank of these questions so that I can try them out next year!

# Maths in the Sticks 3

We are now delighted to confirm that “Maths in the Sticks” will be running again this year, on Sunday 22nd April 2018. Many of you will now be familiar with the routine: an engaging day of KS5 professional development with a hearty Sunday lunch! I am also very pleased that we have once again partnered with the Further Maths Support Programme to run this year’s event.

## Programme for the Day

All our speakers are now confirmed, though the precise timings of the day may be subject to change. Our opening speaker this year will be Colin Wright (@colinthemathmo) with his hugely engaging talk on the mathematics of juggling. We are also pleased to welcome Daniel Griller (@puzzlecritic), author of the puzzle book Elastic Numbers, and Luciano Rila (@drtrapezio) from UCL who frequently runs sessions for both A level students and teachers. This year we also have Sue de Pomerai (@suedepom) from the FMSP who will be looking at some of the newer (or at least less familiar) content in the new Further Maths pure module.

## Sunday Roast

I have blogged before how CPD really shouldn’t be about the food, but hosting this event on a Sunday was a deliberate decision. Our school serves a roast lunch buffet-style, along with a salad bar and vegan options. I should be able to get a bottle of wine or two open, too!

## Hurtwood House

The event will take place at my school, Hurtwood House. Our postcode is RH5 6NU and, as you will see from Google Maps, we are in quite a remote location in the beautiful Surrey hills. Direct access to the school is only possible by car and, where possible, I would encourage people to drive and/or car share. However, so as not to exclude anyone, I can arrange for a minibus pick-up from Guildford train station at 9am. There is a box on the registration form where you can indicate if you would like this service. (Please check that suitable trains run on a Sunday for you to make it to Guildford before 9!) Of course, there will be a return minibus, departing Hurtwood House between 3 and 4pm at the end of the day.

Please also bear in mind that there is a flight of outdoor steps between the room we are using and the canteen. If you are keen to attend but are concerned about accessibility, then please contact me directly and we can see what arrangements might be possible.

## Registration

We typically have space for around 35 teachers each year. Please complete the registration form on Eventbrite to book your place.

## Job Vacancy – now closed

You might also be interested to know that we are currently advertising a position for a Maths Teacher to join us at Hurtwood. Full details are on our school website here, and the vacancy is also listed on the TES here. (Closing date 2nd February.)

# Water, water, everywhere

A particular issue on my mind recently is the sheer quantity of ‘in-house’ resources that departments create to either do away with textbooks, or at least supplement them with materials tailored to their schemes of work and students’ abilities.
For some, photocopying costs must be paralleling the purchase price of textbooks! I raised this issue in some recent tweets, wishing that there was a way to share this work and its products – in some parallel to the ‘open source’ software movement. The barriers to that are typically quality control; choosing the right platform for collaborative authoring; and keeping a generally consistent format throughout. None of these problems is necessarily insurmountable, but the solutions come with the costs of inconvenience, learning curves, or simply time. (For examples of so-called open textbooks, take a look at Stitz and Zeager’s Pre-calculus book or the ‘approved’ lists collated by the American Institute for Mathematics.)

## Nor any drop to drink?

The main product I envisage being most useful is simply a ‘problem book’ for A level Maths: minimal (if any) theory or worked examples, but masses of questions at different levels. Something along the lines of Drill, Exam Standard and Extension. The more I think about this kind of project, the more I realise how many resources I’m surrounded by. I have textbooks of every style from every decade; past papers from every board for the past decade or more; papers from STEP, AEA and MAT exams which are great for extension problems; worksheets, packs of questions, and even more papers from Solomon, T. Madas, Delphis, Zigzag; the exercises and quizzes from Integral maths… The list feels almost infinite. Last year I almost drowned my students during the summer revision period with huge packs of past papers from every source, supplemented with topic practice too!

## […] is this indeed, The light-house top I see?

Just the other day, I rediscovered the book Graded Exercises in Pure Mathematics which I’d forgotten I even owned. This is the closest approximation to what I think would be my ideal resource: chapters are focussed on each pure topic and the exercises are graded as Basic, Intermediate, Revision and Advanced – almost exactly the same gradings as I had considered. But there’s a downside: published in 2001, this book has missed out on a couple of curriculum reforms and the (mis)ordering of the topics from our current perspective renders it almost as difficult to use as the usual cutting and sticking I do from every other book on my shelf.

## A speck, a mist, a shape, I wist!

I’ve been a big fan of Elmwood Press‘s textbooks for a long while now: they contain some theory and examples but then great sets of exercises of increasing difficulty, exam standard questions and some review. I’ve been in contact with them and they have confirmed that new editions are being produced to match the 2017 specification. I think I’ll hold out hope on these new editions; otherwise I’ll have a big project for myself in the coming academic year, creating what (for me at least) would be the ideal practice and problem book!

### Think, Pair, Share…

• Does your department systematically produce a large number of ‘custom’ resources for teaching A level Maths? (More than just the occasional photocopied exercise from another book?)
• Is most of your supplementary material geared towards giving students questions to work on? Or explaining aspects of theory in a way you prefer over the textbook approach, for example?
• Do particular advantages come from having in-house resources, or might it be possible (over several iterations) to agree on a common ‘best practice’ resource?
https://dmtcs.episciences.org/3344
## Drmota, Michael – Discrete Random Walks on “One-Sided Periodic” Graphs

dmtcs:3344 – Discrete Mathematics & Theoretical Computer Science, January 1, 2003, DMTCS Proceedings vol. AC, Discrete Random Walks (DRW'03)

Discrete Random Walks on “One-Sided Periodic” Graphs

Authors: Drmota, Michael

In this paper we consider discrete random walks on infinite graphs that are generated by copying and shifting one finite (strongly connected) graph into one direction and connecting successive copies always in the same way. With the help of generating functions it is shown that there are only three types of asymptotic behaviour for the random walk. It either converges to the stationary distribution, or it can be approximated in terms of a reflected Brownian motion, or by a Brownian motion. In terms of Markov chains these cases correspond to positive recurrence, to null recurrence, and to non-recurrence.

Volume: DMTCS Proceedings vol. AC, Discrete Random Walks (DRW'03)
Section: Proceedings
Published on: January 1, 2003
Submitted on: May 10, 2017
Keywords: discrete random walk, generating functions, singularity analysis, [INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS], [INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM], [MATH.MATH-CO] Mathematics [math]/Combinatorics [math.CO], [INFO.INFO-CG] Computer Science [cs]/Computational Geometry [cs.CG]
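As an informal intuition aid (a toy sketch of my own, unrelated to the paper's generating-function methods): a biased walk on the non-negative integers, reflected at 0, already exhibits the three regimes the abstract describes – negative drift settles into a stationary distribution (positive recurrence), zero drift wanders like a reflected Brownian motion (null recurrence), and positive drift escapes to infinity (non-recurrence).

```python
import random

def reflected_walk(p_up, steps, seed=0):
    """Biased walk on {0, 1, 2, ...}: step up with probability p_up,
    otherwise step down, reflected at 0. Returns the final position."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        if rng.random() < p_up:
            x += 1
        elif x > 0:
            x -= 1
    return x

for p in (0.4, 0.5, 0.6):   # negative, zero and positive drift
    finals = [reflected_walk(p, 10_000, seed=s) for s in range(50)]
    print(f"p_up={p}: typical final position ~ {sum(finals) / len(finals):.0f}")
# p_up=0.4 stays near 0 (stationary distribution), p_up=0.5 wanders at
# scale ~ sqrt(steps) (reflected Brownian motion), p_up=0.6 grows linearly.
```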
https://www.emerald.com/insight/content/doi/10.1108/JHLSCM-09-2020-0077/full/html
# The COVID-19 epidemic and evaluating the corresponding responses to crisis management in refugees: a system dynamic approach

Fahimeh Allahi (Business School, University of Kent, Canterbury, UK)
Amirreza Fateh (DIME, University of Genoa, Genoa, Italy)
Roberto Revetria (DIME, University of Genoa, Genoa, Italy)
Roberto Cianci (DIME, University of Genoa, Genoa, Italy)

ISSN: 2042-6747

Article publication date: 25 January 2021
Issue publication date: 4 May 2021

## Abstract

### Purpose

The COVID-19 pandemic is a new crisis in the world that has caused many restrictions, from personal life to social and business activity. In this situation, the most vulnerable groups, such as refugees living in camps, face even more serious problems. Therefore, a system dynamic approach has been developed to evaluate the effect of applying different scenarios and to find the best response to COVID-19 for improving refugees' health and education.

### Design/methodology/approach

The interaction of several health and education factors during an epidemic crisis among refugees leads to behavioral responses that consequently make crisis control a complex problem. This research has developed an SD model, based on the SEIR model, that represents the public health and education system of Syrian refugees in Turkey affected by the COVID-19 virus, and considers three policies of isolation, social distance/hygiene behavior and financial aid using the available data from various references.

### Findings

The findings from the SD simulation of applying three different policies show that public health and education systems can improve much more by implementing the policy of social distance/hygiene behavior, which has a significant impact on the control of the epidemic in comparison with the other two responses.

### Originality/value

This paper contributes to humanitarian organizations, governments and refugees by discussing useful insights. Implementing social distance and hygiene behavior policies would help sharply reduce deaths among refugees, and public financial support would improve distance education during this pandemic.

## Citation

Allahi, F., Fateh, A., Revetria, R. and Cianci, R. (2021), "The COVID-19 epidemic and evaluating the corresponding responses to crisis management in refugees: a system dynamic approach", Journal of Humanitarian Logistics and Supply Chain Management, Vol. 11 No. 2, pp. 347-366. https://doi.org/10.1108/JHLSCM-09-2020-0077

## Publisher

Emerald Publishing Limited

## 1. Introduction

The COVID-19 pandemic started in Wuhan, China in December 2019 (WHO, April 2020a, b), and it was recognized as a pandemic on March 11, 2020 (WHO, January 2020; WHO, March 2020a, b). It has spread almost all around the world and, given its impacts, it is crucial to consider vulnerable people, in particular refugees and other people who are living in camps. They usually face complicated challenges in their lives and need extra attention during crises like COVID-19. The virus can spread swiftly through direct contact with an infected person, or through indirect contact where the infected person has been coughing, sneezing or talking (WHO February 2020; ECDC March 2020). Thus, it is also essential to slow the speed of spreading; reaching this goal requires defining comprehensive instructions and policies in order to minimize the death toll (WHO March 2020a, b; WHO April 2020a, b; IASC, 2020).
Because of its importance, a joint group of the International Federation of Red Cross and Red Crescent Societies (IFRC), the International Organization for Migration (IOM), the United Nations High Commissioner for Refugees (UNHCR) and the World Health Organization (WHO) have developed interim guidance for humanitarian settings. This document addresses the needs of those in camps, camp-like settings and the surrounding host communities in scaling up readiness and response operations for the COVID-19 outbreak through effective multi-sectorial partnership (WHO March 2020a, b; IASC, 2020). Based on the findings and WHO guidelines (WHO, April 2020a, b; WHO, March 2020a, b; WHO, February 2020), if the affected countries can prevent the spread of the virus in a short time, the virus can be controlled and the death rate will fall sharply in a short time. Accordingly, most of the affected countries have developed instructions and new policies since the start of the crisis in December 2019, but because of the limited time, there is not yet adequate research on the social responses.

There are several different ways to predict how social behavior acts on the spread of the disease over time, and simulation models are particularly useful for forecasting the effect of newly developed policies in disasters such as Ebola (Sharare et al., 2016). Merler et al. (2015) modeled the movements of individuals, including patients not infected with the Ebola virus, seeking assistance in health-care facilities. They calibrated an agent-based model through the Markov chain Monte Carlo approach. The model predicted Ebola virus transmission parameters and examined the effectiveness of interventions such as availability of Ebola treatment units, safe burial procedures and household protection kits. They estimated that 38.3% of infections (95% CI 17.4–76.4) were acquired in hospitals, 30.7% (14.1–46.4) in households, and 8.6% (3.2–11.8) while participating in funerals.

Recently, four main modeling methods – discrete event simulation, agent-based modeling, hybrid simulation and system dynamics – have been considered in research regarding the COVID-19 challenge, with discussion of how simulation tools can help decision-makers make better decisions for this very complicated crisis (Currie et al., 2020). A structural review of the most relevant humanitarian publications associated with system dynamics since 2003 has been carried out to explain how the SD model can help humanitarian organizations develop their complex policies with professional and reliable methods (Allahi et al., 2018). In addition, a system dynamic model was developed to assess complex impact factors regarding refugees' dignity in order to provide optimal support to beneficiaries. The developed model described a decision-making framework with a high-level overview of the interactions between the economy, education, health and the psychological aspects of the recipient's life (Allahi et al., 2020). Because of model complexities and overlaps between variables, using a structural simulation can play an important role in producing a comprehensive simulation and, consequently, comprehensive policies. There has also been some research on developing models to support better decisions on rebuilding the supply chain during long-term global pandemics such as COVID-19. These kinds of models can evaluate the epidemic's impacts on supply chain management (Queiroz et al., 2020; Ivanov, 2020), which is relevant in the humanitarian area and to the supply management of hygiene and food materials for refugees.
While the availability of essential goods is important during the pandemic, what can be the best policy response in terms of reducing the COVID-19 impact on health and education, given refugees' limitations such as lack of space in camps? Existing studies on the COVID-19 pandemic have not provided specific directions on ways to evaluate policies to reduce the infection rate and, as a result, the death rate of vulnerable communities such as refugees. As mentioned in the literature, there is a research gap in simulating the effect of a pandemic such as COVID-19 on refugees and identifying the best possible policies for reducing the mortality rate. This study explores the research question of what a dynamic pandemic model for refugees would look like, and which policy best reduces the impact of COVID-19 on significantly affected aspects of refugees' lives, given their limitations during the pandemic such as accommodation, hygiene facilities, etc. Therefore, the study subject is: evaluation of different response policies to the pandemic and the impact of each response policy on the health and education growth of the Syrian refugees affected by COVID-19.

We address the research question by developing a system dynamic model for assessing the impact of three different policies on refugees' health and education. This is examined through hypotheses of different scenarios of isolation, social distance/hygiene behavior and financial aid policies. The proposed model is based on the SEIR model and relies upon real statistical data. The results of the study enable us to state that public health and education systems can grow more by implementing the policy of social distance/hygiene behavior. It has a significant impact on the control of the epidemic in comparison with the other two responses: without it, the number of deaths may reach up to 3% of refugees, whereas applying the defined policy would keep it below 1%, which is a considerable difference.

The rest of this study is structured as follows. In section 2, we elaborate on the methodology and system dynamic development and discuss the development of a multilayer causal loop and stock-flow model to simulate the current epidemic impact on refugees. In section 3, we describe the model verification and related data. Section 4 maps out a scenario analysis and some directions of a future research plan regarding COVID-19. We conclude the paper in section 5 by summarizing the most remarkable insights.

## 2. Methodology and system dynamic model development

SD is a simulation methodology originally introduced by Forrester (1958); it emphasizes understanding the connections between the elements of a system and demonstrates dynamic behaviors created through multiple interacting feedback loops (Sterman, 2000). This approach is particularly useful in describing policy implementation and the reasons for changing plans, as it allows policymakers to recognize detailed components and their complex relations and, as a result, the potential effects of alternative strategies, in order to make more desirable decisions based on directions from the model (Revetria et al., 2008; Bruzzone et al., 2014; Briano et al., 2010). In this paper, a system dynamics (SD) approach is employed to study the impact of COVID-19 spread on Syrian refugees' population, education and health, and also to identify how applying some alternative policies can have different effects on managing epidemics and crafting public health and education responses and policies.
In order to develop the system dynamics model, the main factors in Syrian refugees' lives affected by COVID-19 should be identified and illustrated within the causal loop diagram (CLD). The CLD systemically demonstrates and interprets the dynamic complexity and significant feedback loops associated with the numbers of infected, recovered and dead from COVID-19 that affect the public health and education system. Reviewing the literature on the importance of addressing the specific needs of refugees in the COVID-19 pandemic (WHO, 2020), this section evaluates and brings awareness of the subsequent impact of the spread of COVID-19 on refugees' lives, categorized under the main subjects of health and education, and discusses relevant decision-making policies like camp and isolation effectiveness, social distance and hygiene behavior, and financial aid impacts on the education service. The interactions between factors are described in detail, and alternative improving factors applied to these interactions are discussed. By using the SD model, dynamic complexity perspectives of all the interventions among the variables can be described. The SD model captures the trends of susceptible, dead, infected, recovered and emigrated refugees, and the number of children with access to distance education during the outbreak; by applying different policies and comparing the results, the best response is identified.

For the quantitative model, a stock and flow diagram has been developed to run simulations and validate our primary assumptions specified in the CLD; for this purpose, Vensim software was used to demonstrate and understand the effect of changes in policies on the improvement of the public health and education system of refugees in Turkey. As the discussion progresses, a causal loop diagram and stock-flow model will be provided as a result of this section.

### 2.1 Causal loop qualitative model of responses of COVID-19 effects on Syrian refugees' health and education

CLDs are visual qualitative modeling aids for imagining how various variables in a system are interrelated; they explain the feedback loops of complex systems by using links between the variables (Allahi et al., 2018). The key concepts for comprehending CLDs include the polarity of the arrows and the overall feedback loop, which explains what would happen if there were a change, while the detailed behavior of the variables is not described (Sterman, 2000). CLDs have long been used in academic work and are commonly applied in organizations to quickly capture assumptions about the causes of dynamics (Revetria et al., 2008). The outcomes of connections among the variables can be further simulated through the model to assess and improve the understanding of complex systems.

Figure 1 presents the causal loop diagram of the feedback loops and causal connections between the described factors in a system and provides a framework to better understand the multiple implications of decisions in this complex situation, involving many interconnected factors that are responding to the crisis of COVID-19. The causal interconnections corresponding to these factors are defined, and the key responses and financial aid are specified with green and dark red color, respectively.
The figure displays eight basic structure loops in different colors, which are summarized in Table 1. A plus sign on a link indicates that the effect variable changes in the same direction as the cause variable, and a minus sign indicates that the two nodes change in opposite directions. When the number of negative links in a loop is odd, that loop signifies a negative feedback loop (balancing – "B"), which is associated with goal-seeking behavior; otherwise, it is a positive feedback loop (reinforcing – "R"), which is associated with exponential increases/decreases. Reinforcing and balancing loops can be combined to describe more complex behavior, and balancing loops try to lead the system to the desired state and keep it there (Sterman, 2000; Allahi et al., 2020).

This qualitative model has been developed based on the "SEIR model" (SEIR is an acronym referring to susceptible, exposed, infectious and removed or recovered). Susceptible indicates refugees who can get the COVID-19 infection; exposed refers to asymptomatic infected refugees; infected refugees have symptoms of infection and can spread the virus; and recovered indicates those previously infected who are already healthy and immune to COVID-19 (Rachah and Torres, 2018).

The World Health Organization indicates that the COVID-19 virus is transmitted through contact between people, and the contact rate rises when people ignore the social distance of 2 m. Also, interventions that were effective at reducing the spread of the COVID-19 virus within the systematic review included health care facilities, hand washing for a minimum of 11 times daily, sanitation and hygienic behaviors, which are essential to protecting human health during an infectious outbreak and will further help to prevent human-to-human transmission of the COVID-19 virus. Hence, respecting social distance and applying hygiene behavior are effective responses to prevent the spread of the COVID-19 virus and decrease the contagion rate, which is designated by green color in the causal loop (Jones and Carver, 2020; WHO (b), 2020).

The reinforcing feedback loop signified by R1 and dark red color in Figure 1 shows that the infection rate grows with a high contagion rate, so the number of those exposed to the virus will rise. In addition, as exposed refugees develop symptoms, the number of infected will increase, consequently increasing the number of deaths. Furthermore, emigration of Syrian refugees is rising because of the coronavirus, due to fear and mental stress that the COVID-19 outbreak could have harmful consequences such as death (Clark et al., 2020). The impact of dying from the virus, together with the increasing number of migrated people, will reduce the number of susceptible. As the susceptible population decreases, the contagion rate will decline. Also, in the balancing feedback loop B1 (black color), which is identified as the depletion loop in the SEIR model, as the number of susceptible refugees diminishes, the number of exposed will decrease and the number of infected will gradually level off, reducing the numbers of dead and migrated people and finally causing a rise in the number of susceptible people. The global spread of COVID-19 has overwhelmed the health system and caused widespread social and economic disruption in humanitarian situations (Heymann and Shindo, 2020; WHO, 2020).
Since humanitarian organizations have been required to stay home, they have stopped financial support aid, leaving refugees reliant on local governments, which provide poor support given the COVID-19 situation in their own countries (Vlagyiszlav, 2020). With the COVID-19 outbreak in Turkey continuing and refugee health and education being threatened, there is a need for ongoing financial aid from humanitarian organizations to support Syrian refugees in meeting essential service needs such as health and education (UNHCR, 2020). By getting more financial aid from humanitarian organizations, and accordingly more financial aid to strengthen the health system response to COVID-19, health service capacity – one of the most important factors in the health system, expressed as the number of temporary health care tents, beds, ventilators and staff – will increase. Health emergencies like this outbreak, in particular, strain health systems and their ability to deliver health care services; when health service capacity increases, the health service strain decreases.

The balancing feedback loop denoted by B1 and red color in Figure 1 shows that as health service strain declines the mortality rate drops and, consequently, the dying rate diminishes, generating an increasing goal-seeking behavior in the number of infected. Besides, as the number of infected increases, the number of serious cases will rise and put more strain on health services (WHO (b), 2020). Based on the reported data of COVID-19, the elderly and those with underlying diseases become more seriously ill once infected, thereby increasing the mortality rate (Guan et al., 2020); consequently, a vulnerability rate factor is assumed for these groups of refugees in this paper. On the other hand, a non-vulnerability group can be assumed for children and young, healthy refugees, who have a lower mortality rate in the outbreak (WHO (b), 2020). As the vulnerability rate increases, the mortality rate will rise and the number of recovered refugees will decrease, with a lower recovery rate and more infected people, which consequently will increase the number of serious cases in need of health service and raise the strain on the health sector (loop R5 with light purple color).

People affected by humanitarian crises, particularly refugees displaced and/or living in camps and camp-like settings, are faced with this challenge, and vulnerable refugees should be taken into consideration more than others when planning and implementing policies to control the COVID-19 spread. Refugees are frequently ignored and may face challenges such as lack of camp space, as well as in accessing education and health services. Presenting an inclusive health system and the connected factors affected by the COVID-19 spread ensures refugees' requirements in this area are covered. Many refugees in humanitarian situations face difficulties finding proper accommodation and settle in formal or informal collective sites, such as camps or informal and spontaneous settlements, all of which may serve as temporary or long-term shelter (WHO (a), 2020). WHO has published patient management guidance to inform governments that those with COVID-19, whether with mild or severe symptoms, need immediate isolation and appropriate accommodation to reduce the number of active infected (the effective number of infected people, after adjusting for a reduction in infectiousness from isolation).
Therefore, some amount of financial aid should be spent on camps to increase the availability of camp capacity and the effectiveness of camps and isolation. Moreover, the impact of extra camp capacity on enhancing responses like camp and isolation effectiveness (green color), and on reducing the number of active infected, infected and exposed refugees, can be visualized in feedback loops R3 and R4.

The background of online learning in refugee camps starts with the refugee crisis; the expanding COVID-19 outbreak has driven decision-makers to shut down schools, and many courses have been shifted to online lectures. However, the necessary facilities for online education, like teachers and digital devices for refugees, can be costly, and it is essential to support them financially. Since March 2019, over 28,000 Syrian refugees in Turkey have received online language courses through e-learning methods, but it would be better to cover more students with more funding (Reinhardt, 2018). The education elements have been highlighted in blue. The positive feedback loop labeled as R6 represents the effects of financial support on refugee children's education and illustrates the requirements of online education services in the COVID-19 pandemic. In the next subsection, a stock and flow quantitative model is presented.

### 2.2 Stock flow quantitative model of responses to COVID-19 effects on Syrian refugees' health and education

To estimate the early dynamics of the COVID-19 effect and the subsequent responses, system dynamics concepts such as stocks, flows and feedbacks are indispensable for defining the state of the system (Sterman, 2000). The base of the stock-flow model presented in Figure 2 is derived from the susceptible-exposed-infected-recovered (SEIR) model (http://vensim.com/coronavirus/) and developed into a new model for evaluating the public health and education system of Syrian refugees in Turkey in the COVID-19 epidemic, and for investigating responses like isolation, hygiene behavior and camp capacity to enhance the health and education system. In order to test our dynamic hypothesis outlined in the discussed causal loop model, a quantified stock and flow diagram was developed using Vensim software and is presented in Figure 2. Furthermore, the modeling process, simulations and sensitivity analyses were performed using Vensim DSS software v. 5.7a.

In the quantitative stock-flow model, the refugee individuals were divided into six stocks, as follows: "Susceptible", "Emigrated", "Exposed" (but not yet infected), "Infected", "Recovered" and "Death". It is assumed that the population susceptible to COVID-19 is the total number of people who will eventually be infected. In addition, some of the susceptible population has been emigrating to Europe due to fear of death from COVID-19 (Clarke, 2020), which is indexed by the Emigrated stock, and remaining individuals with symptoms of the disease are considered infected people. A dynamic model of the COVID-19 epidemic is proposed to provide a more reliable view of the state of the disease based on existing data. The generic SEIR framework consists of the endogenous changes in social distance, hygiene behavioral risk reduction, camp capacity, isolation, camp effectiveness, reaction time and financial aid for the health and education system. In addition, it would be possible to see changes in the numbers of dead, recovered and infected people using this framework.
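To make the SEIR backbone concrete, here is a minimal discrete-time sketch in Python of the four classic stocks. This is a generic illustration with made-up parameter values, not the authors' Vensim model, which adds emigration, isolation, camp capacity and education stocks on top of this core.

```python
import numpy as np

def seir(beta, sigma, gamma, N, I0=1, days=360, dt=1.0):
    """Euler-integrated SEIR stocks.
    beta  = contagion rate (contacts x infectivity per day),
    sigma = 1 / incubation period, gamma = 1 / disease duration."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    history = []
    for _ in range(int(days / dt)):
        new_exposed    = beta * S * I / N          # susceptible -> exposed
        new_infectious = sigma * E                 # exposed -> infected
        new_removed    = gamma * I                 # infected -> recovered/dead
        S -= dt * new_exposed
        E += dt * (new_exposed - new_infectious)
        I += dt * (new_infectious - new_removed)
        R += dt * new_removed
        history.append((S, E, I, R))
    return np.array(history)

# Illustrative values loosely echoing the paper's inputs: R0 ~ 3.3 and a
# 14-day disease duration give beta ~ 3.3/14; N is a rough refugee population.
run = seir(beta=3.3 / 14, sigma=1 / 5.5, gamma=1 / 14, N=3_600_000)
print("peak infected ~", int(run[:, 2].max()))
```

In this framing, a social distance/hygiene policy would enter by scaling beta down, while an isolation policy would reduce the infectious population that appears in the new_exposed term – the paper's "Active infected" variable.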
In addition, it is assumed that the social distance factor is defined as a slope of decline in contacts as the infection penetrates to less-connected portions of the social network, and the hygiene behavioral risk factor refers to the fractional reduction in risk from social distancing, increased handwashing and other behavioral measures.

While other critical requirements of refugees, such as health and sanitation, are being responded to, educational demands cannot be ignored, as these have an equally harmful impact if neglected during the global COVID-19 pandemic. As governments' finances are being strained and out-of-school children face greater risks like family violence, child labor and forced marriage, the delivery of education online, as soon as possible, must also be a topmost priority in responding to this crisis and its consequences (ECW, 2020). Overall, current support received from humanitarian organizations in response to COVID-19 is low for the half of the refugee population who are children, and this should be reflected in the simulation (Nott, 2020). As a result, another stock named "Access to Distance Education Service" is considered in the model, and the "Desired access to education service" variable denotes the number of refugees that should have access to education, assumed to be the whole child population.

The COVID-19 outbreak directly affects the mental and physical health of refugees, which leads to death, and all the responses in the model indexed by green color are assumed to decrease the number of deaths and increase the number of recovered refugees infected by this virus. This model is an attempt to include response factors and present the changes from applying them in the numbers of infected, recovered and dead in studying the epidemic, which can be used as a framework for further policy analysis. There is now an urgent need to strengthen the COVID-19 response for the most vulnerable people in Turkey, where there is limited support for the response to COVID-19. Humanitarian pressure must be applied to encourage organizations to provide financial support in response to limitations on essential services such as health and education, to ensure humanitarian assistance (Nott, 2020). Besides, "Aid for service health" ramps up the "health service capacity" to reduce the health service strain and help serious infected cases avoid dying. In addition, part of the money will also cover beneficiaries' educational expenditure, which is presented as "Aid for education system" in the model.

In general, governments and humanitarian organizations are required to respond early in this pandemic by isolating and quarantining the infected people in the "Available camps"; increasing the "Camp capacity" could be an essential alternative to increase "Effectiveness of isolation", and both "Isolation reaction time" and "Applying camp reaction time" affect how desirable the response to COVID-19 is (WHO (a), 2020). By employing the isolation response, the effective number of infected people – after adjusting for the reduction in infectiousness from isolation, quarantine and monitoring – is outlined by the "Active infected" variable in the model. WHO in 2020 indicates that the COVID-19 virus is transmitted through contact between people and further from surfaces by contaminated hands, which facilitates indirect contact transmission and impacts the "Contagion rate".
Consequently, there is the provision of safe water, sanitation, hygiene and hand-washing facilities, which is assumed as "Hygiene Behavioral Risk Reduction" and which is essential to protecting refugees' health from infections and preventing the spread of the COVID-19 virus. In addition, the "Hygiene Behavioral Reaction Time" is a significant factor in diminishing the time from the first infection and hence the contagion rate. The main equations of the SD model are presented in Table 2.

## 3. Model validation and related data

The model is validated by applying various structural and behavioral validity tests (Sterman, 2000). Various data sources, including literature and reports published on the COVID-19 outbreaks, are used in order to determine input parameters of the simulation model (Table 3). On the other hand, due to the lack of experimental data on COVID-19, some model parameters that are significant in determining model behavior are determined by calibration and presented in Table 3. The model also passes the dimensional consistency and extreme condition analysis tests. The model calibration estimates the values of different parameters to best fit the base SEIR model of COVID-19 (http://vensim.com/coronavirus/), using related data on COVID-19 and Syrian refugees in Turkey.

The time horizon of 360 days (January 2020–December 2020) is considered; a 1-year period is selected based on the spread of COVID-19 and provides a more reliable view of the state of the disease's effects on Syrian refugees' health and education based on existing data, evaluating the changes in the numbers of infected, recovered and dead people under different policies. The first confirmed COVID-19 case was announced on March 10, 2020 in Turkey, and then the number of cases increased rapidly: over 20,000 people as of April 3rd, with approximately 425 people having lost their lives in this period (Tekin-Koru, 2020).

The transmissibility of the COVID-19 virus is considered as the "Basic reproduction ratio", which outlines the average number of new infections created by an infectious person and represents the risk of an infectious agent for epidemic spread. It is a fundamental concept in infectious virus epidemics and is estimated as 3.3 (Liu et al., 2020). Besides, social distance is considered as a slope of decline in contacts as the infection penetrates to less-connected portions of the social network, and the value is set to zero in the base case so that its impact on the number of infected can be evaluated when the value changes to more than zero. The "Diseases Duration" is the duration of infection and, for simplification, the same duration, an average of 14 days, is assumed for recovery and death (although in reality, serious cases might have a longer duration) (WHO (a), 2020). Contact rate refers to a decline in contacts as the infection penetrates to less-connected portions of the social network (Bi et al., 2020); the effect is real, but the functional form is notional here. In addition, the incubation period is assumed as the time to onset of symptoms among exposed people, which is an average of five or six days (WHO (a), 2020). Furthermore, the fatality rate is considered as 0.04 when patients are minimally treated due to the system being overwhelmed; it varies by location, and the vulnerability rate is defined as the fatality rate with good health care.
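As a back-of-envelope aside (my own illustration, not a relation stated explicitly in the paper): in standard SEIR bookkeeping, the basic reproduction ratio, the disease duration D and the transmission rate β are tied together by R0 = β × D, so the inputs above imply β = R0 / D = 3.3 / 14 ≈ 0.24 per day – the same order of magnitude used to seed the sketch shown at the end of section 2.2.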
Humanitarian organizations provide aid to support the essential needs of refugees with services like health and education; monthly spending of 83 USD per person on health and education is reported, of which 60% goes to health services (Ulrichs et al., 2017). According to Rumble (2012), the distance education facility cost per person could be $100, while the total population of Syrian refugee children in 2020 is half of the refugee population (Allahi et al., 2020). The model has been calibrated using a payoff function defined as a linear combination of the differences between real data and model output; this difference is minimized to obtain the best estimates of the model parameters using Vensim's built-in Powell conjugate search algorithm (Allahi et al., 2020). The calibrated values are presented in Table 4. In addition, some of the variables' values in the model are assumed constant so that their impact on improving health can be evaluated when the value is changed. It is important to remark that our research is the first attempt to apply SD to the response to the COVID-19 pandemic for the case of refugees; the model has been created from available real and calibrated data. The lack of real time-series data limited validation against series data, but the model rests on real input parameters and passes other validation tests that make it sensible and applicable. The core point is that the data presented here are based on the preliminary results of the SEIR model and on previous research regarding Syrian refugees in Turkey.

## 4. Discussion and scenario analysis

Research in humanitarian operations management has received growing attention during the COVID-19 pandemic. However, two significant gaps can be observed in the current research: first, a large number of studies concentrate on the supply chain aspects of crisis operations management, getting all the essential materials to the beneficiaries as quickly as possible (Manoj and Maneesh, 2020); second, there is limited research on the best response for COVID-19-affected refugees, given that the usual response policies, such as basic public health measures, social distancing, proper hand hygiene and self-isolation, cannot be easily implemented or are extremely difficult to apply in refugee camps (Kluge et al., 2020). Therefore, a simulation model has been developed to study the impact of COVID-19 on different aspects of refugees' lives and to consider all the possible responses, so as to evaluate the best policy in this special case given the existing limitations. To study the impact of COVID-19 on refugees, we first examined a base simulation model without applying any policies or responses to the COVID-19 outbreak; three further policies are then proposed to discuss the impact of applying them on the spread of the virus, and the seven stocks in the model, such as the numbers of infected and dead refugees in Turkey, are illustrated in Figure 3.
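To illustrate the calibration mechanics described above: Vensim's built-in Powell conjugate search minimizes a payoff function measuring the model-data gap. The sketch below mimics that with scipy's Powell method applied to the toy simulate() function from the previous sketch; the "observed" series is hypothetical and stands in for the real case data.

```python
# Hedged sketch of the calibration step: minimize a payoff function (sum of
# squared model-vs-data differences) with a Powell search, as Vensim does.
import numpy as np
from scipy.optimize import minimize

observed_days = [80, 90, 100]                  # hypothetical reporting days
observed_infected = np.array([5e3, 1e4, 2e4])  # hypothetical case counts

def payoff(params):
    r0, fatality = params
    traj = simulate(days=120, r0=r0, fatality=fatality)
    model_infected = np.array([traj[d][2] for d in observed_days])
    # simple stand-in for Vensim's payoff: sum of squared differences
    return float(np.sum((model_infected - observed_infected) ** 2))

result = minimize(payoff, x0=[3.0, 0.05], method="Powell")
print("Calibrated (R0, fatality):", result.x)
```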
The final model reasonably well reproduces the base simulation model of the COVID-19 pandemic, built on the original COVID-19 SEIR model of Vensim, over the time horizon of January 2020 until the end of December 2020. Based on the first confirmed COVID-19 case, announced on March 10, 2020 in Turkey, the number of cases increased rapidly: over 20,000 people as of April 3rd, and approximately 425 people lost their lives in this period (Tekin-Koru, 2020). As illustrated in Figure 3, graph (b), the number of infected is almost 20,000 people in April, and the number of deaths in graph (d) is around 1,000 people, which is similar to the real data. Furthermore, if the government were to apply no policies or responses to the pandemic, over 100,000 refugees would die by the end of 2020 (graph (d)), and around 100,000 of them would emigrate to Europe or other countries out of fear of dying (graph (e)), which would be a huge disaster. So, we have examined the behavioral factors and responses that have a significant influence on curbing the outbreak, including changes in social distance and hygiene behavior during the epidemic, the process of quarantining and isolating infected people, and financial aid to build more camps and health services and to provide the fundamental conditions for distance education services, which are the essential parts of studying the COVID-19 crisis.

However, in terms of distance education and applying the hygiene behavioral policy, one of the challenges would be the lack of equipment to successfully implement the response policies and reduce the impact of this virus. If governments face a shortage of medical/healthcare and education equipment, the mentioned policies cannot be applied to improve the essential aspects of refugees' lives; however, some researchers have recently found effective ways to manage demand in the supply chain during the COVID-19 pandemic and to control the outbreak of an epidemic so as to mitigate its impact on supply chain challenges, which could help solve this problem (Govindan et al., 2020; Dubey et al., 2020; Dubey et al., 2019). The next challenge to be considered is then decision-making: implementing the best policy to reduce the impact of COVID-19 on the health and education of refugees.

Based on the qualitative and quantitative analysis of the system structure outlined above, five alternative policies, namely "Isolation effectiveness", "Camp effectiveness", "Social distance", "Essential service financial aid" and "Hygiene behavioral risk reduction", have been analyzed in order to evaluate their potential effects on the model's performance during the pandemic; in Figure 3, a different color corresponds to each policy. As depicted in Figure 3, the isolation, camp, social distance and hygiene behavior policies differ by their degree of effectiveness, which ranges from 0 (very low response quality) to 1 (very high response quality): in the base simulation model, each policy is kept at the lowest quality, zero, and the financial aid is $83m (the base yearly aid from humanitarian organizations), so as to estimate the pandemic outcome without any response. In the SD model, three different policy scenarios have been considered to analyze the impact of each scenario on the COVID-19 pandemic and to evaluate the best response:
1. Scenario 1: Policy of isolation and camp capacity. It is hypothesized that the camp capacity is increased to cover 800,000 people, the potential camp and isolation effectiveness is assumed to be 0.5, and the reaction time for applying camps and isolation is set to 15 days.
2. Scenario 2: Policy of hygiene behavior and social distance. It is presumed that the hygiene behavioral risk reduction is 0.5 with a reaction time of 15 days, and the social distance is set to 2, in the range [0, 4], relative to the current level.
3. Scenario 3: Policy of applying financial aid. In the last scenario, the essential service financial aid is supposed to be $249m (triple the current value) in order to analyze the number of children with access to education during the pandemic.

According to the graphs in Figure 3, the base simulation model was set according to the development of the COVID-19 spread without any additional policy. Graph (b) shows that without any policy applied by the government, the number of infected refugees could be about one-fourth of the population, and the epidemic seems to cause the death of about 120,000 people by the end of 2020 (graph (d)). In order to save lives and prevent existing crises from growing uncontrollably, an appropriate response needs to be in place. By applying scenario 1, that is, implementing the policy of isolation in the camps and increasing the capacity of camps within 15 days after the first infected case has been seen, the number of infected would fall gradually to about 400,000 people over three months (graph (b)), and almost 90,000 refugees would die (graph (d)). In this case, the number of refugees deciding to emigrate to Europe would fall to about 50,000.

Furthermore, people living in collective sites are vulnerable to COVID-19, in part because of the health risks associated with movement or displacement, overcrowding, increased climatic exposure due to sub-standard shelter, and poor health status among affected populations. Adapting camp plans and maximizing site planning for better distancing among residents can reduce the number of infections, but adherence to infection prevention and control standards, hygiene behavior and social distance must also be maintained to greatly reduce the spread of COVID-19 and the mortality among those infected with the virus. In consequence, it is necessary to apply the second policy, the policy of hygiene and social distance, which has a significant influence on curbing the outbreak, reducing infected cases to 250,000 (graph (b)) and the number of deaths to 50,000 (graph (d)) during the epidemic, which is remarkable in the study of the COVID-19 crisis. In addition, it can postpone the peak time more than the other two policies, by about nine months, which increases the chance of providing more medicine and healthcare materials for the infected people and preventing deaths caused by the virus. As the number of infected falls, the number of serious cases needing hospitalization also decreases significantly, freeing hospital space and reducing the probability of dying due to a lack of health services. By implementing this policy, the number of refugees deciding to emigrate can decrease considerably, to 20,000 cases (graph (e)), which reflects a large reduction in the refugees' level of mental stress. In addition, COVID-19 has resulted in school closures all across the world (Basilaia and Kvavadze, 2020).
The population of children among the refugees in 2020 is about 1.7m, who are out of the classroom during the virus pandemic. As a result, education should shift to e-learning, in which teaching is undertaken remotely on digital platforms; this takes less time and can even improve learning during the pandemic. As shown in graph (f), increasing the financial aid from humanitarian organizations can give up to 3,000 more children access to online education during the pandemic, encouraging them to study and use the time in quarantine, and responding to the significant demand for education until a safe and effective vaccine is delivered to halt virus transmission and maintain safety. Without access to government support for unemployed citizens, many refugees rely on insufficient cash assistance from humanitarian agencies. As mentioned in Table 3, the financial aid is divided into aid for education and aid for health; just 30% of the whole amount is allocated to education, and the other 70% is assigned to the health service to improve facilities and hygiene materials during the pandemic. As a result, by implementing the policy of hygiene behavior and social distance among refugees, the peak time can be postponed by up to nine months, and the numbers of infected and dead people can be significantly reduced, to 8% and 1% of the population respectively, which is considerable in comparison with the other response policies.

UNHCR camps do not have enough space per person, which makes it difficult to apply the social distancing policy or self-isolation. In informal camps and accommodations such as shelters and tents, there is not enough space (Ibrahim, 2020). In this case, and in terms of real-life action, the only feasible policy response to the COVID-19 pandemic may be implementing hygiene behavior, that is, washing hands and wearing masks. Given the limited hygiene materials in camps (Alemi et al., 2020), vulnerable people can be advised to wear masks and to be separated from others in camps designated for social distancing. Hopefully, as the results in graph (b) show, the peak time can be postponed, and humanitarian organizations would have much more time to provide financial support and healthcare materials.

## 5. Conclusion

The coronavirus (COVID-19) outbreak shows that pandemics can seriously impact the health and education of refugees, given the lack of support from humanitarian organizations. In this paper, a more sober picture of the COVID-19 outbreak among refugees has been provided using a system dynamics model that evaluates several responses to this pandemic and recommends the best one to reduce the mortality caused by the virus. The system dynamics approach is a very effective tool for perceiving the whole picture, helping key actors to better understand the system, act on the best decision and evaluate its impact on an epidemic. Overall, the response to any infectious virus such as COVID-19 requires continuous monitoring to create a working baseline for future policy-implementation modeling to diminish the mortality rate. In this paper, the impact of COVID-19 on refugees' lives, especially the health and education aspects, has been studied, and a system dynamics simulation model has been developed to suggest the best response to improve public health and education systems during the virus pandemic.
The best model given the available data has been built from different references; it captures the increasing trend of the infection rate over time when no policy is respected, and the decreasing trends in the numbers of deaths and infections when isolation, hygiene behavior and social distancing are applied. Even in this optimistic scenario, the burden of the disease can be large and last for many months. Implementing and sustaining strong policies that target social distancing and hygiene behavior offer the main hope for containing the epidemic. As a result of the simulation model, by applying the policy of isolation and camp capacity, the number of infected people and the mortality rate can be reduced by 50% and 20%, respectively. On the other hand, applying the policy of hygiene behavior and social distance has a significant influence on curbing the outbreak, with a reduction of infected cases by 75% and of death cases by 50%. Implementing this policy would also help to delay the peak time by about nine months, which would relieve the healthcare system, increase the chance of providing more medicine and healthcare materials for the infected people and prevent more deaths caused by the virus. With the world facing an unprecedented threat, there is an opportunity to invest in stronger health systems and better global collaboration to face future health crises, specifically for vulnerable populations like refugees. Responding immediately and better to the COVID-19 crisis, and taking in the consequences and lessons of this pandemic now, makes the world of the future a safer place, even for refugees, when another health crisis arrives. In the future, we plan to use our best simulation model with real time-series data to test different policy scenarios that leverage public fear and awareness to deal with the spread of epidemic diseases such as COVID-19. For instance, we can study the effects of crime and psychological factors on refugees' lives when such epidemic crises happen in their region. Another direction for future work is to further refine the model by capturing the spread of the disease among refugees in European countries separately, and to compare and contrast their disease management approaches.
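For readers who wish to reproduce the scenario comparison, one convenient pattern is to encode each scenario as a set of parameter overrides on a base run. The lever names below mirror the paper's policy variables, but the harness itself is a hypothetical stand-in for the authors' Vensim setup.

```python
# Illustrative encoding of the three policy scenarios from Section 4 as
# parameter overrides; the run harness and names are hypothetical.
BASE = {
    "isolation_effectiveness": 0.0,
    "camp_effectiveness": 0.0,
    "camp_capacity": 400_000,
    "hygiene_risk_reduction": 0.0,
    "social_distance": 0.0,
    "financial_aid_musd": 83,
}

SCENARIOS = {
    "1_isolation_and_camps": {"isolation_effectiveness": 0.5,
                              "camp_effectiveness": 0.5,
                              "camp_capacity": 800_000},
    "2_hygiene_and_distance": {"hygiene_risk_reduction": 0.5,
                               "social_distance": 2.0},
    "3_financial_aid": {"financial_aid_musd": 249},
}

for name, overrides in SCENARIOS.items():
    params = {**BASE, **overrides}
    # feed `params` into a simulator such as the sketch given earlier
    print(name, params)
```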
## Figures

### Figure 1

Causal loop diagram of COVID-19 crisis among Syrian refugees

### Figure 2

Stock-flow model of COVID-19

### Figure 3

Base simulation model and scenarios 1, 2 and 3: trajectories of cumulative susceptible (a), infected (b), recovered (c), deaths (d), emigrated (e) and number of children with access to distance education service (f), from January 2020 until the end of December 2020

## Table 1

Causal loop elements

| Loop name | Components |
| --- | --- |
| R1 | Contagion rate => infection rate => number of exposed => developing symptoms rate => number of infected => number of deaths => mental stress => emigrating rate => number of immigrated people => number of susceptible |
| R2 | Infection rate => number of exposed => developing symptoms rate => number of infected => active infected |
| R3 | Isolation effectiveness => active infected => infection rate => number of exposed => developing symptoms rate => number of infected => available camp capacity |
| R4 | Camp effectiveness => active infected => infection rate => number of exposed => developing symptoms rate => number of infected => available camp capacity |
| R5 | Number of infected => serious cases => health service strain => mortality rate => recovering rate |
| R6 | Access to distance education service => desired access to distance education => access to distance education service rate |
| B1 | Number of infected => serious cases => health service strain => mortality rate => dying rate |
| B2 | Number of deaths => mental stress => emigrating rate => number of immigrated people => number of susceptible => number of exposed => developing symptoms rate => number of infected => dying rate |

## Table 2

Main equations of SD model

| No | Variable | Equation | Units |
| --- | --- | --- | --- |
| 1 | Access to distance education service | INTEG (access to education service rate) + 50 | People |
| 2 | Access to education service index | MAX(0, 1 - (access to distance education service/desired access to education service)) | Dmnl [1] |
| 3 | Access to education service rate | Humanitarian aid for education/distance education facilities cost * access to education service index/TIME STEP | People/day |
| 4 | Active infected | Infected * (1 - isolation effectiveness - camp effectiveness) | People |
| 5 | Available camp capacity | Infected/camp capacity | Index |
| 8 | Camp effectiveness | SMOOTH3(STEP(potential camp effectiveness, import time), reaction time of applying camp)/(1 + available camp capacity^2) | Fraction |
| 10 | Contact density decline | 0 | Dmnl |
| 11 | Contact rate | 1/(1 + contact density decline * (1 - fraction susceptible)) | Dmnl |
| 12 | Contagion rate | Initial uncontrolled contagion rate * relative performance of hygiene behavior risk * fraction susceptible * contact rate | Fraction/day |
| 13 | Deaths | INTEG (dying, 0) | People |
| 14 | Desired access to education service | Child population | People |
| 15 | Developing symptoms | Exposed/incubation period | People/day |
| 18 | Dying | Infected * mortality rate/disease duration | People/day |
| 19 | Emigration | Mental stress impact/TIME STEP | People/day |
| 20 | Exposed | INTEG (infecting - developing symptoms, 0) | People |
| 23 | Fraction susceptible | Susceptible/initial population | Fraction |
| 24 | Health service capacity | Population + (humanitarian aid/health service cost) | People |
| 26 | Health service strain | Serious cases/health service capacity | Index |
| 33 | Infected | INTEG (developing symptoms - dying - recovering, 1) | People |
| 34 | Infecting | Active infected * contagion rate | People/day |
| 37 | Initial uncontrolled contagion rate | Base reproduction ratio/disease duration | People/person/day |
| 38 | Isolation effectiveness | SMOOTH3(STEP(potential isolation effectiveness, import time), isolation reaction time)/(1 + available camp capacity^2) | Fraction |
| 41 | Mortality rate | Untreated mortality rate + (treated mortality rate - untreated mortality rate)/(1 + health service strain) | Fraction |
| 45 | Recovered | INTEG (recovering, 0) | People |
| 46 | Recovering | Infected/disease duration * (1 - mortality rate) | People/day |
| 47 | Relative performance of hygiene behavior risk | SMOOTH3(1 - STEP(hygiene behavioral risk reduction, import time), hygiene behavioral reaction time) | Dmnl |
| 49 | Serious cases | Infected * fraction requiring hospitalization | People |
| 50 | Susceptible | INTEG (emigration - infecting, initial population) | People |

Note: [1] Dmnl = dimensionless.

## Table 3

Input parameters of simulation model

| No | Variable | Value based on available data | Units | References |
| --- | --- | --- | --- | --- |
| 1 | Essential services financial aid | 83 | $ | Ulrichs et al. (2017) |
| 2 | Aid for education system | 37 | $ | Ulrichs et al. (2017) |
| 3 | Aid for health system | 46 | $ | Ulrichs et al. (2017) |
| 4 | Base reproduction ratio | 3.3 | Dmnl | Liu et al. (2020) |
| 5 | Diseases duration | 14 | Day | WHO (a) (2020) |
| 6 | Distance education facilities cost | 100 | $ | Rumble (2012) |
| 7 | Child population | 1,700,000 | People | Allahi et al. (2020) |
| 8 | Camp capacity | 400,000 | People | UNHCR (2013) |
| 9 | Initial population | 3,600,000 | People | Allahi et al. (2020) |
| 10 | Contact rate | 1.9 | Dmnl | Bi et al. (2020) |
| 11 | Incubation period | 5 | Day | WHO (a) (2020) |

## Table 4

Input variables determined by calibration

| No | Variable | Value based on calibration | Units |
| --- | --- | --- | --- |
| 1 | Access to education service index | 0.3 | Dmnl |
| 2 | Fatality rate | 0.04 | Constant |
| 3 | Vulnerability rate | 0.01 | Constant |
| 4 | Health service cost | 100 | $/people |
| 5 | Mental stress impact | 26,000 | People |
| 6 | Isolation reaction time | 2 | Day |
| 7 | Reaction time of applying camp | 2 | Day |

## References

Alemi, Q., Stempel, C., Siddiq, H. and Kim, E. (2020), "Refugees and COVID-19: achieving a comprehensive public health response", Bulletin of the World Health Organization, Vol. 98 No. 8, p. 510.

Allahi, F., Revetria, R. and Cianci, R. (2018), "Cash and voucher impact factor in humanitarian aid: a system dynamic analysis", Proceedings of the International Conference on Modeling and Simulation (MAS), pp. 17-19.

Allahi, F., Taheri, S., Kian, R. and Sabet, E. (2020), "Cash-based interventions to enhance dignity in persistent humanitarian refugee crises: a system dynamics approach", IEEE Transactions on Engineering Management. doi: 10.1109/TEM.2020.2982583.

Vensim, available at: https://vensim.com/.

Basilaia, G. and Kvavadze, D. (2020), "Transition to online education in schools during a SARS-CoV-2 coronavirus (COVID-19) pandemic in Georgia", Pedagogical Research, Vol. 5 No. 4, pp. 1-9.

Bi, Q., Wu, Y., Mei, S., Ye, C., Zou, X., Zhang, Z., Liu, X., Wei, L., Truelove, S.A., Zhang, T. and Gao, W. (2020), "Epidemiology and transmission of COVID-19 in 391 cases and 1286 of their close contacts in Shenzhen, China: a retrospective cohort study", The Lancet Infectious Diseases.

Briano, E., Caballini, C., Giribone, P. and Revetria, R. (2010), "Using a system dynamics approach for designing and simulation of short life-cycle products supply chain", Proceedings of the 4th WSEAS International Conference on Computer Engineering and Applications, World Scientific and Engineering Academy and Society (WSEAS), p. 27143.

Bruzzone, A., Frascio, M., Longo, F., Chiurco, A., Zanoni, S., Zavanella, L., Fadda, P., Fancello, G., Falcone, D., Felice, F.D., Petrillo, A. and Carotenuto, P. (2014), "Disaster and emergency management simulation in industrial plants", Proceedings of the 26th European Modeling and Simulation Symposium, EMSS, p. 649.

Clarke, K. (2020), "With all eyes on Covid-19, refugee suffering continues in Greece, Turkey and Syria", available at: https://www.americamagazine.org/politics-society/2020/03/19/all-eyes-covid-19-refugee-suffering-continues-greece-turkey-and-syria.
Clark, A., Jit, M., Warren-Gash, C., Guthrie, B., Wang, H.H., Mercer, S.W., … and Jarvis, C.I. (2020), "Global, regional, and national estimates of the population at increased risk of severe COVID-19 due to underlying health conditions in 2020: a modelling study", The Lancet Global Health, Vol. 8 No. 8, pp. e1003-e1017.

European Centre for Disease Prevention and Control (ECDC) (2020), "Coronavirus disease 2019 (COVID-19) – transmission", 17 March 2020.

Currie, C.S.M., Fowler, J.W., Kotiadis, K., Monks, T., Onggo, B.S., Robertson, D.A. and Tako, A.A. (2020), "How simulation modelling can help reduce the impact of COVID-19", Journal of Simulation, Vol. 14 No. 2.

Dubey, R., Altay, N. and Blome, C. (2019), "Swift trust and commitment: the missing links for humanitarian supply chain coordination?", Annals of Operations Research, Vol. 283 No. 1, pp. 159-177.

Dubey, R., Gunasekaran, A., Bryde, D.J., Dwivedi, Y.K. and Papadopoulos, T. (2020), "Blockchain technology for enhancing swift-trust, collaboration and resilience within a humanitarian supply chain setting", International Journal of Production Research, Vol. 58 No. 11, pp. 3381-3398.

Education Cannot Wait (ECW) (2020), "COVID-19 and education in emergencies", available at: https://www.educationcannotwait.org/covid-19/.

Forrester, J.W. (1958), "Industrial dynamics: a major breakthrough for decision makers", Harvard Business Review, Vol. 36 No. 4, pp. 37-66.

Gaia, V. (2020), "The world's largest refugee camp prepares for covid-19", BMJ, Vol. 368, doi: 10.1136/bmj.m1205 (accessed 26 March 2020).

Govindan, K., Mina, H. and Alavi, B. (2020), "A decision support system for demand management in healthcare supply chains considering the epidemic outbreaks: a case study of coronavirus disease 2019 (COVID-19)", Transportation Research Part E: Logistics and Transportation Review, Vol. 138, p. 101967.

Guan, W.J., Ni, Z.Y., Hu, Y., Liang, W.H., Ou, C.Q., He, J.X., … and Zhong, N.S. (2020), "Clinical characteristics of coronavirus disease 2019 in China", New England Journal of Medicine, Vol. 382 No. 18, pp. 1708-1720.

Kluge, H.H.P., Jakab, Z., Bartovic, J., D'Anna, V. and Severoni, S. (2020), "COVID-19 will not leave behind refugees and migrants", The Lancet, Vol. 395 No. 10230, p. 1090, doi: 10.1016/S0140-6736(20)30791-1.

Hargreaves, S., Kumar, B.N., McKee, M., Jones, L. and Veizis, A. (2020), "Europe's migrant containment policies threaten the response to Covid-19", BMJ, Vol. 368, doi: 10.1136/bmj.m1213.

Heymann, D.L. and Shindo, N. (2020), "COVID-19: what is next for public health?", The Lancet, Vol. 395 No. 10224, pp. 542-545.

Iacobucci, G. (2020), "Covid-19: doctors warn of humanitarian catastrophe at Europe's largest refugee camp", BMJ, Vol. 368, p. m1097.

Inter-Agency Standing Committee (IASC) (2020), Interim Guidance: Scaling-Up COVID-19 Outbreak Readiness and Response Operations in Humanitarian Situations, Including Camps and Camp-like Settings, IASC, March 2020.

Ibrahim, A. (2020), The COVID-19 Impact on the Most Vulnerable Refugee and IDP Populations, Center for Global Policy.

Ivanov, D. (2020), "Viable supply chain model: integrating agility, resilience and sustainability perspectives – lessons from and thinking beyond the COVID-19 pandemic", Annals of Operations Research. doi: 10.1007/s10479-020-03640-6.

Jones, G., Haeghebaert, S., Merlin, B., Antona, D., Simon, N., Elmouden, M., Battist, F., Janssens, M., Wyndels, K. and Chaud, P. (2016), "Measles outbreak in a refugee settlement in Calais, France", Eurosurveillance. doi: 10.2807/1560-7917.ES.2016.21.11.30167.

Jones, N. and Carver, C. (2020), Are Interventions Such as Social Distancing Effective at Reducing the Risk of Asymptomatic Healthcare Workers Transmitting COVID-19 Infection to Other Household Members?, The Centre for Evidence-Based Medicine, University of Oxford.

Kluge, H.H.P., Jakab, Z., Bartovic, J., D'Anna, V. and Severoni, S. (2020), "Refugee and migrant health in the COVID-19 response", The Lancet, Vol. 395 No. 10232, pp. 1237-1239.

Liu, Y., Gayle, A.A., Wilder-Smith, A. and Rocklöv, J. (2020), "The reproductive number of COVID-19 is higher compared to SARS coronavirus", Journal of Travel Medicine, Vol. 27 No. 2, pp. 1-4.

Manoj, D. and Maneesh, K. (2020), "Operational improvement programs and humanitarian operations", Production Planning and Control. doi: 10.1080/09537287.2020.1834137.

Médecins Sans Frontières (2020), Covid-19: Evacuation of Squalid Greek Camps More Urgent than Ever in Light of Coronavirus Pandemic, Médecins Sans Frontières.

Merler, S., Ajelli, M., Fumanelli, L., Gomes, M.F.C., Piontti, A.P.Y., Rossi, L., Chao, D.L., Longini, I.M. Jr, Halloran, M.E. and Vespignani, A. (2015), "Spatiotemporal spread of the 2014 outbreak of Ebola virus disease in Liberia and the effectiveness of nonpharmaceutical interventions: a computational modelling analysis", The Lancet Infectious Diseases, Vol. 15 No. 2, pp. 204-211.

Nott, D. (2020), "The COVID-19 response for vulnerable people in places affected by conflict and humanitarian crises", The Lancet, Vol. 395 No. 10236, pp. 1532-1533.

Queiroz, M.M., Ivanov, D., Dolgui, A. and Wamba, S.F. (2020), "Impacts of epidemic outbreaks on supply chains: mapping a research agenda amid the COVID-19 pandemic through a structured literature review", Annals of Operations Research, pp. 1-38, doi: 10.1007/s10479-020-03685-7.

Rachah, A. and Torres, D.F. (2018), "Analysis, simulation and optimal control of a SEIR model for Ebola virus with demographic effects", Communications Faculty of Sciences University of Ankara Series A1 Mathematics and Statistics, Vol. 67 No. 1, pp. 179-197.

Reinhardt, S. (2018), "Exploring the emerging field of online tertiary education for refugees in protracted situations", Open Praxis, Vol. 10 No. 3, pp. 211-220.

Revetria, R., Oliva, F. and Mosca, M. (2008), "Modelling of Voltri Terminal Europe in Genoa using system dynamic model simulation", Proceedings of the 7th WSEAS International Conference on System Science and Simulation in Engineering, Vol. 21, World Scientific and Engineering Academy and Society (WSEAS), pp. 411-417.

Rumble, G. (2012), The Costs and Economics of Open and Distance Learning, ODI.

Sharareh, N., Sabounchi, S.N., Sayama, H. and MacDonald, R. (2016), "The Ebola crisis and the corresponding public behavior: a system dynamics approach", PLOS Currents Outbreaks, Edition 1. doi: 10.1371/currents.outbreaks.23badd9821870a002fa86bef6893c01d.

Sterman, J. (2000), Business Dynamics: Systems Thinking and Modeling for a Complex World, McGraw-Hill, Boston.

Tekin-Koru, A. (2020), "Precarious lives: Syrian refugees in Turkey in corona times", available at: https://voxeu.org/article/precarious-lives-syrian-refugees-turkey-corona-times.

Ulrichs, M., Hagen-Zanker, J. and Holmes, R. (2017), Cash Transfers for Refugees, ODI.

UNHCR (2013), "Turkey response plan", available at: https://www.unhcr.org/en-us/51b0a6689.pdf.
UNHCR (2020), "Turkey response plan", available at: www.unhcr.org.

Vlagyiszlav, M. (2020), "Refugees left behind in coronavirus crisis, aid groups warn", available at: https://www.euractiv.com/section/justice-home-affairs/news/refugees-left-behind-in-coronavirus-crisis-aid-groups-warn/.

World Health Organization (2020), Migration and Health: Key Issues, World Health Organization.

World Health Organization (a) (2020), Coronavirus Disease 2019 (COVID-19): Situation Report 53, World Health Organization.

World Health Organization (b) (2020), Strengthening the Health Systems Response to COVID-19, World Health Organization.

WHO (2020a), "Novel coronavirus – China", WHO (accessed 9 April 2020).

World Health Organization (WHO) (2020b), COVID-19 Strategy Update, April 2020, World Health Organization.

WHO (2020), "Q&A on coronaviruses", 11 February 2020.

WHO (2020), "Statement on the second meeting of the International Health Regulations (2005) Emergency Committee regarding the outbreak of novel coronavirus (2019-nCoV)", 30 January 2020.

WHO (2020), "WHO Director-General's opening remarks at the media briefing on COVID-19", 11 March 2020.

WHO (2020), "Measures against COVID-19 need to include refugees and migrants", March 2020, available at: http://www.euro.who.int/en/health-topics/health-emergencies/coronavirus-covid-19/news/news/2020/3/measures-against-covid-19-need-to-include-refugees-and-migrants.
https://blog.efiens.com/post/cothan/tamuctf2019-cr4ckz33c0d3-with-angr/
TamuCTF2019 Cr4ckZ33C0d3 With Angr

A gentle introduction to Angr

This may be too soon to publish the writeup, but I think if I don't do it now, I won't be able to anymore.

Before betting on Angr, I check the decompiled C code from IDA:

• No anti-debug, good for automating analysis
• The verify() graph has many branches, so it's possible to do this with Pintool
• The decompiled code looks solvable with Z3, therefore Angr can solve it (because Angr uses Z3 as its core solver)

In the worst case I can copy-paste the decompiled C code and solve it with Z3 in Python. That is doable, but I've solved similar challenges a thousand times, so I leave Z3 as the worst-case scenario.

To automate the analysis, I now have 3 options: Pintool, Angr or Radare2 scripting:

• Pintool can only solve the checks sequentially, for example a[0], a[1], a[2], etc. It's possible to solve it with Pintool, but since I don't want to modify the tool, I skip Pintool for now.
• Angr seems a good solution, since the code has no anti-debug (even if it had, I would patch it out). There are only a few paths to the solution: at the top level, main() checks whether verify() returns true or false, and inside verify() all the sub-function check()s must be true.
• Radare2 scripting is hard work in this case; I normally use Radare2 for brute-forcing comparisons or examining memory.

The plan is: I will go with Angr, and in case I can't solve it with Angr, I'll use Z3.

Time to solve with Z3: 5 minutes, for a solution that works for this challenge only. Time to solve with Angr: 30 minutes, to sit down and write a script that can be applied to many, many basic RE challenges.

Ok, let's start Angr.

```python
import angr

p = angr.Project('prodkey')
```

Automated binary analysis is like walking along a river: you should know where you start. Some weirdos decide to start at some function before the program enters main() (for example the entry point), some start at main(), some start at verify(). Here is my explanation of why you should think a bit before choosing a good start point:

• Start before main(): In a complicated binary, some malware injects obfuscation before main. If we choose a start point before main(), we may collect too many constraints before reaching the function we want to solve, as we walk through unnecessary functions. This can lead to a very slow or unsolvable run.
• Start at main(): Most CTF binaries are small, small enough to explore all the paths of the binary. However, in big binaries with more than 10 consecutive if-else branches, this can lead to a slow or unsolvable run too.
• Start at the function we need to solve, verify(): this is quick; in this case verify() looks like quite an independent function.

So let's start at verify(), which is at 0x00400c99.

```python
verify_function = 0x00400c99
state = p.factory.blank_state(addr=verify_function)
```

Good, now we have somewhere to walk. While we're walking, we want to know where we want to go; unless you're crazy, you don't walk without a destination. We want to walk to the good points and avoid the bad points on the way.

Yup, now we know the bad points and good points. If you are a moron you can go straight to hell at 0x400df2 and get nothing, but we want to get something: the good point is 0x400deb.

```python
good = 0x400deb
bad = 0x400df2
```

Now we know where to go. Let's fire up the simulation machine; it will collect constraints and solve them for us.

```python
simgr = p.factory.simgr(state)
```

Let's see, where are we now?
```
In [59]: simgr = p.factory.simgr(state)

In [60]: simgr
Out[60]: <SimulationManager with 1 active>

In [61]: simgr.active
Out[61]: [<SimState @ 0x400c99>]
```

Good, nothing weird happened; we are standing at the address of verify(). Now set up the good and bad points, our good/bad destinations.

```python
simgr.explore(find=good, avoid=bad)
```

```
In [65]: simgr.explore(find=good, avoid=bad)
WARNING | 2019-02-23 21:10:36,634 | angr.state_plugins.symbolic_memory | Filling register rbp with 8 unconstrained bytes
WARNING | 2019-02-23 21:10:36,649 | angr.state_plugins.symbolic_memory | Filling register rdi with 8 unconstrained bytes
WARNING | 2019-02-23 21:10:36,868 | angr.state_plugins.symbolic_memory | Filling memory at 0xffffffffffffff80 with 256 unconstrained bytes
Out[65]: <SimulationManager with 1 deadended, 1 found, 25 avoid>
```

Oops, we found 1 path to a good point. Yay. Let's collect our results.

```python
result = simgr.found[0]
for i in range(3):
    print(result.posix.dumps(i))  # file descriptors 0, 1, 2: stdin, stdout, stderr
```

Output:

```
b''
b''
b''
```

Well, we get empty results. What? Why? How?

The reason is simple. As you can see in the log when we start exploring paths, the symbolic memory is only about 8 bytes or 256 bytes, which is incorrect. Now let's stop for a bit and think about why:

• We thought verify() was an independent function. Unfortunately, it's not: the program receives input and stores it in memory, and that input is grabbed by verify(). We start at verify(), so simgr doesn't know where that memory comes from.
• If we start somewhere other than verify(), we have to do more computation: it's a trade-off. Well, let's pay the price and start at main(). During my experiments, if I start at 0x00400e20 it doesn't work right, even though the fgets function receives the correct parameter setup; I leave that as a question for after we solve the challenge.
• Still starting at verify(), we can set up the symbolic memory ourselves so simgr can fill in what it needs.

Ok, let's go with the trade-off option first. Rebuild the script to start at main() instead:

```python
import angr

p = angr.Project('prodkey')

good = 0x400deb
verify_function = 0x00400c99
fget_function = 0x00400e20
main = 0x00400dfc

state = p.factory.blank_state(addr=main)  # start at main()
simgr = p.factory.simulation_manager(state)
simgr.explore(find=good)  # explore paths toward the good point

result = simgr.found[0]
for i in range(3):
    print(result.posix.dumps(i))
```

Output:

```
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | The program is accessing memory or registers with an unspecified value. This could indicate unwanted behavior.
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | angr will cope with this by generating an unconstrained symbolic variable and continuing. You can resolve this by:
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | 1) setting a value to the initial state
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | 2) adding the state option ZERO_FILL_UNCONSTRAINED_{MEMORY,REGISTERS}, to make unknown regions hold null
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | 3) adding the state option SYMBOL_FILL_UNCONSTRAINED_{MEMORY_REGISTERS}, to suppress these messages.
WARNING | 2019-02-23 21:31:44,954 | angr.state_plugins.symbolic_memory | Filling register rbp with 8 unconstrained bytes
WARNING | 2019-02-23 21:31:45,148 | angr.state_plugins.symbolic_memory | Filling memory at 0x7ffffffffff0000 with 96 unconstrained bytes
WARNING | 2019-02-23 21:31:45,148 | angr.state_plugins.symbolic_memory | Filling memory at 0x7fffffffffeff7e with 106 unconstrained bytes
b'M4\x7f\[email protected]@[email protected]\x08 \[email protected]@BB2-\x08\x80\x1088'
b'\nPlease Enter a product key to continue: \n'
b''
```

Wow, the first string looks weird, but it is an actual solution. We didn't constrain the solution to be printable, but it's a valid solution anyway. Submit and get the flag.

Yay, solved it. Can we stop here? Nope. Let's get back to the optimal solution, where we start at verify() and help simgr fill the memory it needs.

In the IDA-decompiled C code, we see that the code accesses the array a[28], and the string length check requires 0x1c = 28 (decimal) characters to pass, so let's set the input length to 29 bytes. We could set the input length longer; it doesn't matter much, because Angr is smart enough to cope if it knows we are short of memory, so don't worry.

Because we already have the flag, we know that even a non-printable string gives us the flag; but the purpose of this post is to introduce you to Angr, rather than to grab a flag and go. So we continue our journey and find out the beauty of the real flag.

The constraints we have so far:

• Length: 29 bytes
• Printable: a normal product key usually consists of capital letters, numbers and dashes (-)
• Start at verify()

This time we need to add constraints. Although Angr uses Z3 as its internal symbolic solver, it provides Claripy to help users interact with constraints. So now let's add some:

```python
import claripy

def AND1(c):
    '''constraint 1: printable'''
    return claripy.And(33 <= c, c <= 126)

length = 29
flag = claripy.BVS('flag', length*8)
for i in range(length):
    state.solver.add(AND1(flag.get_byte(i)))
```

To start at verify(), if we check the ASM, we see that verify() has one argument, arg1, passed to the function in the rdi register.

```
/ (fcn) sym.verify_key 355
| sym.verify_key (char *arg1);
| ; var char *s @ rbp-0x8
| ; arg char *arg1 @ rdi
| ; CALL XREF from main (0x400e45)
| 0x00400c99      55         push rbp
| 0x00400c9a      4889e5     mov rbp, rsp
| 0x00400c9d      4883ec10   sub rsp, 0x10
| 0x00400ca1      48897df8   mov qword [s], rdi   ; point to dummy
| 0x00400ca5      488b45f8   mov rax, qword [s]
| 0x00400ca9      4889c7     mov rdi, rax         ; const char *s
```

Digging deeper into the ASM, we easily see that qword [s] is a local variable, and verify() uses s to pass the input to the check() functions. Because we created our own constrained bitvector, we need to replace verify()'s input arg1 with ours. Angr is smart: we create a dummy buffer address to store our constrained flag, and then we point rdi at it, so that at 0x00400ca1 our dummy buffer address is stored into the local variable s.

```python
my_buf = 0x12345678
state.memory.store(my_buf, flag)  # store the symbolic flag at the dummy buffer
state.regs.rdi = my_buf           # pass it as verify()'s argument
```

Now it seems we're done. Let's prepare the simulation manager and run.

```python
simgr = p.factory.simulation_manager(state)
good = 0x00400deb
simgr.explore(find=good)

result = simgr.found[0]
# Always print this
for i in range(3):
    print(result.posix.dumps(i))
print(result.solver.eval(flag, cast_to=bytes))
```

I spent some time debugging why simgr found a solution but didn't print it out: I needed to add the last line to convert the constrained bitvector to a byte string. If you don't do that, you will be stuck at the ridiculous stage where you have solved the challenge, but where is the flag? :))

Let's run. Wow, this time the solution is better.
I know you may wonder: after all we did, why is the solution still not beautiful?

> The answer is simple: the author doesn't constrain the key to a unique solution, and that's why there are many solutions to this challenge.

Angr always brings back good memories. I remember there was a DEFCON challenge, a hell of a reversing job, that could be solved with Angr easily. Oh, memory lane; it's been 3 years already.

Here is the full script; play with it.

```python
import angr
import claripy

def AND1(c):
    '''constraint 1: printable'''
    return claripy.And(33 <= c, c <= 126)

def AND2(c):
    '''returns constraints s.t. c is an uppercase letter'''
    return claripy.And(65 <= c, c <= 90)

def AND3(c):
    '''returns constraints s.t. c is a lowercase letter'''
    return claripy.And(97 <= c, c <= 122)

p = angr.Project('prodkey')

verify_function = 0x00400c99
state = p.factory.blank_state(addr=verify_function)

length = 29
flag = claripy.BVS('flag', length*8)
for i in range(length):
    state.solver.add(AND1(flag.get_byte(i)))
    # state.solver.add(AND2(flag.get_byte(i)))
    # state.solver.add(AND3(flag.get_byte(i)))

my_buf = 0x12345678
state.memory.store(my_buf, flag)  # symbolic input at the dummy buffer
state.regs.rdi = my_buf           # verify()'s argument

@p.hook(0x00400ca9)
def debug_func(state):
    rdi_value = state.regs.rdi
    print('rdi points to {}'.format(rdi_value))

simgr = p.factory.simulation_manager(state)
good = 0x00400deb
simgr.explore(find=good)

result = simgr.found[0]
# Always print this
for i in range(3):
    print(result.posix.dumps(i))
print(result.solver.eval(flag, cast_to=bytes))
```

So far in this post, we have automated a simple binary, which helps us solve similar RE-category binaries faster. With Z3 we write a solution unique to one challenge, but an Angr solution can be adapted to any Z3-solvable challenge. In the next post, we will deal with an anti-debug binary with Angr.

Cothan
Cryptography Engineering

My research interests include the implementation of Post-Quantum Cryptography using High-Level Synthesis on FPGAs and NEON instructions on the ARM platform. Besides that, I sometimes play CTFs; Crypto and Reverse Engineering are my favorite categories.
http://stat310.had.co.nz/homework/05-homework.html
stat310

# Homework 05

This homework follows the standard late penalty: 0% if in the stat310 mailbox by Thursday 16 Feb 4pm, 10% by 5pm the following day, 100% otherwise. Please read the syllabus for other homework policies.

1. (4 pts each) For each of the following random variables, identify the distribution that most closely matches the situation. Justify your choice and describe any assumptions that you made.
   1. On average, a flight from Houston (HOU) to Dallas (DAL) leaves every 60 minutes. Let $$X_{1}$$ be the amount of time I have to wait for the next airplane.
   2. Despite what it says on the bottle, the amount of beer in a bottle actually varies a little. 12 oz beer bottles have a mean volume of 11.8 oz and a variance of 0.7 oz. Let $$X_{2}$$ be the amount of beer in a bottle from this sample.
   3. The registrar gets lazy and decides to use a random number generator to determine the threshold GPA for the President's Honor Roll. Let $$X_{3}$$ be the minimum GPA that gets you on the Honor Roll.
2. (2 pts each) For each of the following random variables, find the specified probability using the CDF.
   1. $$X_{1} \sim Exp(\theta = 10)$$, $$P(10 < X_{1} < 100)$$
   2. $$X_{2} \sim Gamma(\alpha = 1, \beta = 2)$$, $$P(1 < X_{2} < 5)$$
   3. $$X_{3} \sim Normal(\mu = 0, \sigma^{2} = 10)$$, $$P(-10 < X_{3} < 10)$$. Don't use Wolfram Alpha!
3. (4 pts each) Given that $$X \sim Exp(\theta)$$, find the pdfs of the following two transformations of $$X$$. Do they correspond to named distributions that we know about?
   1. $$Y = X^{2}$$
   2. $$Z = e^{X}$$
4. (Bonus 3 pts) Show that if $$X \sim Normal(\mu, \sigma^{2})$$ then $$Z = (X - \mu)/\sigma$$ has a standard normal distribution.
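As a numerical sanity check for problem 2 (not part of the assignment), the three probabilities can be computed from the CDFs in Python. This sketch assumes $$\theta$$ and $$\beta$$ are scale (mean-type) parameters; if your course parameterizes by rate, adjust the scale arguments accordingly.

```python
# Sanity check via CDFs: P(a < X < b) = F(b) - F(a).
# Assumes theta and beta are *scale* parameters; if your course uses rate
# parameters, replace scale=10 with scale=1/10, and so on.
from scipy.stats import expon, gamma, norm

x1 = expon(scale=10)               # Exp(theta = 10)
print(x1.cdf(100) - x1.cdf(10))    # P(10 < X1 < 100), about 0.3679

x2 = gamma(a=1, scale=2)           # Gamma(alpha = 1, beta = 2)
print(x2.cdf(5) - x2.cdf(1))       # P(1 < X2 < 5), about 0.5244

x3 = norm(loc=0, scale=10 ** 0.5)  # Normal(mu = 0, sigma^2 = 10)
print(x3.cdf(10) - x3.cdf(-10))    # P(-10 < X3 < 10), about 0.9984
```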
https://chem.libretexts.org/Courses/University_of_Illinois_Springfield/UIS%3A_CHE_267_-_Organic_Chemistry_I_(Morsch)/Chapters/Chapter_01%3A_Structure_and_Bonding/1.03%3A_Lewis_Structures
# 1.3: Lewis Structures

## Using Lewis Dot Symbols to Describe Covalent Bonding

This sharing of electrons allowing atoms to "stick" together is the basis of covalent bonding. There is some intermediate distance, generally a bit longer than 0.1 nm (or, if you prefer, 100 pm), at which the attractive forces significantly outweigh the repulsive forces, and a bond will be formed if both atoms can achieve a complete $$ns^2np^6$$ configuration. It is this behavior that Lewis captured in his octet rule. The valence electron configurations of the constituent atoms of a covalent compound are important factors in determining its structure, stoichiometry, and properties. For example, chlorine, with seven valence electrons, is one electron short of an octet. If two chlorine atoms share their unpaired electrons by making a covalent bond and forming Cl2, they can each complete their valence shell:

Each chlorine atom now has an octet. The electron pair being shared by the atoms is called a bonding pair; the other three pairs of electrons on each chlorine atom are called lone pairs. Lone pairs are not involved in covalent bonding. If both electrons in a covalent bond come from the same atom, the bond is called a coordinate covalent bond.

We can illustrate the formation of a water molecule from two hydrogen atoms and an oxygen atom using Lewis dot symbols:

The structure on the right is the Lewis electron structure, or Lewis structure, for H2O. With two bonding pairs and two lone pairs, the oxygen atom has now completed its octet. Moreover, by sharing a bonding pair with oxygen, each hydrogen atom now has a full valence shell of two electrons. Chemists usually indicate a bonding pair by a single line, as shown here for our two examples:

The following procedure can be used to construct Lewis electron structures for more complex molecules and ions:

1. Arrange the atoms to show specific connections. When there is a central atom, it is usually the least electronegative element in the compound. Chemists usually list this central atom first in the chemical formula (as in CCl4 and CO32−, which both have C as the central atom), which is another clue to the compound's structure. Hydrogen and the halogens are almost always connected to only one other atom, so they are usually terminal rather than central.

Note: The central atom is usually the least electronegative element in the molecule or ion; hydrogen and the halogens are usually terminal.

2. Determine the total number of valence electrons in the molecule or ion. Add together the valence electrons from each atom. (Recall from Chapter 2 that the number of valence electrons is indicated by the position of the element in the periodic table.) If the species is a polyatomic ion, remember to add or subtract the number of electrons necessary to give the total charge on the ion. For CO32−, for example, we add two electrons to the total because of the −2 charge.
3. Place a bonding pair of electrons between each pair of adjacent atoms to give a single bond. In $$H_2O$$, for example, there is a bonding pair of electrons between oxygen and each hydrogen.
4. Beginning with the terminal atoms, add enough electrons to each atom to give each atom an octet (two for hydrogen). These electrons will usually be lone pairs.
5. If any electrons are left over, place them on the central atom. Some atoms are able to accommodate more than eight electrons.
6. If the central atom has fewer electrons than an octet, use lone pairs from terminal atoms to form multiple (double or triple) bonds to the central atom to achieve an octet. This will not change the number of electrons on the terminal atoms.

Now let's apply this procedure to some particular compounds, beginning with one we have already discussed.

## H2O

1. Because H atoms are almost always terminal, the arrangement within the molecule must be HOH.
2. Each H atom (group 1) has 1 valence electron, and the O atom (group 16) has 6 valence electrons, for a total of 8 valence electrons.
3. Placing one bonding pair of electrons between the O atom and each H atom gives H:O:H, with 4 electrons left over.
4. Each H atom has a full valence shell of 2 electrons.
5. Adding the remaining 4 electrons to the oxygen (as two lone pairs) gives the following structure:

This is the Lewis structure we drew earlier. Because it gives oxygen an octet and each hydrogen two electrons, we do not need to use step 6.

## OCl−

1. With only two atoms in the molecule, there is no central atom.
2. Oxygen (group 16) has 6 valence electrons, and chlorine (group 17) has 7 valence electrons; we must add one more for the negative charge on the ion, giving a total of 14 valence electrons.
3. Placing a bonding pair of electrons between O and Cl gives O:Cl, with 12 electrons left over.
4. If we place six electrons (as three lone pairs) on each atom, we obtain the following structure:

Each atom now has an octet of electrons, so steps 5 and 6 are not needed. The Lewis electron structure is drawn within brackets, as is customary for an ion, with the overall charge indicated outside the brackets, and the bonding pair of electrons is indicated by a solid line. OCl− is the hypochlorite ion, the active ingredient in chlorine laundry bleach and swimming pool disinfectant.

## CH2O

1. Because carbon is less electronegative than oxygen and hydrogen is normally terminal, C must be the central atom. One possible arrangement is as follows:
2. Each hydrogen atom (group 1) has one valence electron, carbon (group 14) has 4 valence electrons, and oxygen (group 16) has 6 valence electrons, for a total of [(2)(1) + 4 + 6] = 12 valence electrons.
3. Placing a bonding pair of electrons between each pair of bonded atoms gives the following: Six electrons are used, and 6 are left over.
4. Adding all 6 remaining electrons to oxygen (as three lone pairs) gives the following: Although oxygen now has an octet and each hydrogen has 2 electrons, carbon has only 6 electrons.
5. There are no electrons left to place on the central atom.
6. To give carbon an octet of electrons, we use one of the lone pairs of electrons on oxygen to form a carbon–oxygen double bond:

Both the oxygen and the carbon now have an octet of electrons, so this is an acceptable Lewis electron structure. The O has two bonding pairs and two lone pairs, and C has four bonding pairs. This is the structure of formaldehyde, which is used in embalming fluid. An alternative structure can be drawn with one H bonded to O. Formal charges, discussed later in this section, suggest that such a structure is less stable than the one shown previously.

Example: Write the Lewis electron structure for each species.

1. NCl3
2. S22−
3. NOCl

Given: chemical species

Strategy: Use the six-step procedure to write the Lewis electron structure for each species.

Solution:

1. Nitrogen is less electronegative than chlorine, and halogen atoms are usually terminal, so nitrogen is the central atom.
The nitrogen atom (group 15) has 5 valence electrons and each chlorine atom (group 17) has 7 valence electrons, for a total of 26 valence electrons. Using 2 electrons for each N–Cl bond and adding three lone pairs to each Cl accounts for (3 × 2) + (3 × 2 × 3) = 24 electrons. Rule 5 leads us to place the remaining 2 electrons on the central N:

Nitrogen trichloride is an unstable oily liquid once used to bleach flour; this use is now prohibited in the United States.

2. In a diatomic molecule or ion, we do not need to worry about a central atom. Each sulfur atom (group 16) contains 6 valence electrons, and we need to add 2 electrons for the −2 charge, giving a total of 14 valence electrons. Using 2 electrons for the S–S bond, we arrange the remaining 12 electrons as three lone pairs on each sulfur, giving each S atom an octet of electrons:

3. Because nitrogen is less electronegative than oxygen or chlorine, it is the central atom. The N atom (group 15) has 5 valence electrons, the O atom (group 16) has 6 valence electrons, and the Cl atom (group 17) has 7 valence electrons, giving a total of 18 valence electrons. Placing one bonding pair of electrons between each pair of bonded atoms uses 4 electrons and gives the following:

Adding three lone pairs each to oxygen and to chlorine uses 12 more electrons, leaving 2 electrons to place as a lone pair on nitrogen:

Because this Lewis structure has only 6 electrons around the central nitrogen, a lone pair of electrons on a terminal atom must be used to form a bonding pair. We could use a lone pair on either O or Cl. Because we have seen many structures in which O forms a double bond but none with a double bond to Cl, it is reasonable to select a lone pair from O to give the following:

All atoms now have octet configurations. This is the Lewis electron structure of nitrosyl chloride, a highly corrosive, reddish-orange gas.

Exercise: Write Lewis electron structures for CO2 and SCl2, a vile-smelling, unstable red liquid that is used in the manufacture of rubber.

### Formal Charges

It is sometimes possible to write more than one Lewis structure for a substance that does not violate the octet rule, as we saw for CH2O, but not every Lewis structure may be equally reasonable. In these situations, we can choose the most stable Lewis structure by considering the formal charge on the atoms, which is the difference between the number of valence electrons in the free atom and the number assigned to it in the Lewis electron structure. The formal charge is a way of computing the charge distribution within a Lewis structure; the sum of the formal charges on the atoms within a molecule or an ion must equal the overall charge on the molecule or ion. A formal charge does not represent a true charge on an atom in a covalent bond but is simply used to predict the most likely structure when a compound has more than one valid Lewis structure.

To calculate formal charges, we assign electrons in the molecule to individual atoms according to these rules:

• Nonbonding electrons are assigned to the atom on which they are located.
• Bonding electrons are divided equally between the bonded atoms.

For each atom, we then compute a formal charge:

$$\text{formal charge} = (\text{valence } e^-) - \left(\text{nonbonding } e^- + \frac{\text{bonding } e^-}{2}\right)$$

To illustrate this method, let's calculate the formal charge on the atoms in ammonia (NH3), whose Lewis electron structure is as follows:

A neutral nitrogen atom has five valence electrons (it is in group 15).
From its Lewis electron structure, the nitrogen atom in ammonia has one lone pair and shares three bonding pairs with hydrogen atoms, so nitrogen itself is assigned a total of five electrons [2 non-bonding e− + (6 bonding e−/2)]. Substituting into Equation 4.4.1, we obtain

formal charge(N) = 5 − (2 + 6/2) = 0

A neutral hydrogen atom has one valence electron. Each hydrogen atom in the molecule shares one pair of bonding electrons and is therefore assigned one electron [0 non-bonding e− + (2 bonding e−/2)]. Using Equation 4.4.1 to calculate the formal charge on hydrogen, we obtain

formal charge(H) = 1 − (0 + 2/2) = 0

The hydrogen atoms in ammonia have the same number of electrons as neutral hydrogen atoms, and so their formal charge is also zero. Adding together the formal charges should give us the overall charge on the molecule or ion. In this example, the nitrogen and each hydrogen has a formal charge of zero; when summed, the overall charge is zero, which is consistent with the overall charge on the NH3 molecule.

Typically, the structure in which the formal charges on the atoms are closest to zero is the more stable Lewis structure. In cases where there are positive or negative formal charges on various atoms, stable structures generally have negative formal charges on the more electronegative atoms and positive formal charges on the less electronegative atoms. The next example further demonstrates how to calculate formal charges.

Example

Calculate the formal charges on each atom in the NH4+ ion.

Given: chemical species

Strategy: Identify the number of valence electrons in each atom in the NH4+ ion. Use the Lewis electron structure of NH4+ to identify the number of bonding and nonbonding electrons associated with each atom, and then use Equation 4.4.1 to calculate the formal charge on each atom.

Solution: The Lewis electron structure for the NH4+ ion is as follows:

The nitrogen atom shares four bonding pairs of electrons, and a neutral nitrogen atom has five valence electrons. Using Equation 4.4.1, the formal charge on the nitrogen atom is therefore

formal charge(N) = 5 − (0 + 8/2) = +1

Each hydrogen atom in NH4+ has one bonding pair. The formal charge on each hydrogen atom is therefore

formal charge(H) = 1 − (0 + 2/2) = 0

The formal charges on the atoms in the NH4+ ion are thus: Adding together the formal charges on the atoms should give us the total charge on the molecule or ion. In this case, the sum of the formal charges is +1 + 0 + 0 + 0 + 0 = +1, consistent with the charge of the ion.

Exercise

Write the formal charges on all atoms in BH4−.

If an atom in a molecule or ion has the number of bonds that is typical for that atom (e.g., four bonds for carbon), its formal charge is zero.

### Using Formal Charges to Distinguish between Lewis Structures

As an example of how formal charges can be used to determine the most stable Lewis structure for a substance, we can compare two possible structures for CO2. Both structures conform to the rules for Lewis electron structures.

## CO2

1. C is less electronegative than O, so it is the central atom.
2. C has 4 valence electrons and each O has 6 valence electrons, for a total of 16 valence electrons.
3. Placing one electron pair between the C and each O gives O–C–O, with 12 electrons left over.
4. Dividing the remaining electrons between the O atoms gives three lone pairs on each atom: This structure has an octet of electrons around each O atom but only 4 electrons around the C atom.
5. No electrons are left for the central atom.
6. To give the carbon atom an octet of electrons, we can convert two of the lone pairs on the oxygen atoms to bonding electron pairs. There are, however, two ways to do this.
We can either take one electron pair from each oxygen to form a symmetrical structure, or take both electron pairs from a single oxygen atom to give an asymmetrical structure:

Both Lewis electron structures give all three atoms an octet. How do we decide between these two possibilities? The formal charges for the two Lewis electron structures of CO2 are as follows:

Both Lewis structures have a net formal charge of zero, but the structure on the right has a +1 charge on the more electronegative atom (O). Thus the symmetrical Lewis structure on the left is predicted to be more stable, and it is, in fact, the structure observed experimentally. Remember, though, that formal charges do not represent the actual charges on atoms in a molecule or ion. They are used simply as a bookkeeping method for predicting the most stable Lewis structure for a compound.

Note

The Lewis structure with the set of formal charges closest to zero is usually the most stable.

Example

The thiocyanate ion (SCN−), which is used in printing and as a corrosion inhibitor against acidic gases, has at least two possible Lewis electron structures. Draw two possible structures, assign formal charges to all atoms in both, and decide which is the preferred arrangement of electrons.

Given: chemical species

Asked for: Lewis electron structures, formal charges, and preferred arrangement

Strategy:

A Use the step-by-step procedure to write two plausible Lewis electron structures for SCN−.

B Calculate the formal charge on each atom using Equation 4.4.1.

C Predict which structure is preferred based on the formal charge on each atom and its electronegativity relative to the other atoms present.

Solution:

A Possible Lewis structures for the SCN− ion are as follows:

B We must calculate the formal charges on each atom to identify the more stable structure. If we begin with carbon, we notice that the carbon atom in each of these structures shares four bonding pairs, the number of bonds typical for carbon, so it has a formal charge of zero. Continuing with sulfur, we observe that in (a) the sulfur atom shares one bonding pair and has three lone pairs, and a neutral sulfur atom has six valence electrons. The formal charge on the sulfur atom is therefore 6 − (6 + 2/2) = −1. In (b), the nitrogen atom has two lone pairs and shares two bonding pairs, giving it a formal charge of 5 − (4 + 4/2) = −1. In (c), nitrogen has a formal charge of −2.

C Which structure is preferred? Structure (b) is preferred because the negative charge is on the more electronegative atom (N), and it has lower formal charges on each atom as compared to structure (c): 0, −1 versus +1, −2.

Exercise

Salts containing the fulminate ion (CNO−) are used in explosive detonators. Draw three Lewis electron structures for CNO− and use formal charges to predict which is more stable. (Note: N is the central atom.)
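Because Equation 4.4.1 is simple bookkeeping, the formal charges quoted in the thiocyanate example can be verified directly. The following LaTeX fragment is a worked check, with electron counts read off the three resonance structures described in part B; it is an illustration, with the symbol FC used as shorthand for formal charge:

```latex
% Worked check of Equation 4.4.1 for the three SCN- resonance structures.
%   formal charge = valence e- - (non-bonding e- + bonding e-/2)
% (a) S-C#N: S has three lone pairs, N has one lone pair
% (b) S=C=N: S and N each have two lone pairs
% (c) S#C-N: S has one lone pair, N has three lone pairs
% Requires amsmath for align* and \text.
\begin{align*}
\text{(a)}\quad & \mathrm{FC}(\mathrm{S}) = 6-(6+\tfrac{2}{2}) = -1, \quad
                  \mathrm{FC}(\mathrm{C}) = 4-(0+\tfrac{8}{2}) = 0,  \quad
                  \mathrm{FC}(\mathrm{N}) = 5-(2+\tfrac{6}{2}) = 0,\\
\text{(b)}\quad & \mathrm{FC}(\mathrm{S}) = 6-(4+\tfrac{4}{2}) = 0,  \quad
                  \mathrm{FC}(\mathrm{C}) = 4-(0+\tfrac{8}{2}) = 0,  \quad
                  \mathrm{FC}(\mathrm{N}) = 5-(4+\tfrac{4}{2}) = -1,\\
\text{(c)}\quad & \mathrm{FC}(\mathrm{S}) = 6-(2+\tfrac{6}{2}) = +1, \quad
                  \mathrm{FC}(\mathrm{C}) = 4-(0+\tfrac{8}{2}) = 0,  \quad
                  \mathrm{FC}(\mathrm{N}) = 5-(6+\tfrac{2}{2}) = -2.
\end{align*}
```

In each structure the formal charges sum to −1, the overall charge of the ion, as required; structure (b) is the one that places the negative formal charge on the more electronegative nitrogen.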
http://www.oxfordjournals.org/our_journals/mnras/for_authors/
# Instructions to Authors

1 Overview
1.1 Scope
1.2 Paper types
1.3 Charges
2 Preparing a manuscript
2.1 LaTeX
2.2 Microsoft Word and other word processors
2.3 Contents
2.4 Figures and tables
2.5 Language
2.7 Catalogues and online-only material
2.8 Errata
3 Submitting a paper
3.1 Ethics
3.2 Newsworthy articles
3.3 Submissions through Overleaf
3.4 Submissions through ScholarOne Manuscripts
4 Editorial review
4.1 Decisions
4.2 Submitting a revised version
5 Publication
5.1 Author Services
5.2 Licence form
5.3 Artwork
5.4 Proofs
5.5 Open Access
5.6 Offprints
6 Style guide
6.1 Layout
6.2 Spelling, grammar, punctuation and mathematics
6.3 References and citations
6.4 Miscellaneous journal style
7 Contacts

### 1 Overview

Monthly Notices of the Royal Astronomical Society (MNRAS) is a peer-reviewed scientific journal which publishes research in astronomy and astrophysics. First published in 1827, MNRAS is one of the world’s largest and most prestigious astronomy journals.

Anyone may submit a paper to be considered for publication in MNRAS. There are no restrictions based on nationality, institutional affiliation, qualifications etc. Over three-quarters of papers published by MNRAS originate from outside the UK.

The processing of papers has two major – and largely separate – elements: editorial review by the Royal Astronomical Society (RAS, section 4), and production by Oxford University Press (OUP, section 5). Authors are asked to read these instructions carefully.

### 1.1 Scope

Papers submitted for publication in MNRAS are considered by members of the editorial board, who will usually seek the opinion of one or more expert referees. Decisions on whether or not to publish a paper are subjective, but the minimum requirements are:

a) The paper must present original research, clearly demonstrating its novelty beyond that of previously published work.
b) The results must be significant and likely to make an important contribution to the advancement of their field.
c) The paper must be clearly presented, written in good scientific English, and conform to journal guidelines for content and presentation (see section 2).
d) The subject must be of interest to readers of MNRAS and fall within the range of topics covered by the journal.

MNRAS publishes the results of original research in astronomy and astrophysics, including work which is observational, theoretical or concerned with astronomical instrumentation. Assessment of whether papers fall within this scope is made by the members of the editorial board, who will reject papers which are not on suitable topics.

### 1.2 Paper types

Three types of paper are published by MNRAS: Main Journal papers, Letters, and Errata.

Main Journal papers are the most common type of paper published. There are no page limits, but it is important for papers to be concise: referees and editors may suggest shortening of any that are not, which may lead to delay in acceptance.

Letters should be self-contained and describe the results of an original study whose rapid publication might be expected to have a significant and immediate impact on the development of research in the associated subject area. They must not exceed five pages in length, and are handled along a fast-track process. The page limit must be respected. Authors are required to state their reasons for seeking publication in the form of a Letter when submitting their manuscript. Letters are published rapidly after acceptance in a separately paginated section of the journal and appear online only.
They are published within 30 days of receipt of the final manuscript files in the production office, and linked immediately into the NASA ADS. This enables the fastest possible publication, widest dissemination to the research community and greatest impact. Electronic publication means that colour is fully supported, without charge and at the discretion of the author. Errata are short corrections to papers which have previously been published in MNRAS. Errata may only be submitted by the authors of the original paper, and should be used to correct errors which may lead to significant misunderstandings or incorrect conclusions. See section 2.8 for details of errata. ### 1.3 Charges There is no charge for submitting a paper to MNRAS, and no page charges for publication if the paper is accepted (although authors must ensure their papers are concise and may be required to shorten overly-long papers). There are two additional services which authors may choose to pay for if they wish: 1. No charge is made for colour figures in the electronic edition of the journal. Authors who wish to have their figures printed in colour (Main Journal only, Letters are not printed) are charged a flat rate of £200 + VAT per paper. See section 2.4 for more on colour printing. 2. All papers published in MNRAS are made available to all the subscribers to the journal. MNRAS also provides the option for authors to pay to make access free of charge to everyone, regardless of subscription status (author-pays open access). See section 5.5 for details. In rare cases when authors make excessive changes to their papers at the proof stage (see section 5.4), it may be necessary to charge for the increased production costs incurred. Authors can avoid this charge by carefully checking all versions of papers before they are submitted, and avoiding making substantial changes at the proof stage. ### 2 Preparing a manuscript Authors may prepare their manuscripts using any word processing package which can generate the document in a suitable format (see section 3.3 for suitable file formats). It is recommended that papers are prepared using LaTeX because this is the method best suited to the mathematical nature of the material. We can usually also accept papers written using Microsoft Word or other word processing packages, although these are not suitable for papers with significant mathematical content. ### 2.1 LaTeX For authors preparing their manuscripts using LaTeX, MNRAS has its own LaTeX class files which simulate the appearance of the journal page. Authors are encouraged to use these, although papers prepared using other class files can also be accepted. From June 2015 a major update to this package, version 3.0, has been made available. The journal class files and documentation are available at the Comprehensive TeX Archive Network (CTAN) site in this directory. The package consists of a readme.txt file, the class file mnras.cls, a bibliography style file mnras.bst for authors wishing to use BibTeX, and documentation explaining how to use them. A simple template paper is also available. ### 2.2 Microsoft Word and other word processors Papers will also be considered if they have been prepared using word processors such as Microsoft Word (although we encourage authors to use LaTeX). Word processed papers should follow the same style and layout as those prepared with LaTeX. Authors should pay particular attention to features which are not automated in these packages, such as references, figure numbers etc. 
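To make the class-file workflow of section 2.1 concrete, here is a minimal sketch of a manuscript skeleton built on mnras.cls and mnras.bst. Only the file names and key word string come from this document; the class options, environments and natbib-style citation commands follow the journal's template package, and the title, authors and bibliography key are placeholders:

```latex
% Minimal MNRAS manuscript sketch. File names (mnras.cls, mnras.bst) are from
% section 2.1; all content below is placeholder text.
\documentclass[fleqn,usenatbib]{mnras} % options as used in the journal template
\pdfminorversion=5 % workaround for ScholarOne PDF upload failures (section 3.4)

\title[Short running head]{Full title of the paper}
\author[A. N. Author et al.]{A. N. Author,$^{1}$ B. Coauthor$^{2}$\\
$^{1}$Department of Physics, Example University, Example City, Country\\
$^{2}$Institute of Astronomy, Another University, Another City, Country}

\begin{document}
\maketitle

\begin{abstract}
A single paragraph of not more than 250 words (200 for Letters) summarizing
the goals, methods and new results of the paper.
\end{abstract}

\begin{keywords}
galaxies: active -- galaxies: Seyfert -- radio continuum: galaxies
\end{keywords}

\section{Introduction}
Harvard-style citations, e.g. \citet{smith1991} or \citep{smith1991}.

\section{Conclusions}
Sections are numbered; the last numbered section presents the conclusions.

\section*{Acknowledgements}
Facility acknowledgements go here, not in footnotes.

\bibliographystyle{mnras} % uses mnras.bst
\bibliography{example}    % entries in example.bib (placeholder file name)
\end{document}
```

The class file simulates the appearance of the journal page, so compiling a skeleton like this lets authors judge length and layout before submission.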
### 2.3 Contents Authors must include the following contents in their manuscripts; any paper which does not will be returned to the authors for correction before it is considered for publication. • Pages: all pages must be numbered. • Title page: the title page must include the title of the paper, the names of the authors, full institutional addresses for each author, and the address for correspondence if that is different. E-mail addresses and present addresses (if different from those where the work was done) may be included as footnotes. • Abstract: authors must provide an abstract (except for Errata, which do not have abstracts), normally of not more than 250 words for Main Journal papers or 200 words for Letters. The abstract should be presented as a single paragraph and briefly summarize the goals, methods, and new results presented in the paper. • Key words: the abstract must be followed by between one and six key words from the MNRAS key words list – this list is common to MNRAS, ApJ and A&A, and only key words that appear on the list are allowed. • Sections: the paper must be divided into a suitable number of sections and, if necessary, subsections. Sections and subsections must be numbered. • Tables and figures: numbers and captions must be provided for every table and figure; all must be cited in the text of the paper in the correct numerical order. See section 2.4 for guidelines on the preparation of figures and tables. • Mathematics: equations must be numbered. • References: all citations in the text must appear in a list of references at the end of the paper, and vice versa. The reference list must be in alphabetical order. Citations must be in the Harvard author (year) style e.g. Smith & Jones (1991). • Facility acknowledgements should be placed in the Acknowledgements section, and not as footnotes. • Crossref Funding Data Registry: in order to meet your funding requirements authors are required to name their funding sources, or state if there are none, during the submission process. For further information on this process or to find out more about the CHORUS initiative please click here. These are the minimum requirements for consideration; authors should also see section 6 for further information on MNRAS journal style. ### 2.4 Figures and tables Figures should be prepared to publication standard. For line diagrams and plots, authors should use vector graphics. For images and photographs, high-quality raster formats are preferable (though please note the file size limit in section 3.3). Technical details on the preparation of figures are discussed in section 5.3. All figures and tables must be numbered, accompanied by a suitable caption, and be mentioned in the text in the correct numerical order. They should be placed at logical points in the text (i.e. not all at the end). All figure axes must be labelled, including units where applicable. Colour figures are supported for free in the online edition of the journal, but authors will be charged for colour printing (the current charge is £200 + VAT for the whole paper). If authors choose not to pay for colour printing, they should ensure that their figures are legible when printed in black & white, or provide separate sets of figures for the print and online editions of the journal. 3D figures, such as those generated by S2PLOT, are fully supported in the online edition of the journal. 
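As a sketch of how the figure and table requirements above (together with the table rules of section 6.1) might look in a LaTeX manuscript — the file name, data and captions are all placeholders:

```latex
% Requires \usepackage{graphicx} in the preamble for \includegraphics.
% Figure: numbered, captioned, axes labelled, and cited in the text
% as Fig.~\ref{fig:spectrum}.
\begin{figure}
  \includegraphics[width=\columnwidth]{fig1.eps} % one figure per file
  \caption{Example spectrum. Both axes are labelled and include units.}
  \label{fig:spectrum}
\end{figure}

% Table: horizontal rules only at the top, under the column headings,
% and at the bottom; no vertical rules.
\begin{table}
  \caption{A sample of the data is shown here; the full table is
  available online (see section 2.7).}
  \label{tab:catalogue}
  \begin{tabular}{lcc}
    \hline
    Name & Flux (mJy) & Distance (Mpc)\\
    \hline
    Object 1 & 1.2 & 10.3\\
    Object 2 & 0.8 & 21.7\\
    \hline
  \end{tabular}
\end{table}
```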
### 2.5 Language

Authors for whom English is not their first language should have their manuscript inspected by an English-speaking colleague. Language editing, particularly if English is not your first language, can be used to ensure that the academic content of your paper is fully understood by the journal editors and reviewers. Please note that edited manuscripts will still need to undergo peer-review by the journal. Language editing does not guarantee that your manuscript will be accepted for publication. For further information on this service, please click here. Several specialist language editing companies offer similar services and you can also use any of these. It is not mandatory to use a language editing service. Authors are liable for all costs associated with such services.

It is the responsibility of the authors to ensure they have the necessary copyright permissions for any material (including, but not limited to, figures and text) used in their paper. Any re-use of material which has previously been published – even by the same authors, and/or in the same journal – must be accompanied by a citation to the original source and the necessary copyright permissions obtained. Quotation marks should be used around any text which has been reproduced from elsewhere, in addition to a citation. Failure to properly cite material which has previously been published constitutes plagiarism and is a serious breach of scientific ethics. Papers which are found to contain plagiarized (including self-plagiarized) material will be rejected. All submissions are screened for originality using the iThenticate plagiarism detection system.

Note that the copyright for previously published material may rest with its publisher not its author, so it is not sufficient to merely obtain the original author’s permission. This also applies in the case of the author’s own previous publications. Please refer to the relevant journal or publisher websites for instructions. Authors who wish to re-use material previously published in MNRAS should refer to the instructions at http://www.oxfordjournals.org/access_purchase/rights_permissions_ras.html.

Third-Party Content in Open Access papers

If you will be publishing your paper under an Open Access licence but it contains material for which you do not have Open Access re-use permissions, please state this clearly by supplying the following credit line alongside the material:

Title of content
Author, Original publication, year of original publication, by permission of [rights holder]

This image/content is not covered by the terms of the Creative Commons licence of this publication. For permission to reuse, please contact the rights holder.

### 2.7 Catalogues and online-only material

Papers may be accompanied by online-only supporting information, such as long data tables, videos, additional figures, or supplementary appendices. Authors are particularly encouraged to make catalogues and databases available, so readers may reproduce their results or use them for future studies. Online material will be available for download alongside the paper on the journal website. MNRAS can host all commonly used file types with a file size limit of 10 MB per file. If you have a query regarding hosting a specific file type, please contact the publishers (see section 7).
Authors who wish to make additional material available online only should follow this procedure: • In the case of long tables, the paper should include a sample table consisting of the first 5–10 rows of data, and the caption should state that the full table is available online. • In the case of videos, extra figures or appendices, these should be mentioned in the text (or figure caption) along with a statement that they are available online. • The file(s) containing the online material should be uploaded to ScholarOne as ‘Supplementary material (online)’, and mentioned in the box provided. • The file(s) will be placed online in exactly the format in which they are provided – the publishers will not modify them in any way. For tables, authors should provide a machine-readable file (e.g. ASCII.txt) containing the data and a description of the columns. Authors can in addition provide a formatted PDF containing the full table if they wish. Additional figures (with captions) and appendices should be provided as PDF files. LaTeX files should be avoided, as they will not be compiled before being placed online. Authors are encouraged to mount machine-readable versions of their tables on the VizieR database of astronomical catalogues at the Centre de Données astronomiques de Strasbourg (CDS) website. It is the responsibility of the author to upload such material to CDS and to ensure that it is in the correct format for the database. Authors should consult the CDS website for instructions on preparing and submitting tabular data, which include a template that can be adapted for MNRAS tables. A hyperlink can be included to CDS from the electronic text of the MNRAS article. ### 2.8 Errata Errata are short corrections to previously published papers. Errata may only be submitted by the authors of the original paper, and should be used to correct errors which may lead to significant misunderstandings or incorrect conclusions. Errata should be prepared in the same way as other papers, with the following exceptions: • The title should be ‘Erratum: [original title]’. In most cases the author list will be the same as the original paper. • There should be no abstract. Key words should be the same as the original, but with the addition of ‘errata, addenda’ at the start of the list (even if this results in 7 key words). • The first sentence should identify the original paper, which should be followed by a description of the error. There should be an explanation of how the error arose, what needs to be changed (e.g. replacement figure or table, new text), how these affect the conclusions of the earlier paper, and the erratum should finish with any references. See errata published in recent volumes of the journal for examples of the format. • When submitting on the ScholarOne website, select Erratum as the manuscript type, enter ‘Erratum’ instead of an abstract, and in the cover letter quote the original manuscript ID and give a one or two sentence explanation of why an erratum is required. ### 3 Submitting a paper New manuscripts must be submitted electronically via the ScholarOne Manuscripts (S1M, formerly known as Manuscript Central) submission and tracking system at http://mc.manuscriptcentral.com/mnras. Paper or email submissions are not accepted. ### 3.1 Ethics Authors who submit a paper must be able to certify that the paper is original work, has not been published before and is not being considered for publication elsewhere. 
MNRAS is governed by the RAS Editorial Code of Practice, whose terms must be followed by all authors, editors and referees. Authors should familiarize themselves with their obligations under the Editorial Code. In particular, authors are reminded that any of the following are considered to be serious breaches of scientific ethics, which will result in the immediate rejection of their paper: • Submitting a paper to more than one publication at the same time. • Personal attacks directed at referees, editors or other authors. ### 3.2 Newsworthy articles The RAS Press Officer will be happy to assist with publicity and press releases in cases where submissions are likely to be of more general interest e.g. with the popular media. Authors wishing to take advantage of this service should request it during the submission process. ### 3.3 Submissions through Overleaf For creating manuscripts in LaTeX, MNRAS recommends the use of its own LaTeX class files. Our class files are available online at Overleaf and also as a downloadable package via the links below. Overleaf is a free, collaborative online LaTeX editor that allows you to write your manuscript in a TeX or rich text environment, to generate PDF outputs as you write, and to share your manuscript with co-authors and collaborators. Overleaf also allows you to submit your manuscript files directly into our online submission system, without needing to upload files manually, as well as to make updates to those files if preparing a revised submission. If you are submitting via Overleaf please use the link below, and adapt the .tex file provided or upload your own manuscript files. https://www.overleaf.com/latex/templates/monthly-notices-of-the-royal-astronomical-society-mnras-latex-template-and-guide-for-authors/kqnjzrwjwjth#.V_zm-Jb2aUm Authors uploading their own manuscript files to Overleaf may also use the MNRAS LaTeX class files (see section 2.1). ### 3.4 Submissions through ScholarOne Manuscripts Manuscripts must be submitted as a single file containing all figures and tables, in a single line spaced format so that the length of the paper may be judged. ScholarOne Manuscripts is able to handle manuscripts in PDF, PS, Word, RTF or plain text formats, which are automatically converted to a single PDF for use by the editor and reviewers. Do not zip or otherwise compress this file. Designate this file as ‘Complete manuscript file (PDF, PS or DOC)’. Files should be kept as small as possible at this stage – files larger than 10 MB are not supported and should not be uploaded without prior approval. Authors may need to reduce the quality of their figures to meet this file size requirement; if the paper is accepted then higher quality figures may be reincorporated at the production stage (see section 5). Any material for publication as online-only supporting information (see section 2.7) should be uploaded as ‘Supplementary material (online)’. Authors may also upload supplementary material which they wish to make available to the editor and referee but is not intended for publication, such as additional data tables or figures. This should be designated as ‘Supplementary material (file for reviewer)'. Both forms of supplementary material will be automatically added to the PDF generated by the system. For authors using LaTeX: our ScholarOne website does not compile LaTeX files, so please compile a PDF or PS before uploading. 
PDF files generated with pdfTeX/pdfLaTeX sometimes fail on the ScholarOne Manuscripts system; this can be fixed by adding \pdfminorversion=5 to the preamble of your LaTeX file, or alternatively by converting to a PS file before uploading. Please check the PDF generated by the system before submitting. All authors must also upload their manuscript and figure source files. For authors using LaTeX, this means the .tex, .eps, .bib etc. files. For authors using Word, this means the .doc or .docx and figure files. All the source files should be combined into a single .zip or .tar.gz archive and uploaded as ‘Source files (.zip or .tar.gz)’. The source files will be used for typesetting purposes and must be uploaded with every version of your paper, i.e. original version and all revisions. The source files must correspond exactly to the complete manuscript, otherwise delays in publication will occur. Please include an explanatory readme file in your archive. If you have used BibTeX to generate your bibliography in LaTeX, also include the .bib file in the archive along with the .bbl and .tex files; this will aid the typesetting process. How to submit a new paper To submit a new paper, click on ‘Submit a Manuscript’ on the top toolbar, or use the blue star icon. There are seven steps to complete when submitting a paper, which are listed on the left hand side of the screen. Some information, such as your name as author, is added automatically. A green tick appears next to each step as it is completed. The steps do not have to be completed in sequence and the process can be abandoned mid-way through and picked up again at a later session, the information being stored as an ‘Unsubmitted manuscript’. To continue with the submission at a later date, click on ‘Unsubmitted manuscripts’ from the Author Centre. The paper will appear in a table at the bottom and you should then click on ‘Continue submission’. All stages must be completed for a successful submission. Compulsory fields are marked with a purple ‘req’. Do not use your browser’s ‘back’ or ‘forward’ buttons, but move through the stages either by clicking on the step numbers on the left hand side of the page or by using the system’s ‘next’ and ‘previous’ buttons. Step 1 - Enter the manuscript type (Main Journal, Letter or Erratum), title and abstract. The ‘running head’ is the short form of the title which appears at the top of odd-numbered pages. Errata do not have abstracts – please enter ‘Erratum’ into the box instead. Step 2 - Choose at least one and up to six keywords from the list provided. Note: this list currently differs slightly from the list of approved keywords for use in the paper – if one of your keywords is missing then please leave it out. Step 3 - List all authors of the paper. You are automatically added as first author. Additional authors may be added and the order changed using the order drop-down box in the first column of the table. All the authors must be listed. Please use the ‘Find’ button to avoid duplication of accounts. If any co-authors do not yet have accounts on ScholarOne Manuscripts, fill out their details to create a new account and they will be notified by email. The order of the authors on ScholarOne should match that on the PDF; the ‘first author’ is the one whose name appears first on this list. The ‘corresponding author’ is the one to be listed as such on the final published paper, whilst the ‘contact author’ is the person we will correspond with during the peer review and publication processes. 
The ‘submitting author’ is whichever author completes the manuscript submission process. In most cases all four of these will be the same person, but there is no requirement for this and they may be different if necessary. Step 4 - Authors may optionally designate particular editors and reviewers that they would prefer not to assess their paper. Reasons must be given in the cover letter (next step). The editor will be informed of the request, but is under no obligation to grant it. Step 5 - A cover letter may be added here, which will be seen by the editorial office only (i.e. not the referee). Please do not use this to summarize your results – the abstract already does this. Instead, use this box to highlight any special handling required, or to communicate with the editorial office. For example, the cover letter should be used to highlight any online material, explain requests for non-preferred reviewers and editors, list any companion papers or earlier papers in the same series etc. Only attach a file if absolutely necessary. Step 6 - Upload your files here, giving each file a designation from the drop down list. See section 3.3 for details of which files you should upload. Make sure to click on ‘Upload Files’ at the bottom of the screen. All files, except those designated ‘not for review’, will be combined into a single PDF file. Step 7 - Here you will see a checklist of what you have entered. Before you can complete your submission, you must check the PDF generated by the system. This is exactly what will be seen by the editor and referee, so if anything is missing or wrongly included it should be corrected now. Once the PDF has been checked carefully, submission can be completed by clicking the 'submit' icon. You will receive confirmation on screen and via email. Keep a note of your Manuscript ID; this will help you track your submission via ScholarOne Manuscripts. The Editorial Office will contact you as soon as a decision has been made. ### 4 Editorial review Manuscripts submitted to MNRAS undergo editorial review by the Royal Astronomical Society, via a process of scholarly peer review. Each paper is assessed by a member of the Editorial Board, who in most cases will solicit the opinion of one or more expert reviewers (also called referees). Reviewers critically examine the content of the paper and make recommendations on its suitability for publication. Reviewers may choose whether to reveal their identity to the authors; editors usually remain anonymous. The scientific editors are assisted by a team of Assistant Editors (formerly Editorial Assistants), who act as the primary point of contact and handle the administration of each paper. ### 4.1 Decisions Based on the report(s) of the reviewer(s), the editor will choose to make one of the following decisions on each paper: Accept – the paper is immediately accepted for publication and forwarded to the publishers. Accept after revision – very minor changes, such as corrections to language or layout, are required. Once these have been made the paper will be forwarded to the publishers without further editorial review. Minor/Moderate/Major revision – changes to the content of the paper are required before it can be published. The nature of the revisions required will be explained in the report. Once these changes have been made the paper will be reconsidered. Withdraw – the editor and/or referee feel that the paper is not suitable for publication. 
The authors are therefore advised that they should withdraw their paper, and should inform the editorial office if they wish to do so. However, the authors may instead choose to modify their paper and submit a new version if they believe they can adequately address the report.

Reject – the editor feels that the paper is not suitable for publication, and cannot be made so through modification. All papers rejected at this stage are confirmed by a second editor before the decision is forwarded to the authors. The paper will not be considered any further, and the authors may not submit a revised version.

### 4.2 Submitting a revised version

If the editor decides to request modifications to a manuscript, the authors are allowed a maximum of six months to complete them (two months for Letters).

Authors who submitted the original version of their manuscript using Overleaf can login to their Overleaf project to prepare and submit the revised version. Authors who submitted their manuscript through ScholarOne Manuscripts should follow the instructions below.

The revised version of the manuscript should be uploaded to ScholarOne Manuscripts by logging in, opening the Author Centre, and clicking on the purple button marked ‘Click here to submit a revision’. Do not use the blue ‘submit a new manuscript’ button for revised papers. Locate the entry for the paper in the table, and then click on the ‘create a revision’ link. Another seven-step process is then required. Steps 2–7 are identical to steps used when submitting a new manuscript (see section 3.4), and are automatically completed with the information from the original submission. Authors should check these carefully and make any modifications necessary. Step 1 is new, and requires the authors to enter a response to the editor and/or referee’s comments on their earlier version. Changes to the manuscript should be highlighted (e.g. in bold or colour), to assist the referee and editor. The response to the previous report should be as specific as possible, and directly address each of the points raised by the editor and/or referee. The process may be interrupted and continued at a later date. The partially-complete submission can be found under ‘Unsubmitted manuscripts’ in the Author Centre.

Authors should also upload a clean file (remove bold font or track changes) for the publisher, since uncorrected versions of accepted manuscripts are now published online ahead of the proof-corrected versions (see below).

### 5 Publication

Once a paper has been accepted for publication, it will be forwarded by the RAS to the publishers, Oxford University Press (OUP). An uncorrected version of your manuscript will appear online on the Advance Access page within 24 hours of you completing your Licence to Publish form. Appearance in Advance Access constitutes publication and establishes precedence. Papers published in Advance Access are citable using the DOI and publication date. The paper will then be copy-edited and typeset from the supplied electronic files. After proof correction, the final version of your article will be immediately published in an online issue, and the uncorrected proof will be taken off the Advance Access page. Once published in an issue, articles can be cited by year, volume and article page number. Please note that Advance Access is granted when the Editors consider that extensive copy-editing of the manuscript will not be required.
Authors wishing to take advantage of this early visibility should ensure that their manuscripts are as free as possible from spelling, syntactical and other language errors. All articles are also published in print. OUP aims to publish all MNRAS papers online within 30 days of receipt in the production office. ### 5.1 Author Services A variety of author services are available from Oxford University Press. For more information please see the ‘For Authors’ section of the Oxford Journals website. Online production tracking is available for accepted articles through OUP Author Services. Author Services enables authors to track their article – once it has been accepted – through the production process to publication online and in print. The author will receive a ‘Welcome to Oxford Journals!’ e-mail with a link that enables them to set up a ‘My account’. Authors can check the status of their articles online using this account. ### 5.2 Licence form Upon receipt of accepted manuscripts at Oxford Journals authors will be invited to complete an online licence to publish form. ### 5.3 Artwork Guidelines for the use of figures were given in section 2.4. In this section, detailed instructions are given for the preparation of artwork which is suitable for professional publication and printing. Authors are asked to bear in mind, when preparing their diagrams, the likely reduction that will be needed when the figure is placed in the journal page. It is important to ensure that the line thickness used will withstand a possibly significant reduction in size. This applies to all aspects of the figure, but dotted and dot-dashed lines can cause particular problems. For all graphics files please make sure that the line weight is acceptable – the weight should not be less than 0.3 pt at final size. Finer lines and points than this will not print, even if you can see them on your laser printed hard copy – bear in mind that your laser printer has a far lower resolution than the imagesetter that will be used at the journal printers. Do not use hairlines as these can effectively disappear (they print at 1/1200th of an inch in thickness) when printed on a high-resolution imagesetter. When selecting line styles avoid triple-dot-dashed lines as this line style is overly complicated and is not always supported by typesetting, PostScripting and artwork software. Solid, dotted, dashed, dot-dashed, double-dot-dashed and dot-double-dashed lines are all OK. Axis labelling, lettering and any plotting symbols should be sized appropriately for the figure and its likely final size. For example, a relatively empty figure containing only a couple of line plots will be reduced to a single journal column (84 mm wide), and should therefore have thick enough lines and large enough labelling to withstand reduction perhaps to one-half or one-third of original size, or even smaller. Labelling that is far too large for a figure can also be problematic, and may look very odd on the typeset page. Unsuitable artwork will be referred back to the author, inevitably leading to delay in publication. Grey-scale and half-tones Grey-scale images can be tricky to reproduce well, owing to the slight but unavoidable degradation (loss of contrast) that occurs during the printing process (which involves wet ink on absorbent paper). 
Aspects that cause particular problems include: many shades of grey in a figure with only subtle differences between them; very fine tints or very solid tints; large areas of dark grey and black next to each other; black contours or symbols overlaid on a dark grey background. Steps that authors can take to remedy these problems and so improve the final result include: avoiding very fine or very solid (e.g. 80 per cent) tints; increasing the contrast between shades as much as possible; using fewer different levels of grey; reversing the grey-scale so that large areas of dark grey next to black become light grey next to white; making contours/symbols white where they are overlaid on dark grey shades; making figures as close to the final size as possible, to minimize the reduction needed; or even considering whether grey shading is really needed at all in a figure – e.g. could contours alone be used to represent the data, or could cross-hatching be used to represent particular regions of a graph or histogram?

File formats

The preferred format for electronic graphics files is Encapsulated PostScript (EPS), although PDF and TIFF (Tagged Image File Format) files can also be used. EPS files should be saved with a PC preview/header to allow viewing on screen, cropped tightly, and saved with a minimum amount of white space around the illustration. All fonts and any logos should be embedded as part of the file, and please use a common font like Times, Arial or Helvetica for labelling. Please also make sure that all labelling to be included in the figure [e.g. (a), (b), names of objects in multi-panelled figures, etc.] is embedded in the file – please do not use LaTeX code to include these labels as the figures are processed entirely separately from the LaTeX code.

Authors should take care in particular to make sure that the bounding box of the EPS file encompasses the entire visible area of the image. If the bounding box is not large enough, the figure will appear cropped when imported into the typesetter’s software (Adobe Photoshop or Adobe Illustrator). Ideally, the EPS file should be scaled to the final size and have the desired aspect ratio. Do not alter the aspect ratio using LaTeX code as the files are dealt with separately from the LaTeX file. Also please note that the typesetters cannot use graphics that have been produced using the LaTeX ‘picture’ environment.

TIFF files should be saved with a minimum amount of white space around the illustration, and with the PC option if possible. Please make sure that the TIFF file has sufficient resolution: this is 300 pixels per inch (ppi) for grey-scale/half-tone figures (e.g. photographs), and 800 ppi for combined line/tone figures, at final size. For example, a figure that is to fill one column (approx. 80 mm wide, or 3.15 inches) needs to be at least 945 pixels wide if it is a photograph (3.15 × 300) or 2520 pixels wide (3.15 × 800) if it is a combination of a photograph and labelling. If the file is very large then it can be compressed: please tell us which compression method has been used.

Graphics files should be named to indicate clearly to which illustration they pertain (e.g. fig6.eps for Fig. 6). Please do not supply figures with long, complicated filenames. Please supply the figures as one figure per file and not as multi-page PS or TIFF files.

Colour

Note that there is a charge for colour printing – see section 2.4. Colour figure files should be supplied as CMYK if possible, rather than RGB.
It may not always be possible to get an exact match for all of the colours in a particular figure: in particular, colours that appear fluorescent on-screen will look flatter when printed. The exact appearance of a colour figure at any stage depends on the display medium and settings used: e.g. EPS file viewed on screen, laser print, CMYK printing of ink on paper. Any colour files not printed in colour will be published as grey-scale in the paper journal and in colour on the web, free of charge. If you have figures that are to be processed in this way, please check the proofs very carefully, as false colours can sometimes reproduce in unusual ways when converted to grey-scale mode. If you wish, you can supply separate grey-scale and colour files for the print and web versions of your paper. ### 5.4 Proofs Once a paper has been received by the publishers it is edited for style and language, and then typeset ready for publication. At this stage the authors are sent a copy of the typeset paper, referred to as the ‘proof’. This is the final chance for authors to make any corrections to their paper, so it is vital that the proofs are checked thoroughly for any mistakes. Any subsequent erratum should relate only to significant errors that are identified in the scientific content of the publication, not to cosmetic changes. Note that although papers are typeset using the author's source files as a starting point, the paper will have been converted to XML in the typesetter's own system and the PDF proofs created from this. It is therefore not possible to submit corrections using new LaTeX or Word files. Short LaTeX excerpts for mathematical corrections are acceptable. At the proof stage the authors should carefully check their paper, including spelling, grammar, style, layout, referencing etc. If references need to be updated, please carefully check the textual citations as well. All corrections must be clearly marked and returned to the publishers as soon as possible, along with the answers to any queries made by the publishers. Changes to the substantive content or scientific results of a paper should be avoided. Proofs should be returned by the date requested if at all possible – delay in returning the proofs will lead to delay in publication of the paper. Sometimes important new results become available between the time when a paper is accepted and when the proofs are returned. The authors may choose to mention these if they wish by inserting a 'Note added in proof' at the end of the paper, just before the references. This should not normally exceed two or three sentences in length. Please appreciate that in order to achieve the rapid 30-day publication goal, the production schedule is very tight. If authors realize that they need to make substantive changes to their paper (beyond minor changes of e.g. spelling and grammar) after acceptance, the changes must be cleared by the RAS, and may need to be referred back to the editor and/or referee. Any such changes notified after the paper has gone into production (i.e. the day after the acceptance email is sent from the RAS) cannot be incorporated into the paper before it is typeset. Such changes will therefore need to be made as part of the proof corrections. To avoid excessive proof corrections and the delay that these can cause, authors are strongly encouraged to ensure that each version of their paper that they submit to MNRAS is completely ready for publication. Authors may be charged for excessive changes during production (see section 1.3). 
After typesetting, editing, and proof correction, articles are immediately published in an online issue and this constitutes official publication. Once published, articles can be cited by year, volume and article page number.

### 5.5 Open Access

Authors may optionally choose to publish their paper under the Oxford Open scheme. This author-pays open access service makes papers freely available to everyone, online and immediately upon publication, for a fee. There is no need for authors to indicate that they wish to use Oxford Open until after a paper has been accepted. All open access papers are treated in the same way as any other paper; editors and referees will not be informed if an author opts for this service. These papers go through the journal’s standard peer-review process and will be accepted or rejected based on their own merit.

Oxford Open articles are published under Creative Commons licences. Authors publishing in MNRAS can use the following Creative Commons licence for their articles:

• Creative Commons Attribution licence (CC-BY)

You can pay Open Access charges using our Author Services site. This will enable you to pay online with a credit/debit card, or request an invoice by email or post. The Open Access charges applicable are:

• Regular charge – £1450 / $2550 / €2175
• List B Developing country charge* – £725 / $1275 / €1088
• List A Developing country charge* – £0 / $0 / €0

Discounted rates are available for RAS Fellows (rates available here). Please note that these charges are in addition to any colour printing charges that may apply. Orders from the UK will be subject to the current UK VAT charge. For orders from the rest of the European Union, OUP will assume that the service is provided for business purposes. Please provide a VAT number for yourself or your institution and ensure you account for your own local VAT correctly.

### 5.6 Offprints

Authors will be provided with a PDF offprint on publication of their paper. These are provided free of charge to the corresponding author, and may be distributed subject to the accompanying terms and conditions. For main journal articles, paper offprints of the published article may be purchased if ordered via the method stipulated on the instructions that accompany the proofs. Note that it is not uncommon for printed offprints to take up to eight weeks to arrive after publication of the journal. Single copies of the Journal in which an author’s paper is published (back issues) can also be ordered from the Author Services site.

### 6 Style guide

Papers published in MNRAS follow the journal’s house style. The minimum requirements for papers were set out in section 2.3. Full compliance with MNRAS style will be ensured by the publishers, but the authors should note the points below (which are not intended to be exhaustive) on common points of style. Manuscripts should be prepared accordingly.

### 6.1 Layout

Papers should be formatted with two columns (except the abstract) and single line spaced. A single column layout may be used only if necessary for the display of numerous very long equations. The journal is printed on A4-sized paper. Sections should be numbered 1, 2, 2.1, 2.1.1 etc. Appendices should be labelled A, B, etc. Capital letters should be used only where they would occur in a normal sentence – e.g. ROSAT observations of the unusual star…, not ROSAT Observations of the Unusual Star…, with the exception of main section headings which are all capitals (e.g. INTRODUCTION).
The first numbered section (after the abstract) should be the Introduction, and the last numbered section should present the authors’ conclusions. These should be followed by un-numbered Acknowledgements and References sections, with any Appendices appearing at the end (after the list of references).

Between one and six key words should be selected from the MNRAS key words list. No other key words may be used. The correct layout for key words (note punctuation) is, for example, ‘Key words: galaxies: active – galaxies: Seyfert – radio continuum: galaxies.’

Figures and tables should be referred to as e.g. Fig. 1 and Table 1, unless they are from another paper, in which case fig. 1 and table 1 should be used. Where a figure has several parts, labels (a), (b) etc. should be added as appropriate. Figures (plots) containing quantitative information should have borders on all sides and fiducial marks on every border. Axes should be labelled and include the units. Tables should only have horizontal lines at the top and bottom, and under the column headings; no vertical lines should be used. Authors should note any special instructions regarding sizing or layout of figures and tables in their cover letter.

### 6.2 Spelling, grammar, punctuation and mathematics

Punctuation

Hyphens (one dash in LaTeX) should be used for compound adjectives (e.g. low-density gas, least-squares fit, two-component model). This also applies to simple adjectival units (e.g. 1.5-m telescope, 284.5-nm line), but not to complex units or ranges, which could become cumbersome (e.g. 15 km s–1 feature, 100–200 µm observations). Some words (e.g. time-scale) are always hyphenated as part of journal style (see below).

N-rules (two dashes in LaTeX): these are longer than hyphens and are used (i) to separate key words, (ii) as parentheses (e.g. the results – assuming no temperature gradient – are indicative of …), (iii) to denote a range (e.g. 1.6–2.2 µm), and (iv) to denote the joining of two words (e.g. Kolmogorov–Smirnov test, Herbig–Haro object). M-rules (three dashes in TeX/LaTeX) are not used in MNRAS.

Spelling and grammar

Please use British English spellings – e.g. centre not center, labelled not labeled. For words ending in -ise/yse or -ize follow this style: use -ise/yse for devise, surprise, comprise, revise, exercise, analyse; use -ize for recognize, criticize, minimize, emphasize, organize, ionize, polarize, parametrize (note the spelling of this word in particular). ‘None’ is a singular word (none of the stars is a white dwarf), whilst ‘data’ is a plural word (these data show…). Miscellaneous journal spellings: acknowledgements, artefact, best-fitting (not best-fit), disc (except computer disk), haloes (not halos), hotspot, none the less, non-linear, on to, time-scale. For any other spellings, use whichever version is listed first in the Oxford English Dictionary.

Mathematics

Scalar variables are italic; vectors are bold italic (no arrows); matrices are bold Univers font (like bold sans serif); dot products are denoted by a bold centred dot •, cross-products by a bold multiplication sign ×. Differential d, complex i, exponential e, sin, cos, tan, log, etc., are roman (not italic). Sub/superscripts that are physical variables are italic, while those that are merely labels are roman (e.g. Ct and Fν but Teff and bmax). Equations should be punctuated as part of the sentence. Displayed equations are ranged left (i.e. no indent).
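To illustrate the mathematics conventions just described — italic scalar variables, roman ‘e’ and function names, italic physical subscripts, and an equation punctuated as part of the sentence — here is a short LaTeX sketch; the quantity and all symbols in it are invented for the illustration:

```latex
% Illustrates the section 6.2 mathematics conventions. F_nu, F_0, tau,
% omega and phi are invented symbols, not journal-mandated notation.
The flux density decays as
\begin{equation}
  F_{\nu}(t) = F_{0}\,\mathrm{e}^{-t/\tau}\cos(\omega t + \phi),
  \label{eq:decay}
\end{equation}
where $\tau$ is the decay time-scale.
% \mathrm{e} sets the exponential in roman; \cos sets the function name
% in roman; the subscript $\nu$ is italic because it is a physical
% variable (contrast $T_{\mathrm{eff}}$, where 'eff' is a label and so
% roman); the trailing comma punctuates the equation as part of the
% sentence.
```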
Numbering of equations should follow the convention (1), (2)… throughout the whole paper, or (2.1), (2.2)… by section. Equations in appendices should be numbered (A1), (A2), (B1), etc.

### 6.3 References and citations

MNRAS, in common with other journals in astronomy, uses the Harvard – i.e. author (year) – referencing style. All papers cited in the text must be included in an alphabetical list of references at the end of the paper, and vice versa. It is the responsibility of the authors to ensure the accuracy of their references. This is particularly important for the online version of the journal, where links are provided to cited references. If the reference details are wrong then the links will fail, and the citations will not be counted in bibliographic databases.

Citations in the text, tables or figure captions, should use the following style:

• For one author, use either the form (Brown 1999) or e.g. the observations of Brown (1999)…, as appropriate for the context.
• For two authors, use an ampersand: Brown & Jones (1991).
• For three authors, give all three names at first mention, e.g. (Brown, Jones & Smith 1994), but use first author et al. (in roman, not italic) thereafter, e.g. (Brown et al. 1994).
• For more than three authors, use the first author et al., e.g. (Brown et al. 1994).
• For several papers by the same author(s), use the style (Brown 1992, 1995) or Smith et al. (2000a,b) show that…
• When several papers are cited in brackets, they should be ordered by date and separated by semi-colons, e.g. (Smith et al. 1990; Brown et al. 1995).

If any catalogues, databases or scientific software are referred to in the paper, authors should ensure that those responsible for compiling them are properly credited. Rather than citing only a URL, if at all possible a reference should also be cited (and included in the reference list), or if a reference is not available then the names of those who compiled the database, or wrote the software, should be given. Note that some catalogues, databases and software do provide guidelines on how they should be cited – if so then these guidelines should be followed.

The reference list should include no bold or italic, no commas after author surnames, and no ampersand between the final two author names. List all of the authors if there are eight or fewer, otherwise give just the first author followed by ‘et al.’. The styles for journal articles, conference proceedings, textbooks and PhD theses are illustrated by the following examples:

• Eke V., Cole S., Frenk C. S., 1996, MNRAS, 282, 263
• Smith A., 2000, in Minh Y. C., van Dishoeck E. F., eds, Proc. IAU Symp. 197, Astrochemistry: from Molecular Clouds to Planetary Systems. Astron. Soc. Pac., San Francisco, p. 210
• Felsteiner J., Opher R., 1991, in Treves A., ed., Iron Line Diagnostics in X-ray Sources. Springer-Verlag, Berlin, p. 209
• Garrido R., 2000, in Breger M., Montgomery M. H., eds, ASP Conf. Ser. Vol. 210, Delta Scuti and Related Stars. Astron. Soc. Pac., San Francisco, p. 67
• Jones P., Taylor N., 2013, MNRAS, in press
• Peebles P. J. E., 1980, The Large-Scale Structure of the Universe. Princeton Univ. Press, Princeton, NJ
• Pounds K. A. et al., 1993, MNRAS, 260, 77
• Smith P. et al., 2013, preprint (arXiv:0123.45678)
• Williams B. G., 1992, PhD thesis, Univ. Edinburgh
Edinburgh
• Brown J., 2015, Astrophysics Source Code Library, record ascl:1234.567
• Barr E., 2014, presentation at 'Extreme-Astrophysics in an Ever-Changing Universe: Time-Domain Astronomy in the 21st Century', Ierapetra, Crete, 16–20 June 2014. http://www3.mpifr-bonn.mpg.de/div/jhs/Program_files/EwanBarrCrete2014.pdf (accessed January 4, 2016)

Private communications or papers in preparation should be listed as such in the text, but omitted from the reference list, e.g. Smith (in preparation) shows that…

The reference list should be in alphabetical order by surname. Spelling of author names and years must be consistent between the text and reference list. Prefixes such as de or van should be considered as part of the family name for alphabetical arrangement, and Mc should be alphabetized as if it were Mac. If there are several references with the same first author, arrange in the following order: firstly single-author papers (by date); then two-author papers (alphabetically by co-author, then by date); then multi-author papers (by date).

Letters are denoted by the prefix L on the page number (e.g. ApJ, 298, L14), or a p (small capitals) for older MNRAS papers (e.g. MNRAS, 251, 23p).

The following simplified abbreviations are used for frequently used journals, as in the examples above. For journals not on this list, use the IAU standard abbreviations published on the IAU website.
• A&A: Astronomy and Astrophysics
• A&ARv: Astronomy and Astrophysics Review (the)
• A&AS: Astronomy and Astrophysics Supplement Series
• Afz: Astrofizika
• AJ: Astronomical Journal (the)
• Ap&SS: Astrophysics and Space Science
• ApJ: Astrophysical Journal (the)
• ApJS: Astrophysical Journal Supplement Series (the)
• ARA&A: Annual Review of Astronomy and Astrophysics
• ASP Conf. Ser.: Astronomical Society of the Pacific Conference Series
• Azh: Astronomicheskij Zhurnal
• BAAS: Bulletin of the American Astronomical Society
• Mem. RAS: Memoirs of the Royal Astronomical Society
• MNASSA: Monthly Notes of the Astronomical Society of Southern Africa
• MNRAS: Monthly Notices of the Royal Astronomical Society
• Nature (do not abbreviate)
• PASJ: Publications of the Astronomical Society of Japan
• PASP: Publications of the Astronomical Society of the Pacific
• QJRAS: Quarterly Journal of the Royal Astronomical Society
• Rev. Mex. Astron. Astrofis.: Revista Mexicana de Astronomia y Astrofisica
• Science (do not abbreviate)
• SvA: Soviet Astronomy

### 6.4 Miscellaneous journal style

Non-Roman alphabets

Papers must be written in the Roman alphabet used in English. As an exception to this rule, personal names may be given in their native alphabet (e.g. Cyrillic, Chinese, Greek, Arabic etc.) in the list of authors (only), provided the Roman equivalent is given first with the native name in brackets, e.g. Ivan Petrovich Sidorov (Иван Петрович Сидоров), Zhang San (張三).

Units
• Units should be in roman and separated from the number by a non-breaking space: e.g. 200 keV.
• The units of time are ms, s, min, h, d, yr.
• The units of length/distance are Å, nm, µm, mm, cm, m, km, au, light-year, pc.
• Use superscript –1, not solidus /, for units: e.g. km s⁻¹ (not km/s).
• The unit of arcseconds is arcsec when used to denote angular size or separation (e.g. beamsize 12 arcsec, 30 arcsec west of the star), similarly for arcmin. Use the prime and double prime symbols (not apostrophes) for coordinates (e.g. dec. –30° 29ʹ 23ʺ). If decimal points are used, these symbols should appear directly above them.
• Use the degree symbol ° except to denote e.g. areas, where deg² may be more appropriate (e.g. a survey area of 3 deg²).
• Avoid repeating units unnecessarily (e.g. '1.3 and 2.6 mm' rather than '1.3 mm and 2.6 mm').
• The unit of magnitudes is mag, not superscript m.
• Percentages should be written per cent, not %, except in tables.
• Solar masses and solar luminosities should use the subscript solar symbol and be set roman, e.g. M⊙, L⊙.

Best practices

Authors are encouraged to follow the publication best-practice guidelines relevant to their field.

Other journal style
• Use a single (not double) space after a full stop.
• The abbreviations e.g. i.e. cf. etc. NB et al. are all roman (not italic). Note the punctuation.
• Use single quotes '. . .' not double quotes ". . .", except where this would cause ambiguity.
• Letters denoting wavebands (e.g. UBV, K-band) are set italic. Colour excess is set as E(B – V), i.e. with no subscript and using a minus sign. Extinction is set as A_V, i.e. with a subscript.
• Letters denoting orbital states (1s², 2p² etc.) are set in roman.
• Ionized species should be denoted by small capitals, preceded by a thin space – e.g. He i, Ca i.
• Balmer, Lyman etc. lines are set as e.g. H β, Ly α (no subscript, non-breaking space).
• Computer software should be in small capitals, e.g. iraf, cloudy.
• Satellite names should be in italic, e.g. Herschel, XMM-Newton, JWST.
• The correct order of brackets is {[( . . . )]}.
• Acronyms and abbreviations should be spelt out at the first occurrence, unless they are very well known throughout astronomy, e.g. CCD.
• Dates should be written as e.g. 1998 April 14, except in tables, where months may be abbreviated as Jan, Feb, Mar, Apr, May, June, July, Aug, Sept, Oct, Nov, Dec.
• Stellar names derived from constellations should be written in the genitive form: e.g. V386 Sagittarii or V386 Sgr (not V386 Sagittarius).
• Facility acknowledgements should be placed in the Acknowledgements section, and not as footnotes.

### 7 Contacts

There are separate points of contact for enquiries relating to papers which are undergoing editorial review by the RAS and those in production by Oxford University Press. Please do not contact the publishers with queries about papers that are still under editorial review, or the editorial office about papers which are in production – the two stages are almost entirely separate and each office will be unable to assist with the other.

Submitted papers

For papers which have been submitted but have not yet been accepted, please contact the assigned Assistant Editor by clicking on their name in the ScholarOne Author Centre. If this is impossible, contact the RAS Editorial Office:
Royal Astronomical Society
Burlington House
London W1J 0BQ
UK
Tel: +44 (0)20 7734 3307/4582
Fax: +44 (0)20 7494 0166
E-mail: kc@ras.org.uk

Accepted papers

For papers which have been accepted and are in production, contact the publishers:
RAS Journal Production
Oxford Journals
Oxford University Press
Great Clarendon Street
Oxford OX2 6DP
UK
Tel: +44 (0)1865 353116
E-mail: mnrasj@oup.com
2016-12-05 15:27:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4264340102672577, "perplexity": 2309.22106781712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541697.15/warc/CC-MAIN-20161202170901-00440-ip-10-31-129-80.ec2.internal.warc.gz"}
https://gstguntur.com/tag/cs-foundation-fundamentals-of-accounting-notes/
CS Foundation Fundamentals of Accounting Notes

Depreciation Accounting – CS Foundation Fundamentals of Accounting Notes

Going through these Depreciation Accounting – CS Foundation Fundamentals of Accounting and Auditing Notes will help students in revising the entire subject quickly.

Depreciation Accounting – CS Foundation Fundamentals of Accounting Notes

Depreciation:
• Depreciation means a fall in the value of an asset due to usage, efflux of time or obsolescence.
• It is a permanent, continuous and gradual shrinkage in the book value of a fixed asset.
• The annual loss in the value of the asset is treated as an expenditure of the business.
• Depreciation is a process of allocating the cost of a fixed asset over its estimated useful life in a rational and systematic manner.
• 'Depreciation is a process of allocation of expired cost and not of valuation of fixed assets.'

Depreciation Accounting:
Definition: 'A measure of the wearing out, consumption or other loss of value of a depreciable asset arising from use, effluxion of time or obsolescence through technology and market changes. Depreciation is allocated so as to charge a fair proportion of depreciable amount in each accounting period during the expected useful life of the asset. Depreciation includes amortization of assets whose useful life is predetermined.' – Institute of Chartered Accountants of India (ICAI)

'A system of accounting which aims to distribute the cost or other basic value of tangible capital assets less salvage (if any) over the estimated useful life of the unit (which may be a group of assets) in a systematic and rational manner. It is a process of allocation and not of valuation.' – American Institute of Certified Public Accountants (AICPA)

Characteristics of Depreciation:
• Depreciation is the reduction in the book value of an asset.
• Depreciation reduces the book value of an asset, not its market value.
• Depreciation is a charge against profit.
• Depreciation is a process of allocation of the cost of an asset over the period of its life.
• The term depreciation is used for tangible fixed assets. For wasting assets (like mines) the term used is depletion, and for intangible assets such as goodwill it is amortization.
• The amount of depreciation can never be calculated exactly; it can only be estimated.
• Depreciation is a must, i.e. it always takes place, whether the asset is carefully handled or neglected.
• Fluctuations in the market value of a fixed asset do not affect the amount of depreciation charged on it.
• Total depreciation cannot exceed the depreciable value of the asset (the original cost, where the scrap value is nil).

The fundamental objectives of depreciation are:
• To maintain the nominal capital invested in fixed assets.
• To allocate the expired cost of fixed assets over a number of accounting years.

Causes of Depreciation:
• Physical wear and tear due to continuous use
• Efflux (passage) of time
• Physical deterioration
• Obsolescence (the asset becoming redundant due to technological changes)
• Accidents (fire etc.)
• Depletion

Objectives of providing Depreciation:
• To ascertain correct profit/loss
• To show a true and fair view in the financial statements
• To show assets at their proper values
• To make provision for replacement of assets
• Compliance with legal provisions
• To get tax benefits

Note – Replacement of asset: depreciation is a non-cash expenditure, hence the amounts debited to the profit and loss account are retained in the business.
These are available for the replacement of the asset (buying a new asset) when replacement is required.

Factors in the Measurement of Depreciation –

Cost of Asset: the cost at which the asset is purchased. It includes all expenses incurred up to the point the asset is ready for use.
Original cost = Purchase price + freight + installation cost

Useful Life of the Asset: the period for which an asset can be used productively without incurring extraordinary repair and maintenance expenses. Determination of useful life is a matter of estimation.

Scrap (Residual) Value: the estimated sale value of the asset at the end of its economic life. The difference between the cost and the residual value is the depreciable amount, which is to be written off over the useful life.

Other factors affecting the measurement of depreciation:
• Obsolescence, i.e. the chance of the asset going out of fashion.
• Working hours of the asset.
• Repairs and renewals.
• Skill of the operator handling the asset.
• Legal provisions relating to the asset.

Depreciation Accounting is the process of allocating the cost of a depreciable asset, less its salvage value, over its serviceable useful life.

Note: Depreciable assets are assets which:
• are expected to be used for more than one accounting period;
• have a limited useful life;
• are held by the organisation for use in the production or supply of goods and services.

Depreciation is not a process of valuation; it is an allocation. There are two methods of recording depreciation:
(i) when depreciation is charged to the asset account, and
(ii) when a provision for depreciation/accumulated depreciation account is created.

(i) When Depreciation is Charged to the Asset A/c:
• Depreciation is deducted from the asset account directly.
• At the year end, the Depreciation A/c is closed by transferring it to the Profit & Loss A/c.
• In the Balance Sheet, the asset is shown at its written down value (cost less depreciation).
• Depreciation is to be charged whether the business earns a profit or incurs a loss.
• Depreciation provides funds for replacing the asset when its useful life ends.

Journal Entries:
1. Charging depreciation on the asset:
Depreciation A/c Dr. (with the amount of depreciation)
To Asset A/c
(Being depreciation on asset charged)
2. Transferring the Depreciation A/c to the P/L A/c:
Profit & Loss A/c Dr. (with the amount of depreciation)
To Depreciation A/c
(Being depreciation transferred to P/L)

(ii) When a Provision for Depreciation A/c is Maintained:
• The current year's depreciation is transferred to the Profit/Loss A/c each year.
• In the Balance Sheet, the asset continues to appear at its original cost, and the total depreciation charged to date is shown in the Provision for Depreciation (Accumulated Depreciation) A/c.
• The net balance is calculated by deducting the provision for depreciation from the original cost of the asset.

Journal Entries:
1. Charging depreciation:
Depreciation A/c Dr. (with the amount of depreciation)
To Provision for Depreciation A/c
(Being the depreciation on asset charged)
2. Transfer of depreciation to the P/L A/c at the year end:
Profit/Loss A/c Dr. (with the amount of the current year's depreciation)
To Depreciation A/c
(Being depreciation transferred to P/L Account)
3. When the asset is sold, discarded or exchanged, the accumulated depreciation on that asset is transferred from the Provision for Depreciation A/c to the Asset A/c:
Provision for Depreciation A/c Dr.
To Relevant Asset A/c
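The two recording methods differ only in presentation; the net book value is the same either way. The Python sketch below is our own illustration with assumed figures (an asset of ₹ 50,000 at 10% of cost), not part of the source notes:

```python
# Our own illustration (assumed figures): 50,000 asset, 10% of cost p.a.
cost = 50_000
annual_dep = cost * 10 // 100   # integer arithmetic keeps figures exact

# Method (i): depreciation credited directly to the Asset A/c, so the
# asset appears in the Balance Sheet at its written down value.
asset_balance = cost
for _ in range(3):
    asset_balance -= annual_dep
print(asset_balance)                      # 35000

# Method (ii): the asset stays at original cost while a Provision for
# Depreciation A/c accumulates the charge; the net book value matches.
provision = 0
for _ in range(3):
    provision += annual_dep
print(cost, provision, cost - provision)  # 50000 15000 35000
```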
Notes:
1. Accumulated depreciation means the total depreciation provided on an asset till date.
2. If the words 'p.a.' (per annum) are attached to the rate of depreciation, then depreciation must be calculated only for the period for which the asset was held. If 'p.a.' is not attached, depreciation is to be charged for the full year.

Component Method of Depreciation: it may be noted that Accounting Standards as well as the Companies Act, 2013 allow depreciation to be charged on a component basis. Each part of an item of Property, Plant and Equipment with a cost that is significant in relation to the total cost of the item should be depreciated separately; e.g. it may be appropriate to depreciate separately the airframe and the engines of an aircraft.

Methods of Providing Depreciation:

Uniform Charge Method:
(i) Straight Line Method (Fixed Installment Method):
1. Under this method, a fixed percentage of the original cost of the asset is written off every year, so that the asset account may be reduced to its residual value at the end of its estimated economic life.
2. If the percentage of depreciation is not given, the amount of depreciation to be charged every year is calculated as:
Amount of Depreciation = $$\frac{\text{Cost} - \text{Estimated scrap value}}{\text{Expected life}}$$
Percentage (rate) of Depreciation = $$\frac{\text{Annual depreciation} \times 100}{\text{Original cost of asset}}$$

Example: A firm bought an asset for ₹ 2,00,000 on 1st January, 2012. ₹ 10,000 was spent on installation. The life of the asset is estimated to be 5 years, and its scrap value at the end of that period is ₹ 10,000. Find the amount of annual depreciation.
Solution:
Cost of asset = original cost + expenses up to installation = 2,00,000 + 10,000 = ₹ 2,10,000
Scrap value = ₹ 10,000 (given); estimated life = 5 years (given)
Amount of depreciation = $$\frac{\text{Cost} - \text{Scrap value}}{\text{Estimated life}} = \frac{2,10,000 - 10,000}{5}$$ = ₹ 40,000
Therefore, ₹ 40,000 will be charged annually as depreciation.

Note – calculating the rate of depreciation (taking the figures of the above example):
Rate of depreciation = $$\frac{\text{Amount of annual depreciation}}{\text{Cost of asset}} \times 100 = \frac{40,000}{2,10,000} \times 100$$ = 19% (approx.)

• The amount of depreciation under this method remains the same every year.
• If the asset is purchased during the year, depreciation is charged only for the part of the year for which the asset was held.
• If the date of purchase is not given, depreciation is charged for half the year (assuming the asset was purchased in the middle of the year).
• The value of the asset shown in the Balance Sheet each year is reasonably fair.

Merits of this method:
• Simple to calculate.
• The asset can be completely written off.
• The amount and rate of depreciation remain the same throughout the useful life.
• Best suited when the asset depreciates mainly through efflux (passage) of time.

Demerits of this method:
• It assumes that the asset is used equally throughout its life, which is not realistic.
• The total charge against profits is not uniform: in later years, repair expenses rise while the depreciation charge stays constant.
• It does not take into account the effective utilization of the asset.
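The straight-line computation is easy to check in a few lines of code. The sketch below (our own, not from the notes) reproduces the worked example above:

```python
def slm_depreciation(cost, scrap_value, life_years):
    """Annual straight-line depreciation: (cost - scrap value) / life."""
    return (cost - scrap_value) / life_years

# Worked example: 2,00,000 purchase + 10,000 installation,
# scrap value 10,000, estimated life 5 years.
annual = slm_depreciation(2_00_000 + 10_000, 10_000, 5)
print(annual)                    # 40000.0 charged every year
print(annual / 2_10_000 * 100)   # ~19.05, the "19% (approx.)" rate
```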
Declining Charge Method – Diminishing Balance Method:
• This method is also known as the written down value (WDV) method.
• Under this method, depreciation is charged at a fixed rate on the reducing balance.
• Reducing balance (written down value) = cost of asset – depreciation to date.
• The depreciation charge under this method goes on decreasing gradually. Hence, in the earlier years, when repairs are negligible, the depreciation charge is high, and in the later years, when repairs are heavy, the depreciation charge is low.

Example: The cost of an asset is ₹ 1,00,000, and depreciation is to be written off each year at 10% on the reducing balance. Calculate the depreciation charge for the first 3 years.
Solution:
Year 1: 10% of 1,00,000 = ₹ 10,000 (written down value ₹ 90,000)
Year 2: 10% of 90,000 = ₹ 9,000 (written down value ₹ 81,000)
Year 3: 10% of 81,000 = ₹ 8,100 (written down value ₹ 72,900)

Depreciation under this method can also be determined using the following formula:
Rate of Depreciation = $$1 - \sqrt[n]{\frac{\text{Net Residual Value (Salvage Value)}}{\text{Cost of Acquisition}}}$$
where n = life of the asset.

Merits of this method:
• A uniform weight of charge to the P/L A/c for depreciation and repairs taken together.
• This method is recognized by the Income Tax Act.
• Easier to compute.
• Any additions to the asset are depreciated at the same rate.

Demerits of this method:
• The value of the asset can never be reduced to zero.
• Computation of the rate of depreciation is a bit complex.
• Depreciation is neither based on the use of the asset, nor is a uniform charge made.
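A short sketch (again our own illustration, not from the notes) generates the same reducing-balance schedule:

```python
def wdv_schedule(cost, rate, years):
    """Year-by-year depreciation on the reducing balance (WDV method)."""
    schedule, balance = [], cost
    for _ in range(years):
        dep = balance * rate
        schedule.append(dep)
        balance -= dep
    return schedule, balance

deps, wdv = wdv_schedule(1_00_000, 0.10, 3)   # example from the notes
print([round(d) for d in deps])   # [10000, 9000, 8100]
print(round(wdv))                 # 72900 -- the balance never hits zero
```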
Annuity Method:
• Under this method, depreciation takes into account the element of interest on capital outlay.
• Here, along with the value of the asset, the interest lost over the life of the asset is also written off.
• The amount of interest is calculated on the book value of the asset at the beginning of each year.
• Since the amount of interest lost cannot be computed easily, annuity tables are used for calculating the amount of depreciation.
• The amount written off annually is constant.
• This method is best suited for writing off the amount paid for long leases, which involve a heavy capital outlay.

Note – what is the element of interest on capital outlay? When an amount is invested to purchase a capital asset, it is assumed that, had that amount been invested elsewhere, it would have earned interest. This notional income is considered a cost of the asset; it is a type of opportunity cost.

Journal Entries:
(i) On purchase of the asset: Asset A/c Dr. To Bank A/c
(ii) For charging interest on the asset: Asset A/c Dr. To Interest A/c
(iii) For charging depreciation: Depreciation A/c Dr. To Asset A/c
(iv) For transfer of the Interest A/c to the P/L A/c: Interest A/c Dr. To P/L A/c
(v) For transfer of the Depreciation A/c to the P/L A/c: Profit & Loss A/c Dr. To Depreciation A/c

Depreciation Fund Method (Sinking Fund Method):
• Under this method, depreciation is not deducted from the asset account, which remains at the same figure year after year.
• The amount annually provided for depreciation is placed to the credit of a special account named the 'Sinking Fund A/c' (Depreciation Fund A/c).
• The amount accumulated in the sinking fund account is invested in government securities bearing interest at a specified rate.
• When the asset is due for replacement, the securities are sold and the new asset is purchased with the proceeds of the sale.
• The book value of the old asset is transferred to the Sinking Fund A/c.
• Any amount realised from the sale of the old asset, as well as the profit/loss on the sale of the securities, is transferred to the Sinking Fund A/c.
• The Sinking Fund A/c is closed by transferring its balance to the Asset A/c.

Journal Entries:
(a) At the end of the first year:
(i) For setting aside the amount of depreciation: Depreciation A/c Dr. To Depreciation Fund A/c
(ii) For investing the amount of depreciation: Depreciation Fund Investment A/c Dr. (with the amount in the depreciation fund) To Bank
(b) In the second and subsequent years:
(i) For interest received on the investments: Bank A/c Dr. To Interest on Depreciation Fund Investment A/c
(ii) For transferring the interest to the Depreciation Fund A/c: Interest on Depreciation Fund Investment A/c Dr. To Depreciation Fund A/c
(iii) For the annual installment of depreciation: Depreciation A/c Dr. To Depreciation Fund A/c
(iv) For investing the amount of depreciation and the interest received: Depreciation Fund Investment A/c Dr. To Bank
(c) At the end of the last year: the first three entries are the same as in the second year. In the last year the amount is not invested, because the old asset is to be replaced by a new one, for which the investments will need to be sold.
(i) For sale of the investments: Bank A/c Dr. To Depreciation Fund Investment A/c
(ii) For transfer of the profit or loss on sale of the investments:
In case of profit – Depreciation Fund Investment A/c Dr. To Depreciation Fund A/c (with the net profit on sale of the investments)
In case of loss – Depreciation Fund A/c Dr. To Depreciation Fund Investment A/c (with the net loss on sale of the investments)
(iii) For sale of the old asset: Bank A/c Dr. (with the net amount realised) To Old Asset A/c
(iv) For transferring the Depreciation Fund A/c to the Old Asset A/c: Depreciation Fund A/c Dr. (with the balance of the Depreciation Fund A/c) To Old Asset A/c
The balance in the Old Asset A/c represents a profit or loss; it is transferred to the P/L A/c.
(v) For purchase of the new asset: New Asset A/c Dr. (with the proceeds realised) To Bank

Insurance Policy Method:
• Under this method, the company takes out an insurance policy for the replacement of the asset.
• A fixed premium is paid at the beginning of every year.
• At the end of the term, the agreed sum is received from the insurance company and is used for the replacement of the asset.

Journal Entries:
(a) In the first and subsequent years:
(i) Insurance premium paid at the beginning of the year: Depreciation Insurance Policy A/c Dr. To Bank A/c
(ii) At the year end: P/L A/c Dr. To Depreciation Reserve A/c
(b) At the end of the last year:
(i) Amount realised from the insurance company: Bank A/c Dr. To Depreciation Insurance Policy A/c
(ii) For transfer of the profit on the insurance policy: Depreciation Insurance Policy A/c Dr. To Depreciation Reserve A/c
(iii) For transfer of the accumulated depreciation to the Asset A/c: Depreciation Reserve A/c Dr. To Asset A/c
(iv) On purchase of the new asset: New Asset A/c Dr. To Bank A/c

Note – Sinking Fund Method vs Insurance Policy Method: under the sinking fund method, the amount in the reserve is used for buying government securities, whereas under the insurance policy method an insurance policy is taken out for this purpose.
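For the sinking fund method, the fixed annual installment is in practice read from annuity tables; equivalently it can be computed with the standard sinking-fund factor, a formula the notes do not state themselves. The sketch below is therefore our own illustration with assumed figures:

```python
# Our own illustration (assumed figures): a fixed annual amount, invested
# at a given interest rate, grows to the replacement cost by the end of
# the asset's life.
def sinking_fund_installment(target, rate, years):
    """Standard sinking-fund factor: target * i / ((1 + i)^n - 1)."""
    return target * rate / ((1 + rate) ** years - 1)

target, rate, years = 1_00_000, 0.05, 4
inst = sinking_fund_installment(target, rate, years)

fund = 0.0
for _ in range(years):
    fund = fund * (1 + rate) + inst   # interest on balance + installment
print(round(inst, 2), round(fund, 2))  # 23201.18 100000.0
```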
Difference between Straight Line Method and Written Down Value Method:
1. Depreciation charge – SLM: depreciation is calculated on the original cost of the fixed asset. WDV: depreciation is calculated on the diminishing balance (written down value) of the fixed asset.
2. Amount of depreciation – SLM: the amount remains the same for all years. WDV: the amount reduces year after year.
3. Zero balance – SLM: at the expiry of the working life of the asset, the balance in the asset account reduces to zero. WDV: the balance in the asset account never reduces to zero.
4. Cost of depreciation and repairs – SLM: the combined charge on account of depreciation and repairs is lower in the initial years and higher in the later years. WDV: the combined charge is more or less equal throughout the period.
5. Suitability – SLM: more suitable for assets which depreciate mainly on account of the expiry of their working life. WDV: suitable for assets which require more and more repairs in the later years of their working life.
6. Ease of calculation – SLM: it is easy to calculate the rate of depreciation. WDV: it is difficult to calculate the rate of depreciation.

Sum of Years' Digits Method:
• This method is a slight variation of the reducing balance method.
• Under this method, the charge for depreciation for an accounting period is calculated in proportion to the remaining life of the asset at the beginning of that accounting year:
Depreciation = $$\frac{\text{Remaining life of the asset (including the current year)}}{\text{Sum of the digits of the life of the asset}} \times \text{Cost of asset}$$
In the above formula, the remaining life is obtained by taking the individual digits of the asset's life in reverse order, and the sum of the digits representing the life of the asset = $$\frac{n(n+1)}{2}$$.

Example: Suppose the estimated life of an asset is 10 years and the cost of the asset is ₹ 1,00,000.
Sum of digits = $$\frac{n(n+1)}{2} = \frac{10 \times 11}{2}$$ = 55
Depreciation for the first year = $$\frac{10}{55}$$ × 1,00,000 = ₹ 18,181
Depreciation for the second year = $$\frac{9}{55}$$ × 1,00,000 = ₹ 16,363
Note: the depreciation reduces year by year.
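The sum of years' digits schedule is equally easy to generate. The sketch below (our own, not from the notes) reproduces the example and confirms that the whole cost is written off over the life:

```python
def sum_of_years_digits(cost, life):
    """Depreciation schedule under the sum of years' digits method."""
    total = life * (life + 1) // 2            # n(n+1)/2 = 55 for life 10
    return [cost * (life - y) / total for y in range(life)]

schedule = sum_of_years_digits(1_00_000, 10)  # example from the notes
print(int(schedule[0]), int(schedule[1]))     # 18181 16363, as above
print(round(sum(schedule)))                   # 100000 -- fully written off
```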
Depreciation Accounting MCQ Questions

1. Depreciation is:
(a) a fall in the original cost of an asset
(b) a fall in the book value of an asset
(c) a fall in the market value of an asset
(d) a fall in the real value of an asset
(b) a fall in the book value of an asset

2. Depreciation is:
(a) a process of valuation of a fixed asset
(b) a process of allocation of the cost of a fixed asset
(c) a method of providing funds for replacement
(d) a process of writing off losses
(b) a process of allocation of the cost of a fixed asset

3. The amount of depreciation remains constant year after year under:
(a) Written Down Value Method
(b) Straight Line Method
(c) Sinking Fund Method
(d) Annuity Method
(b) Straight Line Method

4. Any profit or loss on the sale of sinking (depreciation) fund investments is transferred to:
(a) Profit and Loss account
(b) Asset account
(c) Sinking Fund account (Depreciation Fund account)
(d) Depreciation A/c
(c) Sinking Fund account (Depreciation Fund account)

5. Under the annuity method, the amount of depreciation is:
(a) Increasing every year
(b) Decreasing every year
(c) Fixed for all the years
(d) Revalued every year
(c) Fixed for all the years

6. The number of production units expected to be obtained from the use of an asset by an enterprise is called its:
(a) Unit life
(b) Useful life
(c) Production life
(d) Expected life
(b) Useful life

7. In which of the following methods is the cost of the asset not spread in equal proportion over its useful economic life?
(1) Straight Line Method (2) Written Down Value Method (3) Units of Production Method (4) All of the above
(a) 2 and 3
(b) 1 and 2
(c) 3 and 4
(d) 1 and 4
(a) 2 and 3

8. For charging depreciation on which of the following assets is the depletion method adopted?
(a) Plant & machinery
(b) Land & building
(c) Goodwill
(d) Wasting assets like mines and quarries
(d) Wasting assets like mines and quarries

9. If a concern proposes to discontinue its business from March 31, 2006 and has decided to dispose of all its assets within a period of 4 months, the Balance Sheet as on March 31, 2006 should show the assets at their:
(a) Historical cost
(b) Net realizable value
(c) Cost less depreciation
(d) Cost price or market value, whichever is lower
(b) Net realizable value

10. Obsolescence of a depreciable asset may be caused by:
I. Technological changes
II. Improvement in production methods
III. Change in the market demand for the product or service output
IV. Legal or other restrictions
(a) Only (I) above
(b) Both (I) and (II) above
(c) All (I), (II), (III) and (IV) above
(d) Only (IV) above
(c) All (I), (II), (III) and (IV) above

11. Using the equal instalment method of depreciation, the relevant formula is:
(a) Annual charge against profits = $$\frac{\text{Original cost} - \text{Residual value}}{\text{Number of years of active life}}$$
(b) Annual charge against profits = $$\frac{\text{Number of years of active life}}{\text{Original cost} - \text{Residual value}}$$
(c) Annual charge against profits = $$\frac{\text{Original cost} - \text{Residual value}}{\text{Estimated number of years remaining}}$$
(d) Annual charge against profits = $$\frac{\text{Estimated number of years remaining}}{\text{Original cost} - \text{Residual value}}$$
(a) Annual charge against profits = $$\frac{\text{Original cost} - \text{Residual value}}{\text{Number of years of active life}}$$

12. A second-hand machine was purchased for ₹ 1,00,000 five years ago and was overhauled by carrying out some repairs at a cost of ₹ 10,000. It also has accumulated depreciation of ₹ 50,000. It has been disposed of at the beginning of the sixth year for ₹ 60,000. The profit/loss on such disposal shall be:
(a) Profit of ₹ 10,000
(b) Loss of ₹ 50,000
(c) Loss of ₹ 40,000
(d) No profit, no loss
(d) No profit, no loss

13. An asset was purchased for ₹ 12,500 and, under the reducing balance method, 20 per cent of the reducing value of the asset is written off each year. What is the value of the asset at the end of three years?
(a) ₹ 8,000
(b) ₹ 7,500
(c) ₹ 6,400
(d) ₹ 5,000
(c) ₹ 6,400

14. A machine is purchased for ₹ 200. To achieve a residual value of ₹ 128 at the end of the second year (assuming that depreciation is calculated at the end of each year), the percentage depreciation using the reducing balance method must be:
(a) 72%
(b) 36%
(c) 20%
(d) 12%
(c) 20%

15. The main objective of providing depreciation is to:
(a) Create secret reserves
(b) Reduce the book value of assets
(c) Value the assets properly
(d) Allocate the cost of the assets
(d) Allocate the cost of the assets

16. Charging a period with the proportionate cost of an intangible asset is termed:
(a) depreciation
(b) diminution
(c) amortisation
(d) expiration
(c) amortisation

17. In the books of D Ltd. the machinery account shows a debit balance of ₹ 60,000 as on April 1, 2003. The machinery was sold on September 30, 2004 for ₹ 20,000. The company charges depreciation @ 20% p.a. on the diminishing balance method. The profit/loss on sale will be:
(a) ₹ 23,200 profit
(b) ₹ 23,200 loss
(c) ₹ 7,800 profit
(d) ₹ 7,800 loss
(b) ₹ 23,200 loss

18. A new machine costing ₹ 1,10,000 was purchased by a company to manufacture a special product. Its useful life is estimated to be 5 years and its scrap value ₹ 20,000.
The production plan for the next 5 years using the above machine is as follows: Year 1 – 10,000 units; Year 2 – 20,000 units; Year 3 – 24,000 units; Year 4 – 40,000 units; Year 5 – 50,000 units. The depreciation for the 1st year under the units-of-production method will be:
(a) ₹ 6,250
(b) ₹ 12,500
(c) ₹ 15,000
(d) ₹ 25,000
(a) ₹ 6,250

19. A Co. purchased a machine on Jan 1, 2003 for ₹ 2,20,000. Installation expenses were ₹ 40,000. The residual value after 5 years is ₹ 5,000. On 01.07.2003, expenses for repairs were incurred to the extent of ₹ 2,000. Depreciation is provided @ 10% p.a. under the written down value method. Depreciation for the 4th year will be:
(a) ₹ 52,000
(b) ₹ 26,000
(c) ₹ 21,060
(d) ₹ 18,954
(d) ₹ 18,954

20. Original cost = ₹ 1,30,000; salvage value = ₹ 4,000; useful life = 6 years. Depreciation for the first year under the sum of years' digits method will be:
(a) ₹ 6,000
(b) ₹ 12,000
(c) ₹ 18,000
(d) ₹ 36,000
(d) ₹ 36,000

21. A Co. purchased a machine on Jan 1, 2003 for ₹ 1,20,000. Installation expenses were ₹ 10,000. The residual value after 5 years is ₹ 5,000. On July 1, 2003 expenses for repairs were incurred to the extent of ₹ 2,000. Depreciation is provided under the straight line method. Annual depreciation will be:
(a) ₹ 13,000
(b) ₹ 24,000
(c) ₹ 21,000
(d) ₹ 25,000
(d) ₹ 25,000

22. The original cost of a machine was ₹ 1,26,000; salvage value was nil; useful life was 6 years. Depreciation for the fourth year under the sum of years' digits method will be:
(a) ₹ 6,000
(b) ₹ 12,000
(c) ₹ 18,000
(d) ₹ 24,000
(c) ₹ 18,000

23. Which of the following statements is/are true?
I. The terms 'depreciation', 'depletion' and 'amortization' convey the same meaning.
II. A Provision for Depreciation A/c is created.
III. The main purpose of charging the Profit and Loss A/c with the amount of depreciation is to spread the cost of an asset over its useful life for the purpose of income determination.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) All (I), (II) and (III) above
(d) All (I), (II) and (III) above

24. Which of the following expenses is not included in the acquisition cost of plant and equipment?
(a) Cost of site preparation
(b) Delivery and handling charges
(c) Installation costs
(d) Financing costs incurred subsequent to the period after the plant and equipment is put to use
(d) Financing costs incurred subsequent to the period after the plant and equipment is put to use

25. The portion of the acquisition cost of the asset yet to be allocated is known as:
(a) Written down value
(b) Accumulated value
(c) Realizable value
(d) Salvage value
(a) Written down value

26. Depreciation is charged because of:
(i) Wear & tear (ii) Deterioration (iii) Depletion (iv) Passage of time
(a) Only (i)
(b) Both (i) & (iii)
(c) Only (ii)
(d) All (i), (ii), (iii) & (iv)
(d) All (i), (ii), (iii) & (iv)

27. Objectives of charging depreciation are:
(i) Ascertaining correct profits (ii) Ascertaining the cost of the product (iii) To gain tax benefits (iv) To meet the legal requirements
(a) Both (i) & (ii)
(b) (iii) only
(c) Both (ii) & (iv)
(d) All (i), (ii), (iii) & (iv)
(d) All (i), (ii), (iii) & (iv)

28. If an asset is purchased for ₹ 5,00,000 and installation charges are ₹ 50,000, the estimated scrap value is ₹ 1,00,000 and the useful life of the asset is 5 years, then the amount of depreciation to be charged as per the SLM method is:
(a) ₹ 90,000
(b) ₹ 80,000
(c) ₹ 1,11,111
(d) ₹ 1,20,000
(a) ₹ 90,000

29.
Scrap value of an asset refers to the amount that it can fetch at the:
(a) Beginning of its life
(b) End of its life
(c) Middle of its life
(d) None of the above
(b) End of its life

30. Obsolescence of a depreciable asset is caused by:
(a) Change in technology
(b) Innovation
(c) Improvement in the method of production
(d) All of the above
(d) All of the above

31. Depreciation is a process of:
(a) Verification of asset
(c) Allocation of the cost of the asset over the period of its life
(d) All of the above
(c) Allocation of the cost of the asset over the period of its life

32. The annuity method of depreciation is suitable for:
(a) Tangible assets
(b) Intangible assets
(c) Leasehold assets
(d) None of the above
(c) Leasehold assets

33. A gold mine was taken on lease for ₹ 50,00,00,000. The total production capacity of the mine is 10,000 tonnes. The total production in the year 2010 was 2,000 tonnes. The depreciation for the year 2010 is:
(a) ₹ 10,00,00,000
(b) ₹ 5,00,00,000
(c) ₹ 20,00,00,000
(d) None of the above
(a) ₹ 10,00,00,000

34. The depletion method is normally applied in the case of:
(a) Wasting assets
(b) Intangible assets
(c) Tangible assets
(d) None of these
(a) Wasting assets

35. In the sinking fund method of charging depreciation, the amount debited to the P&L A/c:
(a) Is more in the initial years
(b) Is more in the ending years
(c) Remains the same every year
(d) None of the above
(c) Remains the same every year

36. If the diminishing value method is used, then the amount of depreciation charged to the P&L A/c:
(a) Is equal in all years
(b) Decreases year after year
(c) Increases year after year
(d) None of these
(b) Decreases year after year

37. In the sinking fund investments method, the profit on sale of investments is transferred to:
(a) Asset A/c
(b) Bank A/c
(c) Depreciation Fund A/c
(d) Depreciation Fund Investment A/c
(c) Depreciation Fund A/c

38. Under the insurance policy method, the fixed premium is paid:
(a) At the beginning of the year
(b) In the middle of the year
(c) At the end of the year
(d) None of the above
(a) At the beginning of the year

39. In the group depreciation method:
(a) Assets having a similar average life are grouped together
(b) Depreciation is charged on the entire group and not on individual assets
(c) Both (a) and (b)
(d) Neither (a) nor (b)
(c) Both (a) and (b)

40. The effect of a change in the method of depreciation is to be taken:
(a) Retrospectively
(b) Prospectively
(c) Both (a) and (b)
(d) Neither (a) nor (b)
(a) Retrospectively

41. Depreciation is charged on the:
(a) Historical cost
(b) Replacement cost
(c) Realisable cost
(d) None of these
(a) Historical cost

42. During an inflationary period, which method of depreciation is the most suitable?
(a) Charging depreciation on historical cost
(b) Charging depreciation on realisable value
(c) Charging depreciation on replacement cost
(d) None of the above
(c) Charging depreciation on replacement cost

43. A machine was purchased for ₹ 10,000 on 1st January, 2008. Depreciation is to be charged @ 25% on the WDV method. It was sold for ₹ 6,000 at the end of the third year. Calculate the profit/loss:
(a) Profit ₹ 1,781
(b) Profit ₹ 2,300
(c) Loss ₹ 3,219
(d) Loss ₹ 3,299
(a) Profit ₹ 1,781

44. Which of the following is depleted?
(a) Land
(b) Goodwill
(c) Machinery
(d) Quarries
(d) Quarries

45. Which method is allowed as per the Income Tax Act?
(a) Reducing balance method
(b) Sinking fund method
(c) Annuity method
(d) Straight line method
(a) Reducing balance method

46.
Under the annuity method, the asset account is debited with:
(a) Depreciation Fund A/c
(b) Interest A/c
(c) Sinking Fund A/c
(d) None of these
(b) Interest A/c

47. Which method of depreciation considers the element of interest on capital outlay?
(a) WDV method
(b) Sinking fund method
(c) Annuity method
(d) SLM method
(c) Annuity method

48. The value of an asset is ₹ 50,000. Its working life is 10 years. The firm uses the sum of years' digits method for providing depreciation. What will be the amount of depreciation for the second year?
(a) ₹ 5,000
(b) ₹ 9,091
(c) ₹ 4,500
(d) ₹ 8,181
(d) ₹ 8,181
Sum of the years' digits = 10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 55
Depreciation for the second year = 50,000 × $$\frac{9}{55}$$ = ₹ 8,181

49. Decrease in the value of a fixed asset due to normal wear and tear is known as:
(a) Depreciation
(b) Obsolescence
(c) Appropriation
(d) Spoilage
(a) Depreciation
Depreciation means a fall in the value of an asset due to usage, efflux of time or obsolescence. In other words, the decrease in the value of a fixed asset due to normal wear and tear is known as depreciation.

50. Dinesh Garments purchased a machine for ₹ 50,000 and spent ₹ 6,000 on its erection. On the date of purchase, it was estimated that the effective life of the machine will be ten years and after ten years its scrap value will be ₹ 6,000. The amount of depreciation for the second year on a straight line basis is:
(a) ₹ 5,000
(b) ₹ 5,600
(c) ₹ 6,000
(d) ₹ 6,200
(a) ₹ 5,000
Cost of machine = 50,000 + 6,000 = ₹ 56,000
Depreciation on a straight line basis = $$\frac{\text{Cost of machine} - \text{Scrap value}}{\text{Estimated life in years}} = \frac{56,000 - 6,000}{10}$$ = ₹ 5,000
Depreciation for each year will be ₹ 5,000; thus the second year's depreciation = ₹ 5,000.

51. A firm charges depreciation on the straight line method. The rate of depreciation is reduced from 25% to 10%. What will be the impact of this change on profits?
(a) Decrease in profits
(b) Increase in profits
(c) Decrease in assets
(d) Increase in expenses
(b) Increase in profits
Depreciation is transferred to the debit side of the Profit & Loss A/c. If the depreciation rate is reduced from 25% to 10%, the depreciation amount will be less than in past years. A smaller amount of depreciation will be transferred to the Profit & Loss A/c, which will result in an increase in profits.

52. Under the straight line method, depreciation is calculated on:
(a) Written down value
(b) Salvage value
(c) Original cost
(d) Market value
(c) Original cost
Under the straight line method, a fixed proportion of the original cost of the asset is written off each year, so that the asset account may be reduced to its residual value at the end of its estimated economic useful life. Thus, under this method depreciation is calculated on original cost.

53. Which of the following assets are shown at written down value in the Balance Sheet?
(a) Current assets
(b) Liquid assets
(c) Floating assets
(d) Fixed assets
(d) Fixed assets
Depreciation is charged only on fixed assets, being a permanent, continuous and gradual shrinkage in their book value. So fixed assets are shown at written down value in the Balance Sheet.

54. On 1st April, 2012 in Sethi's ledger, the furniture account showed a balance of ₹ 2,00,000. On 1st October, 2012 Sethi purchased new furniture by paying ₹ 5,000 and giving old furniture, whose book value on 1st April, 2012 was ₹ 12,000, to the seller. Sethi provides depreciation on furniture @ 10% per annum on the diminishing balance method.
The net value of furniture in Sethi's books as on 31st March, 2013 would be:
(a) ₹ 1,85,080
(b) ₹ 1,83,960
(c) ₹ 1,84,780
(d) ₹ 2,04,400
(c) ₹ 1,84,780
Working (Furniture Account in the books of Sethi): depreciation on the old furniture given up (book value ₹ 12,000) for six months = 12,000 × 10% × 6/12 = ₹ 600, so its value on 1st October, 2012 was ₹ 11,400. Cost of the new furniture = 11,400 + 5,000 = ₹ 16,400; depreciation on it for six months = 16,400 × 10% × 6/12 = ₹ 820. Depreciation on the remaining furniture (2,00,000 − 12,000 = ₹ 1,88,000) for the full year = ₹ 18,800. Net value on 31st March, 2013 = (1,88,000 − 18,800) + (16,400 − 820) = 1,69,200 + 15,580 = ₹ 1,84,780.

55. The written down value of a machine on 31st March, 2013 is ₹ 72,900. The machine was purchased on 1st April, 2010. Depreciation is being charged @ 10% p.a. by the diminishing balance method. The cost price of the machine would be:
(a) ₹ 1,00,000
(b) ₹ 90,000
(c) ₹ 81,000
(d) ₹ 72,900
(a) ₹ 1,00,000
Cost price of the machine = $$\frac{72,900}{90\% \times 90\% \times 90\%}$$ = ₹ 1,00,000

56. A company purchased a plant for ₹ 50,000. The useful life of the plant is 10 years and the residual value is ₹ 5,000. The management wants to depreciate it by the straight line method. The rate of depreciation will be:
(a) 8%
(b) 9%
(c) 10%
(d) None of the above
(b) 9%
Annual depreciation = (50,000 − 5,000)/10 = ₹ 4,500; rate = 4,500/50,000 × 100 = 9%.

57. Madhur and Company purchases a machine for a certain sum. The company has a policy of charging 8% depreciation on written down value. The depreciated value of the machine after three years in the books of Madhur and Company is ₹ 3,89,344. What was the purchase value of the machine?
(a) ₹ 5,00,000
(b) ₹ 4,60,000
(c) ₹ 4,23,000
(d) ₹ 5,52,000
(a) ₹ 5,00,000

58. The value of a fixed asset after deducting depreciation is known as its __________.
(a) Book value
(b) Market value
(c) Face value
(d) Realisable value
(a) Book value
The value of a fixed asset after deducting depreciation is known as its written down value or book value.

59. Dinesh Garments purchased a machine for ₹ 50,000 and spent ₹ 6,000 on its erection. On the date of purchase it was estimated that the effective life of the machine will be ten years and after ten years its scrap value will be ₹ 6,000. The amount of depreciation for each year on a straight line basis is __________.
(a) ₹ 5,000
(b) ₹ 5,600
(c) ₹ 6,000
(d) None of the above
(a) ₹ 5,000
Total cost of machinery = 50,000 + 6,000 = ₹ 56,000; scrap value after 10 years = ₹ 6,000.
Depreciation on a straight line basis = $$\frac{\text{Cost of machinery} - \text{Scrap value}}{\text{Life of machinery}} = \frac{56,000 - 6,000}{10}$$ = ₹ 5,000. Thus, option (a) is right.

60. An equipment was purchased on 1st January, 2012 for ₹ 25,000 and is to be depreciated at 30% based on the reducing balance method. If the company closes its books of account on 31st December every year, what would be the net book value of the equipment as at 31st December, 2013?
(a) ₹ 12,250
(b) ₹ 10,000
(c) ₹ 17,750
(d) ₹ 12,545
(a) ₹ 12,250
Net book value = 25,000 × 70% × 70% = ₹ 12,250 (two full years of depreciation on the reducing balance).

61. A coal mine is which type of asset?
(a) Fixed asset
(b) Current asset
(c) Wasting asset
(d) Fictitious asset
(c) Wasting asset
Coal mines are wasting assets, as they lose value because they get exhausted on account of continuous extraction.

62. If both the original and the current price of machinery are given, it will be recorded at which value?
(a) Historical value
(b) Market value
(c) Realisable value
(d) Original cost
(d) Original cost
Owing to the cost concept, fixed assets are recorded at cost price and not at market price.

63. An equipment was purchased on 1st January, 2012 for ₹ 25,000 and is to be depreciated at 30% based on the WDV method. If the company closes its books of account on 31st March every year, what would be the net book value of the equipment as at 31st December, 2013?
(a) ₹ 12,250
(b) ₹ 10,000
(c) ₹ 17,750
(d) ₹ 12,545
(a) ₹ 12,250

64.
Which of the following are amortised?
(a) Patent
(c) Goodwill
(d) All of these
(d) All of these
Amortization is the name given to the depreciation charged on intangible assets such as goodwill, patents, copyrights and trademarks. Hence, all of the above are amortized.

65. The WDV of a machine is ₹ 72,900, the rate of depreciation is 10% and the period is 3 years. Calculate the original cost of the machinery.
(a) ₹ 72,900
(b) ₹ 80,000
(c) ₹ 1,20,000
(d) ₹ 1,00,000
(d) ₹ 1,00,000

66. Valueless assets are treated as:
(a) Tangible assets
(b) Intangible assets
(c) Fictitious assets
(d) Current assets
(c) Fictitious assets
Fictitious assets are those assets which have no value but are recognised as assets. Thus, valueless assets are treated as fictitious assets.

67. A company purchased a mine for ₹ 50,000. Its scrap value is ₹ 5,000 and its expected working life is 9 years. 1,00,000 units were expected to be produced during its working life. The units produced in the first 3 years are 7,000, 15,000 and 19,000 respectively. Calculate the amount of depreciation for the third year by using the depletion method.
(a) ₹ 3,150
(b) ₹ 8,550
(c) ₹ 3,000
(d) ₹ 6,750
(b) ₹ 8,550
Rate of depreciation = $$\frac{\text{Total cost of mine} - \text{Scrap value}}{\text{Total units}} = \frac{50,000 - 5,000}{1,00,000}$$ = ₹ 0.45 per unit
Depreciation = quantity extracted during the year × rate = 19,000 × 0.45 = ₹ 8,550 for the third year. Hence option (b) is correct.

68. The value of a fixed asset after deducting depreciation is known as its __________.
(a) Face value
(b) Market value
(c) Realisable value
(d) Book value
(d) Book value
Depreciation is a process of allocating the cost of a fixed asset over its estimated useful life in a rational and systematic manner. The value of a fixed asset after deduction of depreciation is called the book value of the asset.

69. Samar purchased machinery worth ₹ 1,00,000 and spent ₹ 20,000 on its repairs and ₹ 15,000 on its carriage. He decided to sell the machinery at a 25% margin on selling price. What will be the expected sale value of the machinery?
(a) ₹ 1,25,000
(b) ₹ 1,53,000
(c) ₹ 1,80,000
(d) ₹ 1,33,000
(c) ₹ 1,80,000
Cost of machinery = 1,00,000 + 20,000 + 15,000 = ₹ 1,35,000
25% on selling price = $$\frac{25}{100-25}$$ on cost = $$\frac{25}{75}$$ × 1,35,000 = ₹ 45,000
Expected sale value = 1,35,000 + 45,000 = ₹ 1,80,000

70. A decrease in the value of a fixed asset due to age, wear and tear is:
(a) Appreciation
(b) Written down value
(c) Depreciation
(d) Accumulated depreciation
(c) Depreciation
Depreciation is the decrease in the value of a fixed asset due to physical wear and tear, obsolescence and the passage of time.

71. Depletion is charged on:
(a) Fixed assets
(b) Wasting assets
(c) Current assets
(d) All of the above
(b) Wasting assets
The depletion method is applicable in the case of wasting assets, e.g. mines, quarries and oil wells, from which a certain quantity of output is expected to be obtained.

72. An asset becomes useless because of technical changes. This is because of:
(a) Obsolescence
(b) Physical deterioration
(c) Depletion
(d) Passage of time
(a) Obsolescence
Sometimes an asset becomes useless because of technical changes within the industry, technical progress in other industries, changes in supply, etc.; this is known as obsolescence.

73. Which of the following is not considered while calculating depreciation under the straight line method?
(a) Salvage value of the asset
(b) Annual repair cost of the asset
(c) Life of the asset
(d) Cost of the asset
(b) Annual repair cost of the asset
Under the straight line method, a fixed proportion of the original cost of the asset is written off each year, so that the asset account may be reduced to its residual value at the end of its estimated economic useful life. It ignores the annual repair cost of the asset. The formula is:
Annual depreciation = $$\frac{\text{Cost of asset} - \text{Salvage value}}{\text{Life of the asset}}$$

74. On 1st April, 2014 the tools account showed a balance of ₹ 12,960. On 31st March, 2015 the closing balance of tools was ₹ 14,040. The tools purchased during the year were for ₹ 4,320. Depreciation on loose tools for the year would be:
(a) ₹ 3,240
(b) ₹ 1,080
(c) ₹ 3,600
(d) ₹ 3,000
(a) ₹ 3,240
Depreciation = opening balance + purchases − closing balance = 12,960 + 4,320 − 14,040 = ₹ 3,240.

75. As per the Income Tax Act, which method of providing depreciation is recognised?
(a) Replacement method
(b) Depletion method
(c) Diminishing balance method
(d) Sum of the years' digits method
(c) Diminishing balance method
The diminishing balance method is recognised by the income tax authorities. Under it, depreciation is calculated at a certain percentage each year on the balance of the asset brought forward from the previous year. Thus, the amount of depreciation is higher in the earlier periods and becomes gradually lower in subsequent periods, while repair and maintenance charges increase gradually.

76. A company purchased a plant for ₹ 50,000. The useful life of the plant is 10 years and the residual value is ₹ 5,000. The management wants to depreciate it by the straight line method. The rate of depreciation will be:
(a) 9%
(b) 8%
(c) 10%
(d) 7%
(a) 9%

77. On April 1, 2013 the debit balance of the machinery A/c of A Ltd. was ₹ 7,29,000. The machine was purchased on April 1, 2010. The company charges depreciation @ 10% p.a. under the diminishing balance method. The value of the machinery on April 1, 2012 was:
(a) ₹ 10,00,000
(b) ₹ 9,00,000
(c) ₹ 8,10,000
(d) ₹ 12,00,000
(c) ₹ 8,10,000
Let the original cost be 100. With depreciation @ 10% under WDV: cost after 1 year = 100 − 10 = 90; after 2 years = 90 − 9 = 81; after 3 years = 81 − 8.1 = 72.9.
Original cost = $$\frac{100}{72.9}$$ × 7,29,000 = ₹ 10,00,000
Cost on 1 April, 2010 = ₹ 10,00,000; on 1 April, 2011 = ₹ 9,00,000; on 1 April, 2012 = ₹ 8,10,000.

78. The amount of depreciation charged on machinery will be debited to __________.
(a) Machinery A/c
(b) Depreciation A/c
(c) Cash A/c
(d) Repair A/c
(b) Depreciation A/c
The amount of depreciation charged on machinery is debited to the Depreciation A/c. Hence, option (b) is correct.

79. A company purchased a mine for ₹ 50,000. Its scrap value is ₹ 5,000 and its expected working life is 9 years. 1,00,000 units were expected to be produced during its working life. The units produced in the first 3 years are 7,000, 15,000 and 19,000 respectively. Calculate the amount of depreciation for the third year by using the depletion method.
(a) ₹ 3,150
(b) ₹ 8,550
(c) ₹ 3,000
(d) ₹ 6,750
(b) ₹ 8,550

80. The written down value of a machine on 31st March, 2013 is ₹ 72,900. The machine was purchased on 1st April, 2010. Depreciation is being charged @ 10% p.a. by the diminishing balance method. The cost price of the machine would be:
(a) ₹ 1,00,000
(b) ₹ 90,000
(c) ₹ 81,000
(d) ₹ 72,900
(a) ₹ 1,00,000
The cost price of the machine on 1st April, 2010 would be ₹ 1,00,000. The written down value on 1st April, 2011 is 1,00,000 − 1,00,000 × 10% = ₹ 90,000; on 1st April, 2012 it is 90,000 − 90,000 × 10% = ₹ 81,000; and on 31st March, 2013 it is 81,000 − 81,000 × 10% = ₹ 72,900.

81. E Ltd.,
a dealer in second-hand machinery, has the following five machines of different models and makes in its stock at the end of the financial year 2012–13. (The table giving each machine's cost and net realisable value is not reproduced here.) The value of stock included in the Balance Sheet of the company as on 31st March, 2013 was:
(a) ₹ 7,62,500
(b) ₹ 7,70,000
(c) ₹ 7,90,000
(d) ₹ 8,70,000
(b) ₹ 7,70,000
Closing stock is included in the Balance Sheet by following the rule 'cost or NRV, whichever is lower'. Taking cost or NRV, whichever is less, for each machine: A – 90,000, B – 1,15,000, C – 2,65,000, D – 1,00,000, E – 2,00,000. Total stock included in the Balance Sheet = 90,000 + 1,15,000 + 2,65,000 + 1,00,000 + 2,00,000 = ₹ 7,70,000.

82. Fire insurance premium paid on 1st October, 2011 for the year ended 30th September, 2012 was ₹ 2,400, and fire insurance premium paid on 1st October, 2012 for the year ending 30th September, 2013 was ₹ 3,200. The fire insurance premium shown in the Profit and Loss account for the accounting year ended 31st December, 2012 would be:
(a) ₹ 2,400
(b) ₹ 2,600
(c) ₹ 2,800
(d) ₹ 3,000
(b) ₹ 2,600
1.10.11 – 30.9.12: ₹ 2,400; 1.10.12 – 30.9.13: ₹ 3,200.
Premium for the year ending 31.12.12 = 2,400 × 9/12 (January–September 2012) + 3,200 × 3/12 (October–December 2012) = 1,800 + 800 = ₹ 2,600.

83. A company purchased a plant for ₹ 50,000. The useful life of the plant is 10 years and the residual value is ₹ 5,000. The management wants to depreciate it by the straight line method. The rate of depreciation will be:
(a) 8%
(b) 9%
(c) 10%
(d) None of the above
(b) 9%
Plant purchased for ₹ 50,000; residual value ₹ 5,000, so the depreciable cost = 50,000 − 5,000 = ₹ 45,000; useful life = 10 years.
Amount of depreciation = $$\frac{50,000 - 5,000}{10} = \frac{45,000}{10}$$ = ₹ 4,500
Amount of depreciation = Original cost × $$\frac{\text{Rate of depreciation}}{100}$$, so 4,500 = 50,000 × $$\frac{\text{Rate of depreciation}}{100}$$
Rate of depreciation = $$\frac{4,500 \times 100}{50,000}$$ = 9%

84. An equipment was purchased on 1st January, 2012 for ₹ 25,000 and is to be depreciated @ 30% based on the written down value method. If the company closes its books of accounts on 31st March every year, what would be the net book value of the equipment as at 31st December, 2013?
(a) ₹ 12,250
(b) ₹ 10,000
(c) ₹ 17,750
(d) ₹ 12,545
(d) ₹ 12,545
Purchase price of the asset: ₹ 25,000 (1st January, 2012). Depreciation = book value × rate, pro-rated for part of a year.
1st January, 2012 – 31st March, 2012: 25,000 × 30% × 3/12 = ₹ 1,875; value on 1st April, 2012 = 25,000 − 1,875 = ₹ 23,125
1st April, 2012 – 31st March, 2013: 23,125 × 30% = ₹ 6,938; value on 1st April, 2013 = 23,125 − 6,938 = ₹ 16,187
1st April, 2013 – 31st December, 2013: 16,187 × 30% × 9/12 = ₹ 3,642
Net book value = purchase price − total depreciation = 25,000 − (1,875 + 6,938 + 3,642) = 25,000 − 12,455 = ₹ 12,545. Hence, option (d) is correct.

85. Dinesh Garments purchased a machine for ₹ 50,000 and spent ₹ 6,000 on its erection. On the date of purchase it was estimated that the effective life of the machine will be ten years and after ten years its scrap value will be ₹ 6,000. The amount of depreciation for each year on a straight line basis is:
(a) ₹ 5,000
(b) ₹ 5,600
(c) ₹ 6,000
(d) None of the above
(a) ₹ 5,000
Depreciation = $$\frac{\text{Cost} - \text{Scrap value}}{\text{Estimated useful life}} = \frac{56,000 - 6,000}{10}$$ = ₹ 5,000 p.a.

86.
An equipment was purchased on 1st January, 2012 for ₹ 25,000 and is to be depreciated at 30% based on the reducing balance method. If the company closes its books of account on 31st March every year, what would be the net book value of the equipment as at 31st December, 2013?
(a) ₹ 12,250
(b) ₹ 10,000
(c) ₹ 17,750
(d) ₹ 12,545
(d) ₹ 12,545

87. Madhur and Company purchases a machine for a certain sum. The company has a policy of charging 8% depreciation on written down value. The depreciated value of the machine after three years in the books of Madhur and Company is ₹ 3,89,344. What was the purchase value of the machine?
(a) ₹ 5,00,000
(b) ₹ 4,60,000
(c) ₹ 4,23,000
(d) ₹ 5,52,000
(a) ₹ 5,00,000
Cost of machine = $$\frac{3,89,344}{92\% \times 92\% \times 92\%}$$ = ₹ 5,00,000

88. The value of a fixed asset after deducting depreciation is known as its:
(a) Book value
(b) Market value
(c) Face value
(d) Realisable value
(a) Book value
The value of an asset less its depreciation is known as its book value or written down value.

89. Under which method of depreciation can the value of the asset be reduced to zero?
(a) SLM
(b) WDV
(c) Both (a) and (b)
(d) None of the above
(a) SLM
Under the straight line method a fixed amount is written off every year, so the asset can be written down to zero (or to its residual value) by the end of its life. Under the written down value (reducing balance) method, the book value reduces every year and hence the amount of depreciation also reduces every year; under that method the value of the asset never reduces to zero.

90. Depreciation is provided for under which Accounting Standard?
(a) AS-1
(b) AS-6
(c) AS-10
(d) AS-4
(c) AS-10
Depreciation is dealt with under AS-10, Property, Plant and Equipment. The depreciable amount of an asset should be allocated on a systematic basis over the useful life of the asset, and every part of property, plant and equipment (P&E) whose cost is substantial with respect to the overall cost of the item must be depreciated separately.

91. A change in the method of depreciation may be made between which of the following?
(a) SLM or WDV
(b) WDV or SLM
(c) Both
(d) None
(c) Both
Methods of depreciation include: WDV, SLM, the depletion method, the double declining method, the annuity method, the machine hours method, etc.

92. Which of the following is the most common method of charging depreciation?
(a) SLM
(b) WDV
(c) Annuity
(d) None
(a) SLM
The most commonly used method for calculating depreciation under generally accepted accounting principles (GAAP) is the straight line method. This method is the simplest to calculate, results in fewer errors, stays the most consistent and transitions well from company-prepared statements to tax returns.

Accounting Process-II – CS Foundation Fundamentals of Accounting Notes

Going through these Accounting Process-II – CS Foundation Fundamentals of Accounting and Auditing Notes will help students in revising the entire subject quickly.

Accounting Process-II – CS Foundation Fundamentals of Accounting Notes

Accounting Errors: accounting errors are the errors committed by the persons responsible for recording and maintaining the accounts of a business in the course of the accounting process.

Rectification of Errors:
• Errors are unintentional omissions or commissions of accounts or amounts while recording entries.
• Due to errors, the final accounts do not show a true and fair view, so these errors need to be rectified.
• There can be many types of errors; some affect the trial balance while others do not. Even if they do not affect the trial balance, their occurrence may distort the true picture of the books of account.
We will first study these errors and their nature; in the later part of the chapter, we will study how to rectify them. Types of Errors: (i) Error of principle (ii) Clerical errors • Errors of omission (partial or complete) • Error of commission • Compensating errors The types of errors, their meaning and their effect on the Trial Balance are as follows: 1. Error of principle – Meaning: an error in complying with accounting principles. Examples: (i) treating capital expenditure as revenue or vice versa; (ii) recording sale of a fixed asset as an ordinary sale. Effect on Trial Balance: no effect; it will tally. 2. Error of omission – (i) Complete omission: an entry is totally omitted from being recorded; no effect on the Trial Balance. (ii) Partial omission: an entry is recorded partially, i.e. any one aspect (debit or credit) is not recorded; the Trial Balance will be affected and will not tally. 3. Error of commission – any type of error committed while recording entries. Examples: (i) writing a wrong amount, (ii) writing the correct amount but on the wrong side, (iii) wrong casting (totalling) of a subsidiary book etc. Effect on Trial Balance: it may or may not agree. 4. Compensating errors – when two errors are committed such that one compensates the effect of the other. For example: Rahul’s A/c was debited with ₹ 100 instead of ₹ 1,000 while Ajay’s A/c was debited with ₹ 1,000 instead of ₹ 100. The Trial Balance will agree. Effect of errors on the Trial Balance: Even if a Trial Balance is matched, it does not mean that it is free from errors. Thus, errors can be classified into two types. • Errors which affect the Trial Balance; these errors are disclosed by the Trial Balance. • Errors which have no effect on the Trial Balance. These errors are not disclosed by the Trial Balance. Errors disclosed by Trial Balance: The following are examples of errors disclosed by the Trial Balance: • Error in casting subsidiary books • Error in carrying forward the total of one page to another • Error in totalling the trial balance • Error in balancing an account • Error in preparation of schedules • Error in carrying the balance to the trial balance • Error of partial omission • Double posting to an account • Error of posting from a book of subsidiary record to the ledger. Errors not disclosed by Trial Balance: The following errors are not disclosed by the trial balance, i.e. the trial balance tallies even if these errors are present. • Error of complete omission, i.e. when a transaction has been completely omitted from being recorded • Errors of commission • Compensatory errors • Errors of principle • Recording a wrong amount in a subsidiary book • Errors of duplication Steps to Locate Errors: • First check whether the Trial Balance is agreeing; if not, there is an indication of errors. • Even if the trial balance has agreed, still there may be errors (like compensating errors, errors of principle etc.) • Ensure that cash and bank balances have been transferred to the Trial Balance. • Balance the ledger accounts again and check whether the right totals have been transferred to the trial balance. • Check the totals of subsidiary books again. • Check the opening balances. • Check the postings of nominal accounts first. All the above points will locate the errors which are to be rectified. Rectification of Errors: • Errors, whether affecting the trial balance or not, should be rectified. • The process of rectifying the errors is called rectification of errors. Need for Rectification: • To present correct accounting information • Ascertaining actual profit or loss • To disclose true financial position of the enterprise (a brief illustration follows below).
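To see why rectification is needed, consider a simple illustration (figures assumed for illustration only): suppose machinery purchased for ₹ 50,000 is wrongly debited to Purchases A/c, which is an error of principle. Expenses are overstated by ₹ 50,000 and fixed assets are understated by ₹ 50,000, so profit is understated and the Balance Sheet does not show the true financial position, even though the Trial Balance still tallies. Only a rectifying entry (Machinery A/c Dr. 50,000 To Purchases A/c 50,000) restores correct accounting information, the actual profit and the true financial position.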
Stages of Rectification: • Before preparation of Trial Balance. • After preparation of Trial Balance but before preparation of Final Accounts. • In the next accounting period (i.e. after preparation of final accounts). Rectification before preparation of Trial Balance: • Errors located before preparation of the Trial Balance can be one sided errors or two sided errors. • There are different rectification treatments for both. In case of one sided error: These are errors affecting only one side of an account. Example: The total of the debit side was written as ₹ 1,000 instead of ₹ 10,000. This error will affect only the debit side. Errors affecting one account may occur on account of the following reasons – • Wrong casting • Wrong balancing • Wrong posting • Wrong carry forward • Omission of an amount in the Trial Balance Rectification of such errors: • No journal entry is to be passed. • Only the relevant account will be debited or credited. • The double entry for this rectification entry will not be complete. Note: an agreement of the Trial Balance does not prove that – • All transactions have been correctly analyzed and recorded in the proper accounts. • All transactions have been recorded in the books of original entry. Example: Total of the Purchase Book was ₹ 1,00,000 short. Rectification: Debit Purchases A/c with ₹ 1,00,000 with the words “To short total of purchase book”. In case of two sided error: • When there is an error which affects both aspects of a transaction (i.e. debit and credit), it is known as a two sided error. • Example – Complete omission of an entry. • A journal entry is required to be passed for these errors. Errors which affect two or more accounts are as follows: • Error of complete omission • Error in recording subsidiary books • Errors in posting to a wrong account with or without a wrong amount • Error of principle. Rectification of these errors • Step – 1 : Write the correct entry which should be passed. • Step – 2 : Write the entry which has been actually passed. • Step – 3 : Reconcile both and pass the rectifying entry. Example: A credit sale of ₹ 1,000 to Mohan has been passed through the purchase book. Rectification – 1. Mohan had to be debited with ₹ 1,000 but he was credited with ₹ 1,000. So for rectifying it, he has been debited with ₹ 2,000. 2. Purchase A/c was wrongly debited, so for rectifying, it has been credited. 3. Sales A/c was not credited, so for rectifying, it has been credited. The rectifying entry therefore is: Mohan’s A/c Dr. 2,000 To Purchases A/c 1,000 To Sales A/c 1,000 Rectification after preparation of Trial Balance but before preparation of final accounts: • If errors are located after preparation of the Trial Balance, they cannot be rectified using the previous methods because the ledger accounts have already been closed. • Like the earlier stage, these errors can also be – (i) One sided (ii) Two sided. One sided errors (errors affecting one A/c): • Since the ledger accounts are already closed, one aspect of an entry cannot be rectified by posting it in the respective ledger A/c. • For rectifying such errors, a Suspense A/c is opened. Suspense Account: • Sometimes, it is not possible for the accountant to locate the difference in the Trial Balance. But the books cannot be closed with such a difference, so he puts the Trial Balance difference to a newly opened account known as the Suspense Account. • In simple words, it is an account in which the difference of the Trial Balance is put temporarily. • If the debit side is less, Suspense A/c is debited and if the credit side is less, it is credited. • When the errors are located, Suspense A/c will be closed.
• A Suspense A/c is opened in the following cases – (a) to balance the disagreed Trial Balance, and (b) to post uncertain items (example: payment received from an unknown person). Rectification of errors: Any difference in the trial balance, whether debit or credit, shall be transferred to the Suspense A/c. This will lead to the agreement of the trial balance totals, and when the error is located, the entry will be reversed and the Suspense A/c will be closed. Example: Sales book was undercast by ₹ 500. Due to this, the credit side of the Trial Balance would be short by ₹ 500. Rectifying entry : Suspense A/c Dr. 500 To Sales A/c 500 After this entry the trial balance will tally and final accounts can be prepared easily. In case of two sided errors: They will be rectified in the same manner as two sided errors before preparation of the Trial Balance were rectified (i.e. by writing the wrong entry, then the right entry, and then passing a rectification entry). Rectification of errors after preparation of Final A/c: One sided errors – When errors are detected after preparation of final accounts, they are rectified as follows: (i) In case of Nominal Accounts: • Nominal account balances are transferred to the P/L A/c at the year end. • So in the next accounting year, when rectification is to be made, we cannot use these nominal accounts. • For this purpose, a new account, Profit and Loss Adjustment A/c, is opened, which substitutes all nominal accounts of the previous year. • For rectification, if a nominal account is to be debited or credited then, instead of the nominal account, the Profit and Loss Adjustment A/c is debited or credited. (ii) In case of Real or Personal Accounts: The rectification is done through the Suspense Account and the other concerned account affected by the errors. Two sided errors: • In case of nominal accounts – Rectification is done through Profit & Loss Adjustment A/c and the other A/cs affected. • In case of real or personal accounts – The rectification is carried out through two or more concerned accounts affected by the errors without involving Profit and Loss Adjustment A/c. Example: Wages of ₹ 2,000 paid for installation of machinery have been charged to Wages Account. Rectification before preparation of final A/cs: Machinery A/c Dr. 2,000 To Wages A/c 2,000 Rectification after preparation of final A/cs: Machinery A/c Dr. 2,000 To P/L Adjustment A/c 2,000 Note: • After rectification of all errors of last year, the balance of P/L Adjustment A/c is transferred to Capital A/c, being the net profit or loss due to rectification of errors of last year. • If both accounts are nominal, then no rectification entry is passed. Ascertainment of true profit of previous year: To know the correct profit of the previous year, the following is to be done: • If P/L Adjustment A/c reveals a profit, add this to the profit of the previous year. • If P/L Adjustment A/c shows a loss, it should be deducted from the profit of the previous year. Accounting Process-II MCQ Questions 1. A Trial Balance will not tally if: (a) Correct journal entry is posted twice (b) The purchase on credit basis is debited to purchases and credited to cash (c) ₹ 5,000 cash payment to creditors is debited to creditors for ₹ 500 and credited to cash as ₹ 5,000. (d) None of the above. (c) ₹ 5,000 cash payment to creditors is debited to creditors for ₹ 500 and credited to cash as ₹ 5,000.
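A brief working for Question 1: in option (c), Cash A/c is credited with ₹ 5,000 but Creditors A/c is debited with only ₹ 500, so the debit total falls short of the credit total by 5,000 – 500 = ₹ 4,500 and the Trial Balance will not tally. In options (a) and (b), equal debits and credits are posted, so even though the entries are wrong, the Trial Balance still agrees.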
2. Errors of commission do not permit: (a) The Trial Balance to agree (b) Correct total of Balance Sheet (c) Correct totalling of Trial Balance (d) None of the above. (a) The Trial Balance to agree 3. An item of ₹ 72 has been debited to a personal account as ₹ 27; it is an error of: (a) Commission (b) Omission (c) Principle (d) None of the above. (a) Commission 4. Sales to Shyam of ₹ 500 not recorded in the books would affect: (a) Shyam’s Account (b) Sales Account (c) Sales Account and Shyam’s Account (d) Cash Account. (c) Sales Account and Shyam’s Account 5. Error of commission arises when: (a) Any transaction is incorrectly recorded either wholly or partially (b) Any transaction is left either wholly or partially (c) Any transaction is recorded in a fundamentally incorrect manner (d) None of these. (a) Any transaction is incorrectly recorded either wholly or partially 6. Errors which affect one account can be: (a) Errors of Omission (b) Errors of Principle (c) Errors of Posting (d) None of these. (c) Errors of Posting 7. Which of the following errors will not affect the Trial Balance? (a) Wrong balancing of an account (b) Wrong totalling of an account (c) Writing an amount in the wrong account but on the correct side (d) Omission of an account from Trial Balance. (c) Writing an amount in the wrong account but on the correct side 8. Purchase of office furniture for ₹ 20,000 has been debited to Purchase A/c; it is: (a) An error of omission (b) An error of commission (c) Compensating error (d) An error of principle. (d) An error of principle. 9. In case a Trial Balance does not agree, the difference is put to: (a) Suspense A/c (b) Drawings A/c (c) Capital A/c (a) Suspense A/c 10. Sale of a typewriter that has been used in the office should be credited to: (a) Sales A/c (b) Cash A/c (c) Capital A/c (d) Typewriter A/c (d) Typewriter A/c 11. Suspense Account in the Trial Balance will be entered in the: (a) Manufacturing A/c (c) Profit & Loss A/c (d) Balance Sheet. (d) Balance Sheet. 12. Rent paid to landlord amounting to ₹ 500 was credited to Rent A/c with ₹ 5,000. In the rectifying entry, Rent A/c will be debited with ₹ ________. (a) 5,000 (b) 500 (c) 5,500 (d) 4,500 (c) 5,500 13. Purchased goods from Gopal for ₹ 3,600 but it was recorded in Gopal’s A/c as ₹ 6,300. In the rectifying entry, Gopal’s A/c will be debited with: (a) ₹ 9,900 (b) ₹ 2,700 (c) ₹ 2,600 (d) ₹ 6,300 (b) ₹ 2,700 14. Sohan returned goods to us amounting to ₹ 4,200 but it was recorded as ₹ 2,400 in his account. In the rectifying entry, Sohan’s A/c will be credited with: (a) ₹ 1,800 (b) ₹ 4,200 (c) ₹ 2,400 (d) ₹ 6,600 (a) ₹ 1,800 15. Error of principle arises when: (a) Any transaction is recorded in a fundamentally incorrect manner (b) Any transaction is left to be recorded either wholly or partially (c) Any transaction is recorded but with a wrong amount (d) None of these. (a) Any transaction is recorded in a fundamentally incorrect manner 16. Errors of carry forward from one year to another year affect: (a) Personal Account (b) Real Account (c) Nominal Account (d) Both Personal & Real A/cs. (d) Both Personal & Real A/cs. 17. Purchase of office furniture ₹ 1,200 has been debited to General Expense Account. It is: (a) A clerical error (b) An error of principle (c) An error of omission (d) Compensating error (b) An error of principle 18. Goods purchased from A for ₹ 30,000 passed through the Sales Book.
The error will result in : (a) Increase in gross profit (b) Decrease in gross profit (c) No effect on gross profit (d) Either (a) or (b) (a) Increase in gross profit 19. If the amount is posted in the wrong account or it is written on the wrong side of the account, it is called: (a) Error of omission (b) Error of commission (c) Error of principle (d) Compensating error. (b) Error of commission 20. A sale of ₹ 2,000 wrongly entered in the purchase book. It will: (a) Decrease the gross profit by ₹ 2,000 (b) Increase the gross profit by ₹ 2,000 (c) Increase the gross profit of ₹ 4,000 (d) None of the above. (a) Decrease the gross profit by ₹ 2,000 21. Wages paid for erecting a machine should be debited to: (a) Repair account (b) Machine account (c) Cash account (d) Furniture account. (b) Machine account 22. Goods given as charity should be credited to: (a) Charity account (b) Sales account (c) Purchase account (d) Cash account. (c) Purchase account 23. The preparation of a trial balance is for: (a) Locating errors of commission (b) Locating errors of principle (c) Locating clerical errors (d) All of the above. (c) Locating clerical errors 24. Sales to Ram of ₹ 336 were not recorded. This will affect: (a) Only Sales account (b) Only Ram’s account (c) Both the accounts (d) None of these accounts. (c) Both the accounts 25. Sales to Ram, ₹ 336, have been debited to Shyam’s account. This will be rectified by: (a) Debiting Ram’s account and Crediting Shyam’s account (b) Debiting Shyam’s account and Crediting Ram’s account (c) Crediting both the accounts. (d) None of these. (a) Debiting Ram’s account and Crediting Shyam’s account 26. Discount allowed ₹ 93 to Mohan has been credited to his account by ₹ 39. The error will be rectified by: (a) Crediting Mohan by ₹ 54 (b) Debiting Mohan by ₹ 54 (c) Debiting discount by ₹ 54 (d) None of these. (a) Crediting Mohan by ₹ 54 27. Out of the following, the example of error of principle is : (a) Omitted to record sales in sales book ₹ 500 (b) Under total of purchase book ₹ 100 (c) Purchased furniture ₹ 1,000 was recorded in Purchase A/c (d) None of the above. (c) Purchased furniture ₹ 1,000 was recorded in Purchase A/c 28. While preparing the Trial Balance, which of the following heads is not included in the Trial Balance? (a) Drawing A/c. (b) Suspense A/c. (c) Capital A/c. (d) Closing stock A/c. (d) Closing stock A/c. 29. ₹ 50,000 received from Ajay credited in the A/c of Abhay. It is an error of: (a) Principle (b) Commission (c) Both (a) and (b) (d) None. (b) Commission 30. There will be difference in trial balance if: (a) Repair of ₹ 500 was recorded in Plant A/c (b) Construction of roof ₹ 10,000 was recorded in Wages A/c instead of Building A/c. (c) Paid salary to clerk ₹ 3,000 was recorded in Clerk A/c instead of Salary A/c. (d) Received ₹ 5,000 from Manoj was debited to his account. (d) Received ₹ 5,000 from Manoj was debited to his account. 31. If rent received from tenant ₹ 5,000 is correctly entered in the cash book but wrongly debited to the Rent A/c then: (a) The trial balance will agree (b) The debit side will exceed the credit side by ₹ 10,000 (c) The debit side total will exceed the credit by ₹ 5,000 (d) The credit side will exceed the debit side by ₹ 5,000 (b) The debit side will exceed the credit side by ₹ 10,000 32. The methods for preparing the trial balance are: (a) Balance method (b) Total method (c) Both (a) and (b) (d) Neither (a) nor (b) (c) Both (a) and (b)
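A brief working for Question 31: rent received ₹ 5,000 should appear on the credit side of the Rent A/c; debiting it instead removes ₹ 5,000 from the credit side and adds ₹ 5,000 to the debit side, so the debit total exceeds the credit total by 5,000 + 5,000 = ₹ 10,000. An amount posted to the wrong side always creates a difference of twice the amount (see Questions 79 and 84 below).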
33. Wages paid for construction of office building debited to Wages A/c is a: (a) Error of principle (b) Error of commission (c) Error of omission (d) None of the above (a) Error of principle 34. Suspense Account is a: (a) Real A/c (b) Nominal A/c (c) Personal A/c (d) It has no nature (d) It has no nature 35. If the sales book is understated by ₹ 500, the rectification entry will be: (a) Debit sales A/c, Credit debtors A/c (b) Debit suspense A/c, Credit sales A/c (c) Debit debtors A/c, Credit sales A/c (d) None of the above (b) Debit suspense A/c, Credit sales A/c 36. In case of error of commission: (a) The trial balance agrees (b) The trial balance will not agree (c) The trial balance may agree or may not agree (d) None of the above (c) The trial balance may agree or may not agree 37. Sale of old car credited to Sales A/c is: (a) Error of commission (b) Compensating error (c) Error of omission (d) Error of principle (d) Error of principle 38. If the closing stock appears in the trial balance, then it shall be recorded in: (a) Balance Sheet (c) P & L A/c (d) Both (a) and (b) (a) Balance Sheet 39. Depreciation A/c appearing in the Trial Balance will be recorded in: (a) Balance Sheet (c) P & L A/c (d) None of the above (c) P & L A/c 40. Difference between the total of debit and credit side of Trial Balance is transferred to: (a) Suspense A/c (c) Miscellaneous A/c (d) Difference A/c (a) Suspense A/c 41. ________ is used to ensure the arithmetical accuracy of the posting that has been done. (a) Balance Sheet (b) Ledger (c) Trial Balance (d) Subsidiary Books (c) Trial Balance 42. If the closing stock appears in the trial balance, then it implies that: (a) It is adjusted against opening stock (b) It is adjusted against closing stock (c) It is adjusted against purchase (d) It is adjusted against sales (c) It is adjusted against purchase 43. Purchase of machinery on credit is recorded in: (a) Purchase book (b) Journal proper (c) Cash book (d) None of the above (b) Journal proper 44. The balances of various accounts are transferred to: (a) Trial Balance (b) Ledger (c) Balance Sheet (d) P & L A/c (a) Trial Balance 45. If the Purchase A/c is debited by ₹ 200 in excess and the Sales A/c is credited in excess by ₹ 200, then it is a: (a) Compensatory Error (b) Errors of Commission (c) Error of Principle (d) None of the above (a) Compensatory Error 46. A mistake in transferring the balance of an account to the trial balance is: (a) Error of omission (b) Errors of principle (c) Compensatory error (d) Error of commission (d) Error of commission 47. A mistake in casting of a subsidiary book: (a) Compensating Error (b) Error of Principle (c) Error of Omission (d) Error of Commission (d) Error of Commission 48. If purchases made for cash are correctly entered in the cash book but wrongly credited to the Purchase A/c, then it is: (a) Compensating Error (b) Error of Principle (c) Error of Commission (d) None of the above (c) Error of Commission 49. If a transaction is entered in the subsidiary book but it is not posted in the respective ledger, then it is: (a) Error of principle (b) Error of commission (c) Partial omission (d) Complete omission (c) Partial omission 50. Which of the following errors shall NOT be disclosed by the Trial Balance? (a) Error in casting subsidiary book (b) Error in totalling the Trial Balance (c) Errors in preparing schedules (d) Error of duplication (d) Error of duplication 51. Which of the following errors shall be disclosed by the Trial Balance?
(a) Error of complete omission (b) Error of partial omission (c) Error of duplication (d) Recording wrong amount in subsidiary books (b) Error of partial omission 52. If a wrong amount is written in the subsidiary book then: (a) The trial balance will not agree (b) The trial balance will agree (c) Both (a) and (b) (d) None of these (b) The trial balance will agree 53. If a transaction is entered twice in a subsidiary book then: (a) The trial balance will agree (b) The trial balance will NOT agree (c) Both (a) and (b) (d) None of these (a) The trial balance will agree 54. If there is an error in carrying forward the total of one page to another, then: (a) The trial balance will NOT agree (b) The trial balance will agree (c) Either (a) or (b) (d) Neither (a) nor (b) (a) The trial balance will NOT agree 55. If a transaction worth ₹ 215 is written as ₹ 251, then it is: (a) Error of principle (b) Error of commission (c) Partial omission (d) Complete omission (b) Error of commission 56. If there is transposition in figures, then the difference in trial balance will be divisible by: (a) Nine (b) Ten (c) Five (d) Three (a) Nine 57. Which of the following errors will affect agreement of trial balance? (a) Repairs on building have been debited to building account. (b) The total of purchase book is short by ₹ 10 (c) Freight paid on new machinery has been debited to freight account. (d) Sales of ₹ 500 to Ram has been debited to Shyam’s account. (b) The total of purchase book is short by ₹ 10 • Repairs on building have been debited to building account. • Freight paid on new machinery has been debited to freight account. • Sales of ₹ 500 to Ram has been debited to Shyam’s account. Above, all three entry was not cause of disagreement of Trial Balance as due to these errors the debit side and credit side of trial balance will remain unchanged. The total of purchase book is short by ₹ 10. Only this error will cause disagreement of trial balance as due to this error the total of debit side of trial balance will be short by ₹ 10 than the total of credit side of Trial Balance. 58. After preparing the Trial Balance, the accountant finds that the total of the debit side of Trial Balance is short by ₹ 1,000. This difference will be: (a) Credited to suspense account (b) Debited to suspense account (c) Adjusted to any of account having debit balance (d) Adjusted to any of account having credit balance (b) Debited to suspense account When a trial balance does not agree, efforts are made to locate errors and rectify them. However if reason for disagreement of trial balance cannot be found, the only treatment is that difference will be debited or credited to suspense account. If total of the debit side of Trial Balance is short by ₹ 1,000 the difference will be debited to suspense account. 59. Overcasting of sales book by ₹ 1,000 is a type of: (a) One sided error (b) Two sided error (c) Compensating error (d) Error of principle (a) One sided error Overcasting of sales book by ₹ 1,000 is a type of one sided error because due to this error only credit side of trial balance will be increased by ₹ 1,000 and debit side of trial balance will remain unchanged. 60. Which one of the following is correct about errors? (a) Errors always have impact on profits (b) Errors do not have any impact on profits (c) Errors may or may not have impact on profits (d) Errors always lead to decrease in profit. 
(c) Errors may or may not have impact on profits Unintentional omission or commission of amounts and accounts in the process of recording the transactions are commonly known as errors. Errors may occur as a result of mathematical mistakes, mistakes in applying accounting policies, misinterpretation of facts, or oversight. Thus, errors may or may not have an impact on profits. 61. Whitewash charges of building ₹ 500 have been wrongly debited to building account. It is an example of: (a) Compensating error (b) Error of principle (c) Error of omission (d) Error of commission (b) Error of principle Whitewash charges of a building are a revenue expenditure and will be debited to Profit and Loss A/c. If any amount is debited to Building A/c, it will be treated as capital expenditure. So, ‘whitewash charges of building ₹ 500 have been debited to building account’ is an error of principle. 62. If the effect of an error is cancelled by the effect of some other errors, the errors are known as: (a) Error of principle (b) Compensating Error (c) Error of omission (d) Error of commission (b) Compensating Error If the effect of an error is cancelled by the effect of some other error, the trial balance will naturally agree. Thus these types of errors are known as Compensating Errors. 63. Which of the following errors will not cause the disagreement of Trial Balance? (a) ₹ 821 received from Ravi has been debited to Kavi (b) A purchase of ₹ 281 from Sanju has been debited to his account as ₹ 281 (c) An invoice for ₹ 480 is entered in the Sales Book as ₹ 840 (d) All of the above. (c) An invoice for ₹ 480 is entered in the Sales Book as ₹ 840 An invoice of ₹ 480 entered in the sales book as ₹ 840 will not cause the disagreement of the Trial Balance: due to this error the Sales A/c will be credited by ₹ 840 and the Debtor’s A/c will be debited by ₹ 840, and hence the trial balance will match. 64. Error of principle will not permit: (a) Correct total of the balance sheet (b) Correct total of the trial balance (c) The trial balance to agree (d) None of the above. (d) None of the above. An error of principle has no impact on the agreement of the trial balance; even after this error the trial balance agrees and hence the balance sheet will also be totalled correctly. Hence, the answer is none of the above. 65. Which of the following errors is an error of omission ________. (a) Sale of ₹ 1,000 was recorded in the purchase journal (b) Salary paid to Mohan and Vikas have been debited to their personal accounts (c) The total of sales journal has not been posted to the sales account (d) Repairs to building have been debited to building account. (c) The total of sales journal has not been posted to the sales account Error of omission means any transaction or entry is completely or partially omitted from the books of accounts. Thus ‘the total of sales journal has not been posted to the sales A/c’ is an error of omission. 66. Which of the following errors are revealed by the trial balance ________. (a) Errors of principle (b) Errors of omission (c) Errors of commission (d) None of the above. (c) Errors of commission Due to errors of commission like • Wrong casting of subsidiary books • Posting the wrong amount in the ledger • Posting an amount on the wrong side • Wrong balancing of an account, the Trial Balance will not agree and thus the error will be revealed by the Trial Balance. 67. Which of the following errors will result in non-agreement of the trial balance?
(a) Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 (b) Recording a sales invoice for ₹ 5,600 as ₹ 6,500 in the Sales Journal (c) Failing to record a purchase invoice for ₹ 54,000 in the Purchases Journal (d) Recording in the Purchases Journal an invoice for acquiring a non-current asset for ₹ 60,000. (a) Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 is an error of commission; the Returns Inwards A/c will be posted with the wrong amount, and this mistake will be reflected in the Trial Balance as the Trial Balance will not agree. 68. ₹ 1,000 was paid as rent to the landlord Krishna. This amount was debited to Krishna’s personal account. This error will ________. (a) Affect agreement of the trial balance. (b) Not affect agreement of the trial balance (c) Affect the suspense account (d) None of the above. (b) Not affect agreement of the trial balance ₹ 1,000 was paid as rent to the landlord, Krishna, and this amount was debited to Krishna’s personal account. This is an error of principle. Since an error of principle does not affect the agreement of the trial balance, option (b) is right. 69. If a sale is made to A but, by mistake, the debit is passed to Purchase A/c instead of A’s account, which accounts are affected? (a) Purchase a/c (b) A’s a/c (c) Both (a) and (b) (d) None of the above. (c) Both (a) and (b) If a sale is made to A, the accounting entry will be- A’s A/c Dr. To Sales A/c In the given question, the accounting entry passed was- Purchases A/c Dr. To Sales A/c. The rectifying entry for the same will be- A’s A/c Dr. To Purchases A/c Hence, it affects both Purchases A/c and A’s A/c. 70. The credit side of trial balance shows: (a) Bank (b) Cash (c) Equipment (d) None of the above (d) None of the above The credit side of the trial balance represents liabilities and incomes. Bank, cash and equipment are assets and are shown on the debit side; hence, option (d) is correct. 71. A sold goods of ₹ 500 to Z, which was entered in the purchase book as ₹ 5,000. What will be the entry after rectification? The wrong entry passed was Purchases A/c Dr. 5,000 To Z’s A/c 5,000, whereas the correct entry is Z’s A/c Dr. 500 To Sales A/c 500. The rectifying entry therefore is: Z’s A/c Dr. 5,500 To Purchases A/c 5,000 To Sales A/c 500 72. “Wrong Casting of subsidiary book” is which type of error? (a) Error of Omission (b) Error of Commission (c) Error of Principle (d) Compensating Errors. (b) Error of Commission An error of commission is a type of error committed while recording entries. Hence, wrong casting of a subsidiary book is an error of commission. 73. When two or more errors are committed in such a way that the effect of one error is compensated by another error, which type of error is this? (a) Error of Commission (b) Compensating Error (c) Error of Principle (d) None of these. (b) Compensating Error A compensating error is when two or more errors are committed in such a way that the effect of one error is compensated by another error. Hence option (b) is correct. 74. If there is any error in the trial balance which is not affecting its total, will it affect any accounting procedure? (a) Yes (b) No (c) Don’t know (d) Partly Yes. (b) No If there is any error in the trial balance which is not affecting its total, for example the compensating errors, there will be no effect on the accounting procedure. Hence, option (b) is correct. 75. Which of the following errors are revealed by the trial balance? (a) Errors in balancing account (b) Errors of principle (c) Errors of complete omission (d) Compensatory Errors (a) Errors in balancing account The trial balance does not tally when wrong balances are posted to it from the ledger accounts.
Thus errors in balancing accounts are revealed by the trial balance. 76. Which type of error is disclosed by the trial balance? (a) Compensating error (b) Error of Principle (c) Error of omission/partial omission (d) All are applicable (c) Error of omission/partial omission The trial balance, in general, discloses any error which affects one side of an account. These errors are disclosed by the trial balance as both sides of the trial balance do not agree. Compensating errors are a group of errors, the total effect of which is not reflected in the trial balance. Errors of principle do not affect the agreement of the trial balance. Errors of omission/partial omission affect the agreement of the trial balance. 77. When an entry is passed correctly but on a wrong A/c: (a) Compensating error (b) Error of commission (c) Error of principle (d) Error of omission (b) Error of commission If the transaction was debited or credited to a wrong account with the correct amount and on the correct side in the books of original entry or in the ledger, it is known as an error of commission. 78. Which of the following types of errors affect only one account? (I) Errors of casting (II) Errors of carry forward (III) Errors of posting (a) (I) and (II) (b) (I) and (III) (c) (II) and (III) (d) (I), (II) and (III) (d) (I), (II) and (III) Errors of casting, carry forward and posting each affect only one account; such one sided errors are disclosed by the trial balance as both sides of the trial balance do not agree. 79. Commission received ₹ 2,500 correctly entered in cash book but posted on debit side of commission account, in trial balance: (a) Debit total will be greater by ₹ 5,000 than the credit total (b) Credit total will be greater by ₹ 5,000 than the debit total (c) The credit total will be greater by ₹ 2,500 than the debit total (d) The debit total will be greater by ₹ 2,500 than the credit total. (a) Debit total will be greater by ₹ 5,000 than the credit total If commission received ₹ 2,500 is correctly entered in the cash book but posted on the debit side of the commission account, the credit side loses ₹ 2,500 and the debit side gains ₹ 2,500, so in the trial balance the debit total will be greater by ₹ 5,000 than the credit total (see also Question 84 below). 80. If a credit sale of ₹ 15,400 to Prem has been entered as ₹ 14,500, the journal entry for rectifying the error would be: (a) Debit Prem A/c 900 Credit Sales A/c 900 (b) Debit Sales A/c 900 Credit Prem A/c 900 (c) Debit Cash A/c 900 Credit Sales A/c 900 (d) Debit Prem A/c 15,400 Credit Sales A/c 15,400 (a) Debit Prem A/c 900 Credit Sales A/c 900 If a credit sale of ₹ 15,400 to Prem has been entered as ₹ 14,500, the journal entry for rectifying the error would be: Prem A/c Dr. 900 To Sales A/c 900 81. Which of the following is not a Clerical error? (a) Error of Partial Omission (b) Error of Commission (c) Error of Principle (d) Error of Omission (c) Error of Principle Errors other than errors of principle are clerical errors. Clerical errors include: • Errors of Omission • Errors of Commission • Compensating errors. 82. Whitewashing charges ₹ 50,000 were debited to building A/c, it is- (a) Error of omission (b) Error of commission (c) Error of principle (d) Compensating error (c) Error of principle Errors of principle arise because of the failure to differentiate between capital expenditure and revenue expenditure, and between capital receipts and revenue receipts. The distinction between capital and revenue is of relevance because any incorrect adjustment or allocation in this respect would falsify the final results shown by the profit and loss account and the balance sheet.
These errors do not affect the agreement of the trial balance. Hence this is an example of an error of principle. 83. Suspense A/c is a ________. (a) Real A/c (b) Personal A/c (c) Nominal A/c (d) None of the above (d) None of the above A Suspense A/c could be a Personal, Real or Nominal A/c depending on the situation. Let us take an example: you have received ₹ 5,000 but are not aware from whom and on what account this amount has been received, so you place this amount at the credit of Suspense A/c. Later, if you come to know that it was received from Ramesh, then the suspense account is a personal account. Similarly, if you come to know that this amount was received against the sale of an old computer, the suspense account is a real account. In case it was received on account of services you have rendered, it is an income account, i.e. a nominal account. So a suspense account can be of any type. 84. Commission received ₹ 2,500 correctly entered in the cash book but posted to the debit side of commission account. In the Trial Balance: (a) The credit total will be greater by ₹ 5,000 than the debit total (b) The debit total will be greater by ₹ 5,000 than the credit total (c) The credit total will be greater by ₹ 2,500 than the debit total (d) The debit total will be greater by ₹ 2,500 than the credit total. (b) The debit total will be greater by ₹ 5,000 than the credit total Commission received is posted on the wrong side of the Commission A/c. In the Trial Balance, the debit side total will be greater by ₹ 5,000 than the credit side total. 85. An invoice from a supplier of office equipment has been debited to the stationery account. This error is known as: (a) An error of commission (b) A compensating error (c) An error of principle (d) An error of omission (c) An error of principle Office equipment is a capital item, while stationery is a revenue expense; debiting the supplier’s invoice for office equipment to the Stationery A/c treats capital expenditure as revenue expenditure, which is an error of principle (compare Question 8 above). 86. Which of the following errors will not cause the disagreement of trial balance? (a) ₹ 821 received from Ravi has been debited to Kavi (b) A purchase of ₹ 281 from Sanju has been debited to his account as ₹ 281 (c) An invoice for ₹ 480 is entered in the sales book as ₹ 840 (d) All of the above. (a) ₹ 821 received from Ravi has been debited to Kavi ₹ 821 received from Ravi has been entered in Kavi’s account instead of Ravi’s; where the amount is merely posted to the wrong personal account, it is an error of commission, the difference is not shown in the trial balance, and the trial balance will agree. 87. Error of principle will not permit: (a) Correct total of the balance sheet (b) Correct total of the trial balance (c) The trial balance to agree (d) None of the above (d) None of the above Due to an error of principle, the trial balance will still agree and the balance sheet will also total correctly; the error is not disclosed by the Trial Balance. 88. Charging legal expenses instead of Machinery A/c is an error of: (a) Principles (b) Commission (c) Partial omission (d) None of the above. (a) Principles Legal expenses are expenditure and machinery is an asset. Whenever there is a failure in differentiating between capital expenditure and revenue expenditure, or capital receipts and revenue receipts, the resulting error is known as an error of principle. So, option (a) is correct. 89. ₹ 1,000 was paid as rent to the landlord, Krishna. This amount was debited to Krishna’s personal account.
This error will: (a) Affect agreement of the trial balance (b) Not affect agreement of the trial balance (c) Affect the suspense account (d) None of the above (b) Not affect agreement of the trial balance Since the error is an error of principle, the agreement of the Trial Balance will not be affected. 90. Which of the following errors is an error of omission: (a) Sale of ₹ 1,000 was recorded in the purchase journal (b) Salary paid to Mohan and Vikas have been debited to their personal accounts (c) The total of sales journal has not been posted to the sales account (d) Repairs to building have been debited to building account (c) The total of sales journal has not been posted to the sales account Errors of omission arise on account of some act of omission on the part of the person responsible for the maintenance of the books of account. Example: some transaction is entered in the subsidiary book, but is not posted to the ledger. Thus the total of the sales journal not posted to the sales account is an error of omission. 91. Which of the following errors are revealed by the trial balance: (a) Errors of principle (b) Errors of omission (c) Errors of commission (d) None of the above (c) Errors of commission Errors of commission generally result in disagreement of the trial balance and hence are revealed by it. 92. Which of the following errors will result in non-agreement of the trial balance? (a) Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 (b) Recording a sales invoice for ₹ 5,600 as ₹ 6,500 in the sales journal (c) Failing to record a purchase invoice for ₹ 54,000 in the purchases journal (d) Recording in the purchases journal an invoice for acquiring a non-current asset for ₹ 60,000 (a) Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 Totalling the returns inwards journal as ₹ 11,400 instead of ₹ 12,600 will affect the agreement of the trial balance, as the debit and credit amounts in the ledger will be different. Accounting Process-I – CS Foundation Fundamentals of Accounting Notes Going through these Accounting Process-I – CS Foundation Fundamentals of Accounting and Auditing Notes will help students revise the entire subject quickly. Accounting Process-I – CS Foundation Fundamentals of Accounting Notes Accounting is the language of business – Accounting cycle/Accounting process: Recording – Journal: • A journal is a book of original entry/prime entry wherein transactions are first recorded before being posted to the ledger. • A journal is that book of accounts in which transactions are originally recorded in chronological order. • An entry made in a journal is called a journal entry, and the process of recording a transaction in a journal is known as journalising. • A journal records both debit and credit aspects of a transaction. A journal contains the following columns: • Date : The date on which the transaction took place. • Particulars : The two aspects (debit and credit) are recorded here. • Ledger Folio (L.F.) : It records the page number in the ledger in which the accounts of the given entry are posted. • Amount (Debit) : Debit amount is recorded in the Dr. column. • Amount (Credit) : Credit amount is recorded in the Cr. column. Specimen of Journal: In the Books of ……… Journal Entries Date Particulars L.F Debit Amount (₹) Credit Amount (₹) (i) (ii) (iii) (iv) (v) Process of Journalising: • Step – 1 : Ascertain what accounts are affected in the transaction. • Step – 2 : Ascertain the nature of the account (i.e. real, nominal, personal etc.).
• Step – 3 : Apply the rules of debit and credit to each type of account. • Step – 4 : Pass the entry. Example: Transaction – Rent paid in cash. Step – 1 : Ascertain what accounts are affected. Accounts affected are – Rent A/c and Cash A/c Step – 2 : Ascertain the nature of the accounts Rent A/c – Nominal A/c (Expense) Cash A/c – Real A/c (Asset) Step – 3 : Apply the golden rules of accounting: Rent A/c (Nominal) – Debit all expenses Cash A/c (Real) – Credit what goes out (as cash is going out of business) Step – 4 : Pass the entry Rent A/c Dr. (with the amount of rent) To Cash A/c (Being rent paid in cash) Note: • The account to be credited is written preceded by the word “To”. • After every entry, a brief description of the transaction is given in the next line of the entry. This is called narration and is written in brackets. Points to Note: • When goods are purchased, “Purchase A/c” is debited; when goods are sold, “Sales A/c” is credited. • If it is not stated that the purchase/sale is on cash/credit, it is assumed to be on credit. • In a journal, the amounts of the debit and credit columns of each page are totalled and carried forward to the next page. Total c/f (carried forward) • Sometimes a journal entry may have more than one debit or credit aspect. These types of entries are known as compound entries. The total of debits should be equal to the total of credits. Example: Mohan purchased goods worth ₹ 15,000. He got ₹ 1,000 as discount and paid ₹ 14,000 in cash. Purchase A/c Dr. 15,000 To Cash A/c 14,000 To Discount Received A/c 1,000 (Being goods purchased and discount received) Discount received is an income and is a Nominal A/c. Ledger (Principal Book of Accounts): • A ledger may be defined as a “book or register which contains, in a summarized and classified form, a permanent record of all transactions”. • It is a book which contains all sets of accounts (real, personal, nominal). • Ledger is known as a principal book of account as it helps in the preparation of the Trial Balance and financial statements (like P/L, B/S etc.) Format of ledger (i) It has two sides: the left side is the debit side whereas the right side is the credit side. (ii) It has the following columns : • Date – Date of transaction • Particulars – Name of other account • Journal folio (J. F.) – Page number of journal where entry was first recorded • Amount – Amount of transaction • Same columns will be there on the other side also. Ledger posting: (i) The process of transferring the information contained in a journal to a ledger is called posting. (ii) Steps for posting: For the account debited in a journal entry: • Step – 1 : Identify the ledger account to be debited • Step – 2 : In the debit side of that A/c, post the other aspect of the entry in the particulars column by writing the word “To ________”. • Step – 3 : Enter other details like amount, J.F. and date. (iii) Rules for posting: • The name of the account in the journal and ledger should be exactly the same. • The account debited in the journal will be debited in the ledger and the account credited in the journal will be credited in the ledger. • The word “To” will be added to the names of the accounts on the debit side and “By” will be added to the accounts on the credit side, example : “To Sales”, “By Purchases” etc. • The page number of the journal from where the entry is transferred is to be written in the Folio Column. • The date of the transaction is to be written in the date column. Example Rent paid in cash ₹ 10,000 Entry : Rent A/c Dr.
10,000 To Cash A/c 10,000 Posting the debit aspect of the entry – Ledger: Rent A/c Difference between Journal and Ledger: 1. Journal is the book of original entry while ledger is the book of secondary entry. 2. Journal book is chronological while ledger is analytical. 3. The process of recording in the journal is “journalising” while the process of recording in the ledger is known as “posting”. For an account credited in the journal entry: • Step – 1 : Identify the ledger A/c to be credited • Step – 2 : In the credit side of that account, post the other aspect of the entry in the particulars column by writing the word “By __________”. • Step – 3 : Enter other details like date, amount, J.F. (if any) Example: Taking the same example as above Rent A/c Dr. 10,000 To Cash A/c 10,000 Posting the credit aspect of the entry: Ledger: Cash A/c Balancing Ledger Account: 1. After all entries are posted, both the sides of an account are totalled. 2. For closing an account, both sides’ totals shall be equal. 3. If any side falls short of the other, in order to make them equal, a balance figure is placed on the side which is short. This process is known as balancing of an account. 4. If the debit side total is more – the difference will be placed on the credit side and it will be called a debit balance. 5. If the credit side is more – the difference will be placed on the debit side and it will be called a credit balance. Note : The balance of an account is always known by the side which is greater. Example : Let us take the Rent A/c of the above example Rent A/c The debit side was more by ₹ 10,000 so the balance has been written on the credit side to make them equal. This is a debit balance since the debit side is more. • Note that all ledger accounts (except Nominal Accounts) are balanced. The nominal accounts are transferred to P/L A/c. Difference between Journal and Ledger: • Nature of Book – Journal: it is a book of primary entry. Ledger: it is a book of final entry. • Basis for Preparation – Journal: primary documents (such as vouchers, receipts etc.) are the basis for recording transactions in the journal. Ledger: the journal is the basis for recording transactions in the ledger. • Stage of Recording – Journal: recording in the journal is the first stage. Ledger: recording in the ledger is the second stage. • Process – Journal: the process of recording in the journal is called journalising. Ledger: the process of recording in the ledger is called posting. Subsidiary Book: • Subsidiary books are the journals in which transactions of a similar nature are recorded at the first instance. • Recording all the entries in the journal will make the journal too lengthy and complicated. So for transactions of a similar nature, separate journals are prepared, which are known as subsidiary books. • The transactions will first be recorded in subsidiary books. Types of subsidiary books: 1. Purchase Book – It records the credit purchase of goods traded in. Ex – A stationery dealer purchased stationery on credit from Ram. • Entries in the Purchase Book are made from the invoice received from the supplier; at the end of the week/month, the total of the Purchase Book is debited to Purchases A/c in the ledger. 2. Sales Day Book: • It records the credit sale of goods dealt in (traded in). Ex – A furniture dealer sold furniture on credit. • The Sales Book is prepared on the basis of copies of invoices sent to customers. 3. Purchase Return Book (Return Outward Book): It records the goods or material returned to the supplier that have been purchased on credit.
When goods are returned to the supplier a debit note is issued to him indicating that his account has been debited with the amount mentioned in the debit note. 4. Sales Return Day Book (Return Inwards Book): It records the goods or material returned by the purchaser that had been sold on credit. When goods are returned by a customer a credit note is sent to him mentioning that his account has been credited with the value of goods returned. 5. Bills Receivable Book: It records the bills of exchange or promissory note received by a business entity. 6. Bills Payable Book: It records the acceptance given to the creditor in the form of bills or promissory notes. 7. Cash Book: It is used to record all cash transactions of the business. 8. General Journal OR Journal Proper: All entries which cannot be recorded in the above subsidiary books are recorded in this book. Example: opening entries, closing entries, rectification entries, purchase and sale of asset etc. In Journal proper book, following types of transactions are recorded: • opening journal entry • closing journal entry • transfer entry • rectification entry • purchase of fixed asset/stationary on credit • sale of worn out or obsolete assets on credit Cash Book: • Cash book is a book of prime entry in which cash and bank transactions of a business are recorded in a chronological order. • Cash book acts as both a book of original entry and a ledger. Hence, it is both a principal book and a subsidiary book. It records transaction concerning cash receipts and cash payments. A cash book has two sides: • Debit: Cash and cheques received are recorded here. • Credit: Cash and cheque payments are recorded here. Types of Cash Book: It records only one aspect of transaction i.e. cash. Single column or Simple cash book: It is known as single column cash book because it contains only one amount column of cash. Format of simple cash book: Cash Book (Single Column) Double (two) column cash book • It is so called because it has two amount columns on both sides cash column and discount column. • Discount column on the debit side represents discount allowed while discount column on the credit side represents discount received. Format of two column cash book Three column (triple column) cash book: • It is so called because it contains three amount columns • Discount column, Cash column, Bank column • Discount column – for discount received and allowed Cash column – cash received and paid • Bank column – money deposited and money withdrawn from bank • When triple column cash book is prepared there is no need for preparing a bank account in ledger. Format of triple column cash book: Cash Book (Triple Column) Concept of Contra Entry • An entry which involves both cash and bank transactions is called a contra entry. • These entries are posted on both sides of a cash book one in bank column and other in cash column, [on opposite sides] • A letter “C” is written in L. F. column showing that the entry is a contra entry. Example Cash withdrawn from bank ₹ 5,000 Entry will be – Cash A/c Dr. 5,000 To Bank A/c 5,000 (Being cash withdrawn from bank) Showing the above entry in three column cash book Cash Book (Triple Column) Petty Cash Book: • Petty means small. 
A book which is used to record petty cash expenses of the business is called a petty cash book. • Petty cash book is maintained by a petty cashier. • The system by which the petty cash book is maintained is known as the “Imprest System”. • Petty cash book is treated either as a part of the double entry system or as a Memorandum Book. Note: Imprest System: Under this system, a fixed sum of money, called the Float, is given to the petty cashier for meeting expenses for a prescribed period. At the end of the period, if all the amount has been used for meeting expenses, then the same fixed sum will be given to the petty cashier for the next period. If any balance is left, then only the amount spent will be reimbursed so as to restore the float for the next period. Example: ₹ 500 are given to the cashier every month. For the month of January, he spends only ₹ 300 and ₹ 200 are left with him. So, for the month of February, he will be given only an additional ₹ 300 to complete ₹ 500. The balance of the petty cash book at the year end is shown as an asset. The petty cash book has columns showing the amount allocated to various expenses. Format of petty cash book Petty Cash Book Trial Balance: • “A trial balance is a statement prepared with the debit and credit balances of the ledger accounts, including cash and bank balances, to test the arithmetical accuracy of the books”. • Trial balance is a statement and not an account, and it is not a part of the double entry system. • As per the double entry system, the total of debits shall always be equal to the total of credits. To check this, a trial balance is prepared. • All the accounts showing either a debit balance or a credit balance are placed in the trial balance, and the debit and credit balances of the accounts are placed in the debit and credit columns respectively. Finally, the debit and credit columns are totalled. • If both sides are equal – the accounts are arithmetically correct. However, there may be some hidden errors. Objectives of Trial Balance: • Check arithmetical accuracy of ledger accounts. • Helps in preparation of final accounts. • Helps in detection of errors. Methods of Preparing Trial Balance: (i) Totals method: Here the totals of the debit and credit columns of the ledger accounts are taken to the trial balance. (ii) Balance method: Here the debit or credit balances of the ledger accounts are taken to the debit or credit column of the trial balance respectively. Format of Trial Balance Specimen of Trial Balance Trial Balance as at _______ Accounting Process-I MCQ Questions 1. The process of recording a transaction in the journal is called : (a) Posting (b) Journalising (c) Tallying (d) Casting (b) Journalising 2. Personal accounts are related to: (a) Assets and liabilities (b) Expenses, losses and incomes (c) Debtors, creditors etc. (d) All of these. (c) Debtors, creditors etc. 3. Goods given away as charity would be credited to : (a) Sales A/c (b) Purchase A/c (c) Charity A/c (d) Cash A/c. (b) Purchase A/c 4. Which of the following statements is true : (a) Building account is a nominal account (b) Outstanding rent account is a non-personal account (c) Every debit has a corresponding credit (d) Incomes are debited. (c) Every debit has a corresponding credit 5. Which one of the following is a personal account? (a) Capital A/c (b) Livestock Account (c) Goodwill Account (d) Outstanding salaries A/c. (d) Outstanding salaries A/c. 6.
Payment of salary is recorded by: (a) Debiting salary A/c, crediting cash A/c (b) Debiting cash A/c, crediting salary A/c (c) Debiting employee A/c, crediting cash A/c (d) Debiting employee A/c, crediting salary A/c. (a) Debiting salary A/c, crediting cash A/c 7. Debit means: (a) An increase in asset (b) An increase in liability (c) A decrease in asset (d) An increase in proprietor’s equity. (a) An increase in asset 8. Journal is a book of: (a) Original entry (b) Secondary entry (c) All cash transactions (d) All non-cash transactions. (a) Original entry 9. Which of the following is a cash transaction? (a) Sold goods (b) Sold goods to a customer (c) Sold goods to a customer on credit (d) Sold goods to a customer on account. (a) Sold goods 10. Received first and final payment of 60 paise in a rupee from the official receiver of Mr. Ram, who owed ₹ 2,000. (a) Discount allowed A/c be debited with ₹ 800 (b) Bad debts recovered A/c be debited with ₹ 1,200 (c) Bad debt A/c be credited with ₹ 800 (d) Bad debt A/c be debited with ₹ 800. (d) Bad debt A/c be debited with ₹ 800 (₹ 2,000 x 40/100 = ₹ 800 is irrecoverable and written off as a bad debt). 11. Patent Right is : (a) Personal Account (b) Real Account (c) Nominal Account (d) Expense Account. (b) Real Account 12. The debts written off as bad, if recovered subsequently, are : (a) Credited to Bad Debts Recovered Account (b) Credited to Debtors Account (c) Debited to Profit and Loss Account (d) None of the above. (a) Credited to Bad Debts Recovered Account 13. Insurance unexpired account is a: (a) Real Account (b) Personal Account (c) Nominal Account (d) None of these. (b) Personal Account 14. A withdrawal of cash from business by the proprietor should be debited to: (a) Drawing Account (b) Capital Account (c) Cash Account (d) Purchase Account. (a) Drawing Account 15. If the total of the debit side of an account exceeds the total of its credit side, it indicates: (a) Debit balance (b) Credit balance (c) Either debit or credit (d) Neither debit nor credit. (a) Debit balance 16. Credit balance of a personal account indicates : (a) Cash balance (b) Amount payable (c) Amount receivable (d) None of the above. (b) Amount payable 17. Cash account will show : (a) Debit or credit balance (b) A credit balance (c) A debit balance (d) None of these. (c) A debit balance 18. The words ‘To Balance b/f’ or ‘By Balance b/f’ are recorded in the ‘Particulars Column’ of an account at the time of posting of _______. (a) An opening entry (b) A closing entry (d) A transfer entry. (b) A closing entry 19. Normally, the following accounts are balanced : (a) Personal accounts and nominal accounts (b) Real accounts and nominal accounts (c) Personal accounts and real accounts (d) All accounts. (c) Personal accounts and real accounts 20. Ledger Book is popularly known as: (a) Secondary book of accounts (b) Principal book of accounts (c) Subsidiary book of accounts (d) None of the above. (b) Principal book of accounts 21. Posting refers to the process of transferring information from _______. (a) Journal to general ledger (b) General ledger accounts to journals (c) Source documents to journals (d) Journals to source documents. (a) Journal to general ledger 22. L. F. (i.e., ledger folio column) in the journal is filled at the time of: (a) Journalising (b) Balancing (c) Posting (d) Casting. (c) Posting 23. The cash book records : (a) All Cash Receipts (b) All Cash Payments (c) All Cash Receipts and Payments (d) Cash and Credit Sale of Goods. (c) All Cash Receipts and Payments 24.
24. Cash book is a:
(a) Subsidiary Book (b) Subsidiary Journal and Ledger (c) Ledger Account (d) None of these
(b) Subsidiary Journal and Ledger

25. Which of the following will be recorded as a contra entry?
(a) Withdrew from bank for personal use (b) A cheque received from X lodged into bank on the same day (c) A cheque received from Y a week earlier lodged into bank (d) A customer directly deposited the money in our bank account
(c) A cheque received from Y a week earlier lodged into bank

26. Cash book does not record:
(a) Credit Purchases (b) Credit sales (c) Outstanding expenses (d) All the above transactions
(d) All the above transactions

27. The balance in the petty cash book is:
(a) An expense (b) A profit (c) An asset (d) A liability
(c) An asset

28. Balance of cash book is posted to the ledger _______.
(a) In the cash account (b) In bank account (c) Nowhere (d) Either (a) or (b)
(c) Nowhere

29. A cheque received and deposited on the same day is recorded in the:
(a) Cash column of the cash book (b) Bank column of the cash book (c) Credited in the cash book (d) Debited in the cash book
(b) Bank column of the cash book

30. Which is entered on the debit side of cash book?
(c) Cash discount allowed
(c) Cash discount allowed

31. In a three column Cash Book:
(a) Only cash column and discount columns are balanced (b) Only bank column and discount columns are balanced (c) Only cash column and bank columns are balanced (d) Cash column, bank column and discount columns are balanced
(c) Only cash column and bank columns are balanced

32. Purchases book is used to record:
(a) All purchases of goods (b) All credit purchases (c) All credit purchases of goods (d) All credit purchases of assets other than goods
(c) All credit purchases of goods

33. Sales returns book is used to record:
(a) Returns of fixed assets sold on credit (b) Returns of goods sold for cash (c) Returns of goods sold on credit (d) Sales of goods
(c) Returns of goods sold on credit

34. Purchase of office furniture on account is recorded in:
(a) General journal (b) Cash book (c) Purchases book (d) Sales book
(a) General journal

35. A periodic total of the purchase book is:
(a) Posted to the debit of the Purchase Account (b) Posted to the debit of the Sales Account (c) Posted to the credit of the Purchases Account (d) Posted to the credit of Sales A/c
(a) Posted to the debit of the Purchase Account

36. Acceptances received and recorded in the Bills Receivable Book are transferred to the ledger:
(a) On the debit side of relevant personal accounts (b) On the credit side of relevant personal accounts (c) Nowhere (d) Either (a) or (b)
(b) On the credit side of relevant personal accounts

37. Closing entries are recorded in:
(a) Cash Book (b) Ledger (c) Journal proper (d) Balance sheet
(c) Journal proper

38. The following is entered in the journal proper:
(c) Cash discount allowed (d) Opening entry
(d) Opening entry

39. Credit purchase of stationery by a stationery dealer will be recorded in:
(a) Purchase Book (b) Sales Book (c) Cash Book (d) Journal proper (General Journal)
(a) Purchase Book

40. A debit note issued to a creditor for goods returned by us is to be recorded in the:
(a) Bills Receivable Book (b) Purchases Book (c) Journal proper (General Journal) (d) Purchases Return Book
(d) Purchases Return Book

41. A Return Inwards Book is kept to record:
(a) Returns of goods sold (b) Returns of anything purchased (c) Returns of goods purchased (d) Returns of anything sold
(a) Returns of goods sold
42. Journal proper is used to record:
(a) All cash purchases of assets other than goods (b) All cash sales of assets other than goods (c) Returns of fixed assets purchased on credit
(c) Returns of fixed assets purchased on credit

43. A second hand motor car purchased on credit from Mohan will be recorded in the _______.
(a) Journal proper (General Journal) (b) Sales Book (c) Cash Book (d) Purchase Book
(a) Journal proper (General Journal)

44. Which of these is a method of preparation of Trial Balance?
(a) Total method (b) Balance method (c) Both (a) and (b) (d) None
(c) Both (a) and (b)

45. If the Trial Balance tallies, it surely means that there are no errors in the books of account. This statement is _______.
(a) True (b) False (c) Partly True (d) None
(b) False

46. In a journal, if it is not stated whether a purchase or sale is on credit or cash, it is assumed to be on _______.
(a) Cash (b) Credit (c) Any of the above (d) None of these
(b) Credit

47. Which of the following is both a principal as well as a subsidiary book?
(a) Sales Book (b) Purchase Book (c) Cash Book (d) Bills Receivable Book
(c) Cash Book

48. Goods worth ₹ 25,000 sold to Amit will be recorded in the journal as:
(a) Debit the sales A/c & credit Amit A/c (b) Credit sales A/c & debit Amit A/c (c) Debit sales A/c & credit cash A/c (d) None of the above
(b) Credit sales A/c & debit Amit A/c

49. Payment of the electricity bill of the proprietor's house will be debited to:
(a) Drawings A/c (b) Cash A/c (c) Electricity A/c (d) None of the above
(a) Drawings A/c

50. If goods worth ₹ 10,000 are stolen, then it shall be recorded in:
(a) Purchase Book (b) Journal Proper (c) Purchase Return Book (d) All of the above
(b) Journal Proper

51. If the business issues a debit note to the seller of such goods, the entry will be passed in:
(a) Purchase book (b) Purchase return book (c) Sales book (d) Sales return book
(b) Purchase return book

52. The total of the purchase book will be posted in the ledger on:
(a) Debit side of purchase A/c (b) Credit side of purchase A/c (c) Credit side of cash A/c (d) None of the above
(a) Debit side of purchase A/c

53. The total of the sales book will be posted in the ledger on:
(a) Debit side of sales A/c (b) Credit side of sales A/c (c) Debit side of cash A/c (d) None of the above
(b) Credit side of sales A/c

54. The total of purchase return will be taken in the ledger on:
(a) Debit side of purchase return A/c (b) Credit side of purchase return A/c (c) Debit side of cash A/c (d) Credit side of cash A/c
(b) Credit side of purchase return A/c

55. The total of sales return will be recorded in the ledger by:
(a) Debiting sales return A/c (b) Crediting sales return A/c (c) Crediting cash A/c (d) Debiting cash A/c
(a) Debiting sales return A/c

56. Which of the following transactions will be recorded in the sales book of Bharat Furnitures & Co.?
(a) Sold Table for cash ₹ 10,000 (b) Sold Chair to Mehra & Co. for ₹ 12,000 (c) Sold an old Typewriter for ₹ 2,000 to Verma & Co. (d) Both (a) and (c)
(b) Sold Chair to Mehra & Co. for ₹ 12,000

57. Which of the following transactions will be recorded in the purchase book of Sharma Cloth House?
(a) Purchased Cloth worth ₹ 2,000 for cash (b) Purchased stationery worth ₹ 200 on credit (c) Purchased cloth worth ₹ 5,000 from Verma Garments (d) None of the above
(c) Purchased cloth worth ₹ 5,000 from Verma Garments

58. _______ is prepared to ensure arithmetical accuracy of the accounts.
(a) Ledger (b) Balance Sheet (c) Trial Balance (d) P & L A/c
(c) Trial Balance
59. Which of the following is NOT included in the Trial Balance?
(a) Closing stock (b) Opening stock (c) Suspense A/c (d) All of the above
(a) Closing stock

60. If the trial balance does not tally, then it is reconciled by opening:
(a) Suspense A/c (b) Reconciliation A/c (c) Miscellaneous A/c (d) None of the above
(a) Suspense A/c

61. The trial balance is a:
(a) Account (b) List (c) Subsidiary book (d) Statement
(d) Statement

62. The overdraft balance in the Savings A/c of the bank will be on the _______.
(a) Debit side of Bank column (b) Credit side of Bank column (c) Neither (a) nor (b) (d) Both (a) and (b)
(b) Credit side of Bank column

63. The closing balance of Wages A/c is transferred to:
(a) P & L A/c (b) Trading A/c (c) Balance sheet (d) None of the above
(b) Trading A/c

64. Which of the following transactions are recorded in the purchase book?
(a) All purchases made during the year (b) Only credit purchases during the year (c) Only credit purchases of goods traded in by the firm (d) None of the above
(c) Only credit purchases of goods traded in by the firm

65. Goods destroyed by fire will be credited to:
(a) Fire A/c (b) Purchases A/c (c) P&L A/c (d) None of the above
(b) Purchases A/c

66. If goods worth ₹ 500 are taken by the proprietor for personal use, the entry will be:
(a) Debit Drawings A/c, Credit Purchases A/c (b) Debit Purchases A/c, Credit Drawings A/c (c) Debit Proprietor A/c, Credit Purchases A/c (d) Credit Proprietor A/c, Debit Stock A/c
(a) Debit Drawings A/c, Credit Purchases A/c

67. The balance in the bank pass book is:
(a) Debit (b) Credit (c) Both Debit & Credit (d) None of the above
(c) Both Debit & Credit

68. If the owner of a business gives his personal car to the business, then which A/c will be debited and credited?
(a) Debit Capital A/c & Credit Car A/c (b) Debit Car A/c & Credit Capital A/c (c) Debit Car A/c & Credit Cash A/c (d) Debit Car A/c & Credit Drawings A/c
(b) Debit Car A/c & Credit Capital A/c

69. If the goods are destroyed by fire and the insurance company accepts the full claim, then the entry will be:
(a) Debit Insurance Co., Credit Cash (b) Debit Insurance Co., Credit Purchase (c) Debit Cash, Credit Purchase (d) Debit Purchase, Credit Cash
(b) Debit Insurance Co., Credit Purchase

70. If Ajay sells his car and brings the proceeds into the business, then the entry will be:
(a) Debit Car, Credit Cash (b) Debit Car, Credit Capital (c) Debit Cash, Credit Capital (d) None of the above
(c) Debit Cash, Credit Capital

71. If goods worth ₹ 1,00,000 are sold at a trade discount of 10%, then the amount to be entered in discount is:
(a) 10,000 (Dr.) (b) Zero (c) 10,000 (Cr.) (d) None of the above
(b) Zero

72. Capital A/c is a:
(a) Real A/c (b) Personal A/c (c) Nominal A/c (d) Both (a) & (c)
(b) Personal A/c

73. Which A/c is credited in case of bad debts?
(a) Cash A/c (c) Debtors A/c (d) P&L A/c
(c) Debtors A/c

74. Goods given on charity will be credited to:
(a) Charity A/c (b) Goods A/c (c) Purchases A/c (d) Sales A/c
(c) Purchases A/c

75. Prepaid Salary is a:
(a) Real A/c (b) Nominal A/c (c) Personal A/c (d) None of the above
(c) Personal A/c

76. _______ is sent to a supplier on returning the goods:
(a) Debit Note (b) Invoice (c) Credit Note (d) Material Receipt
(a) Debit Note

77. Trade discount is:
(a) To be recorded in the discount A/c (b) Not recorded in the books at all (c) Recorded only in case of special cases (d) Not to be considered in determining the net sales price
(b) Not recorded in the books at all
78. The expired portion of capital expenditure is shown in the financial statements as:
(a) An income (b) An expense (c) An asset (d) A liability
(b) An expense
The expired portion of capital expenditure is known as depreciation. Since depreciation is treated as an expense, it is transferred to the debit side of the P/L A/c. So, the expired portion of capital expenditure is shown in the financial statements as an expense.

79. Maintaining a petty cash book is:
(a) Mandatory (b) Necessary (c) Dependent on nature of business (d) All of the above
(c) Dependent on nature of business
Payments in cash of small amounts like travelling, postage, refreshment etc. are petty cash expenses. In a big organisation, it is not practical for the main cashier to handle petty expenses; in a small organisation the number of petty expenses is less. So, maintaining a petty cash book is dependent on the nature of the business.

80. Purchase book records:
(a) All purchases made by the firm (b) All purchases of fixed assets used by the firm (c) Credit purchases of goods dealt in by the firm (d) Cash purchases of goods dealt in by the firm
(c) Credit purchases of goods dealt in by the firm
The purchase book is meant for recording the purchase of goods on credit only, because cash purchases are recorded in the cash book.

81. Sales Book is prepared:
(a) On the basis of Cash Book (b) On the basis of copies of invoices (c) Both (a) and (b) (d) On the basis of sales orders
(b) On the basis of copies of invoices
In the sales book, only credit sales of goods are recorded; the sales book is prepared on the basis of copies of invoices sent to customers.

82. Expenses paid in cash and recorded as assets before they are used are called _______.
(a) Accrued Expenses (b) Interim Expenses (c) Prepaid Expenses (d) Unearned Expenses
(c) Prepaid Expenses
Expenses paid in cash and recorded as assets before they are used are called prepaid expenses. Those expenses which have been paid in advance and whose benefit will be available in future are called prepaid expenses.

83. In which book will cash sales be recorded?
(a) Cash Book (b) Purchase Book (c) General Journal (d) Sales Book
(a) Cash Book
In the sales book, only credit sales are recorded. Cash sales will be recorded in the cash book, because all cash receipts are recorded on the debit side of the cash book.

84. Which of the following transactions would have no impact on owner's capital?
(a) Purchase of land from the proceeds of a bank loan (b) Withdrawal of profits (c) Net loss (d) Cash brought in by owner as additional capital
(a) Purchase of land from the proceeds of a bank loan
Withdrawal of profit is drawings, and drawings are reduced from the owner's capital. Net loss reduces the capital. Cash brought in by the owner as additional capital increases the owner's capital. Thus, these three transactions would have an impact on owner's capital. On taking a bank loan, the following entry will be passed:
Cash A/c Dr.
To Bank Loan A/c
On purchase of land, the following entry will be passed:
Land A/c Dr.
To Cash A/c
On considering these entries, we find that purchase of land from the proceeds of a bank loan would have no impact on owner's capital.

85. Which of the following accounts will be credited when the goods are purchased for cash?
(a) Stock Account (b) Cash Account (c) Supplier's Account (d) Work in progress Account
(b) Cash Account
When the goods are purchased for cash, the following entry will be passed:
Purchase A/c Dr.
To Cash A/c
So, cash A/c will be credited.

86. Which of the following would not be regarded as an asset?
(a) A piece of equipment owned by a business (b) A sum of money owned by the business (c) An inventory of goods that is yet to be sold (d) A building that has been taken on rent by the business for its use
(d) A building that has been taken on rent by the business for its use
A piece of equipment owned by a business is treated as a fixed asset. A sum of money owned by the business is treated as a current asset. An inventory of goods that is yet to be sold is treated as closing stock, which is an asset. A building that has been taken on rent by the business for its use would not be regarded as an asset because the business has no ownership of that building.

87. Withdrawal of cash from bank for official use will result into:
(a) Increase of assets (b) Increase of expenses (c) No impact on assets (d) None of the above
(c) No impact on assets
On withdrawal of cash from bank for office use, the following entry will be passed:
Cash A/c Dr.
To Bank A/c
This entry will have no impact on assets since, on one hand, cash A/c will increase and, on the other hand, bank A/c will decrease.

88. Franchise rights, goodwill and patents are examples of:
(a) Liquid Assets (b) Tangible Assets (c) Intangible Assets (d) Current Assets
(c) Intangible Assets
Franchise rights, goodwill and patents are examples of intangible assets, as intangible assets are those assets which cannot be seen, touched or felt and have no physical form.

89. Which of the following is not an example of a current asset?
(a) Prepaid Expenses (b) Account Receivables (c) Short term securities (d) Unearned Income
(c) Short term securities
Current assets are those that are meant to be converted into cash as soon as possible, for example stock of goods, prepaid expenses and accounts receivable. Short term securities are regarded as liquid assets and not as current assets.

90. The three columns on each side of a three columnar cash book represent:
(a) Real and personal accounts (b) Real and nominal accounts (c) Personal and nominal accounts (d) Real, personal and nominal accounts
(d) Real, personal and nominal accounts
The three columns in a three columnar cash book represent real, personal and nominal accounts:
• Discount column: Nominal account
• Cash column: Real account
• Bank column: Personal account

91. A chronological record of transactions may be found in:
(a) Balance Sheet (b) Trial Balance (c) Ledger (d) Journal
(d) Journal
A chronological record of transactions may be found in the "Journal". The journal records transactions on a day to day basis, as and when they occur.

92. A purchased an old computer costing ₹ 10,000 and incurred ₹ 1,000 on its repairs and ₹ 500 on its packing. He sold the computer at 20% margin on selling price. The sales value will be:
(a) ₹ 12,500 (b) ₹ 11,000 (c) ₹ 14,375 (d) ₹ 13,800
(c) ₹ 14,375
Total cost of computer: 10,000 + 1,000 + 500 = 11,500. Margin is 20% on selling price, which means it is 25% on cost. ∴ Sales value will be 11,500 + (25% of 11,500) = ₹ 14,375.

93. The imprest system pertains to _______.
(a) Purchase book (b) Sales book (c) Cash book (d) Petty cash book
(d) Petty cash book
It is convenient to entrust a definite sum of money to the petty cashier in the beginning of a period and to reimburse him for payments made at the end of the period. Thus, he will again have the fixed amount in the beginning of the new period. Such a system is known as the imprest system of petty cash book.

94. The statement showing balances of all the ledger accounts is known as _______.
(a) Trial balance (b) Balance sheet (c) Bank reconciliation statement (d) Profit and loss account
(a) Trial balance
Trial Balance is a statement which shows the closing balances of all the ledger accounts.

95. A General Cash book acts as a _______.
(a) Journal (b) Ledger (c) Both (d) None
(c) Both
A cash book is a book of prime entry in which cash and bank transactions of a business are recorded. It acts as a book of original entry and a ledger. Hence, it is both journal and ledger.

96. Debit note is related with the _______.
(a) Sales book (b) Sales return book (c) Purchase return book (d) Journal proper
(c) Purchase return book
When the goods or material that have been purchased on credit are returned to the supplier, a debit note is issued to him indicating that his account has been debited with the amount mentioned in the debit note. Thus, the debit note is related with the purchase return book.

97. If assets are increased by 2,000 and liabilities are increased by 1,200, what will be the effect on business equity?
(a) 800 (b) 2,000 (c) 3,200 (d) 1,200
(a) 800
Business Equity = Total assets – Total outside liabilities = 2,000 – 1,200 = 800

98. In case of the Trial Balance, balances come from ___________.
(a) Journal (b) Ledger (c) Balance Sheet (d) Profit & Loss A/c
(b) Ledger
A trial balance is the list of balances, both credit and debit, extracted from the various accounts in the ledger, including cash and bank balances.

99. Cost of goods sold – 60,000; Sales – 95,000; Expenses – 20,000. Gross Profit will be?
(a) 20,000 (b) 15,000 (c) 35,000 (d) 1,75,000
(c) 35,000
Cost of Goods Sold = ₹ 60,000; Sales = ₹ 95,000. Gross Profit = Sales – Cost of Goods Sold = 95,000 – 60,000 = 35,000. Hence, option (c) is correct.

100. The purpose of preparing a ledger is:
(a) To classify all items appearing in the Journal (b) To record the transaction (c) Both (a) and (b) (d) None of these
(a) To classify all items appearing in the Journal
Journalising means recording the transaction, while posting means the classification of all the items of the journal in the respective accounts of the ledger. Hence, the ledger is made to classify all items appearing in the journal.

101. In case of a three columnar cash book, a contra entry affects _______.
(a) Bank account only (b) Cash and discount account (c) Cash account only (d) Cash and bank account
(d) Cash and bank account
A three columnar cash book contains the following three amount columns on each side:
• Discount Column
• Cash Column
• Bank Column
In the case of a contra entry, i.e. a transaction involving both cash and bank accounts, it is entered on both sides of the cash book, one in the cash column and the other in the bank column, though on opposite sides.

102. The closing entry for transfer of Salaries Paid A/c appearing in the Trial Balance will be:
(a) Debit Salaries A/c, Credit P&L A/c (b) Debit Salaries A/c, Credit Trading A/c (c) Debit Trading A/c, Credit Salaries A/c (d) Debit P&L A/c, Credit Salaries A/c
(d) The closing entry for transfer of Salaries Paid A/c appearing in the trial balance will be:
Profit & Loss A/c Dr.
To Salaries A/c

103. Which of the following statements is incorrect with respect to a journal entry?
(a) It is prepared to record all transactions in alphabetical order (b) It should always end with a narration explaining the need for it (c) It should be substantiated by appropriate voucher and authority (d) It should always consist of a debit entry matched by a corresponding credit entry
(a) It is prepared to record all transactions in alphabetical order
• A journal entry should always end with a narration explaining the need for it.
• It should be recorded on the basis of appropriate voucher and authority. As a rule, every transaction has two sides, i.e. a debit and a credit side. The journal is that book of account in which transactions are recorded in a chronological (day to day) order. So, option (a) is incorrect about a journal entry, i.e. to record all the transactions in alphabetical order.

104. Which of the following entries will be entered in the Journal proper?
(a) Sold goods on credit (b) Goods purchased and paid by cash (c) Furniture purchased on credit (d) Purchased goods on credit
(c) Journal proper is used for making the original record of such transactions for which no special journal has been kept in the business. Some entries confined to the general journal (or journal proper) are:
• Opening entries
• Closing entries
• Rectification entries
• Purchase of fixed assets etc.
Therefore, furniture purchased on credit will be entered in the journal proper.

105. Which of the following accounts will be credited for profit on sale of fixed assets?
(a) Depreciation Account (b) Cash Account (c) Fixed Asset Account (d) Profit and Loss Account
(d) Profit and Loss Account
When a fixed asset is sold at a profit, the account to be credited will be the profit and loss account. Suppose furniture of W.D.V. ₹ 10,000 is sold for ₹ 12,000:
Cash/Bank A/c Dr. 12,000
To Furniture A/c 10,000
To P/L A/c 2,000

106. A chronological record of transactions may be found in _______.
(a) Trial balance (b) Journal (c) Balance sheet (d) Ledger
(b) Journal
In the Journal, which is the primary book for recording transactions of a business, transactions are recorded in chronological order.

107. The imprest system pertains to:
(a) Purchase book (b) Cash book (c) Sales book (d) Petty Cash book
(d) Petty Cash book
Under the imprest system of petty cash book, the petty cashier is given a definite sum at the beginning of a certain period. This amount is called the imprest amount.

108. After the preparation of the income statement, it was discovered that accrued expenses of ₹ 1,000 have been ignored and closing inventory has been overvalued by ₹ 1,300. This will have resulted in:
(a) An understatement of net profit of ₹ 2,300 (b) An overstatement of net profit of ₹ 300 (c) An understatement of net profit of ₹ 300 (d) An overstatement of net profit of ₹ 2,300
(d) An overstatement of net profit of ₹ 2,300
If accrued expenses of ₹ 1,000 have been ignored, this will increase the net profit by ₹ 1,000. If closing inventory is overvalued, it will also result in increasing the net profit, by ₹ 1,300. Thus, the net effect will be profit increased by ₹ 2,300.

109. Where does Rent Prepaid come in the Balance Sheet?
(a) Asset side (b) Liability side (c) Does not come in Balance Sheet (d) None of the above
(a) Asset side
Those expenses which have been paid in advance and whose benefit will be available in future are called prepaid expenses. These are shown as assets in the balance sheet.

110. If capital is ₹ 10,000, creditors ₹ 5,000, B/P ₹ 2,000, Machinery ₹ 2,000, Prepaid expenses ₹ 1,000, Land and Building ₹ 5,000, find the value of Debtors:
(a) ₹ 7,000 (b) ₹ 12,000 (c) ₹ 9,000 (d) ₹ 8,000
(c) ₹ 9,000
Total liabilities = Capital + Creditors + Bills Payable = 10,000 + 5,000 + 2,000 = ₹ 17,000
Memorandum Balance Sheet:
Liabilities        Amount | Assets              Amount
Capital            10,000 | Machinery            2,000
Creditors           5,000 | Prepaid Expenses     1,000
Bills Payable       2,000 | Land and Building    5,000
                          | Debtors              9,000
Total              17,000 | Total               17,000
As per the accounting equation, Total Assets should be equal to Total Liabilities.
Total Assets other than Debtors = Machinery + Prepaid Expenses + Land and Building = 2,000 + 1,000 + 5,000 = ₹ 8,000. Thus, the difference of ₹ 9,000 (17,000 – 8,000) is the amount of Debtors.

111. B/P ₹ 20,000, creditors ₹ 10,000, Debtors ₹ 5,000, Investment ₹ 2,00,000, Plant and Machinery ₹ 1,50,000, closing stock ₹ 20,000. Find the capital:
(a) ₹ 3,55,000 (b) ₹ 2,00,000 (c) ₹ 3,44,000 (d) ₹ 3,45,000
(d) ₹ 3,45,000
Total outside liabilities = Bills Payable + Creditors = 20,000 + 10,000 = ₹ 30,000
Total Assets = Debtors + Investment + Plant & Machinery + Closing Stock = 5,000 + 2,00,000 + 1,50,000 + 20,000 = ₹ 3,75,000
Capital = Total Assets – Outside liabilities = 3,75,000 – 30,000 = ₹ 3,45,000

112. What do we give at the time of sales return?
(a) Credit Note (b) Invoice (c) Debit Note (d) All of the above
(a) Credit Note
We give a credit note at the time of sales return. The individual accounts of the customers are credited with the respective amounts, while the periodical total of the sales return book is posted to the debit of the sales return account.

113. A cheque received from a customer and deposited on the same day is recorded in the:
(a) Debit side of cash column in the cash book (b) Credit side of cash column in the cash book (c) Debit side of bank column in the cash book (d) Credit side of bank column in the cash book
(c) Debit side of bank column in the cash book
A cheque received from a customer and deposited on the same day is recorded on the debit side of the bank column in the cash book, as on the debit side all cash receipts are recorded while on the credit side all cash payments are recorded. The cash book thus serves the purpose of a book of original entry as well as that of a ledger account.

114. A building is purchased from office cash for use by the business. Which of these would represent the entry for the transaction?
(a) Debit an asset account, credit a sales account (b) Debit building account, credit cash account (c) Debit the bank account, credit an expense account (d) Debit a liability account, credit an expense account
(b) Debit building account, credit cash account
A building purchased from office cash would be recorded with the entry:
Building Account Dr.
To Cash Account

115. The trial balance of a proprietary concern shows the following balances: Capital ₹ 2,00,000, Income Tax ₹ 12,000, Income Tax paid in advance ₹ 4,000 and Interest on advance payment of tax ₹ 200. What will be the balance of capital at the end?
(a) ₹ 1,83,800 (b) ₹ 1,84,200 (c) ₹ 1,88,000 (d) ₹ 1,84,000
(b) ₹ 1,84,200
Capital 2,00,000
(+) Interest on Advance Tax 200
(–) Income Tax Paid 12,000
(–) Income Tax Paid in Advance 4,000
Total 1,84,200

116. A debit note for ₹ 2,000 issued to Mr. F for goods returned will be accounted in:
(a) Journal proper (General Journal) (b) Purchase return book (c) Bills receivable book (d) Purchase book
(b) Purchase return book
A debit note for ₹ 2,000 issued to Mr. F for goods returned will be accounted in the purchase return book. This book records the details of goods returned by the business organisation to the supplier. When the goods are returned to the supplier, a debit note is sent to him indicating that his account has been debited with the amount mentioned in the debit note.
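The workings in Questions 110, 111 and 115 above all rest on the same accounting equation, Total Assets = Capital + Outside Liabilities. A small Python sketch of that arithmetic, using the figures from the questions themselves (the helper names are ours, purely for illustration):

```python
def missing_debtors(capital, creditors, bills_payable, listed_assets):
    # Q110: the balancing figure on the asset side is Debtors.
    total_liabilities = capital + creditors + bills_payable
    return total_liabilities - sum(listed_assets)

def capital_from_assets(assets, outside_liabilities):
    # Q111: Capital = Total Assets - Outside Liabilities.
    return sum(assets) - sum(outside_liabilities)

# Q110: Debtors = 17,000 - 8,000 = 9,000
print(missing_debtors(10_000, 5_000, 2_000, [2_000, 1_000, 5_000]))
# Q111: Capital = 3,75,000 - 30,000 = 3,45,000
print(capital_from_assets([5_000, 200_000, 150_000, 20_000], [20_000, 10_000]))
```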
117. The closing entry for transferring purchase return appearing in the trial balance will be:
(a) Debit purchase return account, Credit Profit and Loss Account (b) Debit trading account, Credit purchase return Account (c) Debit Profit and Loss account, Credit purchase return Account (d) Debit purchase return account, Credit trading Account
(d) Debit purchase return account, Credit trading Account
The closing entry for transferring purchase return appearing in the trial balance will be: Debit Purchase Return A/c, Credit Trading A/c.

118. Ledger book is popularly known as:
(a) Secondary Book of Accounts (c) Subsidiary Book of Accounts (d) Principal Book of Accounts
(d) Ledger book is also known as the principal book.

119. A Cash Book does not record:
(a) Purchase of furniture (b) Rent paid (c) Salary outstanding (d) Salary paid
(c) Salary outstanding
The Cash Book is the book in which all transactions relating to cash receipts and cash payments are recorded. Salary outstanding will be recorded in the Journal Proper, not in the Cash Book, because it does not involve a cash payment.

120. A suspense account facilitates the preparation of _______ when the _______ has not been tallied.
(a) Trial Balance, Financial Statement (b) Financial Statement, Trial Balance (c) Ledger, Trial Balance (d) Journal, Trial Balance
(b) Financial Statement, Trial Balance
A suspense account is opened when the total of the debit side of the Trial Balance does not match the total of the credit side, and it facilitates the preparation of the financial statements when the Trial Balance has not been tallied.

121. What will be debited and credited if Mr. A started business with cash ₹ 2,00,000?
(a) Mr. A's account and capital account respectively (b) Business A/c and Cash A/c respectively (c) Capital A/c and Cash A/c respectively (d) Cash A/c and Capital A/c respectively
(d) Cash A/c and Capital A/c respectively
Journal entry for the transaction:
Cash A/c Dr. 2,00,000
To Capital A/c 2,00,000

122. Goods returned by a business organization to suppliers are noted in which book of the organization?
(a) Credit note (b) Sales return book (c) Debit note (d) Purchase return book
(d) Purchase return book
The purchase returns book records the details of goods returned by the business organization to the supplier(s). Goods purchased for cash and returned are not recorded in this book. When the goods are returned to the supplier, a debit note is sent to him indicating that his account has been debited with the amount mentioned in the debit note.

123. The balance of petty cash is _______.
(a) Expense (b) Income (c) Asset (d) Liability
(c) Asset
The general ledger account Petty Cash is reported on the balance sheet as a current asset. Often the balance in the Petty Cash account is combined with the balances in other cash accounts (such as checking accounts) and the total is reported on the balance sheet as cash. The Petty Cash account should be replenished just prior to issuing the financial statements so that the amount of currency and coins on hand is equal to the balance in the Petty Cash account. This also ensures that the recent petty cash disbursements are recorded in their appropriate accounts, often expense accounts.

124. A firm has to take a decision about the nature and extent of product differentiation, and hence the level of selling expense, in a _______ market structure.
(a) Monopoly (b) Monopolistic competitive (c) Perfectly competitive (d) Any of the above
(b) Monopolistic competitive
Perfect competition is a market structure in which the following five criteria are met:
• All firms sell an identical product;
• All firms are price takers – they cannot control the market price of their product;
• All firms have a relatively small market share;
• Buyers have complete information about the product being sold and the prices charged by each firm;
• The industry is characterized by freedom of entry and exit.
Perfect competition is sometimes referred to as "pure competition". Since products are identical under perfect competition, decisions about product differentiation and selling expenses arise only under monopolistic competition. Thus option (b) is correct.

125. Trade discount is a:
(a) Real account (b) Liability account (c) Revenue account (d) Not recorded in books of account
(d) Not recorded in books of account
The trade discount is not recorded in the books of account.

126. After preparing the trial balance, the accountant finds that the total of the debit side is short by ₹ 1,000. This difference will be:
(a) Debited to suspense account (b) Adjusted to any of the debit balance accounts (c) Credited to suspense account (d) Adjusted to any of the credit balance accounts
(a) Debited to suspense account
The difference between the debit and credit sides is transferred to the Suspense A/c. If the debit side is short, the Suspense A/c is debited; if the credit side is short, the Suspense A/c is credited.

127. Which of the following is not a column in a three column cash book?
(a) Petty cash column (b) Cash column (c) Discount column (d) Bank column
(a) Petty cash column
The three columns of a three column cash book are:
• Cash Column
• Discount Column
• Bank Column

128. The journal entry for goods ₹ 50 withdrawn by the proprietor for personal use will be:
(b) Debit sales A/c, credit drawings A/c ₹ 50 (c) Debit drawings A/c, credit purchases A/c ₹ 50 (d) Debit purchases A/c, credit expenses A/c ₹ 50
(c) Debit drawings A/c, credit purchases A/c ₹ 50
The journal entry would be:
Drawings A/c Dr. 50
To Purchase A/c 50

129. Credit purchase of cotton by a cotton dealer worth ₹ 10,000 will be entered in:
(a) Bill Receivable Book (b) Sales Book (c) Purchases Book (d) Journal Proper
(c) Purchases Book
Credit purchase of cotton by a cotton dealer will be treated as a purchase of goods and hence entered in the Purchase Book.

130. The correct sequence of the following in the preparation of periodical final statements would be:
1. Preparation of Balance Sheet
2. Preparation of cash flow statement
3. Preparation of Trial Balance
4. Preparation of Profit/Loss statement
The correct option is:
(a) 4, 2, 1, 3 (b) 3, 4, 1, 2 (c) 2, 4, 3, 1 (d) 1, 3, 2, 4
(b) First prepare the trial balance; then, with the help of the trial balance, the Trading and Profit and Loss A/c is prepared; then the Balance Sheet; and at the end the cash flow statement is prepared.

131. The total cost of goods available for sale with a company during the current year is ₹ 12,00,000 and the total sales during the period are ₹ 13,00,000. Gross profit margin of the company is 33.33% on cost. The closing inventory for the current year would be:
(a) ₹ 4,00,000 (b) ₹ 3,00,000 (c) ₹ 2,25,000 (d) ₹ 2,60,000
(c) ₹ 2,25,000
Cost of goods available for sale is ₹ 12,00,000; total sales are ₹ 13,00,000. Gross profit is 33.33%, i.e. 1/3, on cost (given), which is 1/4 of the selling price. Cost of goods sold = 13,00,000 × 3/4 = ₹ 9,75,000. Closing inventory for the current year = 12,00,000 – 9,75,000 = ₹ 2,25,000.
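As a cross-check of the Question 131 working, the same arithmetic in a short Python sketch:

```python
goods_available = 1_200_000
sales = 1_300_000
# A gross profit of 1/3 on cost is 1/4 of the selling price,
# so cost of goods sold is 3/4 of sales.
cost_of_goods_sold = sales * 3 // 4           # 9,75,000
closing_inventory = goods_available - cost_of_goods_sold
print(closing_inventory)                      # 225000, i.e. ₹ 2,25,000
```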
132. On 1st April, 2012, Sethi's ledger furniture account showed a balance of ₹ 2,00,000. On 1st October, 2012 Sethi purchased new furniture by paying ₹ 5,000 and giving old furniture, whose book value on 1st April, 2012 was ₹ 12,000, to the seller. Sethi provides depreciation on furniture @10% per annum on the diminishing balance method. The net value of furniture in Sethi's books as on 31st March, 2013 would be:
(a) ₹ 1,85,000 (b) ₹ 1,83,960 (c) ₹ 1,84,780 (d) ₹ 2,04,400
(c) ₹ 1,84,780
WDV of the old furniture given in exchange on 1st October = 12,000 – (12,000 × 10% × 6/12) = ₹ 11,400, so the new furniture costs 5,000 + 11,400 = ₹ 16,400. The remaining furniture (2,00,000 – 12,000 = 1,88,000) is depreciated for the full year: 1,88,000 – 18,800 = ₹ 1,69,200. The new furniture is depreciated for six months: 16,400 – 820 = ₹ 15,580. Net value = 1,69,200 + 15,580 = ₹ 1,84,780.

133. A chronological record of transactions may be found in:
(a) Balance sheet (b) Trial balance (c) Ledger (d) Journal
(d) Journal
The journal is the primary book in which transactions are recorded chronologically, while transactions are recorded analytically in the ledger.

134. How does an overcasting of the purchase day book affect the cost of sales and profit?
(a) Cost of sales is decreased while profit is increased (b) Cost of sales is increased while profit is decreased (c) Both cost of sales and profit are increased (d) Cost of sales is increased, gross profit is decreased but net profit remains unaffected
(b) Cost of sales is increased while profit is decreased
Overcasting of the purchase day book overstates purchases, which increases the cost of sales; a higher cost of sales in turn reduces profit. So, overcasting of the purchase day book increases the cost of sales and decreases profit.

135. Which one of the following statements is correct?
(a) Capital of the firm is reduced by borrowing (b) When there is no change in proprietor's capital, it is an indication of loss in business (c) Nominal accounts refer to false transactions (d) Real accounts relate to the assets of business
(d) Real accounts relate to the assets of business
"Real accounts relate to the assets of business" — this statement is correct. The Balance Sheet consists of real and personal accounts only, not nominal accounts.

136. Which ones appear in the trial balance on the debit side?
(a) Cash (b) Sales (c) Capital (d) Sales returns
(i) a, b, c (ii) b, c (iii) a, d (iv) None of these
(iii) a, d
Items which are recorded on the debit side of a trial balance:
• Assets
• Expenses and losses
Thus cash is an asset having a debit nature and sales return is a kind of loss having a debit nature. Hence, option (iii) a, d is correct.

137. The inventory book is used to view:
(a) Group Inventory (b) Stock Items (c) All of these (d) None of these
(c) All of these
The inventory book is used to see group inventory as well as stock items. Thus, option (c) is correct.

138. The imprest system pertains to:
(a) Purchase book (b) Sales book (c) Cash book (d) Petty cash book
(d) Petty cash book
The petty cash book is maintained on the Imprest System.

139. The statement showing balances of all the ledger accounts is known as:
(a) Trial balance (b) Balance Sheet (c) Bank reconciliation statement (d) Profit and loss account
(a) Trial balance
The Trial Balance is the statement that shows the balances of all ledger accounts in one place.

140. Balance of Petty Cash Book is transferred to _______.
(a) Balance Sheet (b) P/L A/c (c) Cash Book
(a) Balance Sheet
The balance of the Petty Cash Book is transferred to the balance sheet. Petty cash appears within the current assets section of the balance sheet.

141. What comes on the same side of the Trial Balance?
(a) Capital and Drawing (b) Furniture and Liability (c) Asset and Expense (d) None
2023-03-31 06:06:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3363226056098938, "perplexity": 10289.433266845746}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00778.warc.gz"}
https://www.neuraldump.net/2015/12/modular-addition-rule-proof/
Addition in modular arithmetic is much simpler than it would first appear thanks to the following rule:

(a + b) mod n = ((a mod n) + (b mod n)) mod n

This says that if we are adding two integers a and b and then calculating their sum modulo n, the answer is the same as if we added a modulo n to b modulo n and then calculated that sum modulo n. Note that this equation can be extended to include more than just two terms.

Example

To show how this saves time and the use of a calculator, let's look at a simple example. Suppose we needed to add a list of large numbers, but were only interested in the value of the units digit, aka the ones column, of the solution. Take, for instance:

4,592 + 53,868 + 91,177 + 2,340

That problem is too large to do in my head, but I could work it out on paper fairly easily. I could also hunt up a calculator, but there is a quicker way still using the rule above. In a decimal system, the units digit is really modulo 10 in disguise. Counting from 0: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, … So when we are looking at the units digit of a decimal number of any length, we are really calculating that number modulo 10. BUT, the addition rule tells us that we don't even need to add up the whole list of numbers as they are to calculate the sum modulo 10. Instead, we first calculate each term modulo 10, and then sum those numbers. So,

2 + 8 + 7 + 0 = 17

But wait! The sum is greater than 9, which is the reason for having to take the sum modulo 10 one last time:

17 mod 10 = 7

If you'd worked the original addition problem out by hand or by calculator, you would have come to the same answer. The units digit is 7.

### Proof

If you've read any of my math-related posts, you already know that it is not enough for me to know equations, but I also need to feel like I know how and why they work. In keeping with that neurotic tendency, I offer a proof for the modular addition rule. Again, our equation is:

(a + b) mod n = ((a mod n) + (b mod n)) mod n

and our goal is to prove that the two sides of the equals sign are indeed equal to each other. The key to doing this is the quotient remainder theorem, aka the Euclidean division algorithm. It tells us that we can rewrite a and b as:

a = q1·n + r1, where 0 ≤ r1 < n; this means that a mod n = r1
b = q2·n + r2, where 0 ≤ r2 < n; this means that b mod n = r2

Examining the left hand side of the addition equation first, we have:

(a + b) mod n = (q1·n + r1 + q2·n + r2) mod n = ((q1 + q2)·n + r1 + r2) mod n

Since we are taking mod n, we can eliminate multiples of n, leaving us with:

(a + b) mod n = (r1 + r2) mod n

Now we move over to the right hand side of the equation.

((a mod n) + (b mod n)) mod n = (r1 + r2) mod n

Well, that was easy! The two sides of the equation are equal. The proof is done.

There is a similar rule and proof for modular subtraction. If I can keep my momentum, I will post it soon. But if you understood this one, working out the subtraction proof should be no problem at all.
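For readers who like to verify identities by brute force, here is a minimal Python sketch of the rule; the list of numbers is just the illustrative one from the example above:

```python
import random

def mod_add(a: int, b: int, n: int) -> int:
    """The modular addition rule: (a + b) mod n == ((a mod n) + (b mod n)) mod n."""
    return ((a % n) + (b % n)) % n

# Units-digit example: fold the list together term-by-term modulo 10.
terms = [4592, 53868, 91177, 2340]
units = 0
for t in terms:
    units = mod_add(units, t, 10)
print(units)            # 7
print(sum(terms) % 10)  # 7, the same answer without the shortcut

# Randomized check that both sides of the identity always agree.
for _ in range(10_000):
    a, b = random.randrange(10**9), random.randrange(10**9)
    n = random.randrange(2, 1000)
    assert (a + b) % n == mod_add(a, b, n)
```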
2019-07-19 08:19:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257375359535217, "perplexity": 265.41234492813766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526153.35/warc/CC-MAIN-20190719074137-20190719100137-00069.warc.gz"}
https://xml.jips-k.org/pub-reader/view?doi=10.3745/JIPS.02.0128
# A Cost-Optimization Scheme Using Security Vulnerability Measurement for Efficient Security Enhancement

Jun-Young Park* and Eui-Nam Huh*

## Abstract

The security risk management used by some service providers is not appropriate for effective security enhancement, because their security risk management methods do not take into account the opinions of security experts, the type of service, and security vulnerability-based risk assessment. Moreover, the security risk assessment method, which has a great influence on the risk treatment method in an information security risk management model, should be based on security vulnerabilities rather than security threats so that risk can be assessed at a fine granularity. Therefore, we propose an improved information security risk management model and methods that consider vulnerability-based risk assessment and mitigation to enhance security controls under a limited security budget. Moreover, we can evaluate security cost allocation strategies based on a security vulnerability measurement that considers security weights.

Keywords: Attack Graph, Cloud Security, Cost Optimization, Vulnerability Measurement

## 1. Introduction

With the development of new information and communication technologies (ICTs), such as autonomous vehicles, drones, and artificial intelligence, modern life is becoming increasingly convenient in nearly all areas. ICT is not only adopted for improving the quality of life; industry is also trying to introduce new ICTs with Industry 4.0. According to the threat report of THALES [1], a global security company, 63% of the 1,100+ senior security executives surveyed reported that their organizations apply new technologies without considering their level of security. The report shows that developing ICT without considering security technology may cause irreversible security incidents. Therefore, we need to consider security for safer ICT environments.

Most companies and organizations are trying to improve their ICT security environment and increase their security budget annually [2]. However, not all efforts allocated in the security cost are effective. According to a survey report [3], the ICT security budget is decided by boards of directors and C-level executives who lack expertise and knowledge about ICT security, and 81% of respondents and 42% of the IT security practitioners replied that the ICT security budget is less than adequately allocated. Moreover, 53% of respondents rate their organization's annual budgeting process for IT security activities as complex, and only 32% of respondents say the budget is appropriate based on an assessment of their organization's security risks. For these reasons, we need systematic information security risk management methods for efficient security risk management.

For systematic and efficient security risk management, we first need to understand information security risk management [4]. Security cost allocation, one of the risk treatment methods in information security risk management, is determined based on risk assessment. Therefore, we should consider risk assessment, risk treatment, monitoring, and review for security enhancement. Security risk assessment methods consist of qualitative methods and quantitative methods. In Table 1, we summarize major risk assessment methods for information security [5].
Major information security risk assessment methods

Recently, existing risk assessment methods have tended to center around probability and damage, using assessment factors such as frequency of threats, asset consequence, cost of the resource, etc. However, the target of security risk assessment should be the security vulnerability, according to the attack paths [6-8] of the Open Web Application Security Project (OWASP), as shown in Fig. 1. The attack flow defined by OWASP starts with attackers attacking the vulnerabilities of service providers. An attack exploiting a vulnerability should be prevented or mitigated through a security control, and if it passes the security control, a technical or business impact occurs. Therefore, to enhance security, vulnerabilities are minimized by improving the related security controls. Moreover, a security threat consists of one or more security vulnerabilities, which should be taken into account in terms of systematic or continuous attacks on security vulnerabilities. For this reason, security vulnerabilities should be evaluated rather than security threats.

OWASP attack flow.

Additionally, threat-oriented security risk assessment can evaluate the same vulnerability in duplicate across security attack techniques. For example, data breach and data loss are different security threats. However, both threats involve the same security vulnerability, such as hijacking administrator accounts. Thus, existing security evaluation methods have the problem of repeatedly evaluating the same security vulnerability under different security threats. Therefore, we need an accurate vulnerability-based security risk assessment method without duplicate risk evaluation.

Existing security cost optimization methods [4,9,10] are aimed at calculating the security budget considering the security environment or maximizing benefit. However, as shown in Fig. 2, a security control performs activities in order to prevent or mitigate security attacks based on vulnerabilities. Therefore, security cost should be allocated to each security control rather than merely calculating a total security budget for strengthening security. In addition, the importance of a security control differs depending on the characteristics of the provided service. Therefore, security cost optimization strategies should be determined by considering the weight of each security control according to the provided service. Finally, according to [2], most companies determine their security budget within 3% of their overall budget. However, since existing security cost optimization methods do not consider a limited security budget but only calculate the optimal allocation budget for the security enhancement effect, a method for optimal security cost allocation under a limited budget is needed.

This paper is organized as follows: In Section 2, related methods are examined. In Section 3, the proposed security cost optimization model is explained. Section 4 describes the simulation of the proposed method. Finally, Section 5 presents the conclusions of this research.

## 2. Related Works

As mentioned in the introduction, security risk evaluation should be conducted as a vulnerability-based evaluation rather than a threat- and impact-based evaluation, following the OWASP attack flow. Recently, attack graphs [11,12], attack trees [13,14], and attack defense trees [15,16] have been most commonly used for security evaluation based on security attacks.
The attack graph lists the vulnerabilities of an attack/threat that reach the attack target and helps specify the optimal attack route. However, attack graphs and trees address only one attack goal and do not consider the correlation between attack nodes. We need to consider all of the potential attacks against the system from an attack perspective.

In an optimized security cost scheme, we need to consider the minimization of security vulnerability or asset/business impact based on the weight of each security control with regard to service type. To define the weight of security controls according to service type, we refer to several decision-making methods such as the analytic hierarchy process (AHP) and the Delphi method [17]. Tian et al. [18] suggested a novel threat evaluation model using the AHP, and Na et al. [19] proposed a definition of security control weights according to service type using the AHP.

The purpose of most security enhancement schemes [20-22] is to propose a cost optimization scheme for the best benefit. These schemes have been researched and applied in many companies and IT industries [22]. Unfortunately, most of these schemes do not take into account the characteristics of various computing environments or target services. In addition, each of these schemes considers different elements when determining security cost allocation. Most of these schemes use two basic elements: the probability of an event occurring and the losses that it may incur. This is called the annualized loss expectancy (ALE) or estimated annual cost (EAC). These elements are calculated for an event by simply multiplying its probability by its potential losses. We can predict the benefit of any cost allocation strategy using return on investment (ROI) or return on security investment (ROSI) [23] based on ALE, which shows the loss variation between before and after cost allocation. Recently, several security cost allocation studies using ROSI have been introduced [23-25]. However, it is difficult to calculate ALE and the risk mitigation value. Therefore, we need a systematic security cost optimization model based on a vulnerability analysis.

According to [3], ROSI and TCO (total cost of ownership) are the most commonly used security cost allocation evaluation methods currently. TCO is a term for concepts that consider the benefit of security costs in enterprises. In other words, the overall cost of using a security system is the combined cost of software/hardware purchases, maintenance, employee training, and staffing. However, TCO cannot be an objective security cost optimization model because the results vary depending on the considered factors, even in the same environment. Finally, Gordon and Loeb [20] proposed a security cost optimization model based on an analysis of security threat probability and vulnerability probability. However, the model proposes a security cost optimization method without considering the features of the system/service and a limited security budget. In addition, the security threat probability and vulnerability probability are very difficult to calculate in practice.

Different security cost optimization methods and evaluation methods have been researched. However, there has been minimal research regarding a security-control-based cost optimization model using security vulnerabilities and security controls. Consequently, we need a security cost allocation method that minimizes security vulnerability, focused on security controls, within a limited budget.
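Since ALE and ROSI recur throughout this discussion, a minimal sketch of how they are commonly computed may be helpful. This is a generic illustration with assumed figures, not the notation or data of the works cited above:

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss expectancy: expected yearly loss from one risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, investment: float) -> float:
    """Return on security investment: risk reduction net of cost, per unit cost."""
    return ((ale_before - ale_after) - investment) / investment

# Assumed example: an investment of 20,000 reduces the expected single loss
# from 500,000 to 150,000 for an incident occurring 0.2 times per year.
print(rosi(ale(500_000, 0.2), ale(150_000, 0.2), 20_000))  # 2.5
```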
## 3. Security Cost Optimization Model

As mentioned in the introduction, existing security risk management models that do not properly consider security risk evaluation have problems, such as differing perspectives between vulnerability evaluation and security cost allocation evaluation. Therefore, we propose a novel security risk management model using an advanced attack graph (AAG) and security vulnerability measurement (SVM) to optimize the cost of security infrastructure. The model, which consists of a security risk evaluation method and a security cost allocation method, is based on the risk management standard [26] established by ISO/IEC, as shown in Fig. 2.

Security cost optimization model.

The procedure of the security cost optimization model (Fig. 3) is performed in the following steps.
(1) Service profile: identify factors of the service environment and security statements, such as service type, security controls, etc.
(2) Vulnerability identification: identify security attack types and their composition, such as security threats, vulnerabilities, etc.
(3) Vulnerability evaluation: draw the AAG and estimate quantitative security risk using the security vulnerabilities defined in the previous phase.
(4) Establishing a cost allocation strategy: establish an optimal security cost allocation strategy considering security weights and the constrained budget.
(5) Cost allocation strategy evaluation: evaluate the security cost allocation strategy by comparing vulnerabilities before and after allocation.
(6) Monitoring and review: monitor and analyze all steps of the security risk management procedure.

Procedure of security cost optimization model.

In this paper, we will focus on the detailed procedures of security risk assessment and information security cost allocation, excluding the monitoring and review of the sixth phase. Moreover, this economical information security risk management model adopts the following assumptions:
• One security threat $V_1$ consists of several security vulnerabilities $V_{11}, V_{12}, V_{13}, \ldots$
• The security controls are independent of each other with respect to investment: if we invest in security control $SC_1$, it does not affect the vulnerability of other security controls $SC_2, SC_3, \ldots$
• Each security vulnerability has to match one security control. Moreover, one security control may have more than one vulnerability.
• The currency used in the examples is irrelevant; thus, we consider the values as plain numbers.

In Table 2, we summarize all the notations used in this paper.

Process of economical information security risk management

We propose a new security risk management model that considers the features of a service and a limited security budget. The proposed model follows five steps, and the following sections will provide a detailed description of each step.

Notations

##### 3.1 Step 1. Define the Service Environment Parameters

This step identifies service types and environments, defines the security controls, and defines the weight of each security control for each service type.

Service type

First, we define the type of service, for which we will define security controls and their weights. The service type can be categorized by ICT area (e.g., healthcare, Internet of Things, artificial intelligence, etc.) or by service objective (e.g., storage service, web application service, web desktop service, etc.). We may define service types using a variety of criteria.
Security controls

The security controls constitute the service provider's classification of security functions or technologies. The i-th security control $SC_i$ is a part of the security technology, such as storage, process, network, access control, and audit [27], of the corresponding service provider. Moreover, a vulnerability has to match one or more security controls, and the security cost allocation method allocates cost only to security controls. The investment cost $cost_i$ in security control $i$ includes several costs [24]: (1) implementation cost $cost_{imp_i}$, (2) installation cost $cost_{inst_i}$, (3) maintenance cost $cost_{main_i}$, and (4) training cost $cost_{train_i}$:

$$cost_i = cost_{imp_i} + cost_{inst_i} + cost_{main_i} + cost_{train_i}$$

Weights of security controls

The security controls have different weights depending on the characteristics of the service type. For example, for a web service it is important to ensure availability of the service, access control, and protection of personal identification information; in a storage service, however, data encryption, data backup, and privilege management are important. Consequently, the relative importance of a security control depends on the characteristics of the service type. In 2014, to determine the weights of security controls, Na et al. [19] proposed a method to calculate the weight of a security control depending on the service type, based on an AHP hierarchy model. Weight decision approaches for security controls include several decision-making methods such as the AHP model and the Delphi technique.

##### 3.2 Step 2. Identification of Security Vulnerabilities

In this step, we identify the potential security threats and vulnerabilities that can occur in the corresponding service or system, calculate the correlation values (CVs) through a correlation analysis between vulnerabilities, and define the mitigation rate according to security investments. These identified vulnerabilities and related variables are used for drawing the AAG and evaluating vulnerabilities (in the next step).

Threats and vulnerabilities

Most security attacks involve several sub-processes in order to achieve an attack goal. From a service provider's perspective, we have analyzed and defined the sub-processes of attacking vulnerabilities that are executed in order within one security threat. Therefore, we identify and respond to all of the potential security attacks on the service provider and define vulnerabilities and threats against the attack sub-processes. A security threat consists of multiple vulnerabilities that operate in a regular sequence.

Correlation value between parent and child vulnerability

Since the sub-processes of attack techniques proceed in order, the parent and child vulnerabilities within one security threat affect each other. For example, if the first attack sub-process is successful, the second attack sub-process is easier to execute. Therefore, in the security risk assessment, the correlation between attack nodes should be considered. For accurate security threat assessment, CVs are important and have a great deal of influence.
However, deriving CVs is not the subject of this paper. We omit the description of CV derivation because this paper's contribution is an accurate security evaluation method using the AAG; the CVs are treated as given. Security vulnerability mitigation ratio The security vulnerability mitigation ratio is the rate at which security vulnerabilities are mitigated when a company or organization invests its security budget. This ratio differs for each security control and also depends on the service environment or service type. In general, it can be estimated from security-enhancement data (historical or statistical data) of past security budget investments at the company or organization. ##### 3.3 Step 3. Evaluation of Security Vulnerabilities This step, which determines the investment cost for each security control, is the most important step of the proposed scheme for obtaining an accurate security risk assessment and selecting an effective security cost allocation strategy. It consists of three processes: (1) draw the AAG, (2) evaluate the values of the vulnerabilities, and (3) calculate the SVM. We describe the AAG, which shows the overall flow of a security attack in terms of security controls, security vulnerabilities, and threats, and we describe how the vulnerability values are estimated and how the vulnerability measurement is processed. In exploring the security cost optimization model, this paper is limited to proposing an AAG design, establishing an optimal security cost allocation strategy, and discussing how to evaluate the cost allocation strategy. Quantitative security risk assessment methods [28-31] and security control weight decision methods [19] are beyond the scope of the present paper; we simply reuse existing methods for both. Design the advanced attack graph The AAG shows all known attacks and sub-processes that can occur on the system, and it clarifies how the duplicate-vulnerability problem is resolved. The AAG is designed around attack techniques and the security controls that act as countermeasures to them. Repetition removal of vulnerabilities As mentioned above, each security attack has a different goal and attack process, but some sub-processes are common to most attack techniques. Therefore, in service risk evaluation, a sub-process shared by several attacks can be counted twice, producing incorrect assessment results. For example, data breach and data loss are different security threats, yet both involve the same security vulnerability, such as hijacking an administrator account. Existing security evaluation methods thus repeatedly evaluate the same security vulnerability under different security threats. The important point is that duplicated vulnerabilities must not be double-counted in the security risk assessment and the security investment function, so duplicate vulnerabilities have to be removed. The removal of a duplicate vulnerability in an AAG is shown in Fig. 4. Match security controls with vulnerabilities After eliminating the duplicate vulnerabilities, we classify and match the vulnerabilities with the relevant security controls; a small sketch of this de-duplication and matching step is given below.
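As an illustration only, the following minimal Python sketch shows one way the de-duplication and matching step could be implemented. The threat lists, the shared node identifier, and the control mapping are hypothetical stand-ins, not data from the paper.

```python
# Hypothetical sketch of AAG de-duplication and control matching.
# Threats are ordered lists of vulnerability node IDs; a node that
# appears under several threats must be kept only once in the AAG.

threats = {
    "data_breach": ["V11", "V12", "V13"],
    "data_loss":   ["V21", "V13"],        # V13 is shared (e.g., account hijack)
}

# Map each unique vulnerability to its responsible security control.
control_of = {"V11": "SC2", "V12": "SC4", "V13": "SC1", "V21": "SC2"}

def unique_nodes(threats):
    """Collect every vulnerability exactly once (repetition removal)."""
    seen, nodes = set(), []
    for steps in threats.values():
        for v in steps:
            if v not in seen:
                seen.add(v)
                nodes.append(v)
    return nodes

nodes = unique_nodes(threats)
matched = {v: control_of.get(v) for v in nodes}
unmatched = [v for v, sc in matched.items() if sc is None]

print(nodes)      # ['V11', 'V12', 'V13', 'V21'] -- V13 kept once
print(unmatched)  # vulnerabilities not covered by any control (weak points)
```

Any vulnerability left without a control in `matched` is exactly the kind of unprotected weak point the next paragraph describes.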
This matching process also reveals weak points in the security posture: a vulnerability that is not matched to any security control indicates an unprotected area. Example of an AAG structure. Draw the AAG The AAG is composed of Normal, AND, and OR structures (Table 3). Using these three structures, the graph can be designed as shown in Fig. 5, based on the security controls matched to the interrelated vulnerabilities. Each vulnerability node stands in a parent-child relationship because the attack process proceeds sequentially. In the OR structure, if more than one child node exists, the parent vulnerability is threatened if even one child node succeeds. In the AND structure, the parent node is threatened only if all child nodes succeed in the attack. Types of AAG structure Because a security attack is conducted in several sequential steps, a child node affects its parent node. In this paper, the influence between child and parent nodes is defined as the CV. Therefore, we can calculate the vulnerability value $vV_{ac}$ of a vulnerability node in an AAG, including the CV, from the initial vulnerability value $vV_{ac}^{0}$, as in (1)-(3). ##### (1) $$\text{Normal structure: } vV_{ac}=vV_{ac}^{0}+\left(vV_{child} \cdot CV\right)$$ ##### (2) $$\text{OR structure: } vV_{ac}=vV_{ac}^{0}+\sum\left(vV_{child_{n}} \cdot CV\right)$$ ##### (3) $$\text{AND structure: } vV_{ac}=vV_{ac}^{0}+\frac{\sum\left(vV_{child_{n}} \cdot CV\right)}{\text{number of } V_{child}}$$ In a normal structure, the vulnerability node $V_{ac}$ is affected by the CV-weighted value of a single child node; in the OR structure, by the CV-weighted values of all child nodes; and in the AND structure, by the average of the CV-weighted values of the child nodes. The AAG is designed from the attack techniques and sub-processes against the CSP; it helps to elucidate the security status in terms of security threats, vulnerabilities, and related security controls. Evaluation of quantitative vulnerability After the duplicate vulnerabilities have been eliminated in the AAG, the remaining vulnerabilities are quantitatively assessed. To calculate the vulnerability values, an existing quantitative security risk assessment method is used; many previous studies [28-31] have proposed ways to calculate vulnerability values, and we rely on those studies here. Security vulnerability measurement The SVM is a criterion of security vulnerability evaluation that considers both the weight of the security control and the vulnerability values. The weight of the security control must be considered in the SVM because its impact differs by service type. To calculate the SVM, three processes must be performed first: (i) design the AAG, (ii) define the weights of the security controls, and (iii) calculate the vulnerability values. We calculate the total vulnerability value of each security control and then the SVM of each control from its corresponding weight. Consequently, the SVM shows a vulnerability scale for each security control that reflects the weights of the security controls, and we can analyze the security enhancement benefit using the SVM. We classify the vulnerability values by security control because the budget will be allocated at the level of security controls; a small numerical sketch of the propagation rules (1)-(3) and of this per-control aggregation follows. 
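As a minimal sketch (not the authors' implementation), the following Python fragment evaluates the three propagation rules on a tiny hand-made graph and groups the results by security control, anticipating the aggregation formalized in (4)-(6) below. All node values, CVs, and control assignments here are invented for illustration.

```python
# Minimal sketch of the AAG propagation rules (1)-(3).
# Each node: initial value vV0, structure type, children, and control.
CV = 0.1  # correlation value, assumed uniform for this toy example

nodes = {
    "A": {"vV0": 40.0, "kind": "normal", "children": ["B"],      "sc": "SC1"},
    "B": {"vV0": 20.0, "kind": "or",     "children": ["C", "D"], "sc": "SC2"},
    "C": {"vV0": 10.0, "kind": "normal", "children": [],         "sc": "SC2"},
    "D": {"vV0": 15.0, "kind": "normal", "children": [],         "sc": "SC1"},
}

def vV(name, memo={}):
    """Vulnerability value after CV propagation from the children."""
    if name in memo:
        return memo[name]
    n = nodes[name]
    kids = [vV(c, memo) * CV for c in n["children"]]
    if not kids:
        val = n["vV0"]
    elif n["kind"] == "normal":
        val = n["vV0"] + kids[0]                  # rule (1)
    elif n["kind"] == "or":
        val = n["vV0"] + sum(kids)                # rule (2)
    else:  # "and"
        val = n["vV0"] + sum(kids) / len(kids)    # rule (3)
    memo[name] = val
    return val

# Per-control totals vV(SC_i), the input to the SVM formulas below.
per_control = {}
for name, n in nodes.items():
    per_control[n["sc"]] = per_control.get(n["sc"], 0.0) + vV(name)
print(per_control)
```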
Formally, $vV\left(SC_{i}\right)$ is the sum of the vulnerability values associated with security control i, as shown in (4). In (5), $SVM_{i}$ is the product of $vV\left(SC_{i}\right)$ and $W_{i}$, and the SVM of the whole security service is the sum of $SVM_{i}$ over all security controls, as in (6). Therefore, through the SVM we can verify the security status of the corresponding service while accounting for the weights of the security controls. ##### (4) $$vV\left(SC_{i}\right)=\sum_{x=1}^{n} \sum_{y=1}^{m} vV_{xy}, \text{ where } vV_{xy} \in SC_{i}$$ ##### (5) $$SVM_{i}=vV\left(SC_{i}\right) \cdot W_{i}$$ ##### (6) $$SVM=\sum_{i=1}^{k} SVM_{i}$$ ##### 3.4 Step 4. Establish Optimal Security Cost Allocation Strategy In this step, we establish a security investment strategy that allocates budget to each security control, based on the weights of the security controls, so as to minimize the SVM. Security cost allocation function When security budget is allocated to a security control, the related vulnerabilities are mitigated and the vulnerability values of the corresponding nodes decrease. Because vulnerability nodes carry CVs along their parent-child relationships, the resulting vulnerability values depend on which security control the budget is allocated to. Therefore, we define the cost allocation functions of this security enhancement process using three components: (i) the initial vulnerability value $vV_{xy}^{0}$ of a vulnerability node $V_{xy}$, (ii) the vulnerability mitigation ratio $M\left(z_{i}, vV_{xy}^{0}\right)$ of the related security control $SC_{i}$, and (iii) the CV-weighted vulnerability values of the child nodes, $F_{child} \cdot CV$. The post-investment vulnerability functions for each AAG structure are as follows: ##### (7) $$\text{Normal: } F_{xy}\left(z_{i}\right)=vV_{xy}^{0} \cdot M\left(z_{i}, vV_{xy}^{0}\right)+F_{child} \cdot CV$$ ##### (8) $$\text{OR: } F_{xy}\left(z_{i}\right)=vV_{xy}^{0} \cdot M\left(z_{i}, vV_{xy}^{0}\right)+\sum\left(F_{child_{n}} \cdot CV\right)$$ ##### (9) $$\text{AND: } F_{xy}\left(z_{i}\right)=vV_{xy}^{0} \cdot M\left(z_{i}, vV_{xy}^{0}\right)+\frac{\sum\left(F_{child_{n}} \cdot CV\right)}{\text{number of } V_{child}}$$ To evaluate the vulnerability of each security control after an investment, we again classify the vulnerability values by security control. The total vulnerability value after investing in security control i, denoted $F\left(SC_{i}\right)$, is the sum of the post-investment vulnerability values of the nodes associated with security control i. The SVM after budget allocation is then measured by applying the weight of each security control, and the sum of the SVMs of all security controls is the total SVM of the service. Based on the total SVM of the corresponding service, we can therefore assess both the security vulnerability and the investment. Establish a security investment strategy We can now calculate $F_{all}\left(z_{i}\right)$ and $SVM_{i}$ for each security control according to the security cost allocation function. Using the parameters described above, we can formulate an optimal security investment strategy under a limited budget based on optimization theory (such as the Lagrange multiplier method); a small sketch of the post-investment functions (7)-(9) appears below. 
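Purely as an illustrative sketch, the code below implements the post-investment functions (7)-(9) with the mitigation ratio left as a pluggable callable. Note one interpretive assumption: the mitigation function of Section 4.2 is used here in ratio form, $1/(\alpha z+1)^{\beta}$, so that $vV^{0} \cdot M$ has the same units as $vV^{0}$; the parameter values are invented.

```python
# Sketch of the post-investment functions (7)-(9).
# mitigation(z) returns a ratio in (0, 1]; assumed form 1/(alpha*z+1)**beta.
def make_mitigation(alpha, beta):
    return lambda z: 1.0 / (alpha * z + 1.0) ** beta

def F_xy(vV0, z, mitigation, child_F, kind, CV=0.1):
    """Vulnerability value of one node after investing z in its control.

    child_F: list of post-investment values of the child nodes.
    kind: 'normal', 'or', or 'and' (the AAG structure of the node).
    """
    base = vV0 * mitigation(z)                 # mitigated own value
    kids = [f * CV for f in child_F]
    if not kids:
        return base
    if kind == "normal":
        return base + kids[0]                  # eq. (7)
    if kind == "or":
        return base + sum(kids)                # eq. (8)
    return base + sum(kids) / len(kids)        # eq. (9)

# Example with invented numbers: a leaf child and an OR parent.
m = make_mitigation(alpha=0.08, beta=0.5)
child = F_xy(vV0=30.0, z=50.0, mitigation=m, child_F=[], kind="normal")
parent = F_xy(vV0=50.0, z=100.0, mitigation=m, child_F=[child], kind="or")
print(round(child, 3), round(parent, 3))
```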
To establish an optimal security investment strategy, we define the security cost allocation problem that minimizes the SVM, solved with the Lagrange multiplier method, under a limited security budget Z as follows: ##### (10) $$\text{Minimize } SVM=\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(F_{xy}\left(z_{i}\right) \cdot W_{i}\right) \\ \text{subject to } \sum_{i=1}^{k} z_{i}=Z, \text{ where } z_{i} \geq 0 \text{ for } i=1, \ldots, k$$ With this formulation, the Lagrange multiplier method yields the investment in each security control that minimizes the SVM, and different security investment strategies can be compared and evaluated on the basis of the SVM. ##### 3.5 Step 5. Evaluation of Security Cost Allocation Strategy In this section, we analyze security cost allocation strategies from several perspectives: (i) the total SVM after cost allocation, (ii) the efficiency of the security cost allocation, (iii) the percentage of vulnerability decrease, and (iv) the percentage of security improvement. The functions used for these analyses are defined below. The SVM is the sum of the weighted vulnerability values over all vulnerabilities; (11) gives it before allocation (first line) and after allocation (second line), and we compare the change in SVM caused by the cost allocation. ##### (11) $$SVM=\left\{\begin{array}{l} \sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(vV_{xy} \cdot W_{i}\right) \\ \sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(F_{xy}\left(z_{i}\right) \cdot W_{i}\right) \end{array}\right.$$ To analyze the effect rate of the allocated cost, we define the effect function $F_{eff}(Z)$ as in (12): the change in SVM divided by the total cost Z. ##### (12) $$F_{eff}(Z)=\frac{\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left\{\left(vV_{xy}-F_{xy}\left(z_{i}\right)\right) \cdot W_{i}\right\}}{Z}$$ The percentage of vulnerability decrease is the change in SVM divided by the SVM before allocation: ##### (13) $$F_{red}(Z)=\frac{\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left\{\left(vV_{xy}-F_{xy}\left(z_{i}\right)\right) \cdot W_{i}\right\}}{\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(vV_{xy} \cdot W_{i}\right)} \times 100$$ To verify the percentage of security enhancement, we define the function $F_{imp}(Z)$ as in (14); a numerical sketch of (10)-(14) follows. ##### (14) $$F_{imp}(Z)=\frac{\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(vV_{xy} \cdot W_{i}\right)}{\sum_{i=1}^{k} \sum_{y=1}^{m} \sum_{x=1}^{n}\left(F_{xy}\left(z_{i}\right) \cdot W_{i}\right)} \times 100$$
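The following sketch, offered only as an illustration, solves a toy instance of (10) numerically with scipy (SLSQP with an equality constraint, standing in for the Lagrange multiplier analysis) and then evaluates (12)-(14). The two-control problem, the mitigation parameters, and the weights are all invented.

```python
# Toy instance of the constrained minimization (10) plus metrics (12)-(14).
import numpy as np
from scipy.optimize import minimize

Z = 100.0                       # total security budget
W = np.array([0.6, 0.4])        # invented control weights
vV0 = np.array([80.0, 50.0])    # invented pre-allocation values per control
alpha, beta = 0.08, 0.5         # invented mitigation parameters

def svm_after(z):
    # Post-investment weighted sum; CV terms omitted in this toy case.
    return float(np.sum(vV0 / (alpha * z + 1.0) ** beta * W))

res = minimize(
    svm_after,
    x0=np.full(2, Z / 2),                       # start from an equal split
    bounds=[(0.0, Z)] * 2,
    constraints=[{"type": "eq", "fun": lambda z: np.sum(z) - Z}],
    method="SLSQP",
)

svm_before = float(np.sum(vV0 * W))
svm_opt = res.fun
F_eff = (svm_before - svm_opt) / Z                   # eq. (12)
F_red = (svm_before - svm_opt) / svm_before * 100    # eq. (13)
F_imp = svm_before / svm_opt * 100                   # eq. (14)
print(res.x, round(svm_opt, 2), round(F_eff, 4), round(F_red, 2), round(F_imp, 2))
```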
## 4. Simulation of Proposed Model In this chapter, we confirm the efficiency of the security cost optimization model in connection with our proposed model and methods. We discuss vulnerability analysis and cost allocation over the security controls, and we evaluate cost allocation methods based on three strategies, including our proposed model. ##### 4.1 Step 1. Security Profile Identification of service type To define the service type, we select a personalized webtop service [19]. A webtop service provides a highly personalized individual desktop as a web application: users access the virtual desktop of a personal computer, including contacts, e-mail, and files, through a personalized and familiar desktop with synchronization tools. Definition of security controls We define the security controls through a security analysis of the webtop service, following [27]: SC1: Storage (S) SC2: Process (P) SC3: Network (N) SC4: Access Control (AC) SC5: Audit (AU) Definition of security control weights To calculate the SVM for a webtop service, we first define the weights of the five security controls (Storage, Process, Network, Access Control, and Audit) according to the weight-decision method of [19]. The resulting weights for the webtop service are shown in Table 4. Weights of security controls in webtop ##### 4.2 Step 2. Vulnerability Identification Identification of threats and vulnerabilities Among the critical security threats, we select five threats and one attack technique for each, and we define the vulnerabilities of each attack sub-process as shown in Fig. 5. Major security threats. Data breach: APT attack process against a Google datacenter [16]: $V_{1}=\left\{V_{11}, V_{12}, V_{13}, V_{14}, V_{15}\right\}$ Data loss: willful data damage: $V_{2}=\left\{V_{21}, V_{22}, V_{23}\right\}$ Service/account hijacking: XSS attack [32]: $V_{3}=\left\{V_{31}, V_{32}, V_{33}\right\}$ Insecure API: insecure direct object references: $V_{4}=\left\{V_{41}, V_{42}\right\}$ Malicious insider: memory dump scanning [33]: $V_{5}=\left\{V_{51}, V_{52}, V_{53}\right\}$ Definition of correlation values The CV quantifies how a child node's vulnerability value affects its parent node. In this simulation, all CVs are set to 0.1 to simplify the evaluation process. Definition of the security vulnerability mitigation ratio Defining an appropriate vulnerability mitigation function and its parameter values is important for establishing an optimal security investment strategy. The mitigation function for the security vulnerability mitigation ratio was defined based on the probability of security breaches in the existing security improvement models of [28,30,34,35]: $$M\left(z_{i}, v_{xy}\right)=\frac{v_{xy}}{\left(\alpha z_{i}+1\right)^{\beta}}, \text{ where } \alpha, \beta \geq 0$$ The parameters α and β of the mitigation function can be calculated from statistical or historical data on security cost allocation. For example, if a cost of 100 reduces a vulnerability value of 100 to about 36, the parameters α and β come out to 0.079 and 0.468, respectively. In this example, we define the vulnerability mitigation ratio from historical data; the values of α and β are given in Table 5 and Fig. 6. Values of variables α and β Mitigation ratio of security vulnerability. 
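As a quick, non-authoritative check of this calibration, the sketch below evaluates the mitigation function at the example point; with α = 0.079 and β = 0.468 it returns roughly 36 for a cost of 100 applied to a vulnerability value of 100, matching the reading above. The per-control dictionary assumes the first row of Table 5 lists α and the second β.

```python
# Mitigation function M(z, v) = v / (alpha*z + 1)**beta from Section 4.2.
def M(z, v, alpha, beta):
    return v / (alpha * z + 1.0) ** beta

# Calibration example: cost 100 against vulnerability value 100.
print(round(M(100.0, 100.0, alpha=0.079, beta=0.468), 2))  # ~35.95

# Per-control parameters (alpha, beta), assumed ordering of Table 5.
params = {
    "SC1": (0.078, 0.103), "SC2": (0.073, 0.123), "SC3": (0.077, 0.162),
    "SC4": (0.047, 0.236), "SC5": (0.068, 0.249),
}
```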
##### 4.3 Step 3. Vulnerability Evaluation Design of the AAG for the webtop service To design the AAG, two processes are performed first: eliminating duplicate vulnerabilities and matching each vulnerability with a security control. From the security profile of the webtop service, we detect duplicate vulnerabilities in obtaining the password $\left(V_{13}, V_{33}, V_{21}, \text{ and } V_{53}\right)$ and in accessing the data $\left(V_{15}, V_{22}, \text{ and } V_{42}\right)$. We then eliminate the duplicates and match each remaining vulnerability with its related security control. After these two processes, the AAG is drawn as shown in Fig. 7. Designed AAG of the security simulation. Evaluation of quantitative vulnerabilities The vulnerability nodes that remain after de-duplication through the AAG are assessed, and their initial vulnerability values are defined based on existing quantitative security assessment methods [28-31], as shown in Table 6. Attack-node configurations Summary of security controls Security vulnerability measurement The SVM is calculated from the vulnerability value of each vulnerability and the weight of its security control using (4)-(6), as shown in Tables 8 and 9. Vulnerability values of attack nodes after security cost allocation Security control vulnerability measurement ##### 4.4 Step 4. Investment Strategy In this section, we obtain the optimal security cost allocation strategy using the SVM and the security cost allocation function: $$\text{Minimize } SVM=\sum_{i=1}^{5} \sum_{y=1}^{4} \sum_{x=1}^{5}\left(F_{xy}\left(z_{i}\right) \cdot W_{i}\right) \\ \text{subject to } \sum_{i=1}^{5} z_{i}=z_{1}+z_{2}+z_{3}+z_{4}+z_{5}=500$$ The optimal cost allocation over the security controls, obtained with the Lagrange multiplier method, is shown in Table 10 and Fig. 8. Cost allocation for each security control Security vulnerability measurement after a security cost of 500. ##### 4.5 Step 5. Cost Allocation Strategy Evaluation In the previous sections, we addressed the security enhancement model that minimizes the SVM under a limited security budget. In this section, we simulate security cost allocation strategies for the selected service [7], a webtop service, with a limited security budget, using the previously defined parameters (vulnerabilities, security controls, SVM, etc.). The three strategies are as follows. Strategy 1. Equal cost allocation: this method allocates the same cost to each security control. Strategy 2. Cost allocation according to the weights of the security controls: this method allocates cost in proportion to the security control weights of the service type. Strategy 3. Our proposed model: this method determines the cost allocation for each security control using our scheme for the minimum SVM. We identify the most efficient cost allocation method through simulation of the three strategies. Evaluation of security cost allocation strategies We apply the three strategies above. The proposed model determines the security cost allocation for each security control using the Lagrange multiplier method, an optimization method for efficient security cost allocation, as shown in Table 11. The change in SVM is then used to calculate $F_{eff}(Z)$, $F_{red}(Z)$, and $F_{imp}(Z)$. Costs of each cost allocation strategy Result of security cost allocation strategies As shown in Table 10 and Fig. 8, with a limited security budget of 500 in the webtop service, our proposed model achieves an SVM of 536.93, a more effective cost allocation than the SVM of 550.96 for the equal cost allocation strategy and 541.83 for the weight-oriented strategy. We compare Strategy 1 and Strategy 2 with our model in Table 13; a short script reproducing the evaluation metrics of Tables 12 and 13 follows. 
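As a simple arithmetic check (not part of the original paper), the following snippet recomputes the metrics of Tables 12 and 13 from the reported SVM values; the results match the tables up to the rounding of the reported SVMs.

```python
# Recompute Tables 12 and 13 from the reported SVMs (budget Z = 500).
Z = 500.0
svm_before = 772.3
isvm = {"Strategy1": 550.96, "Strategy2": 541.83, "Our model": 536.93}

for name, after in isvm.items():
    f_eff = (svm_before - after) / Z                   # eq. (12)
    f_red = (svm_before - after) / svm_before * 100    # eq. (13)
    f_imp = svm_before / after * 100                   # eq. (14)
    print(name, round(f_eff, 4), round(f_red, 4), round(f_imp, 4))

# Differences relative to our model (Table 13).
print(round(isvm["Strategy1"] - isvm["Our model"], 2))  # 14.03
print(round(isvm["Strategy2"] - isvm["Our model"], 2))  # 4.9
```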
The SVM of our model is 14.03 and 4.9 lower than that of Strategy 1 and Strategy 2, respectively. In terms of allocation efficiency under the 500 budget, our model is 0.028 and 0.0098 more effective than Strategy 1 and Strategy 2, respectively. For the vulnerability reduction ratio, the proposed model improves on Strategy 1 and Strategy 2 by 1.8165% and 0.6341%, respectively. Finally, the proposed model shows a 3.6624% and 1.3% higher security improvement ratio than Strategy 1 and Strategy 2, respectively. This evaluation shows that the proposed model, which considers the weights of the security controls, provides a more effective security cost allocation strategy than equal cost allocation (Strategy 1) and allocation in proportion to the security control weights (Strategy 2). Comparison of security cost allocation strategies Relative comparison of SVM among investment strategies. Security vulnerability measurement after cost allocation. In Fig. 9, we show the SVM of each cost allocation strategy for budgets from 100 to 1000; the proposed model remains the most effective cost allocation strategy as the budget increases. Additionally, we compare the relative values of the SVM in Fig. 10: with the indicator normalized so that Strategy 1 averages 1.0, Strategy 2 averages 1.0167 and our model averages 1.02573, making our model the most effective. Consequently, we verify that the proposed model is the most effective of the security cost allocation strategies for a webtop service. ## 5. Conclusions In this paper, instead of evaluating security risks only from security attacks and threats, we analyzed the composition and weights of the security controls by analyzing the characteristics and environment of the corresponding service, and proposed an effective security enhancement scheme based on the analysis results. In addition, the duplicate-vulnerability evaluation problem of existing security threat assessment methods was solved through the AAG, and a limited security budget was taken into account. Although this paper has covered both security assessment methods and budget allocation methods, our work can be summarized in three contributions. First, we proposed a new vulnerability evaluation method using an AAG that accounts for repetition removal of vulnerabilities and the CV between nodes. Second, the proposed scheme provides a security cost allocation strategy per service type: since each service type has different security control weights, we consider those weights when establishing the optimal security cost allocation strategy. Finally, the proposed scheme works under a limited budget. Many companies and organizations spend heavily on security enhancement, but these expenditures are planned within yearly budgets, so the amount invested in security is bounded; existing security enhancement schemes do not consider this, and the proposed method therefore helps plan the budget needed for effective security enhancement. Through these three contributions, we proposed an optimal security cost allocation method that considers the service environment. However, the proposed method does not describe how to define the CV values: defining them requires analysis and forecasting based on historical data for the service, and such data analysis is beyond the scope of this paper. 
In future work, we will define the CV values using big-data or machine-learning analysis methods over various environmental variables and data analysis results. ## Acknowledgement This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00294, Service mobility support distributed cloud technology). ## Biography ##### Jun-Young Park https://orcid.org/0000-0002-8481-8701 He received his B.Eng. degree in Computer Engineering from Hannam University, Korea, in 2010, and a master's degree in Computer Engineering from Kyung Hee University, Korea, in 2012. He is currently working toward a Ph.D. degree in the Department of Computer Science and Engineering at Kyung Hee University, Korea. His research interests include cloud computing, mobile cloud computing, cloud computing security, and security-as-a-service. ## Biography ##### Eui-Nam Huh https://orcid.org/0000-0003-0184-6975 He earned a B.S. degree from Busan National University in Korea, a master's degree in Computer Science from the University of Texas, USA, in 1995, and a Ph.D. degree from Ohio University, USA, in 2002. He is the director of the Real-time Mobile Cloud Research Center. He is a chair of the Cloud/BigData Special Technical Committee for the Telecommunications Technology Association (TTA), a Korean national standards body for ITU-T SG13 and ISO/IEC SC38. He was also an Assistant Professor at Sahmyook University and Seoul Women's University, South Korea. He is now a Professor in the Department of Computer Science and Engineering, Kyung Hee University, South Korea. His research interests include cloud computing, screen contents coding (cloud streaming), Internet of Things, distributed real-time systems, security, and big data. ## References • 1 Thales, 2017; https://www.thehaguesecuritydelta.com/media/com_hsd/report/127/document/2017-thales-data-threat-report.pdf • 2 Barbara Filkins, 2016; https://www.sans.org/reading-room/whitepapers/leadership/paper/36697 • 3 Ponemon Institute LLC, 2015; https://www.secureworks.com/resources/wp-2015-global-study-on-it-security-spending-and-investments • 4 A. Schilling, B. Werners, "Optimizing information security investments with limited budget," in Operations Research Proceedings 2014. Cham: Springer, 2016, pp. 493-499. • 5 A. Behnia, R. A. Rashid, J. A. Chaudhry, "A survey of information security risk analysis methods," SmartCR, vol. 2, no. 1, pp. 79-94, 2012. doi: 10.6029/smartcr.2012.01.007 • 6 Open Web Application Security Project, OWASP Top 10: The Top 10 Most Critical Web Application Security Threats. North Charleston, SC: CreateSpace Independent Publishing Platform, 2014. • 7 J. Y. Park, Y. R. Shin, K. H. Kim, E. N. Huh, "Access control framework design for personal cloud," in Proceedings of the International Conference on Convergence Technology, Chiang Mai, Thailand, 2013, pp. 1578-1579. • 8 W. M. Kang, S. Y. Moon, J. H. 
Park, "An enhanced security framework for home appliances in smart home," Human-centric Computing and Information Sciences, vol. 7, no. 6, 2017.doi:[[[10.1186/s13673-017-0087-4]]] • 9 C. D. Huang, R. S. Behara, "Economics of information security investment in the case of concurrent heterogeneous attacks with budget constraints," International Journal of Production Economics, vol. 141, no. 1, pp. 255-268, 2013.custom:[[[-]]] • 10 N. J. Brown, K. A. Jones, L. K. Nozick, N. Xu, "Multi-layered security investment optimization using a simulation embedded within a genetic algorithm," in Proceedings of 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, 2015;pp. 2424-2435. custom:[[[-]]] • 11 H. Wang, Z. Chen, J. Zhao, X. Di, D. Liu, "A vulnerability assessment method in industrial internet of things based on attack graph and maximum flow," IEEE Access, vol. 6, pp. 8599-8609, 2018.doi:[[[10.1109/ACCESS.2018.2805690]]] • 12 N. Gao, Y. He, B. Ling, "Exploring attack graphs for security risk assessment: a probabilistic approach," Wuhan University Journal of Natural Sciences, vol. 23, no. 2, pp. 171-177, 2018.custom:[[[-]]] • 13 J. C. Maa, S. Chen, M. Li, J. P. Yao, "A kind of hierarchical network vulnerability assessment model based on attack graph," in Computer Science and Artificial Intelligence: Proceedings of the International Conference on Computer Science and Artificial Intelligence (CSAI2016). Singapore: World Scientific Publishing, 2017;custom:[[[-]]] • 14 R. Dewri, I. Ray, N. Poolsappasit, D. Whitley, "Optimal security hardening on attack tree models of networks: a cost-benefit analysis," International Journal of Information Security, vol. 11, no. 3, pp. 167-188, 2012.doi:[[[10.1007/s10207-012-0160-y]]] • 15 B. Kordy, W. Wideł, "On quantitative analysis of attack–defense trees with repeated labels," in Principles of Security and Trust. Cham: Springer, pp. 325-346, 2018.custom:[[[-]]] • 16 P. Wang, W. H. Lin, P. T. Kuo, H. T. Lin, T. C. Wang, "Threat risk analysis for cloud security based on Attack-Defense Trees," in Proceedings of 2012 8th International Conference on Computing Technology and Information Management (NCM and ICNIT), Seoul, Korea, 2012;pp. 106-111. custom:[[[-]]] • 17 Z. Tarmudi, N. W. D. Tamsin, J. Janteng, "A fuzzy Delphi method to rank alternatives for industry selection," in AIP Conference Proceedings, 2018;vol. 1974, no. 020096. custom:[[[-]]] • 18 Y. Tian, B. Song, E. N. Huh, "A novel Threat Evaluation method for privacy-aware system in RFID," International Journal of Ad Hoc and Ubiquitous Computing, vol. 8, no. 4, pp. 230-240, 2011.doi:[[[10.1504/IJAHUC.2011.043584]]] • 19 S. H. Na, E. N. Huh, "A broker‐based cooperative security‐SLA evaluation methodology for personal cloud computing," Security and Communication Networks, vol. 8, no. 7, pp. 1318-1331, 2015.custom:[[[-]]] • 20 L. A. Gordon, M. P. Loeb, "The economics of information security investment," ACM Transactions on Information and System Security (TISSEC), vol. 5, no. 4, pp. 438-457, 2002.doi:[[[10.1145/581271.581274]]] • 21 A. Trufanov, N. Kinash, A. Tikhomirov, O. Berestneva, A. Rossodivita, "Optimal information security investment in modern social networking," in Complex Networks VIII. Cham: Springer, pp. 175-182, 2017.custom:[[[-]]] • 22 D. Schatz, R. Bashroush, "Corporate information security investment decisions: a qualitative data analysis approach," International Journal of Enterprise Information Systems (IJEIS), vol. 14, no. 2, pp. 1-20, 2018.doi:[[[10.4018/IJEIS.2018040101]]] • 23 W. 
Sonnenreich, J. Albanese, B. Stout, "Return on security investment (ROSI): a practical quantitative model," Journal of Research and Practice in Information Technology, vol. 38, no. 1, pp. 45-56, 2006. • 24 N. Tsalis, M. Theoharidou, D. Gritzalis, "Return on security investment for cloud platforms," in Proceedings of 2013 IEEE 5th International Conference on Cloud Computing Technology and Science, Bristol, UK, 2013, pp. 132-137. • 25 A. Schilling, B. Werners, "A quantitative threat modeling approach to maximize the return on security investment in cloud computing," in Proceedings of the 1st International Conference on Cloud Security Management (ICCSM), Seattle, WA, 2013, pp. 68-78. • 26 Information technology – Security techniques – Information security risk management, ISO/IEC 27005:2011, 2011. • 27 K. Bernsmed, M. G. Jaatun, P. H. Meland, A. Undheim, "Security SLAs for federated cloud services," in Proceedings of 2011 6th International Conference on Availability, Reliability and Security, Vienna, Austria, 2011, pp. 202-209. • 28 N. Al-Safwani, Y. Fazea, H. Ibrahim, "ISCP: in-depth model for selecting critical security controls," Computers & Security, vol. 77, pp. 565-577, 2018. doi: 10.1016/j.cose.2018.05.009 • 29 M. S. Lund, B. Solhaug, K. Stolen, Model-Driven Risk Analysis: The CORAS Approach. Heidelberg: Springer, 2010. • 30 A. Aviad, K. Wecel, W. Abramowicz, "Semantic risk assessment for cybersecurity," in Proceedings of the International Conference on Cyber Warfare and Security, Washington, DC, 2018, pp. 513-520. • 31 A. Sharma, V. Pal, N. Ojha, R. Bajaj, "Risks assessment in designing phase: its impacts and issues," in Analyzing the Role of Risk Mitigation and Monitoring in Software Development. Hershey, PA: IGI Global, pp. 46-60, 2018. • 32 N. Chauhan, N. Singh, B. Nagpal, "A survey on the detection of SQL injection attacks and their countermeasures," Journal of Information Processing Systems, vol. 13, no. 4, pp. 689-702, 2017. doi: 10.3745/JIPS.03.0024 • 33 M. D. Nguyen, N. T. Chau, S. Jung, S. Jung, "A demonstration of malicious insider attacks inside cloud IaaS vendor," International Journal of Information and Education Technology, vol. 4, no. 6, pp. 483-486, 2014. • 34 P. Wang, M. Ratchford, "Integrated methodology for information security risk assessment," in Information Technology – New Generations. Cham: Springer, pp. 147-150, 2018. • 35 J. Kar, M. R. Mishra, "Mitigating threats and security metrics in cloud computing," Journal of Information Processing Systems, vol. 12, no. 2, pp. 226-233, 2016. doi: 10.3745/JIPS.03.0049

Table 1. Major information security risk assessment methods

| Type | Risk evaluation method | Main metric |
|---|---|---|
| Qualitative | OCTAVE | Loss = Impact/consequence * Probability |
| Qualitative | CORAS | Loss = Impact * Probability |
| Quantitative | ISRAM | Risk = Probability of OSB * Consequence of OSB |
| Quantitative | CORA | ALE = Consequence * Frequency |
| Quantitative | RiskWatch | Risk = Frequency of a threat in a year * Cost of the resource |

Table 2. 
Notations

| Notation | Description |
|---|---|
| $V_{xy}$ | The y-th vulnerability of the x-th threat; $V=\{V_{xy} \mid x=1,\ldots,n,\ y=1,\ldots,m\}$ |
| $SC_{i}$ | The i-th security control; $SC=\{SC_{i} \mid i=1,\ldots,k\}$ |
| $CV_{ab-cd}$ | The correlation value between $V_{ab}$ (parent node) and $V_{cd}$ (child node) |
| $vV_{ab}^{0}$ | The initial vulnerability value of vulnerability $V_{ab}$ |
| $vV_{ab}$ | The vulnerability value of $V_{ab}$ after applying the AAG formulas |
| $vV(SC_{i})$ | Total vulnerability value of security control $SC_{i}$ |
| $W_{i}$ | Weight of $SC_{i}$, where $\sum W_{i}=10$ and $W_{i} \geq 0$ |
| $z_{i}$ | The cost allocated to security control $SC_{i}$; $Z=\sum\{z_{i} \mid i=1,\ldots,k\}$ |
| $SVM_{i}$ | The SVM of security control $SC_{i}$; $SVM=\sum\{SVM_{i} \mid i=1,\ldots,k\}$ |
| $vV_{child}$ | The total vulnerability value of the child nodes of a vulnerability |
| $F_{child}$ | The total vulnerability value of the child nodes of a vulnerability after investment |
| $F(SC_{i})$ | The total vulnerability value of security control $SC_{i}$ after investment |
| $F_{xy}(z_{i})$ | The vulnerability value after investing cost $z_{i}$ in vulnerability $V_{xy}$ |
| $F_{eff}(Z)$ | The efficiency of the security investment |
| $F_{red}(Z)$ | The reduction ratio of vulnerability |
| $F_{imp}(Z)$ | The improvement ratio of security |

Table 3. Types of AAG structure

Normal structure | AND structure | OR structure

Table 4. Weights of security controls in webtop

| Security control | S | P | N | AC | AU | Total |
|---|---|---|---|---|---|---|
| Weight | 0.58 | 2.14 | 4.17 | 2.04 | 1.07 | 10 |

Table 5. Values of variables α and β

| $SC_{i}$ | $SC_{1}$ | $SC_{2}$ | $SC_{3}$ | $SC_{4}$ | $SC_{5}$ |
|---|---|---|---|---|---|
| α | 0.078 | 0.073 | 0.077 | 0.047 | 0.068 |
| β | 0.103 | 0.123 | 0.162 | 0.236 | 0.249 |

Table 6. Attack-node configurations

| Vulnerability | Parent node | Child node | $vV^{0}$ | Related SC |
|---|---|---|---|---|
| $V_{11}$ | $V_{12}$ | - | 45 | SC2 |
| $V_{12}$ | $V_{13}$ | $V_{11}$ | 21 | SC4 |
| $V_{13}$ | $V_{14}$ | $V_{12}, V_{32}, V_{52}$ | 56 | SC1 |
| $V_{14}$ | $V_{15}$ | $V_{13}$ | 17 | SC4 |
| $V_{15}$ | $V_{23}$ | $V_{14}, V_{41}$ | 13 | SC4 |
| $V_{23}$ | - | $V_{15}$ | 15 | SC2 |
| $V_{31}$ | $V_{32}$ | - | 57 | SC3 |
| $V_{32}$ | $V_{13}$ | $V_{31}$ | 21 | SC4 |
| $V_{41}$ | $V_{15}$ | - | 63 | SC5 |
| $V_{51}$ | $V_{52}$ | - | 24 | SC2 |
| $V_{52}$ | $V_{13}$ | $V_{51}$ | 21 | SC4 |
| Total | | | 353 | |

The vulnerability values are calculated from the initial vulnerability value and the CVs of the corresponding vulnerability nodes. The vulnerability nodes are then classified by security control, and the total vulnerability value of each security control is calculated from Table 6, as shown in Table 7.

Table 7. 
Summary of security controls

| Security control | Correlated vulnerabilities | Sum of $vV^{0}$ | Sum of $vV$ |
|---|---|---|---|
| SC1 | $V_{13}$ | 56 | 63.56 |
| SC2 | $V_{11}, V_{23}, V_{52}$ | 81 | 85.564 |
| SC3 | $V_{31}$ | 57 | 57 |
| SC4 | $V_{12}, V_{32}, V_{51}, V_{14}, V_{15}$ | 96 | 121.192 |
| SC5 | $V_{41}$ | 63 | 63 |
| Total | | 353 | 390.316 |

Table 8. Vulnerability values of attack nodes after security cost allocation

| Vulnerability | Vulnerability value (vV) | SVM | Security control |
|---|---|---|---|
| $V_{11}$ | 45 | 96.3 | SC2 |
| $V_{12}$ | 25.5 | 52.02 | SC4 |
| $V_{13}$ | 63.56 | 36.865 | SC1 |
| $V_{14}$ | 23.356 | 47.646 | SC4 |
| $V_{15}$ | 21.636 | 44.137 | SC4 |
| $V_{23}$ | 17.164 | 36.73 | SC2 |
| $V_{31}$ | 57 | 237.69 | SC3 |
| $V_{32}$ | 26.7 | 54.468 | SC4 |
| $V_{41}$ | 63 | 67.41 | SC5 |
| $V_{51}$ | 24 | 48.96 | SC4 |
| $V_{52}$ | 23.4 | 50.076 | SC2 |
| Total | 390.316 | 772.302 | |

Table 9. Security control vulnerability measurement

| Security control | Vulnerabilities | SVM |
|---|---|---|
| SC1 | $V_{13}$ | 36.865 |
| SC2 | $V_{11}, V_{23}, V_{52}$ | 183.106 |
| SC3 | $V_{31}$ | 237.69 |
| SC4 | $V_{12}, V_{32}, V_{51}, V_{14}, V_{15}$ | 247.231 |
| SC5 | $V_{41}$ | 67.41 |
| Total | | 772.302 |

Table 10. Cost allocation for each security control

| | $SC_{1}$ | $SC_{2}$ | $SC_{3}$ | $SC_{4}$ | $SC_{5}$ | Total |
|---|---|---|---|---|---|---|
| Investment cost | 8.832 | 102.428 | 159.943 | 167.366 | 61.432 | 500 |

Table 11. Costs of each cost allocation strategy

| | Storage | Process | Network | Access control | Audit | Total |
|---|---|---|---|---|---|---|
| Strategy 1 | 100 | 100 | 100 | 100 | 100 | 500 |
| Strategy 2 | 29 | 107 | 208.5 | 101.5 | 54 | 500 |
| Our model | 8.832 | 102.428 | 159.943 | 167.366 | 61.432 | 500 |

Table 12. Result of security cost allocation strategies

| | Strategy 1 | Strategy 2 | Our model |
|---|---|---|---|
| SVM | 772.3 | 772.3 | 772.3 |
| iSVM | 550.96 | 541.83 | 536.93 |
| $F_{eff}(Z)$ | 0.4427 | 0.4609 | 0.4707 |
| $F_{red}(Z)$ (%) | 28.6595 | 29.8419 | 30.4760 |
| $F_{imp}(Z)$ (%) | 140.1728 | 142.5352 | 143.8352 |

Table 13. Comparison of security cost allocation strategies

| Comparison with our model | Strategy 1 | Strategy 2 |
|---|---|---|
| iSVM | -14.03 | -4.9 |
| $F_{eff}(Z)$ | 0.028 | 0.0098 |
| $F_{red}(Z)$ (%) | 1.8165 | 0.6341 |
| $F_{imp}(Z)$ (%) | 3.6624 | 1.3 |
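As a consistency check only (not code from the paper), the following script recomputes the propagated vulnerability values and per-control SVMs of Tables 7-9 from the Table 6 configuration, using CV = 0.1 and the Table 4 weights, and applying the summation rule (2) at every multi-child node; this reproduces the reported numbers (e.g., 63.56 for $V_{13}$ and a total SVM of 772.302).

```python
# Recompute Tables 7-9 from Table 6 (CV = 0.1, OR-style propagation).
CV = 0.1
vV0 = {"V11": 45, "V12": 21, "V13": 56, "V14": 17, "V15": 13, "V23": 15,
       "V31": 57, "V32": 21, "V41": 63, "V51": 24, "V52": 21}
children = {"V12": ["V11"], "V13": ["V12", "V32", "V52"], "V14": ["V13"],
            "V15": ["V14", "V41"], "V23": ["V15"], "V32": ["V31"], "V52": ["V51"]}
sc_of = {"V13": "SC1", "V11": "SC2", "V23": "SC2", "V52": "SC2", "V31": "SC3",
         "V12": "SC4", "V32": "SC4", "V51": "SC4", "V14": "SC4", "V15": "SC4",
         "V41": "SC5"}
W = {"SC1": 0.58, "SC2": 2.14, "SC3": 4.17, "SC4": 2.04, "SC5": 1.07}  # Table 4

def vV(node, memo={}):
    if node not in memo:
        kids = children.get(node, [])
        memo[node] = vV0[node] + sum(vV(c, memo) * CV for c in kids)  # eq. (2)
    return memo[node]

svm_per_sc = {}
for node in vV0:
    sc = sc_of[node]
    svm_per_sc[sc] = svm_per_sc.get(sc, 0.0) + vV(node) * W[sc]       # eq. (5)

print(round(vV("V13"), 3))                                 # 63.56 (Tables 7, 8)
print({sc: round(v, 3) for sc, v in svm_per_sc.items()})   # Table 9 values
print(round(sum(svm_per_sc.values()), 3))                  # 772.302 total SVM
```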
2021-08-03 00:57:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44752267003059387, "perplexity": 3842.7077907533057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00129.warc.gz"}
https://www.physicsforums.com/threads/solving-problems-in-function-notation.889416/
# Solving problems in function notation 1. Oct 16, 2016 ### Daringpear 1. The problem statement, all variables and given/known data Given that: g(1) = 3 g(2x+1) = 4g(x) + x + 1 Find g(3) 2. Relevant equations The answer is 14. (Taken out of a PSAT workbook) 3. The attempt at a solution I assume that g(2x+1) refers to a series of transformations (horizontal dilation, up 1) of g(x). Once g(x) is found, g(3) can easily be solved. The problem is that I have no idea how to find it. 2. Oct 16, 2016 ### pasmith The second equation tells you how to find $g(2x + 1)$ if you already know $g(x)$. For which value of $x$ do you already know $g(x)$? 3. Oct 16, 2016 ### Staff: Mentor To me it appears you think a little bit too complicated. Can you write $3$ as $2x+1$, and what does that tell you about $x$? 4. Oct 16, 2016
2017-11-18 18:35:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6116000413894653, "perplexity": 1214.2129717401906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805008.39/warc/CC-MAIN-20171118171235-20171118191235-00520.warc.gz"}
https://gamedev.stackexchange.com/questions/65904/flashing-candle-light
# Flashing candle light What is a simple way to simulate flickering candle / torch / fire light? I'm not asking about animating the flames; I'm only interested in the light surrounding the fire, similar to what this device does: https://www.youtube.com/watch?v=dPsVr4pU8Tg double nextLightIntensity(double lastIntensity)
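As one possible sketch (my addition, not an accepted answer from the thread), a bounded random walk on the intensity gives a cheap candle-like flicker matching the asker's function stub; the step size and bounds below are arbitrary tuning values.

```python
import random

def next_light_intensity(last, lo=0.6, hi=1.0, step=0.08):
    """Bounded random walk: small random change each frame, clamped to [lo, hi]."""
    value = last + random.uniform(-step, step)
    return max(lo, min(hi, value))

# Example: drive a light each frame from the previous intensity.
intensity = 0.8
for _ in range(5):
    intensity = next_light_intensity(intensity)
    print(round(intensity, 3))
```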
2020-01-26 22:40:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25644397735595703, "perplexity": 6441.689431377387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00146.warc.gz"}
https://elektro-shop.info/oldyoung/anissa-kate-latex.php
Anissa kate latex Most Viewed • Akiran wrote 21.07.2019, 16:39: #1 I think, that you commit an error. • Akilkree wrote 22.07.2019, 13:39: #2 I am sorry, that I can help nothing. I hope, you will be helped here by others. • Sashura wrote 20.07.2019, 03:55: #3 Absolutely with you it agree. In it something is also to me it seems it is excellent idea. I agree with you. • Nikojar wrote 15.07.2019, 04:25: #4 Willingly I accept. An interesting theme, I will take part. Together we can come to a right answer. • Goltigrel wrote 22.07.2019, 15:12: #5 I apologise, but, in my opinion, you are not right. I can defend the position. Write to me in PM, we will talk.
2020-09-25 19:20:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592033386230469, "perplexity": 12893.834161367788}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228707.44/warc/CC-MAIN-20200925182046-20200925212046-00258.warc.gz"}
https://math.stackexchange.com/questions/1169649/how-to-best-simplify-a-chain-product-rule-with-lots-of-trig-functions
# How to best simplify a chain/product rule with lots of trig functions? I've found the derivative of the following: $$g(x) = \sec(8x)\tan(5x^9)$$ to be $$g'(x) = 8\sec(8x)\tan(8x)\tan(5x^9) + 45x^8 \sec(8x)(\sec(5x^9))^2$$ I'm aware that the trig identities are interchangeable to an extent, so tan(8x) might be written as sin(8x)/cos(8x). However, I'm not sure if such rules would help simplifying this problem. Once I've got all these trig functions in here, is there any point to fooling around with the identities to try and condense it? Also, I notice that sec(8x) appears on both sides -- can this be consolidated into (2*sec(8x))? Or, for that matter, take out the 2 and multiply by the 8 to get a 16 in front? • You can factor out $\sec(8x)$, not add them. – Namaste Feb 28 '15 at 23:21 • Oh, whoops! You're totally right. Score one for simplification! Thanks! – barney Feb 28 '15 at 23:31 Factoring out a common factor $\sec(8x)$: \begin{align} g'(x) & = 8\sec(8x)\tan(8x)\tan(5x^9) + 45x^8 \sec(8x)(\sec(5x^9))^2\\ \\ &=\sec(8x)\Big(8\tan(8x)\tan(5x^9) + 45x^8\sec^2(5x^9)\Big)\\ \\ \end{align}
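As an optional sanity check (my addition, not from the thread), sympy can confirm both the derivative and the factored form; the `True` result relies on sympy simplifying $\tan^2 + 1 - \sec^2$ to zero.

```python
import sympy as sp

x = sp.symbols('x')
g = sp.sec(8*x) * sp.tan(5*x**9)
dg = sp.diff(g, x)

# The factored answer from the accepted solution above.
factored = sp.sec(8*x) * (8*sp.tan(8*x)*sp.tan(5*x**9)
                          + 45*x**8*sp.sec(5*x**9)**2)
print(sp.simplify(dg - factored) == 0)  # expected to print True
```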
2019-09-22 16:49:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9964330792427063, "perplexity": 1170.3743664185813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575596.77/warc/CC-MAIN-20190922160018-20190922182018-00001.warc.gz"}
https://zbmath.org/?q=an:1161.54301
# zbMATH — the first resource for mathematics Extremal sets as fractals. (English) Zbl 1161.54301 Despite the title, fractals do not feature in this work, whose purpose is to find conditions under which a map $F:\mathcal K (X)\to\mathcal K(X)$, from the family of compact sets in a Hausdorff space to itself, has invariant sets. If there is an $A\in\mathcal K(X)$ with $F(A)\subseteq A$ and $F$ is monotone, then $A$ contains a minimal (via Zorn's Lemma) and a maximal (the first transfinite iterate $F^\gamma(A)$ with $F^\gamma(A)=F^{\gamma+1}(A)$) set with the same property. This is applied to (compact-valued) multifunctions. ##### MSC: 54C60 Set-valued maps in general topology 28A80 Fractals 37B99 Topological dynamics 54B20 Hyperspaces in general topology 54H25 Fixed-point and coincidence theorems (topological aspects)
2021-09-21 23:51:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6932514309883118, "perplexity": 811.06102097175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00397.warc.gz"}
http://math.stackexchange.com/questions/228451/can-we-always-find-a-normal-subgroup-isomorphic-to-a-quotient-group
# Can we always find a normal subgroup isomorphic to a quotient group? I'm not very good with the English terms of group theory, but here is the question: $$\forall H\trianglelefteq G \rightarrow \exists H' \trianglelefteq G : {G\over H} \approx H'$$ Is the above statement always true? Or should there be some other constraints? - Consider the additive group of integers. – Karolis Juodelė Nov 3 '12 at 21:43 - @KarolisJuodelė what about it? I mean for whatever $\Bbb{Z}_n$ you choose there is always $n\Bbb{Z}$ to satisfy the above guess and vice versa, and I honestly can't think of any other subgroup of $\Bbb{Z}$. – Ali.S Nov 3 '12 at 21:47 - $n\mathbb{Z}$ is not isomorphic to $\mathbb{Z}_n$. – wj32 Nov 3 '12 at 21:49 - @wj32 it's not supposed to be, but $n\Bbb{Z}$ is isomorphic to ${\Bbb{Z}\over\Bbb{Z}_n}$ – Ali.S Nov 3 '12 at 21:50 - @Gajoo: $\mathbb{Z}_n$ is not even a subgroup of $\mathbb{Z}$! – wj32 Nov 3 '12 at 21:52 This is not true in general. The smallest counterexample can be found in the quaternion group $Q_8$. There $Q_8/Z(Q_8)$ is isomorphic to the Klein $4$-group $\mathbb{Z}_2 \times \mathbb{Z}_2$, but every subgroup of order $4$ in $Q_8$ is cyclic (the order-$4$ subgroups are exactly $\langle i\rangle$, $\langle j\rangle$, and $\langle k\rangle$). However, if we assume that $G$ is finite and abelian, then the statement is true. - Um, the only nontrivial quotient of $S_3$ is of order two, but the order-two subgroups there are nonnormal. – Lubin Nov 3 '12 at 22:35 - @Lubin: Ah, you're right, I forgot we wanted a normal subgroup here. – Mikko Korhonen Nov 3 '12 at 22:36 - I think this is still a strong example, because it shows that $G/H$ not only does not have to be isomorphic to a normal subgroup of $G$, but $G/H$ does not have to be isomorphic to a subgroup of $G$ at all (a common mistake). – Alexander Gruber Nov 3 '12 at 22:50 Hint: The symmetric group $S_5$ has exactly three normal subgroups: $\{1\}$, $S_5$ and the alternating group $A_5$, which has index 2 in $S_5$. - All you really need here is that $A_5$ has index $2$ in $S_5$, since normal subgroups of order $2$ are always central and $S_5$ has trivial center. – Mikko Korhonen Nov 3 '12 at 22:06
2016-06-25 09:03:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299502730369568, "perplexity": 264.66877641095846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00112-ip-10-164-35-72.ec2.internal.warc.gz"}
https://buypf.com/site/8ox7oy.php?tag=b75345-pregnancy-accommodation-laws-by-state
Percent increase and percent decrease are measures of percent change, which is the extent to which something gains or loses value. The formula is:

Percentage Change = (New Value − Old Value) / Old Value × 100

A positive result is a percent increase and a negative result is a percent decrease; the formula for percent decrease is the same as that of percentage change, with the sign showing the direction. Since you want a percent, you need to change the resulting decimal to a percent: multiply by 100, or equivalently move the decimal point two places to the right and add a percent sign.

Example 1: Find the percent change in the weight of Krishna if his weight decreased to 77 kg from 82 kg. Percentage Change = (77 − 82)/82 × 100 = −6.09, so his weight decreased by about 6.09%.

Example 2: A trader buys an item for 120 and sells it for 144. Profit = 144 − 120 = 24, so the percent of change (profit percentage) is (24/120) × 100 = 20%. His profit percent is 20. The same answer comes from a step method: divide the new value by the old value (144/120 = 120%, just as $6 is 120% of $5), then subtract 100%: 120% − 100% = 20%.

Example 3: The percentage change from 5 to 7 is 2/5 = 0.4 = 40%.

Example 4: If a quantity decreases from 6 to 4, the amount of decrease is 6 − 4 = 2, and the percent decrease is 2/6 = 0.3333; 0.3333 × 100 = 33.33, so the answer is 33.33%.

Example 5: If the price of a gallon of gasoline was $2.999 yesterday on your drive home and it rose to $3.199 this morning when you filled your tank, the percentage of change is (3.199 − 2.999)/2.999 × 100 ≈ 6.7% compared to yesterday's value.

Example 6: If current revenue is 21611 and the previous revenue was 16177, the percent change is (21611 − 16177)/16177 × 100 = 33.59%, i.e. revenue increased by 33.59% over the previous year. Similarly, a rise of 5000 on a base of 15000 is 5000/15000 ≈ 33.3%.

Percent change is all about comparing old to new values, and the base of each successive percent change is the result of the preceding percent change. Increasing 75 by 5% gives 75 + 5% × 75 = 75 × (100% + 5%) = 78.75; to decrease a number by a percentage, simply change the plus sign to a minus sign. Because the base changes, a percentage increase cannot be "reversed" by the same percentage decrease, even though some people think it can.

To calculate the percentage change between two numbers in Excel, enter the old number in one cell (say A1) and the new number in another (say B1), then enter =(B1-A1)/A1 in a third cell and format it as a percentage from the Home tab. This is useful, for example, if you want to identify the percentage of change in the number of orders received from one month to the next.

Percent change appears throughout everyday life and business. When you buy an item with tax, the tax is the percent change from the original price to what you paid. If a business had 20 employees a year ago and, due to downsizing, now has only 16, or if the number of children enrolled at a preschool decreased by 8% during the next month, those are percent decreases. A collectors' comic book worth $120 in 1994 whose value changes in 1995, a company's total asset size growing to $375 million, or car M costing $50,000 against car L's $40,000 are all naturally compared with percent change. Percentages also describe concentration: 25 mL of solute in 200 mL of solution is 25/200 × 100 = 12.5%. In accounting, the percentage-of-completion method recognizes revenues and expenses in proportion to the completeness of the contracted project, and a percentage change analysis of balance sheet accounts shows how two values changed, as a percentage, from one period to another.
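Since this page is all arithmetic, a short code sketch may help tie the examples together. This is a minimal Python illustration of the formula above, not part of the original lesson; the function name and sample values are chosen freely:

def percent_change(old, new):
    # (new - old) / old * 100: positive means increase, negative means decrease
    return (new - old) / old * 100

print(percent_change(82, 77))    # -6.09...  (weight example)
print(percent_change(120, 144))  # 20.0      (profit example)
print(percent_change(5, 7))      # 40.0
print(percent_change(6, 4))      # -33.33... (percent decrease)

Comparing percent_change(100, 110) with percent_change(110, 99) also shows why a 10% increase is not undone by a 10% decrease: the base has changed.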
2022-08-08 08:23:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5375727415084839, "perplexity": 1051.6924608533213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00675.warc.gz"}
https://tobydevlin.com/installing-latex-on-windows/
# Installing LaTeX on Windows MiKTeX + VS Code + Git = a semi-working, compiling, version-controlled version of LaTeX. Try not to break anything on your journey though; this worked for me so it will probably work for you… Here's how to do it: ### First Thing: Get The Stuff: Install both of these to wherever your preferred location is; once this is done it might be useful to add them to your path. Control Panel > System and Security > System > Advanced System Settings > Environment Variables. You should restart your machine before step 2. ### Then: Set up VS Code In VS Code you'll need to install the LaTeX Workshop extension using the tools in VS Code. Once this is done and you've restarted VS Code, the compile User Settings will need changing before it will work. (It might work; it depends on whether you have Perl. I don't, and I wanted to keep the number of installs minimal.) "latex-workshop.latex.toolchain": [ { "command": "latexmk", "args": [ "-synctex=1", "-interaction=nonstopmode", "-file-line-error", "-pdf", "%DOC%" ] } ] needs changing to: "latex-workshop.latex.toolchain": [ { "command": "pdflatex", "args": [ "-synctex=1", "-shell-escape", "-interaction=nonstopmode", "-file-line-error", "%DOC%" ] }, { "command": "bibtex", "args": [ "%DOCFILE%" ] }, { "command": "pdflatex", "args": [ "-synctex=1", "-shell-escape", "-interaction=nonstopmode", "-file-line-error", "%DOC%" ] }, { "command": "pdflatex", "args": [ "-synctex=1", "-shell-escape", "-interaction=nonstopmode", "-file-line-error", "%DOC%" ] } ] Save this and now go to the file you want to compile (if you want an example, there is some code that will compile at the end of this tutorial). If you're checking the project out from Git, you'll need to have done this already with git clone http://url/path.git. If it's not working, there might be a problem with your Git install or with adding it to your PATH. note: "-shell-escape" is required for the package minted Also "latex-workshop.latex.clean.enabled": false, can be changed to true to delete auxiliary files after the project is built ### Finally: Get everything built and looking smart: Building your project should be easy enough; either run the build with Right Click > Build LaTeX project or Ctrl + Alt + L > Build LaTeX project, or just save the file after editing it. Then, if the settings were changed as above, 4 steps should run (pdflatex, bibtex, then pdflatex twice so cross-references resolve) and everything should work: If it fails, checking the compiler logs will tell you why. It's probably a syntax error (those pesky buggers get me every time), or an issue with packages (make sure you tab over and install them the first time they're used). GOOD LUCK! \documentclass[12pt]{article} \begin{document} Yay it worked! $f(x) = \frac{3}{2x} - 8$ \end{document} Or try a more complicated one!
2018-12-10 15:23:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29258108139038086, "perplexity": 3617.3894003740747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823348.23/warc/CC-MAIN-20181210144632-20181210170132-00151.warc.gz"}
http://clay6.com/qa/20362/the-rate-law-for-a-reaction-a-rightarrow-b-is-rate-k-a-m-b-n-on-tripling-th
# The rate law for a reaction $A\rightarrow B$ is rate $=K[A]^m[B]^n$. On tripling the concentration of A and halving the concentration of B, the ratio of the new rate to the earlier rate of the reaction will be

$\begin{array}{1 1}(a)\;\large\frac{3^m}{2^n}&(b)\;\large\frac{2^n}{3^m}\\(c)\;\large\frac{3^n}{2^m}&(d)\;\large\frac{2^m}{3^n}\end{array}$

Solution: substituting the new concentrations $3[A]$ and $[B]/2$ into the rate law and dividing by the original rate,

$\large\frac{R_2}{R_1}=\large\frac{K[3A]^m[B/2]^n}{K[A]^m[B]^n}=3^m\left(\frac{1}{2}\right)^n=\large\frac{3^m}{2^n}$

so the answer is (a).
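As a quick sanity check of this ratio, here is a tiny Python sketch (an addition of mine; the numeric values for k, the concentrations, and the orders are arbitrary test values, not from the problem):

k, A, B, m, n = 2.0, 0.5, 0.8, 2, 3   # arbitrary test values
r1 = k * A**m * B**n                  # original rate
r2 = k * (3 * A)**m * (B / 2)**n      # rate after tripling [A] and halving [B]
print(r2 / r1, 3**m / 2**n)           # both print 1.125, matching option (a)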
2017-02-23 09:25:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8521795868873596, "perplexity": 1604.332166808979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00080-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/distribution-of-charge-along-a-straight-line.538474/
# Distribution of Charge along a straight line

1. Oct 9, 2011

### Instinctlol

1. The problem statement, all variables and given/known data
A straight nonconducting plastic wire 8.50 cm long carries a charge density of +175 nC/m distributed uniformly along its length. It is lying on a horizontal table top. (a) Find the magnitude and direction of the electric field this wire produces at a point 7.00 cm directly above its midpoint.

Q = +175 nC/m (charge density)
L = .085 m (length of charge)
D = .060 m (distance from charge)
r^2 = x^2 + H^2

2. Relevant equations
dQ = (Q / L) dx (total charge over total distance)
dEy = k dQ/r^2 sin θ
sin θ = D/r
dEx = 0
k = electric constant

3. The attempt at a solution
Ey = kQD/L(x^2 + H^2) ∫ 1 / (x^2 + D^2)^(3/2)
Integrand from 0 to L = KQD/(D^2(D^2 + X^2)^(1/2))
The answer I got was 2.5 x 10^5 N/C. This is wrong because the charge is a density, C/m. I tried to multiply that answer by the total distance to cancel the m, but the answer still came out wrong. What did I do wrong here?

Last edited: Oct 9, 2011

2. Oct 9, 2011

### WJSwanson

If you don't mind, I'll first make sure I've got your procedure right by translating it into TeX (because for some reason my eyes don't deal well with ASCII equations and I want to make sure I'm helping you find the right errors). So I'm seeing $E_{y} = \int^{x = L/2}_{x = -L/2} \frac{\lambda D dx}{4\pi\epsilon_{0}(x^{2}+D^{2})^{3/2}} = \frac{\lambda D}{4\pi\epsilon_{0}} \int^{x = L/2}_{x = -L/2} \frac{dx}{(x^{2}+D^{2})^{3/2}}$ Is this what you had? Because if it is, I'm pretty sure you're on the right track. If not, can you see how I derived it? If you don't see it and it's not what you got, just let me know so I can go back and help you derive it step-by-step. Anyway, once you've got that you can solve the integral and evaluate it at its proper bounds and it should yield the correct answer. (You can simplify things substantially by taking advantage of the symmetry of the situation by setting the midpoint of the charged line segment at x = 0, thereby putting your bounds of integration to |x| <= L/2. Note also that by symmetry all of the horizontal components of the electric field vector will cancel each other.)

3. Oct 9, 2011

### Instinctlol

This is what I did: Ey = kQHdx ||||| D || (x2 + H2)3/2. My integral is from 0 to D. Since Q is a charge density and not the actual charge itself, what do I have to do to convert it so it fits the equation of E? Ignore the white lines, that's my bad attempt to make the equation look clear for you guys. Btw, how do you type like that?

4. Oct 9, 2011

### WJSwanson

What are you setting D equal to? The distance between x and the midpoint of the rod? Because from the picture you gave, when you integrate from 0 to D you're integrating an x-dependence over an interval that includes a y-value.

5. Oct 10, 2011

### Instinctlol

Sorry, I meant from 0 to L; I wrote one thing and drew another. I'm still confused by the given charge density; how do I turn that into a charge?

6. Oct 10, 2011

### WJSwanson

Okay, so your linear density of charge (which you've been representing as Q and I've been representing as $\lambda$) is, by definition, the charge per unit length through a differential element dx of the charged rod. Recall that the electric field is given by $E_{y} = \int\frac{dq sin\theta}{4\pi\epsilon_{0}r^{2}}$.
Because your differential charge dq is given by $dq = \lambda dx$ and the distance r between an arbitrary point x along the rod and the point (L/2, D) is given by $r^{2} = (x - L/2)^{2} + D^{2}$ and $sin\theta$ is given by $sin\theta = \frac{D}{r} = \frac{D}{\sqrt{(x - \frac{L}{2})^{2} + D^{2}}}$ the integral $E_{y} = \int\frac{dq sin\theta}{4\pi\epsilon_{0}r^{2}}$ becomes $E_{y} = \int^{L}_{0}\frac{\lambda D dx}{4\pi\epsilon_{0}( (x - \frac{L}{2})^{2} + D^{2})^{3/2}}$ which by symmetry about the midpoint $\frac{L}{2}$ actually just becomes $E_{y} = 2\int^{\frac{L}{2}}_{0}\frac{\lambda D dx}{4\pi\epsilon_{0}( (x - \frac{L}{2})^{2} + D^{2})^{3/2}} = \frac{\lambda D}{2\pi\epsilon_{0}}\int^{\frac{L}{2}}_{0}( (x-\frac{L}{2})^{2} + D^{2})^{\frac{-3}{2}}dx$ So the tl;dr version would simply be this: Using the notation $\lambda$ (or $Q$, depending on preference) $= \frac{q}{L}$ we see that $q = \lambda L (= QL)$. We can also conclude from the analytical definition of linear charge density that the differential charge (dq) through a differential length element (dx) is given by $Q = \lambda = \frac{dq}{dx} \Rightarrow dq = \lambda dx$ or $Q dx$ depending on the notation you prefer. You can verify this by integrating $\Sigma q = \int^{L}_{0}\lambda dx$ which yields $\Sigma q = \lambda L (= QL)$ which is what we should expect because the linear charge density is constant/uniform (or in this case simply x-invariant) -- and it does in fact conform to our previous derivation. This confirms the argument that you can use your linear charge density to impute the differential charge via the relationship $dq = \lambda dx$. 7. Oct 11, 2011 ### Instinctlol Ah, I see. Thank you very much!
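As a numerical footnote to the thread above (my own addition, not from the forum): the final integral is easy to evaluate with scipy, using the 7.00 cm distance quoted in the problem statement rather than the 0.060 m in the first post.

import math
from scipy.integrate import quad

lam = 175e-9      # linear charge density lambda, C/m
L = 0.085         # wire length, m
D = 0.07          # perpendicular distance above the midpoint, m
eps0 = 8.854e-12  # vacuum permittivity, F/m

# integrand of E_y with the wire on [0, L] and the field point at (L/2, D),
# exactly as in the integral derived above
def dEy(x):
    return lam * D / (4 * math.pi * eps0 * ((x - L / 2)**2 + D**2) ** 1.5)

Ey, _ = quad(dEy, 0, L)
print(Ey)  # roughly 2.3e4 N/C, directed straight up by symmetry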
2017-09-25 03:16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7911390662193298, "perplexity": 733.5885669497769}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690307.45/warc/CC-MAIN-20170925021633-20170925041633-00537.warc.gz"}
https://community.wolfram.com/groups/-/m/t/887456
# [WSS16] Visualizing Binaries for Low-level File-analysis Posted 2 years ago New file formats are emerging to encode data for storage, and our tools have to constantly play catch-up to support them. A file format specifies how the bits encode the information in the digital storage medium. There are no great ways of analyzing the information independently of structure: currently, most file analysis is done by working through hex editors, disassemblers, and debuggers. Though hex data is an exact representation of the data file, it is difficult to sift through. Humans are naturally good at processing spatial information, so our hope is to use Mathematica as a tool to analyze a file's binary much faster by translating it into some visual representation we can see. Given this, we want to explore different ways in which we can analyze files visually by transforming their binary values into 2D/3D spatial information, independent of file type. This is very useful in scenarios which require low-level analysis, such as... • Identify unknown file formats • Compare files • Analyze files for vulnerabilities • Locate and extract hidden content or metadata (forensic analysis) • Find keys or passwords in files (cryptanalysis) • Locate malicious code within files ## Byte View One simple method we can use to visualize files is coloring the corresponding byte values. This view is great for observing the byte distribution and file layout. We read in the files as a stream of 8-bit values, which gives us 256 different byte values. We read the contents of a file into Mathematica in binary by using BinaryReadList, which brings in the file data as a list of integers 0 to 255. We can then partition them into blocks and plot them with the default color values. For a better understanding of the byte distribution in a file, we can choose our own color scheme. So, let's begin by picking a color scheme to classify the different bytes by defining our own ColorFunction. This covers the common character classes in 8-bit ASCII encoding. Next, we want to visualize the stream of bytes as a 2D array where we can examine or zoom into parts of files by modifying the start point, end point, or block size using Dynamic and Slider, allowing us to adjust the parameters of what we enter in ArrayPlot and observe the result in real time. I produced the following plots by taking samples of different file extensions, such as our beloved Notepad++ editor executable (first). We notice right away that different formats have different byte-view characteristics in the samples above. For example, a PDF shows strips of red and blue colors, while a compressed file (ZIP) shows a homogeneous distribution of red, blue, and green colors. We also see a hidden image within the Notepad++ executable, which shows that this view may be useful in steganography analysis. ## Digraph Dot Plot View The dot plot view is a great way to determine the file format and visualize the byte distribution. We partition the stream of bytes into pairs with offset one using Partition. The plot also allows us to understand which sequences of bytes often appear next to each other. Let's take a close look at the digraph of an executable file below, plotting 10,000 bytes at a time. Mmmm....The Matrix. We easily observe that executable files show a variety of behaviors as we use Manipulate to adjust our starting point in the file.
We can then take blocks of the 2D digraph dot plot of the file and stack them on top of each other to give the layered digraph plot view. Here's a sample image with the exe file. We see different characteristics emerge when using different sample input files: different sequences of bytes generate different structures. ## Hilbert Curve View With the byte view, small-scale features that are only a few lines long tend to get lost. We can solve this by using the Hilbert curve. The Hilbert curve method maps the 1D sequence of bytes to a 2D image while preserving locality. We generate it by recursively creating curves using SubstitutionSystem with the encoded rules: "L" -> "+RF-LFL-FR+" and "R" -> "-LF+RFR+FL-". Here is a side-by-side comparison of the Hilbert curve view versus the byte view of an exe file. ### Calculating Entropy and Malware Detection We can use the Hilbert curve to visualize the local entropy of neighboring bytes in different files. Think of entropy as the degree to which a chunk of data is disordered. If we have a data set where all the elements have the same value, the amount of disorder is nil, and the entropy is zero. If the data set has the maximum amount of heterogeneity (i.e. all possible symbols are represented equally), then we also have the maximum amount of disorder, and thus the maximum amount of entropy. I calculated it using Shannon entropy over a sliding window of 128 bytes. High-entropy data are of special interest to reverse engineers and penetration testers. The first reason is compressed data: finding and extracting compressed sections is a common task in many security audits. The second is cryptographic material such as keys, certificates, and encrypted data. Malware analysis can also be carried out. Malware developers use obfuscation techniques that hide the malware's content from detection and analysis. This is done through bit manipulation or by using advanced cryptographic standards (e.g. DES, AES), which makes the binary data unreadable or hard to understand, so their programs escape scrutiny. But the obfuscation introduces high entropy in its area of residence. The image below was generated using a sample executable packed with a p2p virus. It's easy to find the location of the virus with this view: there's a well-defined block of high entropy where it resides. ## Open Problems Each file format shows a unique signature that can be easy for users to identify. There are many ways one can expand upon this project. One is the problem of automating the classification of files, especially when the format is unknown or the file is corrupted. We can look into the subtle differences between similar file formats or determine different markers to increase classification accuracy. We can further adjust and refine the code to be able to handle larger blocks of files. We can also improve malware detection using our entropy view. I hope to make the interface more interactive, including a real-time hex locator. There are other low-level analysis applications we can explore to improve the process. Posted 2 years ago Hi Angela, thank you for your interesting contribution! In particular I found the "Hilbert Curve View" most inspiring - and I could not resist trying it immediately. Regards -- Henrik
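As an addendum to the entropy discussion above: the sliding-window Shannon entropy is easy to reproduce outside Mathematica. Here is a minimal Python sketch of the same idea (an editorial addition, not from the original post; the 128-byte window matches the one mentioned above, and the file name is a placeholder):

import math
from collections import Counter

def window_entropy(chunk: bytes) -> float:
    # Shannon entropy of one window, in bits per byte
    # (0 = all bytes identical, 8 = maximal disorder)
    counts = Counter(chunk)
    n = len(chunk)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sliding_entropy(data: bytes, window: int = 128, step: int = 128) -> list:
    # entropy profile over consecutive windows; spikes hint at
    # compressed or encrypted (possibly obfuscated) regions
    return [window_entropy(data[i:i + window])
            for i in range(0, len(data) - window + 1, step)]

with open("sample.exe", "rb") as f:  # placeholder input file
    profile = sliding_entropy(f.read())
print(max(profile))                  # values close to 8 flag high-entropy blocks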
2018-12-18 23:18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37518513202667236, "perplexity": 1385.9650822393126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829997.74/warc/CC-MAIN-20181218225003-20181219011003-00339.warc.gz"}
http://www.mathewpeet.org/howto/rotate-postscript/
# Rotate a postscript file 90 degrees ## So useful it deserves its own page... I was experiencing problems including ps files inside my tex documents... this is a list of some of the solutions I've tried/used. I usually use gnuplot to generate graphs and have found that using the terminal types pstex or postscript eps usually gives satisfactory results (Method 7 below). Although some commands will rotate your postscript file, it may get rotated again when included in the tex document, making it wrong again; for this reason it is best to use encapsulated postscript so that you have a clue about the position of the bounding box. ## WARNING: These methods may not work for you today ### Method 1: in the tex file This is the way to do this when including an image inside a latex document; \begin{figure}[] \begin{center} \includegraphics[width=14cm]{relativepath/file1_90.ps} \caption{Hardness as a function of the isothermal transformation temperature.} \label{hardness_evol} \end{center} \end{figure} You can change the includegraphics statement to be \includegraphics[angle=90,width=14cm]{relativepath/file1_90.ps} This can cause all sorts of problems when pdflatex or latex decide to handle images differently, or you don't know enough about .sty files, like myself. Not that I am a style file. This method is reported to work with dvips but not with pdflatex ### Method 2: Image manipulation Open in image software, rotate image, save image. Pros: Works. Cons: Lossy and time consuming ### Method 3: convert or mogrify on the command line Read the manual pages for convert and/or mogrify. I used convert -rotate 90 file2.ps file2_90.ps. Pros: Works after you work out the command. Cons: Lossy; command options depend on system. ### Method 4: use postscript tools to rotate image on the command line Read the manual pages for ps, ps2ps, psnup; search the flipping web. I used psnup -l file1.ps file1_90.ps. Pros: Works. Cons: none. According to the forum post I read this command from, it doesn't work if you need to rotate more than 90 degrees, because it is changing a value in the ps file to make it landscape or portrait. psnup is a tool for printing multiple pages per sheet. The manual page says (amongst other things) this: The -l option should be used for pages which are in landscape orientation (rotated 90 degrees anticlockwise). The -r option should be used for pages which are in seascape orientation (rotated 90 degrees clockwise), and the -f option should be used for pages which have the width and height interchanged, but are not rotated. ### Method 5: kludgy resize for gnuplot/latex In gnuplot it is possible to generate the file in landscape/portrait mode, making the graph use the full page, and then resize using the 'height' and 'width' options when the image is included in the tex file. ### Method 6: use pstex for gnuplot The terminal pstex in gnuplot usually generates graphs that latex is happy to keep in the correct orientation; it also means the same fonts will be used for the graph as for the rest of the document, avoiding having them mangled and resized in unsightly ways. ### Method 7: use eps in gnuplot I found that rotating images is a particular problem for postscript files generated by gnuplot: after successfully rotating them, they are rotated back into the wrong orientation when compiling the latex document.
This problem can be avoided by using eps, which is done in gnuplot by specifying: set terminal postscript eps or set terminal postscript enhanced eps but not set terminal postscript enhanced ### Method 8: Convert to eps using ps2epsi - untested
2021-10-21 05:38:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335756063461304, "perplexity": 3620.2701954172044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00457.warc.gz"}
https://bibli.cirm-math.fr/listRecord.htm?list=link&xRecord=19229952157910471349
# Documents Godefroy, Gilles | 16 records found

## Perturbations of the holomorphic functional calculus: differential operators versus general sectorial operators

Portal, Pierre | CIRM. Post-edited. Research talks; Analysis and its Applications

Nigel Kalton played a prominent role in the development of a holomorphic functional calculus for unbounded sectorial operators. He showed, in particular, that such a calculus is highly unstable under perturbation: given an operator $D$ with a bounded functional calculus, fairly stringent conditions have to be imposed on a perturbation $B$ for $DB$ to also have a bounded functional calculus. Nigel, however, often mentioned that, while these results give a fairly complete picture of what is true at a pure operator theoretic level, more should be true for special classes of differential operators. In this talk, I will briefly review Nigel's general results before focusing on differential operators with perturbed coefficients acting on $L_p(\mathbb{R}^{n})$. I will present, in particular, recent joint work with D. Frey and A. McIntosh that demonstrates how stable the functional calculus is in this case. The emphasis will be on trying, as suggested by Nigel, to understand what makes differential operators so special from an operator theoretic point of view.

## Coarse dimension reduction

Naor, Assaf | CIRM. Post-edited. Research talks; Analysis and its Applications

## Two-weight inequalities meet $R$-boundedness

Hytönen, Tuomas P. | CIRM. Multi angle. Research talks; Analysis and its Applications

One of my recent main interests has been the characterization of boundedness of (integral) operators between two $L^p$ spaces equipped with two different measures. Some recent developments have indicated a need of "Banach spaces and their applications" also in this area of Classical Analysis. For instance, while the theory of two-weight $L^2$ inequalities is already rich enough to deal with a number of singular operators (like the Hilbert transform), the $L^p$ theory has been essentially restricted to positive operators so far. In fact, a counterexample of F. Nazarov shows that the common "Sawyer testing" or "David-Journé $T(1)$" type characterization will fail, in general, in the two-weight $L^p$ world. What comes to the rescue is what we so often need to save the $L^2$ results in an $L^p$ setting: $R$-boundedness in place of boundedness! Even in the case of positive operators, it turns out that a version of "sequential boundedness" is useful to describe the boundedness of operators from $L^p$ to $L^q$ when $q < p$. This is about my recent joint work with T. Hänninen and K. Li, as well as the work of my student E. Vuorinen.

Keywords: two-weight inequalities - boundedness - singular operators

## The story of Kalton's last unpublished paper

Castillo, Jesús M.F. | CIRM. Multi angle. Research talks; Analysis and its Applications

I'd like to share with the audience the Kaltonian story behind [1], started in 2004, including the problems we wanted to solve, and could not. In that paper we show that Rochberg's generalized interpolation spaces $\mathbb{Z}^{(n)}$ [5] can be arranged to form exact sequences $0\to\mathbb{Z}^{(n)}\to\mathbb{Z}^{(n+k)}\to\mathbb{Z}^{(k)} \to 0$. In the particular case of Hilbert spaces obtained from the interpolation scale of $\ell_p$ spaces, $\mathbb{Z}^{(2)}$ becomes the well-known Kalton-Peck $Z_2$ space, and one gets from here that there are quite natural nontrivial twisted sums $0\to Z_2\to\mathbb{Z}^{(4)}\to Z_2 \to0$ of $Z_2$ with itself. The twisted sum space $\mathbb{Z}^{(4)}$ does not embed in, and is not a quotient of, a twisted Hilbert space and does not contain $\ell_2$ complemented. We will also construct another nontrivial twisted sum of $Z_2$ with itself that contains $\ell_2$ complemented. These results have some connection with the nowadays so-called Kalton calculus [3, 4], and thus several recent advances [2] in this theory that combines twisted sums and interpolation theory will be shown.

Keywords: Banach space - twisted sum - complex interpolation - Hilbert space

## The Daugavet equation for Lipschitz operators

Werner, Dirk | CIRM. Multi angle. Research talks; Analysis and its Applications

We study the Daugavet equation $\parallel Id+T\parallel$ $=1$ $+$ $\parallel T\parallel$ for Lipschitz operators on a Banach space. For this we introduce a substitute for the concept of slice for the case of non-linear Lipschitz functionals and transfer some results about the Daugavet and the alternative Daugavet equations previously known only for linear operators to the non-linear case.

Keywords: numerical radius - numerical index - Daugavet equation - Daugavet property - SCD space - Lipschitz operator

## Multi-norms and Banach lattices

Dales, H. Garth | CIRM. Multi angle. Research talks; Analysis and its Applications

I shall discuss the theory of multi-norms. This has connections with norms on tensor products and with absolutely summing operators. There are many examples, some of which will be mentioned. In particular we shall describe multi-norms based on Banach lattices, define multi-bounded operators, and explain their connections with regular operators on lattices. We have new results on the equivalences of multi-norms. The theory of decompositions of Banach lattices with respect to the canonical 'Banach-lattice multi-norm' has a pleasing form because of a substantial theorem of Nigel Kalton that I shall state and discuss. I shall also discuss briefly a generalization that gives 'p-multi-norms' (for $1\leq p\leq\infty$) and an extension of a representation theorem of Pisier that shows that many p-multi-norms are 'sous-espaces de treillis'. The theory is based on joint work with Maxim Polyakov (deceased), Hung Le Pham (Wellington), Matt Daws (Leeds), Paul Ramsden (Leeds), Oscar Blasco (Valencia), Niels Laustsen (Lancaster), Timur Oikhberg (Illinois), and Vladimir Troitsky (Edmonton).

Keywords: multi-norms - equivalences - absolutely summing operators - tensor products

## Ideals in $L(L_p)$

Johnson, William B. | CIRM. Multi angle. Research talks; Analysis and its Applications

I'll discuss the Banach algebra structure of the spaces of bounded linear operators on $\ell_p$ and $L_p$ := $L_p(0, 1)$. The main new results are 1. The only non trivial closed ideal in $L(L_p)$, 1 $\leq$ p < $\infty$, that has a left approximate identity is the ideal of compact operators (joint with N. C. Phillips and G. Schechtman). 2. There are infinitely many (in fact, a continuum of) closed ideals in $L(L_1)$ (joint with G. Pisier and G. Schechtman). The second result answers a question from the 1978 book of A. Pietsch, "Operator ideals".

## On a difference between two methods of low-distortion embeddings of finite metric spaces into non-superreflexive Banach spaces

Randrianantoanina, Beata | CIRM. Multi angle. Research talks; Analysis and its Applications; Geometry

In a recent paper, the speaker and M.I. Ostrovskii developed a new metric embedding method based on the theory of equal-signs-additive (ESA) sequences developed by Brunel and Sucheston in the 1970s. This method was used to construct bilipschitz embeddings of diamond and Laakso graphs with an arbitrary finite number of branches into any non-superreflexive Banach space with a uniform bound on distortions that is independent of the number of branches. In this talk we will outline a proof that the above mentioned embeddability results cannot be obtained using the embedding method which was used for trees by Bourgain (1986) and for binary branching diamonds and Laakso graphs by Johnson and Schechtman (2009), and which is based on a classical James' characterization of superreflexivity (the factorization between the summing basis and the unit vector basis of $\ell_1$). Our proof uses a "self-improvement" argument and the Ramsey theorem. Joint work with M.I. Ostrovskii.

## Finite quotients in coarse geometry

Khukhro, Anastasia | CIRM. Multi angle. Research talks; Analysis and its Applications; Geometry

The study of groups often sheds light on problems in various areas of mathematics. Whether playing the role of certain invariants in topology, or encoding symmetries in geometry, groups help us understand many mathematical objects in greater depth. In coarse geometry, one can use groups to construct examples or counterexamples with interesting or surprising properties. In this talk, we will introduce one such metric object arising from finite quotients of finitely generated groups, and survey some of its useful properties and associated constructions.

## $L^2$ spectral gap and group actions on Banach spaces

de la Salle, Mikael | CIRM. Multi angle. Research talks; Analysis and its Applications; Geometry

Exploring the relations between algebraic and geometric properties of a group and the geometry of the Banach spaces on which it can act is a fascinating program, still widely mysterious, and which is tightly connected to coarse embeddability of graphs into Banach spaces. I will present a recent contribution, joint with Tim de Laat, where we give a spectral (hilbertian) criterion for fixed point properties on uniformly curved Banach spaces.

## Smoothness and renormings in Banach spaces

Deville, Robert; Godefroy, Gilles; Zizler, Václav | Longman Scientific And Technical, 1993. Book, 376 p. ISBN 978-0-582-07250-3. Pitman Monographs and Surveys in Pure and Applied Mathematics, 0064. Location: Ouvrage RdC (DEVI). Keywords: Banach space # normed linear space # geometry and structure of normed linear spaces # smoothness

## L'aventure des nombres

Godefroy, Gilles | Editions Odile Jacob, 1997. Book, 237 p. ISBN 978-2-7381-0422-9. Location: Ouvrage RdC (GODE). Keywords: Archimedes # Babylon # Cantor # Copernicus # Galileo # Kepler # algebra and algorithms # axioms of arithmetic # axioms of set theory # number bases # sticks and stones # geometry # the hand # polynomials # quaternions # Fibonacci sequence # number theory

## The adventure of numbers

Godefroy, Gilles; Kay, Leslie | American Mathematical Society, 2004. Book, 194 p. ISBN 978-0-8218-3304-9. Mathematical World, 0021. Location: Collection 1er étage. Keywords: history of numbers # history of mathematics # Fibonacci sequence # number theory # algorithms # quaternions # geometry

## Leçons de mathématiques d'aujourd'hui. Vol. 2

Godefroy, Gilles; Girard, Jean-Yves; Tenebaum, Gérald; Morain, François; Waldschmidt, Michel; David, Guy; Bardos, Claude; Karoubi, Max; Fontaine, Jean-Marc; Hindry, Marc; Raynaud, Michel; Keane, Michael | Cassini, 2003. Book, 360 p. ISBN 978-2-84225-058-4. Le sel et le fer, 0012. Location: Ouvrage RdC (LECO). Keywords: history of mathematics # proof theory # Hilbert's program # linear logic # integers # cryptology # modular functions # transcendence # rectifiable sets # controllability # topology # differential forms # p-adic numbers # Galois representations # Diophantine equations # algebraic curves # fundamental group # random walks

## Les mathématiques mode d'emploi

Godefroy, Gilles | Odile Jacob, 2011. Book, 238 p. ISBN 978-2-7381-2322-0. Sciences. Location: Loisir RdC. Keywords: philosophy of mathematics # history of mathematics # figures # logic # theorems. MSC: 01Axx

## Twelve landmarks of twentieth-century analysis

Choimet, D.; Queffélec, H.; Monerau, Michaël; Gibbons, Danièle; Gibbons, Greg; Godefroy, Gilles | Cambridge University Press, 2015. Book, xv; 508 p. ISBN 978-1-107-65034-3. Location: Ouvrage RdC (CHOI). Keywords: Tauberian theorem # prime number theorem # nowhere differentiable monotone function # Banach-Tarski paradox # theta function # character sums # exponential sums # Littlewood conjecture # corona # Banach algebra # complemented subspaces
2019-08-20 15:41:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6444038152694702, "perplexity": 3243.4897221788515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00367.warc.gz"}
https://tex.stackexchange.com/questions/444536/fixing-the-same-height-for-multipart-rectangle
Fixing the same height for multipart rectangle

I am not able to fix the height of both parts of a (vertical) two-part rectangle to the same height. As a result, the arrow to the box on the right is not aligned with the middle line. I have been reading but have not been able to resolve it. Is there a way? Here is my MWE:

\documentclass[margin=3mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{
arrows %
,positioning %
,shapes%
,shapes.multipart%
}
\begin{document}
\begin{tikzpicture}
\node[rectangle split, draw, rectangle split parts=2, align=center] (box1)
  {Application
   \nodepart[text width=3cm]{two} GFS Client};
\node[rectangle, draw, align=center, right=3.5cm of box1] (box2) {Master};
\draw[->] (box1) -- (box2) node[midway, above] {Get chunk location};
\end{tikzpicture}
\end{document}

Here is what comes out:

In this particular case, the two parts of the multipart rectangle don't have the same height, because Application contains a letter with a descender (the p) which is not present in GFS Client. You can force them to be equal by inserting a \strut command in both texts.

\documentclass[margin=3mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{
arrows %
,positioning %
,shapes%
,shapes.multipart%
}
\begin{document}
\begin{tikzpicture}
\node[rectangle split, draw, rectangle split parts=2, align=center] (box1)
  {Application\strut
   \nodepart[text width=3cm]{two} GFS Client\strut};
\node[rectangle, draw, align=center, right=3.5cm of box1] (box2) {Master};
\draw[->] (box1) -- (box2) node[midway, above] {Get chunk location};
\end{tikzpicture}
\end{document}

• Thank you for this -- I now learned another way to do this. – ozsu Aug 4 '18 at 18:57
• @ozsu I was actually also thinking about using \strut but was deciding against it because this wastes some space. That's why I was using a \vphantom{p}.... – user121799 Aug 4 '18 at 19:08

I would actually not try to make the node parts equally high (even though this can be done), but just let the arrow start at the text split.

\documentclass[margin=3mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{
arrows %
,positioning %
,shapes%
,shapes.multipart%
}
\begin{document}
\begin{tikzpicture}
\node[rectangle split, draw, rectangle split parts=2, align=center] (box1)
  {Application
   \nodepart[text width=3cm]{two} GFS Client};
\node[rectangle, draw, align=center, right=3.5cm of box1.text split east] (box2) {Master};
\draw[->] (box1.text split east) -- (box2) node[midway, above] {Get chunk location};
\end{tikzpicture}
\end{document}

But if you really need the heights to coincide, just call a phantom.

\documentclass[margin=3mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{
arrows %
,positioning %
,shapes%
,shapes.multipart%
}
\begin{document}
\begin{tikzpicture}
\node[rectangle split, draw, rectangle split parts=2, align=center] (box1)
  {Application
   \nodepart[text width=3cm]{two} \vphantom{p}GFS Client};
\node[rectangle, draw, align=center, right=3.5cm of box1] (box2) {Master};
\draw[->] (box1) -- (box2) node[midway, above] {Get chunk location};
\end{tikzpicture}
\end{document}

• Wonderful, many thanks. I did not know either option and am very glad to learn. – ozsu Aug 3 '18 at 23:23
• Defining a text width for a node part makes no sense in vertically split rectangles; in them, all parts have equal width ... – Zarko Aug 3 '18 at 23:23
• @Zarko I agree, thanks, didn't look carefully. On the other hand, it really does not hurt here either. – user121799 Aug 3 '18 at 23:24
2019-10-23 15:07:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7830040454864502, "perplexity": 8743.111667291336}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00448.warc.gz"}
https://www.storyofmathematics.com/let-f-x-y-z-xi-yj-zk-evaluate-the-integral-of-f-along-each-of-the-following-paths/
# Let F(x, y, z)=xi+yj+zk. Evaluate the integral of F along each of the following paths. $c(t)=(t,t,t), \space 0 \le t \le 3 \space$

The aim of this question is to evaluate the line integral of the given vector field $F (x, y, z) = xi + yj + zk$ along the path $c(t)$ by first forming the dot product $F(c(t)) \cdot c'(t)$ and then integrating it over the given limits.

The basic concepts behind this question are integration, limits of integration, derivatives, and standard integration rules such as the power rule.

Given: the field $F (x, y, z) = xi + yj + zk$ is to be evaluated along the indicated path $c ( t ) = ( t, t, t ) | \space 0 \le t \le 3 \space$, so the limits of integration run from $0$ to $3$, represented as $\int_{ 0 }^{ 3 }$.

To find the value of the line integral of $F$, we differentiate the path $c(t) = (t, t, t)$ with respect to $t$:

$\dfrac{ dc }{ dt } = ( 1, 1, 1 )$

Substituting into the line integral and noting that $F(c(t)) \cdot c'(t) = (t, t, t) \cdot (1, 1, 1) = 3t$:

$\int_{0}^{3} F (t, t, t) \cdot \dfrac{dc}{ dt}\, dt =\int_{0}^{3} (t, t, t) \cdot ( 1, 1, 1 )\, dt =\int_{0}^{3} 3t \, dt$

$= 3 \left[ \dfrac{ t^2 }{ 2 } \right]_{0}^{3}$

Putting the limits of $t$ into the above equation:

$= 3 \left[ \dfrac{ (3)^2 }{ 2 } - \dfrac{ (0)^2 }{ 2 } \right] = 3 \left[ \dfrac{ 9 }{ 2 } \right] = \dfrac{ 27 }{ 2 }$

## Numerical Result

The integral of $F$ evaluated along the given path is:

$= \dfrac{ 27 }{ 2 }$

## Example

Find the value of the line integral of $F(t, t, t)$ along the path $c(t)={ t, t, t } , \space 0 \le t \le 2$.

Solution

$=\int_{0}^{2} F (t, t, t) \cdot \dfrac{dc}{ dt}\, dt =\int_{0}^{2} (t, t, t) \cdot ({ 1, 1, 1 })\, dt =\int_{0}^{2} 3t \, dt$

$=3\left[\dfrac{t^2}{2}\right]_{0}^{2} =3\left[\dfrac{2^2}{ 2} - \dfrac{0^2}{ 2}\right] =3\left[\dfrac{4}{ 2}\right] =6$
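As a cross-check of both results, here is a small sympy sketch (an editorial addition, not part of the original solution) that evaluates the line integral symbolically:

import sympy as sp

t = sp.symbols('t')
c = sp.Matrix([t, t, t])          # the path c(t) = (t, t, t)
F = c                             # F(x, y, z) = (x, y, z) evaluated along the path
integrand = F.dot(c.diff(t))      # F(c(t)) . c'(t) = 3t
print(sp.integrate(integrand, (t, 0, 3)))  # 27/2, the main result
print(sp.integrate(integrand, (t, 0, 2)))  # 6, the example at the end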
2023-03-31 12:38:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771313071250916, "perplexity": 222.58953719110087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00291.warc.gz"}
http://booksiread.org/pdf/huygens-principle-and-hyperbolic-equations-perspectives-in-mathematics/
Huygens Principle And Hyperbolic Equations Author: Paul Günther ISBN: 1483262227 Size: 31.55 MB Format: PDF, ePub, Docs View: 5047 Huygens' Principle and Hyperbolic Equations is devoted to certain mathematical aspects of wave propagation in curved space-times. The book aims to present special nontrivial Huygens' operators and to describe their individual properties and to characterize these examples of Huygens' operators within certain more or less comprehensive classes of general hyperbolic operators. The materials covered in the book include a treatment of the wave equation for p-forms over a space of constant sectional curvature, the Riesz distributions, the Euler-Poisson-Darboux equations over a Riemannian manifold, and plane wave manifolds. Physicists will find the book invaluable. Classical And Quantum Systems Foundations And Symmetries Proceedings Of The 2nd International Wigner Symposium Author: Doebner Heinz-dietrich Publisher: World Scientific ISBN: 9814554391 Size: 51.66 MB Format: PDF, ePub, Docs View: 6092 The Wigner Symposium series is focussed on fundamental problems and new developments in physics and their experimental, theoretical and mathematical aspects. Particular emphasis is given to those topics which have developed from the work of Eugene P Wigner. The 2nd Wigner symposium is centered around notions of symmetry and geometry, the foundations of quantum mechanics, quantum optics and particle physics. Other fields like dynamical systems, neural networks and physics of information are also represented. This volume brings together 19 plenary lectures which survey latest developments and more than 130 contributed research reports. Handbook Of Global Analysis Author: Demeter Krupka Publisher: Elsevier ISBN: 9780080556734 Size: 63.73 MB Format: PDF, ePub, Docs View: 1774 This is a comprehensive exposition of topics covered by the American Mathematical Society's classification "Global Analysis", dealing with modern developments in calculus expressed using abstract terminology. It will be invaluable for graduate students and researchers embarking on advanced studies in mathematics and mathematical physics. This book provides comprehensive coverage of modern global analysis and geometrical mathematical physics, dealing with topics such as: structures on manifolds, pseudogroups, Lie groupoids, and global Finsler geometry; the topology of manifolds and differentiable mappings; differential equations (including ODEs, differential systems and distributions, and spectral theory); variational theory on manifolds, with applications to physics; function spaces on manifolds; jets, natural bundles and generalizations; and non-commutative geometry. - Comprehensive coverage of modern global analysis and geometrical mathematical physics - Written by world-experts in the field - Up-to-date contents Advances In Algebraic Quantum Field Theory Author: Romeo Brunetti Publisher: Springer ISBN: 3319213539 Size: 60.74 MB Format: PDF, ePub View: 7373 This text focuses on the algebraic formulation of quantum field theory, from the introductory aspects to the applications to concrete problems of physical interest. The book is divided into thematic chapters covering both introductory and more advanced topics.
These include the algebraic, perturbative approach to interacting quantum field theories, algebraic quantum field theory on curved spacetimes (from its structural aspects to the applications in cosmology and to the role of quantum spacetimes), algebraic conformal field theory, Kitaev's quantum double model from the point of view of local quantum physics, and constructive aspects in relation to integrable models and deformation techniques. The book is addressed to master and graduate students, both in mathematics and in physics, who are interested in learning the structural aspects and the applications of algebraic quantum field theory. The Quantization Of Gravity Author: Claus Gerhardt Publisher: Springer ISBN: 3319773712 Size: 40.80 MB Format: PDF, ePub View: 3610 A unified quantum theory incorporating the four fundamental forces of nature is one of the major open problems in physics. The Standard Model combines electro-magnetism, the strong force and the weak force, but ignores gravity. The quantization of gravity is therefore a necessary first step to achieve a unified quantum theory. In this monograph a canonical quantization of gravity has been achieved by quantizing a geometric evolution equation, resulting in a gravitational wave equation in a globally hyperbolic spacetime. Applying the technique of separation of variables we obtain eigenvalue problems for temporal and spatial self-adjoint operators, where the temporal operator has a pure point spectrum with eigenvalues $\lambda_i$ and related eigenfunctions, while, for the spatial operator, it is possible to find corresponding eigendistributions for each of the eigenvalues $\lambda_i$, if the Cauchy hypersurface is asymptotically Euclidean or if the quantized spacetime is a black hole with a negative cosmological constant. The hyperbolic equation then has a sequence of smooth solutions which are products of temporal eigenfunctions and spatial eigendistributions. Due to this "spectral resolution" of the wave equation, quantum statistics can also be applied to the quantized systems. These quantum statistical results could help to explain the nature of dark matter and dark energy. Bulletin Of The American Mathematical Society Author: Publisher: ISBN: Size: 12.65 MB Format: PDF, ePub, Mobi View: 7245 Algebraic And Analytic Methods In Representation Theory Author: Publisher: Elsevier ISBN: 0080526950 Size: 16.13 MB Format: PDF, Kindle View: 2325 This book is a compilation of several works from well-recognized figures in the field of Representation Theory. The presentation of the topic is unique in offering several different points of view, which should make the book very useful to students and experts alike. Presents several different points of view on key topics in representation theory, from internationally known experts in the field Bulletin (New Series) Of The American Mathematical Society Author: Publisher: ISBN: Size: 64.37 MB Format: PDF, Docs View: 6344 Zeitschrift für Analysis und ihre Anwendungen Author: Publisher: ISBN: Size: 47.96 MB Format: PDF, Kindle View: 1171 Subject Guide To Books In Print Author: Publisher: ISBN: Size: 20.38 MB Format: PDF, ePub, Mobi View: 7365
2019-01-21 11:54:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42304742336273193, "perplexity": 1343.0406625561966}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583792338.50/warc/CC-MAIN-20190121111139-20190121133139-00156.warc.gz"}
https://www.physicsforums.com/threads/a-max-min-problem.69775/
# A max/min problem. 1. Apr 3, 2005 ### ArnfinnS hi... i need to find the max and min points of the function f(x,y) = x^2*y*e^(-x^2 - 2y^2) for (x,y) in R^2 here is what i tried : i found the partial derivatives : f_x = 2x*y*e^(-x^2 - 2y^2) + x^2*y*e^(-x^2 -2y^2)*(-2x) and f_y = x^2*y*(-4y)*e^(-x^2 - 2y^2) i see that those partials equal 0 at the point (0,0). is this the only stationary point here? what is the max, and what is the minimum? can anyone help me? 2. Apr 3, 2005 ### Zurtex I assume you mean the points where the plane tangent to the surface is parallel to the x-y plane; would this be where Fxy = 0? I've not quite got that far on my calc course. Edit: oh by the way my graph below is a picture of what the general equation looks like, #### Attached Files: • ###### Clipboard02.jpg File size: 62.9 KB Views: 54 Last edited: Apr 3, 2005 3. Apr 3, 2005 ### SpaceTiger Staff Emeritus Alright, this'll be a bit easier to read in Latex: $$f(x,y)=x^2ye^{-x^2-2y^2}$$ Alright, here's a first problem. You have: $$f_x = 2xye^{-x^2 - 2y^2} - 2x^3ye^{-x^2 -2y^2}$$ $$f_y = -4x^2y^2e^{-x^2 - 2y^2}$$ The partial with respect to x looks fine, but you missed a term in the y partial. It should be: $$f_y = x^2e^{-x^2 - 2y^2}-4x^2y^2e^{-x^2 - 2y^2}$$ To find the critical points, you just set these to zero. The exponentials never vanish, so factoring them out (together with the common factors of x and y, assuming these are nonzero) eliminates those pesky exponentials and leaves: $$-x^2+1=0$$ $$-4y^2+1=0$$ This will give you your critical points. I'll leave that part to you. Now, if you want to classify them, you have to calculate the second derivatives at your critical points and see if they're positive or negative in each direction. Do you know how to do that? Last edited: Apr 3, 2005
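For readers who want to finish the classification SpaceTiger sketches, here is a minimal sympy check (our addition, not part of the thread):

```python
# Critical points of f(x,y) = x^2 * y * exp(-x^2 - 2*y^2).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 * y * sp.exp(-x**2 - 2*y**2)
fx, fy = sp.diff(f, x), sp.diff(f, y)

# The exponential never vanishes; besides the degenerate line x = 0
# (where f = 0), the candidates come from 1 - x^2 = 0 and 1 - 4y^2 = 0.
half = sp.Rational(1, 2)
for cx in (1, -1):
    for cy in (half, -half):
        assert fx.subs({x: cx, y: cy}) == 0
        assert fy.subs({x: cx, y: cy}) == 0
        print((cx, cy), f.subs({x: cx, y: cy}))
# f = +exp(-3/2)/2 at (+-1, 1/2) (global maxima) and
# f = -exp(-3/2)/2 at (+-1, -1/2) (global minima), since f -> 0 at infinity.
```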
2017-01-20 05:55:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7461891174316406, "perplexity": 1172.1591235848193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00512-ip-10-171-10-70.ec2.internal.warc.gz"}
http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/1104.html
### Image Transformation Time Limit: 1000 ms Memory Limit: 65536 KiB #### Problem Description The image stored on a computer can be represented as a matrix of pixels. In the RGB (Red-Green-Blue) color system, a pixel can be described by a triple of integers. That is, the color of a pixel is in the format "r g b", where r, g and b are integers ranging from 0 to 255 (inclusive) which represent the Red, Green and Blue levels of that pixel. Sometimes, however, we may need a gray picture instead of a colorful one. One of the simplest ways to transform an RGB picture into gray: for each pixel, we set the Red, Green and Blue levels to a same value, which is usually the average of the Red, Green and Blue levels of that pixel (that is (r + g + b)/3; here we assume that the sum of r, g and b is always divisible by 3). You decide to write a program to test the effectiveness of this method. #### Input The input contains multiple test cases! Each test case begins with two integer numbers N and M (1 <= N, M <= 100) meaning the height and width of the picture; then three N * M matrices follow, which respectively represent the Red, Green and Blue levels of each pixel. A line with N = 0 and M = 0 signals the end of the input, and should not be processed. #### Output For each test case, output "Case #:" first, where "#" is the number of the case, starting from 1. Then output a matrix of N * M integers which describe the gray levels of the pixels in the resultant grayed picture. There should be N lines, each with M integers separated by commas. #### Sample Input 2 2 1 4 6 9 2 5 7 10 3 6 8 11 2 3 0 1 2 3 4 2 0 1 2 3 4 3 0 1 2 3 4 4 0 0 #### Sample Output Case 1: 2,5 7,10 Case 2: 0,1,2 3,4,3 zoj2857
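As an illustration of the core transformation (our sketch, not an official judge solution), the gray level is just the element-wise average of the three matrices:

```python
# Grayscale transformation for one test case: gray = (r + g + b) / 3,
# assuming (as the statement guarantees) the sum is divisible by 3.
def to_gray(r, g, b):
    return [[(rv + gv + bv) // 3 for rv, gv, bv in zip(rr, gr, br)]
            for rr, gr, br in zip(r, g, b)]

r = [[1, 4], [6, 9]]
g = [[2, 5], [7, 10]]
b = [[3, 6], [8, 11]]
for row in to_gray(r, g, b):
    print(','.join(map(str, row)))  # prints "2,5" then "7,10" (Case 1)
```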
2019-02-18 15:30:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5569559931755066, "perplexity": 509.59960349031235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00183.warc.gz"}
https://hal.inria.fr/inria-00259321
# Trajectory planning in a dynamic workspace: a 'state-time space' approach 1 SHARP - Automatic Programming and Decisional Systems in Robotics GRAVIR - IMAG - Graphisme, Vision et Robotique, Inria Grenoble - Rhône-Alpes Abstract : This paper presents a control architecture endowing a car-like vehicle moving in a dynamic and partially known environment with autonomous motion capabilities. Like most recent control architectures for autonomous robot systems, it combines three functional components: a set of basic real-time skills, a reactive execution mechanism and a decision module. The main novelty of the architecture proposed lies in the introduction of a fourth component akin to a meta-level of skills: the sensor-based manoeuvres, i.e. general templates that encode high-level expert human knowledge and heuristics about how a specific motion task is to be performed. The concept of sensor-based manoeuvres makes it possible to reduce the planning effort required to address a given motion task, thus improving the overall response time of the system, while retaining the good properties of a skill-based architecture, i.e. robustness, flexibility and reactivity. The paper focuses on the trajectory planning function (which is an important part of the decision module) and two types of sensor-based manoeuvres, trajectory following and parallel parking, which have been implemented and successfully tested on a real automatic car-like vehicle placed in different situations. Document type: Journal article. Advanced Robotics, Taylor & Francis, 1999, 13 (1) Domain: https://hal.inria.fr/inria-00259321 Contributor: Thierry Fraichard <> Submitted on: Wednesday, February 27, 2008 - 15:19:33 Last modified on: Thursday, February 28, 2008 - 11:12:28 Document(s) archived on: Thursday, May 20, 2010 - 20:00:07 ### File 99-fraichard-rsjar.pdf Files produced by the author(s) ### Identifiers • HAL Id: inria-00259321, version 1 ### Citation Thierry Fraichard. Trajectory planning in a dynamic workspace: a 'state-time space' approach. Advanced Robotics, Taylor & Francis, 1999, 13 (1). <inria-00259321> Record views ## 236 Document downloads
2017-07-26 06:38:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2686891555786133, "perplexity": 8669.036722182353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426050.2/warc/CC-MAIN-20170726062224-20170726082224-00419.warc.gz"}
http://www.physicsforums.com/showthread.php?t=226877
## How Many Distinct Invariants of the Poincaré Group Poincaré lists 8 distinct but elementary invariants in his paper, ON THE DYNAMICS OF THE ELECTRON. See equations number 5 and 7 in http://www.univ-nancy2.fr/poincare/bhp/pdf/hp2007gg.pdf How many invariants in special relativity are you aware of? How many distinct invariants of the Poincaré group exist? And how many distinct invariants of the Poincaré group can you derive? This is how mathematicians measure the understanding of physicists in spacetime. I quote: "Every geometry is defined by a group of transformations, and the goal of every geometry is to study invariants of this group." Klein, Erlanger Program. "Each type of geometry is the study of the invariants of a group of transformations; that is, the symmetry transformation of some chosen space." Stewart and Golubitsky 1993, p. 44. "A geometry is defined by a group of transformations, and investigates everything that is invariant under the transformations of this given group." Weyl 1952, p. 133. "The geometry of Minkowski space is defined by the Poincaré group." http://www.everythingimportant.org/r...eneralized.htm Shubee I do not know if a group has invariants a priori. Only when you assume that physics is invariant under the transformations of a group do the group and the physics yield invariants. Each observable corresponds to a unitary operator that represents the transformations for that observable. For example, if you assume Poincaré invariance in a four-dimensional space, then you get 10 operators. Four of them correspond to the four translations, and the corresponding invariance is the conservation of the four-momentum. The other six operators are the six rotations in the four-space. These represent the angular momentum conservation, and some boosts representing rotations around the time axis. All the above is common knowledge. The Poincaré group is a Lie group. Interesting physics might come out if we discretize the Poincaré group somehow. Weinberg in his Quantum Theory of Fields, Volume 2, shows how the Lorentz group can be mapped to a discrete SL(2,C) group. He constructs four-vectors from the 2x2 Möbius transformations in SL(2,C). Lorentz group invariance is completely equivalent to Special Relativity. The Poincaré group is larger.
On Apr 4, 12:44 pm, Shubee wrote: > How many invariants in special relativity are you aware of? How many distinct invariants of the Poincaré group exist? Ten. It's a ten-dimensional group - four translations, three rotations, three boosts. For readers who aren't relativists, a 'boost' is a translation in momentum space - a change in the 'velocity of the frame' (or better, the rapidity). Boosts commute with rotations around the direction of the boost (a 'screw transformation') but not with other rotations. So boosts and rotations mix, so take the 'types' of invariant listed below with a grain of salt. The invariants are: energy-momentum (four components, derived from translations), angular momentum (three components, derived from rotations - but note that boosts and rotations mix), "initial position of the center of mass" (three components, derived from boosts, but see above). >And how many can you derive? You mean derive from the generators of the transformations? All of them... I'm not sure if you're asking if they /can be/ derived, or if I personally can derive them - English has an ambiguity here. Do you mean you want to see the derivations from the infinitesimal generators? John Eristu wrote: > I do not know if a group has invariants a priori. Poincare' has 2 invariants. The Lie algebra is given by {J(u), J(v) + K(t) + P(b)} = J(uxv) + K(uxt) + P(uxb) {K(s), K(t) + P(b)} = -(1/c)^2 J(sxt) + (s.b) H/c^2 {P(a), P(b)} = 0 {J(u) + K(s) + P(a), H} = P(s) where components are denoted as J(u) = J.u = J_i u^i, for convenience. The invariants are M^2 - P.P/c^2 and |MJ + P x K|^2 - (1/c)^2 (P.J)^2, where M = H/c^2. The non-linear (smooth) functional invariants are those residing in the functional algebra C^{infinity}(L**), where L is the Lie algebra of the Poincare' group and L* its dual and L** its double dual. This is a Poisson manifold endowed with the bracket {f,g} = sum e_c f^c_{ab} df/de_a dg/de_b where (e_a: a=1,...,10) is the basis of L** and the structure coefficients are given by [Y_a,Y_b] = f^c_{ab} Y_c, where (Y_a: a = 1,..,10) is the basis of L. An invariant f is then whatever satisfies the relation {e_a, f} = 0 for all the e_a. This gives you a set of linear differential equations. Solving them yields the invariants. If you restrict focus to the subset of equations {p(v), f} = 0; {h, f} = 0 where p(v) and h are the elements of L** corresponding respectively to P(v) and H in L, then you obtain the functional subspace that commutes with {p_1,p_2,p_3,h}. This is just the functional subspace spanned by {p_1,p_2,p_3,h} themselves, as well as the components of the vector w = mj + pxk (m = h/c^2) and w_0 = p.j. A Poincare' invariant, f, can therefore only be a function of p, h, w and w_0. It's whatever functions of these 8 arguments have 0 Poisson bracket with j(v) and k(s) (i.e., the elements of L** that correspond respectively to J(v) and K(s)). That's 2 sets of differential equations (3 each, 6 in all). They're not hard to write out and solve.
I essentially did the whole exercise here a couple years back -- not only for Poincare' but for an entire family of groups that simultaneously contains Poincare', Galilei and (4-D) Euclid as subgroups. Except for special representation families, the only solutions to the equations {j(u},f} = 0 and {k(s),f} = 0 are |p|^2 - (h/c)^2 and |w|^2 - (w_0/c)^2. See The Wigner Classification for Galilei/Poincare'/Euclid http://federation.g3z.com/Physics/in...eralizedWigner The differential equations are listed there.
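As a small numerical illustration of the first invariant mentioned above (our addition, not from the thread; units with c = 1 and metric signature +--- are assumptions of the sketch), the Minkowski norm of a four-momentum is unchanged by a boost:

```python
# The Minkowski norm p.p = E^2 - |p|^2 is invariant under a Lorentz boost.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature +---

def boost_x(rapidity):
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = -sh
    return L

p = np.array([5.0, 1.0, 2.0, 3.0])  # four-momentum (E, px, py, pz)
q = boost_x(0.7) @ p

norm = lambda v: v @ eta @ v
print(norm(p), norm(q))  # both 11.0 (= 25 - 1 - 4 - 9)
```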
2013-05-20 15:50:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737057447433472, "perplexity": 1880.5280100823074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00074-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/59958/question-about-weyl-character-formula
# Question about Weyl character formula In the book of Humphreys, page 139, the Weyl character formula is $$\left(\sum_{w\in W} \operatorname{sn}(w)\epsilon_{w\delta}\right) * \operatorname{ch}_{\lambda} = \sum_{w\in W} \operatorname{sn}(w) \epsilon_{w(\lambda + \delta)}\tag{1}$$ where $*$ is convolution, not the usual multiplication of functions. In other books, the Weyl formula is given by $$\operatorname{ch}_{\lambda} = \frac{\sum_{w\in W} \operatorname{sn}(w) \epsilon_{w(\lambda + \delta)}}{\sum_{w\in W} \operatorname{sn}(w)\epsilon_{w\delta}}.\tag{2}$$ Are these two formulas (1) and (2) equivalent? In some other books, the Weyl formula is given by $$\operatorname{ch}_{\lambda} = \frac{\sum_{w\in W} \operatorname{sn}(w) \epsilon_{w(\lambda + \delta)-\delta}}{\sum_{w\in W} \operatorname{sn}(w)\epsilon_{w\delta - \delta}}.\tag{3}$$ Is this because in (2) $w \in W$ acts via the dot action and in (3) $w\in W$ acts via the usual Weyl group action? If we are given a highest weight $\lambda$ explicitly, say, $\lambda = 3\omega_1 + 2\omega_2$, where $\omega_1, \omega_2$ are fundamental weights of $\mathfrak{g} = \mathfrak{sl}_3$ (type $A_2$), how can we compute explicitly the character of the irreducible $\mathfrak{g}$-module $V_{\lambda}$ with highest weight $\lambda$ using formula (2)? I am confused by the form of the formula, since the character should be of the form $\operatorname{ch}(V_{\lambda}) = \sum_\mu\dim(V_\mu)\epsilon_\mu$ but the Weyl formula is not of this form. Thank you very much. - Hmm. IMHO convolution is the natural product on a group ring. Maybe you are used to viewing them as functions as opposed to elements of the group ring $\mathbf{Z}[\Lambda]$, where $\Lambda$ is the additive group of weights (= a free abelian group generated by the fundamental dominant weights)?
So in this case the Weyl character formula says that $$z^m+z^{m-2}+z^{m-4}+\cdots+z^{4-m}+z^{2-m}+z^{-m}=\frac{z^{m+1} -z^{-m-1}}{z -z^{-1}}.$$ This identity in the group ring $\cong \mathbf{Z}[z,z^{-1}]$ is easy to verify and/or derive. Either you can write the l.h.s. as a geometric sum, or you can multiply both sides with the denominator $z-1/z$, and the usual avalanche of termwise cancellations gives you the result. Note that the group ring is an integral domain, so your equations (1) and (2) are equivalent. In the case of $A_2$ we can proceed similarly. For example, if we write $z_i=e_{\lambda_i}$ and $s_i=s_{\alpha_i}$, then $\delta=\lambda_1+\lambda_2$, $s_1(\delta)=\delta-\alpha_1=-\lambda_1+2\lambda_2=\alpha_2$, $s_2(\delta)=\delta-\alpha_2=2\lambda_1-\lambda_2$, $s_1s_2(\delta)=-2\lambda_1+\lambda_2$, $s_2s_1(\delta)=\lambda_1-2\lambda_2$ and finally $s_1s_2s_1(\delta)=-\lambda_1-\lambda_2$. Therefore (taking the parity of the length of the group element into account) $$\sum_{w\in W}sn(w)e_{w\delta}=z_1z_2-z_1^{-1}z_2^2-z_1^2z_2^{-1}+z_1^{-2}z_2+z_1z_2^{-2}-z_1^{-1}z_2^{-1},$$ where I wrote the terms (i.e. the elements of $W$) in the same order as above. To do your example, you need to also compute the similar expansion for $\sum_{w\in W}sn(w)e_{w(\lambda+\delta)}$. Then the Weyl character formula tells you that $ch_{\lambda}$ is the element of the group ring $\mathbf{Z}[z_1,z_2,z_1^{-1},z_2^{-1}]$ that is the (unique) solution of the equation (1) or (of equation (2)). This answer has just become unmanageably long, so I cannot continue. I recommend that you first work out the cases $\lambda=\lambda_1$, $\lambda=\lambda_2$ and $\lambda=\delta$ where you know the answer already: the first rep in this list is the natural 3-dimensional rep of $sl_3$. The second is its dual, and the last is the 8-dimensional adjoint representation as its highest weight is also the highest root. If your have problems with those, ask another question showing where you got stuck, and somebody here will take a look. Ping me, if necessary. Edit: I just recalled that I already wrote an example on how to apply Weyl's formula in another answer. - thank you very much. In type $A_1$, why $\delta \neq \alpha_1/2$? Is the root system of type $A_1$ as follows: $\{\alpha_1, -\alpha_1\}$? –  LJR Aug 26 '11 at 19:48 I see. $\lambda_1 = \alpha_1/2$. –  LJR Aug 26 '11 at 19:52 @user9791: Correct. –  Jyrki Lahtonen Aug 30 '11 at 9:36
2014-12-21 15:11:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401722550392151, "perplexity": 139.01337033858144}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771384.149/warc/CC-MAIN-20141217075251-00110-ip-10-231-17-201.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2869046/pythagoras-theorem-quick-calculation/2869053
# Pythagoras theorem quick calculation [closed] How can I easily solve the equation $x^2+y^2=76149513$ when $x$ and $y$ are whole numbers? ## closed as off-topic by Namaste, Siong Thye Goh, Simply Beautiful Art, Arnaud Mortier, user99914 Aug 1 '18 at 16:44 • Are you asking for pairs of solutions $x$ and $y$? – Matt Aug 1 '18 at 12:47 • Yes, and also a shortcut way for this sort of calculation – Mathisfun Aug 1 '18 at 12:58 Wolfram alpha tells me $$76149513 = 3^2×11×353×2179 .$$ Since $11$ is a prime factor congruent to $3$ modulo $4$, that number can't be written as a sum of two squares. (I could have tested for divisibility by $11$ by calculating the alternating sum of the digits.) • Note that divisibility by $11$ alone is not enough to show that the number is not a sum of two squares; you also need the odd power here. – Dirk Aug 1 '18 at 13:20 • @DirkLiebhold True. It's enough when $11$ occurs to an odd power - and easy to prove when the power is $1$. – Ethan Bolker Aug 1 '18 at 13:59
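The criterion used in the answer and comments is easy to automate; here is a minimal sympy sketch (our addition; the helper name is ours):

```python
# n is a sum of two squares iff every prime p = 3 (mod 4) divides n
# to an even power (the classical sum-of-two-squares theorem).
from sympy import factorint

def is_sum_of_two_squares(n):
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

print(factorint(76149513))              # {3: 2, 11: 1, 353: 1, 2179: 1}
print(is_sum_of_two_squares(76149513))  # False: 11 (and 2179) occur oddly
```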
2019-11-12 21:36:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6407694816589355, "perplexity": 732.7710858337377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00472.warc.gz"}
https://publish.obsidian.md/myquantumwell/Quantum+Mechanics/Unitary+transformations+in+quantum+mechanics
The change in [closed quantum systems](Closed%20quantum%20systems.md) over time is modeled by _Unitary transformations._ This is expressed as the following postulate for describing the evolution of [closed quantum systems:](Closed%20quantum%20systems.md) ![](Postulates%20of%20Quantum%20Mechanics.md#^5b8fd6) Unitary transformations in quantum mechanics are [Linear transformations](Linear%20transformations%20in%20quantum%20mechanics.md) that are realized by [unitary operators](Unitary%20transformations%20in%20quantum%20mechanics.md#Unitary%20operators%20in%20quantum%20mechanics) where [time evolution operators](time%20evolution%20operator.md) model unitary transformations that occur within a specified time interval. ^64426c Consider a unitary operator, $\hat{U}.$ * Given a [state vector,](State%20vector.md) $|\psi\rangle,$ a [unitary transformation](Unitary%20transformations%20in%20quantum%20mechanics.md) of that state vector is expressed as $|\psi'\rangle=\hat{U}|\psi\rangle.$ * Given a [density matrix,](density%20matrix.md) $\hat{\rho},$ a [unitary transformation](Unitary%20transformations%20in%20quantum%20mechanics.md) of that density matrix is expressed as $\hat{\rho}'=\hat{U}\hat{\rho} \hat{U}^\dagger.$ # Unitary operators in quantum mechanics A [unitary operator](Unitary%20operators.md), $\hat{U}$ is a [linear operator](Linear%20transformations%20in%20quantum%20mechanics.md#Linear%20operators%20in%20quantum%20mechanics) for which [$\hat{U}^{\dagger}=\hat{U}^{-1}$](Unitary%20operators.md#^37f781) and equivalently [$\langle x|\hat{U}^{\dagger}\hat{U} |y \rangle = \langle x| y \rangle$](Unitary%20operators.md#^219d76) where here we rewrite the [definition](Unitary%20operators.md#^219d76) in [Bra-ket notation.](Quantum%20Mechanics%20(index).md#Bra-ket%20notation) This notation also makes it clear that $\hat{U}^{\dagger}\hat{U}=1$ and $|x\rangle$ and $|y\rangle$ are some arbitrary pair of [state vectors.](State%20vector.md) This operator gives rise to [unitary transformations in quantum mechanics,](Unitary%20transformations%20in%20quantum%20mechanics.md) which is how evolution is modeled for [isolated quantum systems.](Closed%20quantum%20systems.md) This is also an operator that transforms between [Hilbert spaces](Hilbert%20Spaces%20in%20Quantum%20Mechanics.md). ^7ed86c ## Construction of unitary operators from [observables](Observable.md) [Observables](Observable.md) are [hermitian operators](Hermitian%20operators.md) and thus we may express [unitary operators in quantum mechanics](Unitary%20transformations%20in%20quantum%20mechanics.md#Unitary%20operators%20in%20quantum%20mechanics) in terms of Hermitian operators. The procedure for constructing a unitary operator, $U,$ from an observable, $X,$ is as follows: ![](Unitary%20operators.md#^8e52f6) ![](Unitary%20operators.md#^932ae0) ^7f7c4e ![](Unitary%20operators.md#^7dab13) In physical realizations of unitary transformations the parameter $t$ is some elapsed time, $t_2-t_1,$ where $t_1$ and $t_2$ are start and end points for that unitary transformation. In addition, $-\hbar X$ is a [Hamiltonian operator](Hamiltonian%20operator.md) governing the transformation. 
Thus, due to the time parameter, Unitary operators in this exponential form are referred to as [time evolution operators,](time%20evolution%20operator.md) which are expressed as ![](time%20evolution%20operator.md#^65f482) # Translation operator in quantum mechanics ![](Translation%20operator%20in%20quantum%20mechanics.md#^5ad888) ![](Translation%20operator%20in%20quantum%20mechanics.md#^9f6787) ([... see more](Translation%20operator%20in%20quantum%20mechanics.md)) # Quantum gates ![](Quantum%20gates.md#^3aa32c) ([... see more](Quantum%20gates.md)) #QuantumMechanics/FoundationsOfQuantumMechanics #QuantumMechanics/MathematicalFoundations #QuantumMechanics/QuantumDynamics
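As a concrete numerical sketch of the construction above (our addition, not part of the note; ħ = 1 and the sample Hamiltonian are assumptions), one can exponentiate a Hermitian matrix and verify unitarity:

```python
# Build U = exp(-i H t / hbar) from a Hermitian H and check that U†U = I.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -1.0]])  # Hermitian by construction
t = 0.3

U = expm(-1j * H * t / hbar)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```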
2022-06-28 09:43:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560334086418152, "perplexity": 2869.3918377109444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00695.warc.gz"}
https://www.physicsforums.com/threads/thermodynamics-of-mercury-thermometers.829664/
# Thermodynamics of mercury thermometers 1. Aug 27, 2015 ### carlzz7 • Please don't ignore the template. The space above the mercury column in a thermometer ordinarily is evacuated, but due to faulty manufacture, a particular thermometer has a pressure of 2 mmHg of air in this space when the whole thermometer is immersed in a bath at 0 degrees Celsius. Calculate the pressure of the air when the whole thermometer is immersed in a bath at 10 degrees Celsius. At 0 degrees Celsius the length of the air space is 10 cm and at 10 degrees Celsius the length of the air space is 2 cm. I started out by using p = rho * g * h. For 0 degrees Celsius I got that rho = 212.31 Pa. I'm not sure that rho would be the same for 10 degrees Celsius though, because the pressure and length are changing. If rho is different, how would I go about solving for it to fill into the formula for 10 degrees Celsius? 2. Aug 27, 2015 ### SteamKing Staff Emeritus In the formula P = ρ ⋅ g ⋅ h, ρ is the mass density of the fluid, so it does not have units of pascals. Since the manufacturing of the thermometer was faulty and allowed air to enter, it seems the problem is asking you to consider the effect that temperature has on this trapped air, and how this affects the accuracy of the device. 3. Aug 28, 2015 ### carlzz7 So the pressure of the air equals the pressure of the mercury. If the change in length is 10 cm - 2 cm = 8 cm, then the mercury column would have risen 80 mm, i.e. 80 mmHg. Add this to the original 2 mmHg and you get 82 mmHg at 10 degrees Celsius. Am I thinking about this correctly?
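For what it's worth, here is a minimal sketch of the standard ideal-gas approach to the trapped air (our addition, not from the thread; it assumes the air obeys the combined gas law and that the bore is uniform, so volume is proportional to column length):

```python
# Combined gas law for the trapped air: P1*V1/T1 = P2*V2/T2.
P1, L1, T1 = 2.0, 10.0, 273.15  # mmHg, cm, K at 0 deg C
L2, T2 = 2.0, 283.15            # cm, K at 10 deg C

P2 = P1 * (L1 / L2) * (T2 / T1)
print(round(P2, 2), "mmHg")     # about 10.37 mmHg
```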
2017-10-20 17:22:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002093434333801, "perplexity": 623.0500628789309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824226.31/warc/CC-MAIN-20171020154441-20171020174441-00811.warc.gz"}
https://aviation.stackexchange.com/questions/65522/need-to-add-elevator-downforce-to-total-lift-when-sizing-wing-area?noredirect=1
Need to add elevator downforce to total lift when sizing wing area? Do I need to add elevator downforce to total lift when sizing the wing area? Say a small airplane weighs 1,000 lbs, both the wing and the elevator have a max Cl of 1.5, and my elevator is 20% of my wing area, so max elevator downforce is roughly 20% of 1,000 lbs, or 200 lbs, in a classic configuration. Doesn't the main wing then have to carry 1,000 lbs + 200 lbs of negative elevator lift, for a total lift of 1,200 lbs? If the plane is in a tandem configuration, the main wing would only have to carry 800 lbs, as the elevator has 200 lbs of positive lift, for a total lift of 1,000 lbs. 1,200 lbs is 150% of 800 lbs!!, so if I use the tandem configuration, my main wing can be a third smaller!!! Is this correct? • Have you thought of any other cases where your wing would need to provide more lift than the weight of the aircraft? Like during a turn? – AEhere supports Monica Jun 13 '19 at 15:12 • Generally, you are right, but if the front wing is underneath the fuselage it will be functional over nearly all of its surface, while the other one is at mid-fuselage thickness or above the fuselage, so the wing-area lift near the fuselage needs to be more precisely evaluated. – user40476 Jun 13 '19 at 16:20 • This question could be improved -- you are really asking about the downforce created by the horizontal stab and elevator together, not just the elevator. Likewise re area. You seem to be a little unclear on the meaning of the word "elevator". – quiet flyer Jun 14 '19 at 14:21 No. First, your horizontal tail will need some control margin. A trimmed state where it flies close to $c_{l_{min}}$ is a grave design error. Next, your horizontal tail sees the same angle of attack change as the wing, reduced by downwash. This means that the strongest downforce is trimmed in fast flight, when the angle of attack is small. Fly slower and angle of attack rises not only on the wing, but also on the tail. Chances are that your tail produces positive lift in slow flight. Flaps will change this picture slightly. Setting flaps will need you to re-trim the aircraft because now the center of the wing's lift has moved slightly backwards, so the load on the tail becomes more negative. Fun fact: If you have a symmetrical wing airfoil, the trimmed tail load will be constant over the whole envelope. In order to achieve the best control power, this tail load should ideally be zero. Now for the tandem wing: You assume that now both surfaces create lift, so overall drag should be lowest. But adding area on the tail will reduce the needed wing area. Given that the planform shape stays constant, less area means less wing span. However, induced drag is proportional to the span loading, and a lower span will mean more induced drag. Therefore, a conventional design with an unloaded tail of minimum size will have the lowest drag. • I think it's possible that this answer could be improved. Yes at first glance it seems obvious based on simple geometry that the tail must make the strongest downforce during high-speed flight, but this line of argument doesn't consider the effect of the pilot's pitch control inputs which change the position of the elevator or stabilator in order to trim for the desired airspeed. – quiet flyer Jun 16 '19 at 13:23 • Perhaps a better line of argument would be based on the wing's pitching moment coefficient-- which in fact is exactly the argument you offered for the case of the symmetrical (wing) airfoil.
You could then note that a cambered (wing) airfoil will generate a stronger nose-down pitch torque at high airspeed than at low airspeed, so the tail must generate more downforce (or less upforce) at high speed than at low speed, at least in the no-flaps case where the wing airfoil is held constant. – quiet flyer Jun 16 '19 at 13:24 Your description would apply to a situation where we have an all-moving horizontal stabilizer (simpler to consider than the elevator case) with an airfoil the same as the wing but upside-down, flying at a negative (downlifting) angle-of-attack that is exactly equal and opposite to the wing's positive (uplifting) angle-of-attack. In practice you would never have a horizontal stabilizer flying at this strong a negative angle-of-attack and creating this much downforce. But yes, if the CG were so far forward that this much downforce were needed for balance, the situation would be as you describe, and this would definitely add a great deal to the total lift that the wing would have to generate. So the basic answer to your question is "yes", but you have chosen a very extreme case of a downlifting tail for your illustration. In practice it can be shown that a configuration with a slightly-downlifting tail is generally more efficient than a configuration with a strongly uplifting tail.
2020-09-24 05:53:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5751919150352478, "perplexity": 1401.0313464738838}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213454.52/warc/CC-MAIN-20200924034208-20200924064208-00151.warc.gz"}
https://spinnaker8manchester.readthedocs.io/en/latest/_modules/spinn_front_end_common/abstract_models/abstract_supports_database_injection/
# Source code for spinn_front_end_common.abstract_models.abstract_supports_database_injection # Copyright (c) 2017-2019 The University of Manchester # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. from spinn_utilities.abstract_base import AbstractBase, abstractproperty from spinn_utilities.require_subclass import require_subclass from pacman.model.graphs.machine import MachineVertex [docs]@require_subclass(MachineVertex) class AbstractSupportsDatabaseInjection(object, metaclass=AbstractBase): """ Marks a machine vertex as supporting injection of information via a\ database running on the controlling host. """ __slots__ = () @abstractproperty def is_in_injection_mode(self): """ Whether this vertex is actually in injection mode. :rtype: bool """
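To illustrate how a concrete vertex might satisfy this interface, here is a toy, self-contained sketch of the same abstract-property pattern (the class names below are hypothetical stand-ins, not part of the SpiNNaker code base):

```python
# Toy version of the marker-class pattern: an abstract property that
# concrete "vertices" must implement.
from abc import ABC, abstractmethod

class ToyVertex:  # hypothetical stand-in for MachineVertex
    pass

class SupportsInjection(ABC):
    @property
    @abstractmethod
    def is_in_injection_mode(self):
        """Whether this vertex is actually in injection mode."""

class MyVertex(ToyVertex, SupportsInjection):
    @property
    def is_in_injection_mode(self):
        return True

print(MyVertex().is_in_injection_mode)  # True
```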
2022-01-27 11:07:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27936625480651855, "perplexity": 3708.087746892719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305260.61/warc/CC-MAIN-20220127103059-20220127133059-00520.warc.gz"}
https://de.maplesoft.com/support/help/Maple/view.aspx?path=MTM/double
double - Maple Help MTM double convert to double precision floating point Calling Sequence double(A) Parameters A - an expression, or an array, matrix, or vector of expressions Description • The double(A) function evaluates each element of A numerically at a precision of Digits:=15.  If possible, the computation is done using the floating-point hardware of the underlying system using evalhf.  The evaluation is done in double precision. • Division by zero will be trapped and Float(infinity) returned instead of raising an error. Examples > $\mathrm{with}\left(\mathrm{MTM}\right):$ > $\mathrm{double}\left(\mathrm{Array}\left(\left[\mathrm{sin}\left(1\right),\mathrm{cos}\left(1\right)\right]\right)\right)$ $\left[\begin{array}{cc}{0.841470984807897}& {0.540302305868140}\end{array}\right]$ (1)
2022-08-17 14:33:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416149854660034, "perplexity": 2160.4101281623953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00328.warc.gz"}
https://www.rdocumentation.org/packages/xgboost/versions/0.6-4/topics/xgb.cv
# xgb.cv ##### Cross Validation The cross validation function of xgboost ##### Usage xgb.cv(params = list(), data, nrounds, nfold, label = NULL, missing = NA, prediction = FALSE, showsd = TRUE, metrics = list(), obj = NULL, feval = NULL, stratified = TRUE, folds = NULL, verbose = TRUE, print_every_n = 1L, early_stopping_rounds = NULL, maximize = NULL, callbacks = list(), ...) ##### Arguments params the list of parameters. Commonly used ones are: • objective objective function, common ones are • reg:linear linear regression • binary:logistic logistic regression for classification • eta step size of each boosting step • max_depth maximum depth of the tree • nthread number of threads used in training; if not set, all threads are used See xgb.train for further details. See also demo/ for a walkthrough example in R. data takes an xgb.DMatrix, matrix, or dgCMatrix as the input. nrounds the max number of iterations nfold the original dataset is randomly partitioned into nfold equal size subsamples. label vector of response values. Should be provided only when data is an R-matrix. missing is only used when input is a dense matrix. By default is set to NA, which means that NA values should be considered as 'missing' by the algorithm. Sometimes, 0 or other extreme value might be used to represent missing values. prediction A logical value indicating whether to return the test fold predictions from each CV model. This parameter engages the cb.cv.predict callback. showsd boolean, whether to show standard deviation of cross validation. metrics list of evaluation metrics to be used in cross validation; when it is not specified, the evaluation metric is chosen according to the objective function. Possible options are: • error binary classification error rate • rmse root mean square error • logloss negative log-likelihood function • auc Area under curve • merror Exact matching error, used to evaluate multi-class classification obj customized objective function. Returns gradient and second order gradient with given prediction and dtrain. feval customized evaluation function. Returns list(metric='metric-name', value='metric-value') with given prediction and dtrain. stratified a boolean indicating whether sampling of folds should be stratified by the values of outcome labels. folds list provides a possibility to use a list of pre-defined CV folds (each element must be a vector of test fold's indices). When folds are supplied, the nfold and stratified parameters are ignored. verbose boolean, print the statistics during the process print_every_n Print each n-th iteration evaluation messages when verbose>0. Default is 1 which means all messages are printed. This parameter is passed to the cb.print.evaluation callback. early_stopping_rounds If NULL, the early stopping function is not triggered. If set to an integer k, training with a validation set will stop if the performance doesn't improve for k rounds. Setting this parameter engages the cb.early.stop callback. maximize If feval and early_stopping_rounds are set, then this parameter must be set as well. When it is TRUE, it means the larger the evaluation score the better. This parameter is passed to the cb.early.stop callback. callbacks a list of callback functions to perform various tasks during boosting. See callbacks. Some of the callbacks are automatically created depending on the parameters' values. User can provide either existing or their own callback methods in order to customize the training process. ... other parameters to pass to params. 
##### Details

The original sample is randomly partitioned into nfold equal-size subsamples. Of the nfold subsamples, a single subsample is retained as the validation data for testing the model, and the remaining nfold - 1 subsamples are used as training data. The cross-validation process is then repeated nfold times, with each of the nfold subsamples used exactly once as the validation data. All observations are used for both training and validation.

##### Value

An object of class xgb.cv.synchronous with the following elements:

- call: a function call.
- params: parameters that were passed to the xgboost library. Note that it does not capture parameters changed by the cb.reset.parameters callback.
- callbacks: callback functions that were either automatically assigned or explicitly passed.
- evaluation_log: evaluation history stored as a data.table with the first column corresponding to iteration number and the rest corresponding to the CV-based evaluation means and standard deviations for the training and test CV-sets. It is created by the cb.evaluation.log callback.
- niter: number of boosting iterations.
- folds: the list of CV folds' indices - either those passed through the folds parameter or randomly generated.
- best_iteration: iteration number with the best evaluation metric value (only available with early stopping).
- best_ntreelimit: the ntreelimit value corresponding to the best iteration, which could further be used in the predict method (only available with early stopping).
- pred: CV prediction values, available when prediction is set. It is either a vector or a matrix (see cb.cv.predict).
- models: a list of the CV folds' models. It is only available with the explicit setting of the cb.cv.predict(save_models = TRUE) callback.

##### Examples

data(agaricus.train, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
cv <- xgb.cv(data = dtrain, nrounds = 3, nthread = 2, nfold = 5,
             metrics = list("rmse","auc"),
             max_depth = 3, eta = 1, objective = "binary:logistic")
print(cv)
print(cv, verbose = TRUE)
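A brief usage sketch (not part of the original reference page): combining xgb.cv with early stopping so that boosting halts once the held-out metric stops improving. It reuses dtrain from the example above; the nrounds and early_stopping_rounds values here are arbitrary.

# Sketch: cross validation with early stopping. Stops when the mean test AUC
# has not improved for 10 consecutive rounds.
cv_es <- xgb.cv(data = dtrain, nrounds = 200, nfold = 5,
                metrics = list("auc"), maximize = TRUE,
                early_stopping_rounds = 10,
                objective = "binary:logistic", verbose = FALSE)
cv_es$best_iteration   # iteration with the best mean test AUC
cv_es$evaluation_log   # per-iteration means and standard deviations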
2019-12-12 15:55:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2217620611190796, "perplexity": 5539.076627019744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540544696.93/warc/CC-MAIN-20191212153724-20191212181724-00104.warc.gz"}
https://courses.ansys.com/index.php/courses/lumerical-charge-materials/lessons/material-database-using-ansys-lumerical-charge-lesson-1/
# Material Database Using Ansys Lumerical CHARGE — Lesson 1

The Electrical/Thermal Material Database stores the electrical and thermal properties of materials used in CHARGE simulations. Through the material database, you can also create new materials and modify their properties. To access the Electrical/Thermal material database, click on the "Electrical and Thermal" button in the "Materials" section under the Design tab.

### Materials Properties Panel

The Electrical/Thermal Material Database consists of two main panels: the Material List on the left and the Material Properties on the right. When a material is selected from the Material List, the electronic/thermal properties, as well as the color associated with the material, are displayed on the Materials Properties panel.

1. "Add" button: Adds a new material to the Material List.
2. "Delete" button: Deletes the selected material from the Material List. A material used as a base material to create an alloy cannot be deleted.
3. "Copy" button: Creates a copy of the selected material in the Material List.
4. "Import" button: Imports materials from other simulation files (.ldev).
5. Material List: Displays all the materials stored in the simulation file. When you start a new project, the material database is populated with the default materials that came with the software installation. Any modification you make in the material database of your simulation file stays with the simulation file and won't change the default material database. For information about modifying the default material database, please refer to the related links below.
6. Display option: Allows you to filter the materials that show up in the Material List. When the "Active only" option is checked, only materials that are used in the current simulation will be displayed. The drop-down menu on the right filters the materials by their material type (semiconductor, insulator, conductor, binary alloy, fluid). Alternatively, you can type the material name in the text box.
7. Send to current project: Adds the selected material from the Material List to your current simulation. The "create new" option creates a new "material" in the objects tree with the properties of the selected material. The "add to existing" option adds the properties to the selected "material" from the objects tree. (NOTE: the drop-down list gets populated only by "materials" [from the objects tree] that do not already have the corresponding properties added to them.)
8. Color: Sets the color of the material.
9. Electronic Properties and Recombination tabs: Contain the properties of the material that will be used by the Charge solver; these are the subject of the following subsection about material models.
10. Property visualizer: A tool that can be used to visualize the changes in semiconductor properties of the selected material as a function of different parameters. This will be discussed in detail in the following unit.

### Property Visualizer

The properties of semiconductor materials such as mobility, carrier lifetimes and so on can be visualized as a function of various variables by clicking the "Visualize" button in the main window of the Electrical/Thermal Material Database after selecting the desired semiconductor material from the list.
The following variables and semiconductor properties are available for visualization in the semiconductor properties visualizer dialog:

Variables

• $$T$$: Temperature in units of Kelvin ($$K$$)
• $$N$$: Doping concentration in units of $$cm^{-3}$$
• $$N_A$$: Acceptor doping concentration in units of $$cm^{-3}$$
• $$N_D$$: Donor doping concentration in units of $$cm^{-3}$$
• $$F$$: Field intensity in units of $$V/m$$
• $$x$$: Alloy fraction (only available for alloy materials)

Semiconductor properties

• $$\varepsilon_r$$: Relative dielectric permittivity
• $$E_g$$: Bandgap ($$eV$$)
• $$m_n$$: Effective mass of electron ($$m^*/m_0$$)
• $$m_p$$: Effective mass of hole ($$m^*/m_0$$)
• $$\mu_n$$: Electron mobility ($$cm^2/V \cdot s$$)
• $$\mu_p$$: Hole mobility ($$cm^2/V \cdot s$$)
• $$v_{sat,n}$$: Electron saturation velocity ($$cm/s$$)
• $$v_{sat,p}$$: Hole saturation velocity ($$cm/s$$)
• $$\tau_n$$: Electron SRH lifetime ($$s$$)
• $$\tau_p$$: Hole SRH lifetime ($$s$$)
• $$c_{opt}$$: Optical capture coefficient for radiative recombination ($$cm^3/s$$)
• $$c_{au,n}$$: Auger recombination capture coefficient for electrons ($$cm^6/s$$)
• $$c_{au,p}$$: Auger recombination capture coefficient for holes ($$cm^6/s$$)

Each semiconductor property can be plotted as a function of one or two variables chosen in the first and second axis drop-down menus. The plot range, number of points and the scale (linear or log) for each variable can also be selected from this window. Any desired number of semiconductor properties can be chosen for a single plot by checking the name of each property in the window. To plot the selected properties, simply click the "Create Visualization" button. You can also send the selected property to the script workspace for further processing by clicking the "Send to Script" button. Once done, you can click the "Done" button to go back to the property editor window. For example, the change in electron and hole mobility of silicon as a function of temperature can be plotted.
2022-05-25 15:43:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4974554479122162, "perplexity": 2128.7953483579718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00608.warc.gz"}
https://www.inovex.de/blog/multimodal-sequential-recommender-systems/
# Multimodal Sequential Recommender Systems

Posted on: 27 August 2019

## Tim Bicker

Since the invention of the internet, the availability and amount of information has increased steadily. Today we are facing problems of information overload and an overabundance of choices. Recommender systems aim at decreasing the burden of information processing and decision making for end users.

Today's e-commerce companies rely heavily on recommender systems because they provide significant advantages. Recommender systems help the customer to effectively explore the usually huge product space. Customers need to spend less time until they find what they are looking for. Additionally, the recommender system can suggest new and interesting items to the customer based on the data of other customers. This improves the conversion rate and customer experience. For example, Netflix recently estimated the annual savings through reduced churn caused by their compelling recommender system to be one billion dollars [1], which is about 11.3% of their total revenue in 2016.

Sequential recommender systems are based on sequential user representations $$S^{(u)}_\tau = \{id_1, id_2, …, id_n\}$$ for a given user $$u$$ and sequence length $$\tau$$. Each sequence consists of several items that are represented by their corresponding $$id$$ and are given in temporal order. Sequential recommender systems aim at exploiting the temporal information that is hidden in the sequence of item interactions of the given user.

## State of the Art Sequential Recommender Systems

In my master's thesis, I analyzed RNN-based sequential recommender systems. This type of recommender system has two different variants: classification-based models and regression-based models.

One important prerequisite for regression-based models are item embeddings. They transform item IDs into vectors inside a Euclidean vector space. One famous embedding algorithm is Word2Vec, which was invented by Mikolov et al. in 2013 [3]. Its original use case was to transform words into vector representations. The resulting word embeddings are clustered according to their semantic relations. This property of Word2Vec serves as the foundation for the regression-based models. From an abstract viewpoint, a sequence of words is just a sequence of categorical variables. Consequently, this algorithm can also be used for item sequences.

Figure 1 shows the regression-based model of De Boom et al [7]. The user's item sequence is processed by the sequential part of the recommender system. It consists of an item embedding that transforms item IDs $$id$$ into their respective vector representations $$v$$. Each item of the sequence is then processed by the Gated Recurrent Unit (GRU), which outputs its hidden state. In this scenario, the hidden state $$h$$ corresponds to the user representation vector $$u$$, which represents the respective user.

Figure 1. The regression-based model of De Boom et al [7].

Typically, the goal of a recommender system is to create a recommendation list of length n. Regression-based models consequently select the n items with the highest score. Here, the score is simply the cosine between the user representation $$u_\tau$$ and the target item vector $$v_\text{target}$$. The set of all target items usually consists of all items that are in the training set.
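In symbols, the cosine score just described can be written out as follows (a reconstruction from the text above, not a formula taken from the original figures):

$$s(u_\tau, v_\text{target}) = \cos(u_\tau, v_\text{target}) = \frac{u_\tau \cdot v_\text{target}}{\lVert u_\tau \rVert \, \lVert v_\text{target} \rVert}$$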
The process of calculating the score for all target items amounts to applying a cosine-based kNN, which can be seen as a unimodal approach because all items are compared with only one vector and the most similar ones are recommended. We can speed up cosine-based kNN by using algorithms for Approximate Nearest Neighbour Search or by filtering the target items by, for example, popularity.

Figure 2. The Gru4Rec classification model of Hidasi et al. [4].

Figure 2 shows the Gru4Rec classification model of Hidasi et al. [4]. It processes the user sequence $$S^{(u)}_\tau$$ and directly outputs a discrete probability vector $$p$$ whose dimensionality equals the number of items in the training set. In each dimension, the vector $$p$$ contains the probability of the respective item to be the next item that the user consumes.

Figure 3 visualizes the unimodal regression-based approach in a 2-dimensional vector space. In this example, the user has already requested two items $$v_1$$ and $$v_2$$ and is consequently represented by the user representation vector $$u_2$$. To create a recommendation list of length 2, the two items with the smallest angle between themselves and the user representation vector are selected.

Figure 3. Unimodal (left) vs Multimodal (right).

The goal of my master's thesis was to evaluate whether a multimodal approach could improve recommendation performance. Therefore, I generalized the unimodal, regression-based model to a model that can represent multiple modalities. Each modality should represent one taste or a combination of tastes of the user. For example, the unimodal model can only represent one user taste (e.g. Romance films) or a combination of tastes (e.g. Action-Comedy films), represented by the location of the user representation vector $$u$$ inside the Word2Vec embedding space. It could be beneficial to have a model that can combine different locations in the embedding space and therefore combine multiple user tastes (e.g. Romance and Action-Comedy). Figure 3 shows the idea of such a multimodal approach. The user is now represented by two modalities $$u_2^{(1)}$$ and $$u_2^{(2)}$$, and their taste should now be more accurately represented.

## Methodology

To derive the multimodal model of Figure 3, I took up the idea of mixture models. Figure 4 shows the formula of the general mixture model. The idea behind a mixture model is to model difficult probability distributions as a sum of more understandable distributions. For example, if one chooses the kernel function $$\phi_k$$ to be a Gaussian distribution, the resulting mixture model is the famous Gaussian mixture model. Figure 4 shows what a Gaussian distribution with three mixtures could look like. One observes that it has three peaks, and a value $$x$$ has an increased probability to be sampled close to the peaks.

Figure 4. The General Mixture Model (left) and the Gaussian Mixture Model (right)

It is now well known that there exists a correlation between vector length and word frequency for Word2Vec embeddings [5]. My results and the results of other authors [7] confirm that the cosine-based kNN performs better than Euclidean-distance-based kNN for the unimodal case. Consequently, I decided to derive a novel type of mixture model that is based on cosine.

Figure 5. The Mixture of Cosines Model.

The Mixture of Cosines (MoC) model is inspired by Mixture Density Networks (MDN) [2], which model the means and the covariance matrices of a GMM with multiple-layer Feedforward Neural Networks (FFNNs).
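Putting the description together, the MoC score of a target item vector $$v$$ given the GRU hidden state $$h_\tau$$ can be sketched as follows (my reconstruction from the text and Figure 5; $$K$$ denotes the number of mixtures and $$\pi_k$$ the mixing coefficients):

$$s(h_\tau, v) = \sum_{k=1}^{K} \pi_k(h_\tau) \, \cos\left(u_\tau^{(k)}, v\right)$$

The following paragraph describes how the mixing coefficients $$\pi_k$$ and the per-mixture user representations $$u_\tau^{(k)}$$ are produced.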
The mixing coefficients are modeled by a FFNN with a softmax layer that is trained as a classification network. The user representation $$u_\tau^{(k)}$$ of the MoC model is based on a one-layer FFNN, as shown in Figure 5. The kernel function $$\phi_k$$ is then given as the cosine between the user representation and the target item vector. The mixing coefficients are calculated analogously to the traditional MDN.

Figure 6 shows a graphical representation of the MoC model. Analogously to the sequential model, the user sequence $$S_\tau^{(u)}$$ is first processed by the sequential part, which consists of an item embedding and a GRU. The final hidden state $$h_\tau$$ is then processed by the mixture model part, which is explained in Figure 5. This results in the score for the target item $$id_{target}$$.

Figure 6. The final MoC Graph.

## Results

### Evaluation Metrics

To evaluate the MoC model, each session of the test set $$S_\text{test}$$ is split so that all items up to the $$\tau-1$$-th item are used to calculate the user representation. Then, a recommendation list of length n is created by selecting the n items with the highest score in descending order.

Figure 6. The Recall (left) and the Mean Reciprocal Rank (MRR) (right).

I use two widely used metrics to quantify the recommendation performance: Recall@20 and MRR@20. Both metrics are shown in Figure 6. The rank refers to the position in the recommendation list of the actual item that the user requested. For example, if the item that the user requested is at the third place inside the recommendation list of length 5, then the MRR@5 is $$\frac{1}{3}$$ and the Recall@5 is 1. If the rank is 6, then both MRR@5 and Recall@5 are 0 because the rank is greater than the number of items in the recommendation list.

### Movielens 20 Million

Table 1 shows the results for the Movielens 20 Million data set [6]. It is provided by GroupLens, a recommender systems research group, and contains 20 million user-item ratings for 26,744 different movies. The ratings stem from 138,493 different users, where each user rated 144 movies on average and at least 20 movies. Because the type of sequential recommender system that concerns me only considers actions and no ratings, I assume that each rating is an interaction, independent of the actual rating.

The column G4R refers to the Gru4Rec recommender system [4], a classification-based model that represents the state of the art. The other columns refer to the MoC model with the respective number of mixtures.

Table 1. The final results for the MovieLens 20 Million data set.

One can see that an increasing number of mixtures increases the performance of the MoC model. However, an increase in the number of dimensions has a stronger positive effect on the recommendation performance. Additionally, the training time with an increasing number of mixtures increases drastically, compared to an increase in the number of dimensions.

## Conclusion

I showed that additional mixtures increase the performance of the unimodal model. This increase, however, is rather small compared to an increase in the number of dimensions, and the training time increase is huge compared to the gain in performance. Therefore, it is more cost-efficient to increase the number of dimensions of the model than to increase the number of mixtures.
To understand why an increase in the number of dimensions corresponds to an increased modality, one can consider an artificial scenario where the number of dimensions of the embedding space corresponds to the number of items and each item is represented by the corresponding basis vector $$e_v$$. The score of the unimodal model $$s(h_\tau, e_v)$$ is given as the cosine, which can be represented as a vector product $$\frac{u \cdot e_v}{|u| |e_v|}$$. The vector product can be straightforwardly simplified to $$\frac{u_v}{|u|}$$, where $$u_v$$ is a scalar and corresponds to the v-th entry of the user representation $$u$$. To create the recommendation list, the scores for all target items have to be compared. One can see that they all share the common denominator $$|u|$$. Consequently, the score can further be reduced to $$u_v$$. In this artificial scenario, the order of items can be chosen by the relative magnitudes of the values of the respective dimensions. In realistic environments, similar items are close together and item vectors can be seen as a weighted combination of basis vectors. Therefore, although not directly, these thoughts can be transferred to realistic scenarios to understand the underlying mechanics.

## Sources

[1] C.A. Gomez-Uribe and N. Hunt. The Netflix recommender system: Algorithms, business value, and innovation. In Association for Computing Machinery (ACM) Transactions on Management Information Systems (TMIS), 6(4), p. 13, 2016.

[2] C.M. Bishop. Mixture density networks. Technical Report, Citeseer, 1994.

[3] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In arXiv preprint arXiv:1301.3781, 2013.

[4] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk. Session-based recommendations with recurrent neural networks. In International Conference on Learning Representations (ICLR), 2016.

[5] C. Gong, D. He, X. Tan, T. Qin, L. Wang, and T.Y. Liu. Frage: frequency-agnostic word representation. In Advances in Neural Information Processing Systems, pp. 1334–1345, 2018.

[6] F.M. Harper and J.A. Konstan. The MovieLens datasets: History and context. In ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4), 2015.

[7] C. De Boom, R. Agrawal, S. Hansen, E. Kumar, R. Yon, C.W. Chen, T. Demeester, and B. Dhoedt. Large-scale user modeling with recurrent neural networks for music discovery on multiple time scales. In Multimedia Tools and Applications, pp. 1–23, 2018.
2019-11-21 03:32:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47593602538108826, "perplexity": 807.2052089000698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00459.warc.gz"}
http://libros.duhnnae.com/2017/sep3/150537922047-QCD-relics-from-the-early-Universe-High-Energy-Physics-Phenomenology.php
# QCD relics from the early Universe - High Energy Physics - Phenomenology

Abstract: We suggest the possibility of creation in the early Universe of stable domains of radius a few kilometers wide, formed by coherently excited states of $\pi$-mesons. Such domains appear dark to an external observer, since the decay rate of the said coherent pionic states into photons is vanishingly small. The related thermal insulation of the domains from the outer world could have allowed them to survive till present days. The estimated maximum radius and the period of rotation of such objects turn out to be compatible with those of certain pulsars.

Author: D. Antonov, J.E.F.T. Ribeiro, A.V. Nefediev

Source: https://arxiv.org/
2018-04-25 03:18:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6017299890518188, "perplexity": 6848.261029367841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947690.43/warc/CC-MAIN-20180425022414-20180425042414-00063.warc.gz"}
https://www.jobilize.com/course/section/conceptual-questions-electric-generators-by-openstax?qcr=www.quizover.com
23.5 Electric generators (Page 3/6)

Section summary

• An electric generator rotates a coil in a magnetic field, inducing an emf given as a function of time by $\text{emf} = NAB\omega \sin(\omega t),$ where $A$ is the area of an $N$-turn coil rotated at a constant angular velocity $\omega$ in a uniform magnetic field $B$.
• The peak emf $\text{emf}_0$ of a generator is $\text{emf}_0 = NAB\omega.$

Conceptual questions

Using RHR-1, show that the emfs in the sides of the generator loop in [link] are in the same sense and thus add.

The source of a generator's electrical energy output is the work done to turn its coils. How is the work needed to turn the generator related to Lenz's law?

Problems & Exercises

Calculate the peak voltage of a generator that rotates its 200-turn, 0.100 m diameter coil at 3600 rpm in a 0.800 T field.

474 V

At what angular velocity in rpm will the peak voltage of a generator be 480 V, if its 500-turn, 8.00 cm diameter coil rotates in a 0.250 T field?

What is the peak emf generated by rotating a 1000-turn, 20.0 cm diameter coil in the Earth's $5.00 \times 10^{-5}\ \text{T}$ magnetic field, given the plane of the coil is originally perpendicular to the Earth's field and is rotated to be parallel to the field in 10.0 ms?

0.247 V

What is the peak emf generated by rotating a 0.250 m radius, 500-turn coil one-fourth of a revolution in 4.17 ms, originally having its plane perpendicular to a uniform magnetic field? (This is 60 rev/s.)

(a) A bicycle generator rotates at 1875 rad/s, producing an 18.0 V peak emf. It has a 1.00 by 3.00 cm rectangular coil in a 0.640 T field. How many turns are in the coil? (b) Is this number of turns of wire practical for a 1.00 by 3.00 cm coil?

(a) 50 (b) yes

Integrated Concepts

This problem refers to the bicycle generator considered in the previous problem. It is driven by a 1.60 cm diameter wheel that rolls on the outside rim of the bicycle tire. (a) What is the velocity of the bicycle if the generator's angular velocity is 1875 rad/s? (b) What is the maximum emf of the generator when the bicycle moves at 10.0 m/s, noting that it was 18.0 V under the original conditions? (c) If the sophisticated generator can vary its own magnetic field, what field strength will it need at 5.00 m/s to produce a 9.00 V maximum emf?

(a) A car generator turns at 400 rpm when the engine is idling. Its 300-turn, 5.00 by 8.00 cm rectangular coil rotates in an adjustable magnetic field so that it can produce sufficient voltage even at low rpms. What is the field strength needed to produce a 24.0 V peak emf? (b) Discuss how this required field strength compares to those available in permanent and electromagnets.

(a) 0.477 T (b) This field strength is small enough that it can be obtained using either a permanent magnet or an electromagnet.

Show that if a coil rotates at an angular velocity $\omega$, the period of its AC output is $2\pi/\omega$.

A 75-turn, 10.0 cm diameter coil rotates at an angular velocity of 8.00 rad/s in a 1.25 T field, starting with the plane of the coil parallel to the field. (a) What is the peak emf? (b) At what time is the peak emf first reached? (c) At what time is the emf first at its most negative? (d) What is the period of the AC voltage output?
(a) 5.89 V (b) At t = 0 (c) 0.393 s (d) 0.785 s

(a) If the emf of a coil rotating in a magnetic field is zero at $t=0$, and increases to its first peak at $t = 0.100\ \text{ms}$, what is the angular velocity of the coil? (b) At what time will its next maximum occur? (c) What is the period of the output? (d) When is the output first one-fourth of its maximum? (e) When is it next one-fourth of its maximum?

Unreasonable Results

A 500-turn coil with a $0.250\ \text{m}^2$ area is spun in the Earth's $5.00 \times 10^{-5}\ \text{T}$ field, producing a 12.0 kV maximum emf. (a) At what angular velocity must the coil be spun? (b) What is unreasonable about this result? (c) Which assumption or premise is responsible?

(a) $1.92 \times 10^{6}\ \text{rad/s}$ (b) This angular velocity is unreasonably high, higher than can be obtained for any mechanical system. (c) The assumption that a voltage as great as 12.0 kV could be obtained is unreasonable.
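As a quick illustration of how these exercises are solved, the first problem above (the 200-turn coil) follows directly from the peak-emf formula in the section summary; the arithmetic below is my own, not part of the original page:

$\omega = 3600\ \text{rpm} \times \frac{2\pi}{60\ \text{s/min}} \approx 377\ \text{rad/s}, \qquad A = \pi r^2 = \pi (0.0500\ \text{m})^2 \approx 7.85 \times 10^{-3}\ \text{m}^2$

$\text{emf}_0 = NAB\omega \approx 200 \times (7.85 \times 10^{-3}\ \text{m}^2) \times 0.800\ \text{T} \times 377\ \text{rad/s} \approx 474\ \text{V}$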
2020-11-24 09:42:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6430869698524475, "perplexity": 1038.2943049368532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176049.8/warc/CC-MAIN-20201124082900-20201124112900-00188.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-10-exponents-and-radicals-10-3-multiplying-radical-expressions-10-3-exercise-set-page-647/43
Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

Published by Pearson

Chapter 10 - Exponents and Radicals - 10.3 Multiplying Radical Expressions - 10.3 Exercise Set - Page 647: 43

Answer

$f(x)=7|x-3|$

Work Step by Step

Using the properties of radicals, the given equation, $f(x)=\sqrt{49(x-3)^2},$ simplifies to \begin{array}{l} f(x)=\sqrt{\left[7(x-3)\right]^2} \\\\ f(x)=|7(x-3)| \\\\ f(x)=7|x-3| \end{array}
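A quick spot check of the absolute value (my own addition, not part of the original solution): since $\sqrt{a^2}=|a|$ for real $a$, evaluating at $x=0$ gives

$\sqrt{49(0-3)^2} = \sqrt{441} = 21 = 7|0-3|$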
2018-11-17 01:15:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615175127983093, "perplexity": 8194.922135483977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743247.22/warc/CC-MAIN-20181116235534-20181117021534-00451.warc.gz"}
https://paperhaberdasher.com/answers-654
# Or probability calculator

The formula to calculate the "or" probability of two events A and B is this: P(A OR B) = P(A) + P(B) – P(A AND B). To see why this formula makes sense, think about John and Rhonda

## Dice Probability Calculator

getcalc.com's Probability Calculator is an online statistics & probability tool to estimate the possibility of single or multiple events occurring in statistical trials or experiments. Use this calculator to find the probability of independent events.

## Odds Probability Calculator

The probability calculator for multiple events uses the following formula for calculating probability: \text{Probability} = \dfrac{\text{Event}}{\text{Outcomes}}. The calculation of probability is
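A minimal sketch of the "or" formula in R (my own illustration; the function name and example probabilities are made up):

# P(A or B) = P(A) + P(B) - P(A and B)
p_or <- function(p_a, p_b, p_a_and_b) p_a + p_b - p_a_and_b

# Example with a fair die: A = "roll is even" (1/2), B = "roll > 4" (1/3),
# A and B = "roll is 6" (1/6); so P(A or B) = 1/2 + 1/3 - 1/6 = 2/3.
p_or(1/2, 1/3, 1/6)  # 0.6666667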
2023-02-07 17:40:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646458029747009, "perplexity": 2206.206170437116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00138.warc.gz"}
https://www.esaral.com/q/solve-this-81725/
Solve this

Question: If $A=\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]$, then $A^{n}$ (where $n \in N$) equals

(a) $\left[\begin{array}{cc}1 & n a \\ 0 & 1\end{array}\right]$

(b) $\left[\begin{array}{cc}1 & n^{2} a \\ 0 & 1\end{array}\right]$

(c) $\left[\begin{array}{cc}1 & n a \\ 0 & 0\end{array}\right]$

(d) $\left[\begin{array}{cc}n & n a \\ 0 & n\end{array}\right]$

Solution:

(a) $\left[\begin{array}{cc}1 & n a \\ 0 & 1\end{array}\right]$

Here, $A=\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]$

$\Rightarrow A^{2}=\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]=\left[\begin{array}{ll}1+0 & a+a \\ 0+0 & 0+1\end{array}\right]=\left[\begin{array}{cc}1 & 2 a \\ 0 & 1\end{array}\right]$

$A^{3}=A^{2} \times A=\left[\begin{array}{cc}1 & 2 a \\ 0 & 1\end{array}\right]\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]=\left[\begin{array}{cc}1+0 & a+2 a \\ 0+0 & 0+1\end{array}\right]=\left[\begin{array}{cc}1 & 3 a \\ 0 & 1\end{array}\right]$

This pattern holds for all natural numbers, as the induction step below makes precise.

$\therefore A^{n}=\left[\begin{array}{cc}1 & n a \\ 0 & 1\end{array}\right]$
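The induction step (my addition, filling in the step the solution leaves implicit): assuming the pattern for $n$, one more multiplication gives it for $n+1$:

$A^{n+1}=A^{n} A=\left[\begin{array}{cc}1 & n a \\ 0 & 1\end{array}\right]\left[\begin{array}{ll}1 & a \\ 0 & 1\end{array}\right]=\left[\begin{array}{cc}1 & a+n a \\ 0 & 1\end{array}\right]=\left[\begin{array}{cc}1 & (n+1) a \\ 0 & 1\end{array}\right]$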
2022-05-19 14:24:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6342273950576782, "perplexity": 2824.4595781251787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00774.warc.gz"}
https://socratic.org/questions/which-of-the-following-are-strong-acids-h2so4-hi-hf-h3po3-and-hno3
# Which of the following are strong acids: H2SO4, HI, HF, H3PO3, and HNO3?

Dec 3, 2015

Rough order of acidity: $HI > H_2SO_4 > HNO_3 > H_3PO_3 > HF$. $HI$, $H_2SO_4$, and $HNO_3$ are reasonably regarded as strong acids.

#### Explanation:

As physical scientists, however, we should look at actual measurements, namely the $pK_a$ values of each acid in water.

$HI$: $pK_a = -10$ (estimated)

$H_2SO_4$: $pK_{a1} = -3$, $pK_{a2} = 1.99$

$HNO_3$: $pK_a = -1.4$

$H_3PO_3$: $pK_{a1} = 1.3$, $pK_{a2} = 6.7$

$HF$: $pK_a = 3.17$

Importantly, the acidity of the hydrogen halides should increase down the Group, inasmuch as there is poorer overlap between the halogen and hydrogen as the halogen becomes bigger. Hydrofluoric acid (which is truly nasty stuff, and can cause horrible burns) is thus the weakest acid of the hydrogen halides, because of the strength of the $H-X$ bond, and the polarizing ability of $F^-$.

Why are the $pK_{a2}$ values of the polyprotic acids so high?
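As an aside on reading the values above, each $pK_a$ converts to a dissociation constant by the standard relation (a worked conversion I have added):

$K_a = 10^{-pK_a}, \qquad \text{e.g. for } HF: \quad K_a = 10^{-3.17} \approx 6.8 \times 10^{-4}$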
2019-09-18 05:26:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812484085559845, "perplexity": 3988.3615091599345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573184.25/warc/CC-MAIN-20190918044831-20190918070831-00303.warc.gz"}
https://stats.stackexchange.com/questions/9561/checking-if-two-poisson-samples-have-the-same-mean
# Checking if two Poisson samples have the same mean

This is an elementary question, but I wasn't able to find the answer. I have two measurements: n1 events in time t1 and n2 events in time t2, both produced (say) by Poisson processes with possibly-different lambda values.

This is actually from a news article, which essentially claims that since $n_1/t_1\neq n_2/t_2$ the two are different, but I'm not sure that the claim is valid. Suppose that the time periods were not chosen maliciously (to maximize the events in one or the other). Can I just do a t-test, or would that not be appropriate? The number of events is too small for me to comfortably call the distributions approximately normal.

• Fine specimen of science journalism, there... – Matt Parker Apr 14 '11 at 18:01
• Yeah... you can see why I wanted to check the statistics used. – Charles Apr 14 '11 at 18:04

To test the Poisson mean, the conditional method was proposed by Przyborowski and Wilenski (1940). The conditional distribution of X1 given X1+X2 follows a binomial distribution whose success probability is a function of the ratio of the two lambdas. Therefore, hypothesis testing and interval estimation procedures can be readily developed from the exact methods for making inferences about the binomial success probability. Usually two methods are considered for this purpose:

1. C-test
2. E-test

You can find the details about these two tests in this paper: A more powerful test for comparing two Poisson means.

• +1 Good reference, thanks. The C-test is a more rigorous version of the one I sketched, so it's well worth considering. The E-test relates a t-statistic to an appropriate distribution. Calculating that distribution involves a double infinite sum that will take $O(n_1 n_2)$ calculations to converge: fairly easy to code, probably overkill for checking the newspaper! – whuber Apr 14 '11 at 15:37
• The author of the E-test paper wrote a simple fortran implementation to calculate p-values for two poisson means here: ucs.louisiana.edu/~kxk4695 I ported their fortran to MATLAB here git.io/vNP86 – AndyL Jan 25 '18 at 21:41

poisson.test(c(n1, n2), c(t1, t2), alternative = c("two.sided"))

This is a test which compares the Poisson rates of samples 1 and 2 with each other, and gives both a p value and a 95% confidence interval.

• It should be noted that for a two-sample problem, this uses a binomial test to compare rates – Jon Jan 7 '18 at 18:00

You're looking for a quick and easy check. Under the null hypothesis that the rates (lambda values) are equal, say to $\lambda$, you could view the two measurements as observing a single process for time $t = t_1+t_2$ and counting the events during the interval $[0, t_1]$ ($n_1$ in number) and the events during the interval $[t_1, t_1+t_2]$ ($n_2$ in number). You would estimate the rate as $$\hat{\lambda} = \frac{n_1+n_2}{t_1+t_2}$$ and from that you can estimate the distribution of the $n_i$: they are Poisson of intensity near $t_i\hat{\lambda}$. If one or both $n_i$ are situated on the tails of this distribution, most likely the claim is valid; if not, the claim may be relying on chance variation.

• Thanks (+1), that's just the right check for this kind of off-the-cuff thing. It ended up being highly significant (p = 0.005) so the article is fine. I hope you don't mind, though, that I accepted the other answer -- it's good to know the 'real' way to do it when it matters.
– Charles Apr 14 '11 at 16:08

I would be more interested in a confidence interval than a p value; here is a bootstrap approximation. Calculating the lengths of the intervals first, and a check:

Lrec = as.numeric(as.Date("2010-07-01") - as.Date("2007-12-02")) # Length of recession
Lnrec = as.numeric(as.Date("2007-12-01") - as.Date("2001-12-01")) # Length of non-recession period
(43/Lrec)/(50/Lnrec)
[1] 2.000276

This check gives a slightly different result (100.03% increase) than the one in the publication (101% increase). Go on with the bootstrap (run it twice):

N = 100000
k = (rpois(N, 43)/Lrec)/(rpois(N, 50)/Lnrec)
c(quantile(k, c(0.025, .25, .5, .75, .975)), mean=mean(k), sd=sd(k))

     2.5%       25%       50%       75%     97.5%      mean        sd
1.3130094 1.7338545 1.9994599 2.2871373 3.0187243 2.0415132 0.4355660
     2.5%       25%       50%       75%     97.5%      mean        sd
1.3130094 1.7351970 2.0013578 2.3259023 3.0173868 2.0440240 0.4349706

The 95% confidence interval of the increase is 31% to 202%.
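For comparison, the exact test from the earlier answer can be run on the same counts and interval lengths (a sketch I have added; it reuses Lrec and Lnrec from the code above):

# Exact two-sample comparison of the two Poisson rates with base R.
# Reports a p value and a confidence interval for the rate ratio.
poisson.test(c(43, 50), c(Lrec, Lnrec), alternative = "two.sided")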
2019-11-20 16:46:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5760046243667603, "perplexity": 917.3199406676729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00135.warc.gz"}
https://structurescentre.com/tag/selecting-a-foundation/
# [UPDATE] Foundation Types: Selecting a Foundation

Selecting the most appropriate foundation type is often a very difficult undertaking in design and construction, and perhaps the most important part of the design process. Indeed, it can be argued that the foundation of any structure is its most critical component
2023-03-22 16:19:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8032616376876831, "perplexity": 884.1516209691092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00515.warc.gz"}
https://www.physicsforums.com/threads/boundary-value-problem-eigenvalues-and-eigenfunctions.599252/
Boundary Value Problem; Eigenvalues and Eigenfunctions

1. Apr 22, 2012

Pinedas42

1. The problem statement, all variables and given/known data

Find the eigenvalues and eigenfunctions for the BVP:

$y''' + \lambda^2 y' = 0$, with $y(0)=0$, $y'(0)=0$, $y'(L)=0$

2. Relevant equations

$m^3 + \lambda^2 m = 0$, the auxiliary equation

3. The attempt at a solution

3 cases: $\lambda = 0$, $\lambda < 0$, $\lambda > 0$. The first 2 always give $y=0$ as the only solution.

$\lambda > 0$ solution attempt:

$m^3 + \lambda^2 m = 0$
$m(m^2 + \lambda^2) = 0$
roots: $m = 0$ and $\pm \lambda i$

general solution: $y = A + B\cos(\lambda x) + C\sin(\lambda x)$, where A, B, and C are constants

$y' = -B\lambda \sin(\lambda x) + C\lambda \cos(\lambda x)$

$y(0)=0$ gives $0 = A + B$, or $A = -B$
$y'(0)=0$ gives $0 = \lambda C$, so $C = 0$
$y'(L)=0$ gives $0 = -\lambda B \sin(\lambda L)$

The only solution I find from these data is $y=0$, which seems kind of off since no eigenfunctions/values are found. From what I've read/studied so far, when $\lambda > 0$ there is always an eigenfunction/value.

The alternative I've considered is to take $B \neq 0$ and have the eigenvalues satisfy $\lambda L = n\pi$, giving $\lambda = n\pi/L$, which then gives the eigenfunction $y = A + B\cos(n\pi x/L)$.

2. Apr 22, 2012

LCKurtz

That's wrong; your next paragraph is the correct procedure. And since $A = -B$, you have $B(-1+\cos(\frac {n\pi x} L))$. You could leave off the B and write $y_n = -1+\cos(\frac {n\pi x} L)$.

3. Apr 22, 2012

Pinedas42

OK, thank you! I had thought that there always had to be a function for $\lambda > 0$, but I wasn't sure and I couldn't find any literature specifically mentioning it.
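As a quick check of the thread's conclusion (my own verification, not part of the original posts), the eigenfunctions $y_n = \cos(n\pi x/L) - 1$ do satisfy all three boundary conditions:

$y_n(0) = \cos(0) - 1 = 0, \qquad y_n'(x) = -\frac{n\pi}{L}\sin\left(\frac{n\pi x}{L}\right), \qquad y_n'(0) = 0, \qquad y_n'(L) = -\frac{n\pi}{L}\sin(n\pi) = 0$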
2017-12-15 20:11:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749563455581665, "perplexity": 3137.8478539331295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579564.61/warc/CC-MAIN-20171215192327-20171215214327-00232.warc.gz"}
http://cvgmt.sns.it/seminar/653/
# Metric currents and the Poincaré inequality

## Katrin Fässler

created by dimarino on 18 Sep 2018

Abstract. Doubling metric measure spaces supporting a Poincaré inequality constitute a good environment to carry out analysis even without the presence of a smooth structure. It is therefore of interest to find characterizations for the validity of such an inequality. I will show that a complete doubling metric measure space $X$ supports a weak $1$-Poincaré inequality if and only if it admits generalized pencils of curves (GPCs) joining any pair of distinct points in $X$. We define GPCs in terms of normal $1$-currents, and they turn out to act as a relaxed version of Semmes' notion of "pencil of curves". The construction of GPCs is based on the max flow - min cut theorem in graph theory. This is joint work with Tuomas Orponen.
2021-04-19 16:23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4865982234477997, "perplexity": 526.6441634398723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038887646.69/warc/CC-MAIN-20210419142428-20210419172428-00099.warc.gz"}
https://electronics.stackexchange.com/questions/351640/synchronising-signals-of-verilog-test-bench-with-rtl-clock
# Synchronising signals of Verilog test bench with RTL clock

I have been given an interesting assignment. My task was to design a 4 bit up_down counter which has two controlling signals, up_down and load. The up_down signal decides whether the counter should be up-counting or down-counting (up_down=1'b1 up-counting and up_down=1'b0 down-counting). The load signal is to load the 4 bit data into the counter. I was able to design and verify the code in RTL and the test bench. But now I am notified to make sure my input driving signals from the test bench arrive after the posedge of the clock in RTL. That means my load and up_down signals should arrive after the positive clock edge of the RTL clock. Please find my RTL and testbench. I am told to write a logic in the test bench to make sure my signals arrive later than the posedge. I would need some help on this. Please do respond.

My RTL code:

module up_down_counter(
  input clock,
  input reset,
  input load,    // NOTE: this port and the if (load) branches below were garbled in the post as archived and have been restored to match the question text
  input up_down, // up_down = 1'b1: up-counter; up_down = 1'b0: down-counter
  input [3:0] data,
  output [3:0] counter
);

reg [3:0] counter_reg;

always@(posedge clock or posedge reset)
begin
  if(reset == 1'b1) begin
    counter_reg <= 4'b0000;
  end
  else begin
    if (up_down == 1'b1) begin
      if (load == 1'b1) begin  // restored condition
        counter_reg <= data;
      end else begin
        counter_reg <= counter_reg + 4'b0001;
      end
    end
    else begin
      if (load == 1'b1) begin  // restored condition
        counter_reg <= data;
      end else begin
        counter_reg <= counter_reg - 4'b0001;
      end
    end
  end
end

assign counter = counter_reg;

endmodule

my Test Bench:

module tb_up_down_counter;

reg clock;
reg reset;
wire [3:0] counter;
reg up_down;
reg load;        // added to drive the restored load port; this reconstructed stimulus never asserts it, so pulse load around the data assignments to exercise the load path
reg [3:0] data;

up_down_counter dut(
  .clock(clock),
  .reset(reset),
  .load(load),
  .data(data),
  .up_down(up_down),
  .counter(counter)
);

initial begin
  clock = 1'b0;
  forever #50 clock = ~clock;
end

initial begin
  reset <= 1'b0;
  up_down <= 1'b0;
  load <= 1'b0;
  data <= 4'd0;
  repeat(5) @(posedge clock);
  reset <= 1'b1;
  repeat(5) @(posedge clock);
  reset <= 1'b0;
  repeat(10) @(posedge clock);
  up_down <= 1'b1;
  repeat(5) @(posedge clock);
  repeat(10) @(posedge clock);
  data <= 4'd3;
  repeat(5) @(posedge clock);
  up_down <= 1'b0;
  repeat(5) @(posedge clock);
  repeat(5) @(posedge clock);
  repeat(10) @(posedge clock);
  data <= 4'd5;
  repeat(5) @(posedge clock);
  up_down <= 1'b1;
  repeat(5) @(posedge clock);
  reset <= 1'b1;
  repeat(10) @(posedge clock);
  reset <= 1'b0;
  repeat(10) @(posedge clock);
  repeat(10) @(posedge clock);
  data <= 4'd8;
  repeat(10) @(posedge clock);
  data <= 4'd15;
  repeat(10) @(posedge clock);
  data <= 4'd12;
  #5000 $finish;
end

initial begin
  $shm_open("waves.shm");
  $shm_probe(tb_up_down_counter,"AC");
end

endmodule

• I mean what I was told: even though my code is working perfectly fine, it seems that my control signals (load and up_down) arrive a bit earlier than the clock edge, which is not supposed to happen. My signals from the tb should come a little later than the posedge so that the posedge of clk has priority. I am told to write a flipflop logic with always@(posedge of clock) in the test bench to make sure my signals arrive after the edge. I don't know how to do this. Please help – Dig_Verif_bee Jan 23 '18 at 15:42
• Page 3 of assets.nexperia.com/documents/data-sheet/74HC_HCT193.pdf has such a device designed. As the device is standard (74xx193), you must be able to find plenty of projects emulating it using various input signal timing. – Anonymous Jan 23 '18 at 16:06
• Hi, please can you edit your question and add the extra detail you put in a comment. That makes it easier for future readers to learn from your question and its answers. Thanks.
– TonyM Jan 23 '18 at 16:50 Some comments from somebody who has written hundreds of counters: First off all, control signals should not arrive a-synchronously. However I object to the phrase [control] signals should arrive after the positive clock edge. This gives the impression that digital logic should be controlled by signals with some artificial delay in it. The solution using a hash-tag emphasizes this even more. In real life the control signals should arrive and be stable before the set-up time of the register. Therefore in ASIC/FPGA engineering the phrase normally used is the signal should arrive before the clock edge. This is achieved by the clock to Q delay of a register incremented by the wire delay. There may be (often is) additional logic behind the register which increases the delay. In your test bench you can do this simple by: reg tb_load; // This is the one use use in your test bench reg load; // This one goes to you DUT (Device Under Test) always @(posedge clk or posedge reset) if (reset) else 1/ The load signal is dominant. Your code becomes simpler and easier to read if you deal with that first: if (load) ... else if ( 2/ Do not use the name 'up_down'. Your counter counts up if that signal is high so call it e.g. 'count_up'. • Thank you for your comments. Really helpful. I will try to do the suggested way and get back to you with the results. Kind regards – Dig_Verif_bee Jan 24 '18 at 9:24 • Hello oldfart, could you please clear few things for me. I have used the same names in my tb as load and count_up as in rtl and will connect them via .load(load),.count_up(count_up). but if I write the conditional block where if(reset) load<= 1'b0 else load<=load?? there wont be any input value given to load in tb at all right? – Dig_Verif_bee Jan 24 '18 at 9:38 • You make a clock in your test bench which always runs. Then in your initial section you do @(posedge clock ) load <= '1'; If you look here: www.verilog.pro you find plenty of examples of not only code but self-checking test benches too. The latter are often left out on other Verilog learning sites. – Oldfart Jan 24 '18 at 10:30 • Thank you. You were absolutely right. The test bench with your approach worked perfectly and now I am told to do the self checking function in test bench. Starting with this now. Will have look at your link.Thanks once again. – Dig_Verif_bee Jan 24 '18 at 13:03 You can add a delay with a hashtag #, so repeat(5) begin @(posedge clock); end #10 reset <= 1'b1; Will assert reset 10 nS (depending on your simulator settings, it could be pS or uS) after the final clock edge • Given that OP does simulation only, without real application when # does not have effect. – Anonymous Jan 23 '18 at 16:08 • Thank you so much pscheidler. Because I am told to write a always logic in testbench for my control signals, was trying hard. Just now wrote this testbench and its kind of similar but needs some polishing. just added below block and similar block for load in my test bench always@(posedge clock or posedge reset) begin if(reset == 1'b1) begin up_down <= #10 1'b0; end else begin up_down <=#10 1'b1; end end – Dig_Verif_bee Jan 23 '18 at 16:16
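As an aside not taken from the thread: if you drive simulations from Python with the cocotb framework, the same "change the inputs just after the edge" discipline falls out of awaiting the edge before assigning. A rough sketch, with port names assumed to match the DUT above:

    # Hedged cocotb sketch (cocotb is not mentioned in the thread).
    # Assumes the DUT ports from the question: clock, reset, load, up_down, data.
    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def drive_after_edge(dut):
        # Free-running 100 ns clock, like the Verilog bench above.
        cocotb.start_soon(Clock(dut.clock, 100, units="ns").start())
        dut.reset.value = 1
        dut.load.value = 0
        dut.up_down.value = 0
        dut.data.value = 0
        for _ in range(5):
            await RisingEdge(dut.clock)
        dut.reset.value = 0
        await RisingEdge(dut.clock)
        dut.up_down.value = 1   # applied just after this edge,
        await RisingEdge(dut.clock)
        dut.data.value = 3      # ...so the DUT first samples it
        dut.load.value = 1      # at the following edge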
https://www.hckrnws.com/stories/31377262
Thinking in an array language (github.com) by tosh

carapace 12 days
I was just playing with Nils M Holm's Klong this morning: https://t3x.org/klong/index.html (Klong rather than the others mostly because the C implementation looks like C, so I have a ghost of a chance of actually grokking it.) These folks are really onto something, but I think they get sidetracked in the (admittedly very very fun) minutiae of the languages and lose sight of the crucial insight in re: mathematical notation, to wit: it's a means of human communication. For APL or K to get out of their niches would require, I am convinced, something like a tome of documentation of a ratio of about 1.5 paragraphs per line of code. That would give us mere mortals a fighting chance at grokking these tools. A similar problem plagues the higher-order stuff they're pursuing over in Haskell land. I know e.g. "Functional programming with bananas, lenses, envelopes and barbed wire" and "Compiling to Categories" are really important and useful, but I can't actually use them unless some brave Prometheus scales Olympus and returns with the fire. Stuff dribbles out eventually. Type inference and checking have finally made it into the mainstream after how many decades?

razetime 12 days
Here are some links relating to this style of code that you may find useful:
https://docs.google.com/document/d/1W83ME5JecI2hd5hAUqQ1BVF3...
https://github.com/tlack/b-decoded
https://chat.stackexchange.com/rooms/90748/conversation/ngn-...
They're not 1.5 paragraphs per line, but enough to give a taste of the implementation style.

carapace 12 days
Thank you. :)

jodrellblank 12 days
> "would require, I am convinced, something like a tome of documentation of a ratio of about 1.5 paragraphs per line of code. That would give us mere mortals a fighting chance at grokking these tools."
You can download a free PDF copy of Mastering Dyalog APL by Bernard Legrand, which is 700+ pages, from here: https://www.dyalog.com/mastering-dyalog-apl.htm

carapace 12 days
That's an amazing reference, but it's about the language; I was thinking more of walk-throughs of code in the language. E.g., for some BQN code: https://news.ycombinator.com/item?id=30913872 There was a better example a couple of weeks ago here in a thread: someone had done a bit of APL or K for Advent of Code or something and posted a line, and someone else broke it down and explained how it worked. I spent an hour just now with Algolia trying to find it, but I failed. :( Holm's Klong docs have a good example in https://t3x.org/klong/klong-intro.txt.html where he explains how a table formatter function works. Mathematical equations are usually embedded in papers that explain them. (I mean, I've read papers that were basically equations one-after-another with just scraps of interstitial prose, but they were heavy going.)

jodrellblank 12 days
> "There was a better example a couple of weeks ago here in a thread, someone had done a bit of APL or K for Advent of Code or something and posted a line, and someone else broke it down and explained how it worked. I spent an hour just now with Algolia trying to find it but I failed. :("
It wasn't a couple of weeks ago, but I did that for a line here: https://news.ycombinator.com/item?id=30463080 Or could it have been on an Advent of Code link? There have been some explanations in the answers mega-threads on Reddit. Anyway, yes, I agree more explanations would be beneficial - and I think there would be room for an animated explainer website with small blocks representing the array elements, coloured by how they are grouped by each primitive operation, and visually showing them moving round and splitting and combining. Such a thing would make a lot more sense for an array language than for many languages.

carapace 12 days
Ach! Yes, thank you! That comment! LOL, I feel a little silly now. Your explanation was fantastic, and yeah, I think the availability of more information like that would go a long way towards lowering the barrier for people to pick up these languages. Even if they don't actually use APL and its ilk, they can still get a better idea of how to use things like Numpy by getting familiar with the concepts and idioms that support them. (I forgot to mention "A History of APL in 50 Functions" https://www.jsoftware.com/papers/50/ I just started working my way through that and it's been very helpful.)
> and I think there would be room for an animated explainer website with small blocks representing the array elements, coloured by how they are grouped by each primitive operation, and visually showing them moving round and splitting and combining. Such a thing would make a lot more sense for an array language than for many languages.
That would be pretty awesome. Reminds me of Guo's Python Tutor https://pythontutor.com/

jodrellblank 9 days
Thanks; yes, I agree more examples like that would help. That PythonTutor site looks brilliant; I will have to play with it some more. It's a lot like I was imagining.

carapace 12 days
That looks great! Cheers!

rramadass 11 days
You might also find this interesting: https://news.ycombinator.com/item?id=30412654#30416737

carapace 11 days
YES! Thank you! A Rosetta stone at last! Prometheus has come and my hearth is lit.

unnouinceput 12 days
Any performance benchmarks between a simple C implementation of matrix multiplication like the one explained in the article and the K one? Story time: 2 years ago, right before the 1st lockdown, I landed a client, a data scientist who had already had an algorithm dealing with matrices, implemented by a previous programmer he hired. Said implementation was in Python, no more than 10 lines altogether, which performed well when matrix sizes were small, like 10x10. But the problem was, his real work needed matrices of size 10^6 x 10^6. Not only did the Python implementation have to run on a beast of a server with 4 TB of memory, it also took 3 days to finish. And while the algorithm was small in Python, its explanatory paper was 4 pages in total, which took me 1 week to understand. And then an entire month to implement in C. But in the end, when all was said and done, the run time with real data was only 20 minutes, and it consumed only 8 GB of memory, though it did require at least 16 virtual processors. Hence my question: in the end, performance is what matters, not the number of lines.

mlochbaum 12 days
K timing, squaring a 100x100 matrix in ngn/k:

    a:100 0N#?10000     / 100x100 double matrix
    \t:1000 a(+/*)\:a   / Total of 1k runs in ms
    1514

Following C program outputs 272 when compiled and run with -O3 -march=native.
    #include <stdio.h>
    #include <time.h>

    size_t monoclock(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return 1000000000*ts.tv_sec + ts.tv_nsec;
    }

    int main() {
        const size_t n = 100;
        double a[n][n], r[n][n];
        size_t t = monoclock();
        for (size_t iter=0; iter<1000; iter++) {
            for (size_t i=0; i<n; i++) {
                for (size_t j=0; j<n; j++) {
                    double sum = 0;
                    for (size_t k=0; k<n; k++) sum += a[i][k]*a[k][j];
                    r[i][j] = sum;
                }
            }
        }
        printf("%lld\n", (monoclock() - t)/1000000);
    }

So K is slower, by a factor of 5 rather than the 200 you saw with Python. K appears to scale to larger matrices better than C: 8664 in K vs 2588 in C for 200x200, but I can't be bothered to sort out heap allocation for the C code to do larger sizes. I would certainly not say that your story supports the idea that performance is what matters. In the month you took to implement the C version, the Python solution could have run ten times over, and there are many computations that only ever need to be performed once. Besides, I bet you were glad to have working Python code to use as a reference!

jiggawatts 12 days
I prefer "middle of the road" languages that are high-level AND readable AND have decent performance optimisation for bulk operations. Python with C libraries suffices for a lot of people; Julia similarly is getting popular. Even the older Mathematica language blows both K and C out of the water for readability and performance:

    m = Table[RandomReal[], 100, 100];
    t = RepeatedTiming[MatrixPower[m, 2]];
    First[t]*1000*1000
    18.0245

You would have to know literally zero about Mathematica's Wolfram Language to be able to read that clearly! From a standing start you could understand what another person has created. For K, you'd have to have memorized the K-specific syntax. For C, if you hadn't seen the standard patterns for matrix multiplication you'd have to read the code carefully. A lot of it is just noise, like the verbose for-loop syntax. Oh, and Mathematica's matrix power function is:
- Parallel! If I use a 10K x 10K matrix as an input, it uses about 75% CPU on my 8-core laptop. It can complete a single multiplication in 5.3 seconds. For laughs, try that with either K or C and see what you get...
- Extends to negative or fractional powers.
- Has optimisations for applying the matrix power directly to a vector.
- Is extensively documented, unlike the terse K snippet or the hand-rolled C code: https://reference.wolfram.com/language/ref/MatrixPower.html....

Essentially what I'm trying to say is that terseness is an anti-pattern, and doesn't even begin to approach the utility of a well-designed high-level language intended for teams of collaborating humans.

johndough 12 days
When you compile the C example with the compiler option "-Wall", you'll get a warning that the variable 'r' is set but not used, so the compiler would be free to simply skip the for loops. In fact, if you compile with clang instead of gcc, the compiler will do just that and you'll get almost zero computation time. It would be better to do something with the computed result so the compiler does not remove the computation, e.g. print a randomly selected value. I also benchmarked the fixed C code against Python, and "Python" (which uses OpenBLAS under the hood in my case) was 10 times faster:

    import time
    import numpy as np

    a = np.random.rand(100, 100)
    t = time.perf_counter()
    for _ in range(1000):
        a @ a
    print(time.perf_counter() - t, "seconds")

Implementation matters a lot for matrix multiplication.
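To make "implementation matters" concrete, here is a small timing sketch written for this page (not from the thread): a pure-Python triple loop against numpy's BLAS-backed @ on the same 100x100 case. Absolute numbers will vary by machine; the gap will not.

    import time
    import numpy as np

    def naive_matmul(a, b):
        # Direct transcription of the definition: r[i][j] = sum_k a[i][k]*b[k][j]
        n = len(a)
        r = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(n):
                    s += a[i][k] * b[k][j]
                r[i][j] = s
        return r

    a = np.random.rand(100, 100)
    al = a.tolist()

    t = time.perf_counter()
    naive_matmul(al, al)
    print("pure Python, 1 multiply:    ", time.perf_counter() - t)

    t = time.perf_counter()
    for _ in range(1000):
        a @ a
    print("numpy/BLAS, 1000 multiplies:", time.perf_counter() - t)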
mlochbaum 12 days
Yes, definitely a quick and dirty benchmark (I did test after I posted to see if initializing a does anything; it didn't). Timings for J below, since I think it's the most focused on linear algebra. The remarkable thing about the K code from the article is that it's all made from totally general-purpose pieces that have nothing to do with matrix products, and K interpreters don't have any clue what a matrix product is. In J the matrix product is written +/ .* with the generalized dot product operator . (which does need the preceding space, oof) and handled by specialized code. Given that, I found this measurement a little disappointing: about as fast as my C code in the 100x100 case and slightly faster in the 200x200 case.

    a =: ?100 100$0
    <.1e6 * 1000 (6!:2) '+/ .*~ a'
    269
    a =: ?200 200$0
    <.1e6 * 1000 (6!:2) '+/ .*~ a'
    1796

harshreality 11 days
Naive implementations of stock matrix math can't get anywhere close to numpy or julia, which both use BLAS and automatically parallelize across cores.

    % python matrix.py
    Timing 10 squares of a random 10000 x 10000 matrix
    97.3976636590669 seconds
    python matrix.py 364.41s user 8.10s system 379% cpu 1:38.25 total

julia has more overhead, and the first multiply triggers code compilation, so there's an additional warm-up square outside of the timing loop, but its "warm" performance is equivalent to numpy. Turning on extra optimizations (-O3) can even make it a couple of seconds faster than numpy once warmed up.

    % julia matrix.jl
    Timing 10 squares of a random 10000 x 10000 matrix
    97.787679 seconds (31 allocations: 7.451 GiB, 0.33% gc time)
    julia matrix.jl 405.34s user 8.13s system 375% cpu 1:50.09 total

If you're going to wait for that C implementation, or the other comment's K implementation, to finish that loop, you'll want a book.

unnouinceput 12 days
Nice benchmarking, thank you for your effort. Also, the scientist was leading a team, so my program would have been used by at least 20 people, 10 times per day. That was why the Python one was a no-go for them from the beginning.

harshreality 12 days
Can you share the algorithm, or anything computationally equivalent, for people to try benchmarking different implementations?

unnouinceput 12 days
The one I wrote 2 years ago? That's the intellectual property of said data scientist, not mine. All I can say is that I parallelized it a lot, hence the entire month. From a programming point of view it is a mess and hard to follow its ~5k lines. Usually parallel programming is a mess; you should take a look at any parallel CUDA code available on GitHub.

harshreality 12 days
Was the original just regular Python, or numpy? The C version wasn't GPU-targeted, though, from your description. I'm curious what other implementations would be capable of, for instance julia, maybe GPU-targeted.

unnouinceput 12 days
We discussed, since we had already agreed on parallelization, whether he wanted CUDA, since that would have been even faster. But after discussing with his team, he said no GPU-dependent implementation, and I started the work. He never shared why no GPU implementation, and I didn't press the matter further since I was already knee-deep in trying to understand the algorithm, which was the bigger stone to crack at the time.

btheshoe 12 days
This all seems very similar to writing vectorized code in numpy.

eismcc 12 days
That's because numpy is based on J which is based on APL.

MontyCarloHall 12 days

da39a3ee 12 days
What? matmul: (+/*)\:

ogogmad 13 days
Array languages might make good maths notation. It's terse and easy to write, and there's a logical naming scheme (for instance, matrix multiplication is just (+/*)\: ). I suppose the trick is to think of (+/*)\: as one unit.
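For readers who don't speak K, here is a rough numpy transcription of what a(+/*)\:a computes, taking (+/*)\: apart piece by piece; the decomposition is my own reading of ngn/k's semantics, not something stated in the thread.

    import numpy as np

    a = np.random.rand(100, 100)

    # \: ("each-left") applies the verb (+/*) once per row x of the left
    # argument. Inside the verb, x*a pairs x[i] with row a[i], scaling it,
    # and +/ sums the scaled rows: sum_i x[i]*a[i], i.e. the vector x @ a.
    rows = np.array([(x[:, None] * a).sum(axis=0) for x in a])

    assert np.allclose(rows, a @ a)  # the whole expression is a matrix product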
pxeger1 13 days
APL, an array language, literally started out as a computer adaptation of traditional mathematical notation. https://aplwiki.com/wiki/Iverson_notation https://aplwiki.com/wiki/Comparison_with_traditional_mathema...

hoosieree 12 days
Math notation is highly context-dependent (is + addition or boolean OR?) and yet authors rarely feel the need to provide context. If they wrote in an array language instead of LaTeX, not only would it make writing papers easier (+/ is shorter than either \Sigma or \sum), but it would be trivially reproducible, due to being an executable notation.

grayclhn 12 days
Yeah… for actual papers "easier to read" is 1000x more important than "easier to write."

kmstout 12 days
Somewhere on YouTube is a talk where Gerald Sussman described mathematics (or at least mathematical presentation) as "impressionistic."

Mathnerd314 12 days

IshKebab 12 days
"What if we could make entire programs as unreadable as regexes?" -K

LAC-Tech 12 days
This is such a lame take. Everything is unreadable until you learn how to read it.

IshKebab 12 days
Everything is not equally readable once you have learnt it.

icsa 12 days
And yet regular expressions are a part of most programming languages, as a library or built-in syntax. Why is that?

bear8642 12 days
Probably because they're a great notation for the problem area, which for regex is concisely describing text patterns. For example 'a*b' is any number of 'a's followed by 'b'. How else would you concisely state that?

LAC-Tech 12 days
> How else would you concisely state that?
Presumably people who hate array languages think all 3-character regexes should instead be big nested loops, so they are "readable".

samatman 12 days
Regexes in the Unix tradition are a user interface as much as a programming language. Not that there's a sharp distinction, but it's almost a trite observation that regexes per se shine for ad hoc string searching but show their weakness when they start becoming parts of programs. When writing a program, I prefer to use a PEG, giving the less compact notation 'a'* 'b' but also letting me say 'a'* b and define b as its own rule, including recursion for the useful cases. It helps that it's more powerful, being little more than a formalization of the post-regular strategies used in Perl-style 'regular' expressions while embracing recursion. For '/' in vim, grep, wherever? Yeah, regex is fine; that's what it was designed for.

IshKebab 12 days
I can't remember the names, but I've seen at least two alternative syntaxes recently that are a lot more readable. At least one of them fixed the issue of regex mixing up control in-band with data. So your example would be something like "a"* "b" Much more readable and less error-prone.

bear8642 12 days
> the issue of regex mixing up control in-band with data
Could you explain this? I don't quite understand what the problem is. Do you mean something like sed's regex substitute command?

woojoo666 12 days
I believe they mean the operators and operands are all mixed up, e.g. in a*b, and this makes it so you have to escape all sorts of characters, but if you split it into "a"* "b" then the separation is clear.

IshKebab 11 days
I mean it isn't clear whether a character is a control character (* + ? [ ] - etc.) or a literal character, because they're all mixed up. The rules about which is which are too complex, extensive and varying. If you use syntax like "a"* "b" then it's really obvious - the stuff in quotes is literal text, everything else is control. Lots of formats make the same mistake, e.g. YAML.

IshKebab 12 days
They're very quick to write and they (in appropriate cases) would be quite difficult to implement otherwise (tedious state machine stuff). They are still massively overused, though. Using a regex at all is a huge red flag. Sometimes they are appropriate, but not in 90% of cases in my experience. Anyway, I'm not sure the same is true for K. At least for the given example the for loop was not exactly difficult to write.

snidane 12 days
Array notation is great, but only for single array operations and for dense array operations. The moment you need to run complex group-bys and store non-contiguous data efficiently, it gets awkward pretty quickly. On the other hand, operations on dense data are pretty cumbersome in SQL. You can only do so much with limited support for proper scan algorithms, merge joins or bolted-on window functions. Please somebody combine APL with SQL and you win the programming language wars.

hoosieree 12 days
> Please somebody combine APL with SQL and you win the programming language wars.
kdb+ and q fit this description. Docs here: https://code.kx.com/q/basics/qsql/ Here's an example. In SQL:

    SELECT stock, SUM(amount) AS total
    FROM trade
    GROUP BY stock

In q:

    q)select total:sum amt by stock from trade
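For comparison, the same aggregation in pandas; the little trade table below is invented here to match the column names in the example.

    import pandas as pd

    trade = pd.DataFrame({"stock": ["ibm", "ibm", "aapl"],
                          "amt":   [100.0, 50.0, 75.0]})

    # Equivalent of: select total:sum amt by stock from trade
    total = trade.groupby("stock")["amt"].sum().rename("total")
    print(total)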
eismcc 12 days
Was gonna say the same, plus kdb+ is a columnar store, so you can get vectorization in your SQL execution as well.

mlochbaum 12 days
I would say APL-family languages today have largely addressed these concerns with operators such as Key[0][1] and nested arrays. J also has built-in support for sparse arrays. Some more complicated things like storing a list of lists in a compact representation (perhaps lengths and data) aren't supported natively, but I'd consider that a niche concern, as a list of lists will have similar performance, just with more memory use. There's a lot of database software built on array languages, with kdb and Jd being the most prominent as far as I know.

nerdponx 12 days
R? Julia? Python with Pandas, even.

geophile 12 days
Sort of related: thinking in relational algebra, or SQL. It appears to be "natural" to think about computing one atomic value at a time, in loops, or slightly less intuitively, recursive functions. (That latter choice may follow from whether your first language was pure Lisp.) I was fortunate to have a teacher whose database course drilled relational algebra into us. This was in the 70s, shortly after Codd's paper, and well before SQL was invented, much less established. Now I think about much computation algebraically (and often functionally). But I do see that this is "unnatural" for many, including students, having taught databases for several years. SQL reflects this. I often see students writing nested subqueries, because that is more procedural, where joins would be a cleaner choice. A colleague of mine wrote a paper many years ago, pointing out that thinking procedurally is more "natural" for many: https://dl.acm.org/doi/10.1145/319628.319656. But thinking set-at-a-time instead of one-at-a-time is a valuable skill, not that far off from thinking functionally.

snidane 12 days
It's easier not to mess up table-based filters using explicit semi-join operators (e.g. IN, NOT IN, EXISTS) instead of using regular joins, because joins can introduce duplicates. Give me an 'any join' operation - i.e. just select the first value instead of all - and I'll happily use joins more. They are actually more intuitive. It's not that relational algebra is unintuitive. It's because standard SQL sucks.

magicalhippo 12 days
Indeed, I've taught myself to only use JOIN when I actually need some data from the table I join. For everything else I use EXISTS and friends. I was thinking SQL could do with a keyword for that, maybe FILTER, that looks like a JOIN but works like EXISTS.

snidane 12 days
Clickhouse implements an explicit SEMI join. It can be called semi or any, it doesn't really matter. It's just another join modifier [OUTER|SEMI|ANTI|ANY|ASOF] https://clickhouse.com/docs/en/sql-reference/statements/sele...

nerdponx 12 days
My problem with semijoins is that the semantics of "what exactly does a SELECT evaluate to inside an expression" are sometimes murky and might vary across databases.

magicalhippo 12 days
Could you expand a little?

nerdponx 10 days
If I write WHERE x IN (SELECT ...) what the heck is the result of evaluating the inner query, in the outer expression? Maybe I am missing something, but the exact meaning seems to vary a lot across different databases. Some seem to have a standalone "table" data type, while others don't.

magicalhippo 10 days
I might be missing something as I'm self-taught, but the inner select specifies a set, and you "just" do a simple set membership test? How it's implemented is, as usual, up to the database server implementation. The ones I've used create a temporary table (as they do in so many other cases), and as such EXISTS is usually faster. But I wouldn't rely on this when moving to another implementation, and would use the query planner to check, just as I'd view the assembly output when moving to a new compiler. Again, I don't have tons of experience, so concrete (counter)examples are welcome.

fiddlerwoaroof 12 days
"First" doesn't make sense without an order.

jodrellblank 12 days
It even has its own tag on StackOverflow: https://stackoverflow.com/questions/tagged/greatest-n-per-gr... People who want it, want it with an order. Look at https://stackoverflow.com/questions/121387/fetch-the-row-whi... and https://stackoverflow.com/questions/3800551/select-first-row... and https://stackoverflow.com/questions/8748986/get-records-with... and their combined thousands of votes and dozens of answers, all full of awkward workarounds or ill-performing or specialised-for-one-database-engine code, for this common and desirable thing which would be trivial with a couple of boring loops in Python.

snidane 12 days
It does make sense for semi-joins. I care about the key, not the value. Random order is also a valid order.
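Those "couple of boring loops in Python" for greatest-n-per-group (here n = 1: highest amt per stock) really are short; rows and column names below are invented for illustration.

    # Pick the row with the highest amt for each stock.
    trades = [{"stock": "ibm",  "amt": 100},
              {"stock": "ibm",  "amt": 50},
              {"stock": "aapl", "amt": 75}]

    best = {}
    for row in trades:
        key = row["stock"]
        if key not in best or row["amt"] > best[key]["amt"]:
            best[key] = row

    print(best)  # one representative row per group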
Comment was deleted :(

TFortunato 12 days
Having taught it, do you have any recommendations for folks who are looking to improve their thinking-in-relations/sets/SQL skills?

geophile 10 days
I teach, in order, more or less (there is some interleaving):
- Data modeling
- Relational algebra
- SQL
- DBMS architecture (buffering, btrees, query processing, index selection, ...)
- Query optimization
- Transactions

A few assignments have students write an in-memory relational algebra implementation, and then use it to write queries. A typical query is a deeply nested set of function calls (as a "one-liner"). And only then do we get to SQL. The hope is that RA is so ingrained that the connections to SQL are easier to see. And this background is also really useful in understanding query optimization. All of this material, including assignments, is available online (this is my own webserver): http://geophile.com/115.

agumonkey 12 days
I think this is related to "wholemeal" programming, as some Haskellers/FP-ists do: thinking in set/tree/graph operations.

webmaven 12 days
> SQL reflects this. I often see students writing nested subqueries, because that is more procedural, where joins would be a cleaner choice.
In my experience, in the non-ad-hoc use-case, views can often be substituted for the procedural approach, forming the equivalent of a POSIX pipe.
> A colleague of mine wrote a paper many years ago, pointing out that thinking procedurally is more "natural" for many: https://dl.acm.org/doi/10.1145/319628.319656. But thinking set-at-a-time instead of one-at-a-time is a valuable skill, not that far off from thinking functionally.
Hmm. Given the proliferation of tabular data tools (especially spreadsheets) over the intervening 40 years, I wonder if those results would remain the same today (and whether there would be any difference among Excel power users that use pivot tables, etc.)

Comment was deleted :(
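The in-memory relational algebra geophile describes, with queries as deeply nested function calls, is easy to picture with a toy sketch; relations are lists of dicts, and all names and data below are invented.

    # Toy relational algebra over lists of dicts.
    def select(rel, pred):
        return [r for r in rel if pred(r)]

    def project(rel, cols):
        return [{c: r[c] for c in cols} for r in rel]

    def join(r1, r2, on):
        # Naive nested-loop equijoin on a shared column name.
        return [{**a, **b} for a in r1 for b in r2 if a[on] == b[on]]

    emp  = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2}]
    dept = [{"dept": 1, "dname": "eng"}, {"dept": 2, "dname": "ops"}]

    # A query in the "deeply nested function calls" style:
    print(project(select(join(emp, dept, "dept"),
                         lambda r: r["dname"] == "eng"),
                  ["name"]))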