url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
http://www.mathcounterexamples.net/counterexamples-on-real-sequences-part-2/
# Counterexamples on real sequences (part 2) In that article, I provide basic counterexamples on sequences convergence. I follow on here with some additional and more advanced examples. #### If $$(u_n)$$ converges then $$(\vert u_n \vert )$$ converge? This is true and the proof is based on the reverse triangle inequality: $$\bigl| \vert x \vert – \vert y \vert \bigr| \le \vert x – y \vert$$. However the converse doesn’t hold. For example, the sequence $$u_n=(-1)^n$$ is such that $$\lim \vert u_n \vert = 1$$ while $$(u_n)$$ diverges. #### If for all $$p \in \mathbb{N}$$ $$\lim\limits_{n \to +\infty} (u_{n+p} – u_n)=0$$ then $$(u_n)$$ converges? The assertion is wrong. A simple counterexample is $$u_n= \ln(n+1)$$. It is well known that $$(u_n)$$ diverges. However for any $$p \in \mathbb{N}$$ we have $$\lim\limits_{n \to +\infty} (u_{n+p} – u_n) =\ln(1+\frac{p}{n+1})=0$$. The converse proposition is true. Assume that $$(u_n)$$ is a converging sequence with limit $$l$$ and $$p \ge 0$$ is any integer. We have $$\vert u_{n+p}-u_n \vert = \vert (u_{n+p}-l)-(u_n-l) \vert \le \vert u_{n+p}-l \vert – \vert u_n-l \vert$$ and both terms of the right hand side of the inequality are converging to zero. #### If $$\lim (u_{2n} – u_n) = 0$$ and $$(u_n)$$ is increasing then $$(u_n)$$ converges? Is not true. However, in order to find a counterexample, we have to pick-up a slowly increasing sequence. As a matter of fact the sequence $$\ln(n+1)$$ is increasing too quick as $$\lim \bigl(\ln(2n+1) – \ln(n+1)\bigr) = \ln 2$$. The sequence $$u_n=\ln(\ln(n+1))$$ provides a counterexample. $$u_n$$ is increasing as the function composition of two increasing maps. We also have the equality: $u_{2n} – u_n=\ln \left(\frac{\ln 2}{\ln(n+1)}+\frac{\ln(n+1/2)}{\ln(n+1)}\right)$ and the right end side converges to $$0$$. #### If $$(u_n)$$ is positive and unbounded then $$u_n \to +\infty$$? Does not hold. Have a look at $$u_n=\begin{cases} 0 & \text{for } n \text{ even} \\ n & \text{for } n \text{ odd} \end{cases}$$ #### If $$\lim u_n = +\infty$$ then $$(u_n)$$ is eventually increasing? Still not! The sequence $$u_n=\begin{cases} n & \text{for } n \text{ even} \\ 2n & \text{for } n \text{ odd} \end{cases}$$ provides a counterexample. #### If $$\lim (u_{n+1} – u_n) = 0$$ then $$\lim \frac{u_{n+1}}{u_n} = 1$$? Is also wrong as you can see with the sequence $$u_n=\begin{cases} \frac{1}{n+1} & \text{for } n \text{ even} \\ \frac{1}{(n+1)^2} & \text{for } n \text{ odd.} \end{cases}$$ #### If $$\lim (u_{n+1}-u_n) =0$$ and $$(u_n)$$ is bounded then $$(u_n)$$ converges? This is still wrong! But it is slightly more difficult to find a counterexample. One idea is to find a sequence which oscillates between $$0$$ and $$1$$ with steps decreasing to zero. The sequence: $u_n=\begin{cases} 0 & \text{for } n = 0\\ 1-\frac{n-2^{2k}}{2^{2k}} & \text{for } 2^{2k} \le n < 2^{2k+1}\\ \frac{n-2^{2k+1}}{2^{2k+1}} & \text{for } 2^{2k+1} \le n < 2^{2k+2}\end{cases}$ is well defined and having all the required properties: • for all $$n \in \mathbb{N}$$, $$u_n \in [0,1]$$, therefore $$(u_n)$$ is bounded. • For $$n \ge 2^k$$ we have $$\vert u_{n+1}-u_n \vert \le \frac{1}{2^k}$$, hence $$\lim (u_{n+1}-u_n) =0$$. • However, $$(u_n)$$ diverges as for all $$k \in \mathbb{N}$$ we have $$u_{2^{2k}}=1$$ and $$u_{2^{2k+1}}=0$$. It is interesting to notice that the set of limit points of $$(u_n)$$ is the interval $$[0,1]$$. The examples above were inspired from mathematical exercises given by math teacher Robert FERRÉOL. ## 3 thoughts on “Counterexamples on real sequences (part 2)” 1. Dan Anderson says: Pretty sure that your counterexample for “If for all p∈N limn→+∞(un+p–un)=0 then (un) converges?” is incorrect. By hypothesis, p is allowed to range over the entirety of N. The Archimedean property of the naturals and reals ensures there exists a p st. p > n+1 for all n (even if n→∞), and so p/(n+1) = q > 1. So, since ln is increasing, ln(1+q) > ln(1+1) = ln(2) > 0. By this method, both q and ln(1+q) can be made arbitrarily large. Moreover, I belive that the given condition is actually equivalent to the Cauchy Criterion, and so does imply (and is implied by) the convergence of the sequence. 1. Dan Anderson says: Also, excellent site! Don’t let the analysis-happy undergrad slow you down.
2018-11-21 07:36:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9580036401748657, "perplexity": 308.7850418212082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747369.90/warc/CC-MAIN-20181121072501-20181121094501-00360.warc.gz"}
http://math.stackexchange.com/questions/324343/find-the-recurrence-relation-of-a-string
# find the recurrence relation of a string So I got this problem: compute the number of n bit string that do not contain pattern 010 that have no leading 0, one leading zero, two leading zero, and so on. So far, I got the expression: Sn = Sn-1 + Sn-3 + Sn-4...+ S1 However the solution is: Sn = Sn-1 + Sn-3 + Sn-4...+ S1 + 3 My question is, where + 3 come from? thx update: I was thinking that if the string that does not contain 010 begin w/ 1, the rest will be Sn-1. if the string begin with one 0 (which mean the first two bit is 01), the next bit have to be 0 -> Sn-3 and so on - It would help to see how you got your answer. It is easier to check that way. Clearly it is not right as S1=2 and S2 can't be 5-there aren't that many strings. But S2 is 4, and your recurrence would make it 2 – Ross Millikan Mar 8 '13 at 4:14 @rossmilikan edited. as u can see, there's no S0 -> so, S1 is the init, S1 = 1. S2 = 1 + 3 =4 – user1988385 Mar 8 '13 at 4:26 I'm not understanding your definition of Sn. It seems both $0$ and $1$ are legal one bit strings, so S1 should be 2, and all four two bit strings are still legal, so S2 should be 4. – Ross Millikan Mar 8 '13 at 4:32 if n=0, there's no string at all. which mean the number of occurrence of a string that not contain 010 is 0. Honestly, I don't rly get this problem too :D – user1988385 Mar 8 '13 at 4:35 ## 1 Answer Let us define $00(n)$ as the number of n bit strings that start with $00$ and do not include $010$, and $10(n), 01(n),$ and $11(n)$ similarly. Then $S(n)$ the total number of $n$ bit strings is the sum of these. We then have $$00(2)=10(2)=01(2)=11(2)=1\\00(n)=01(n-1)+00(n-1)\\01(n)=11(n-1)\\10(n)=00(n-1)+01(n-1)\\11(n)=10(n-1)+11(n-1)$$ It is curious (and easy to prove) that $11(n)=S(n-2)$. The recurrence should be solvable by diagonalizing the matrix. The growth rate is the square of the plastic constant, about $1.754877666$ It is also the real root of $x^3-2x^2+x-1=0$ Added: It starts \$1,2,4,7,12,21,37,65,114,200,351. This is OEIS A005251 where there is more description. -
2016-05-29 15:41:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6467481851577759, "perplexity": 579.9687061361911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281363.50/warc/CC-MAIN-20160524002121-00183-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.intel.com/content/www/us/en/developer/articles/technical/game-dev-with-unity-ml-agents-and-intel-optimized-python-part-two.html
# Game Dev with Unity* ML-Agents and Intel® Optimized Python* (Part Two) Published: 07/06/2018 Last Updated: 07/06/2018 ## Abstract In the final part of this two-part series on machine learning with Unity* ML-Agents, we will dig deeper into the architecture and create an ML-Agent from scratch. Before training, we will inspect the files that require parameters for machine learning to proceed. Finally, we will train the agent using Intel® optimized Python* and show how the completed system works. ## Architecture of Unity* ML-Agents Figure 1 shows the architecture of Unity ML-Agents: Figure 1. Unity* ML-Agents architecture. At first glance, it might seem that the external communicator and Intel-optimized Python can only be used by the external brain, but this is not the case. The external brain can be accessed by other training modes, too. Every scene will have two entities: 1. An “Academy,” using an “Academy Script” that will be added later. 2. “Brains,” which are the logic inside Unity ML-Agents where the main connection lies. Agents share the same brain; each agent has an agent script on it which links back to the brain. The brain itself has a brain script on it. It may or may not have a decision script. ## Changes in V3 with Respect to V2 Unity ML-Agents have seen several changes, many based on community feedback. Some of the changes are described below: • The ML-Agents reward system changed to “AddReward()” or “SetReward().” • When we are working with an Agent and it has worked in its entirety or performed its function, we now use the “Done()” method. • The concept of state has been changed to observations, so “CollectStates()” have been replaced by “CollectObservations().” • When we collect Observations, we have to call “AddVectorObs()” with floats, integers, lists, and an array of floats, vectors, and quaternions. (Quaternions represent the orientation of every object in Unity.) The names of the inputs in the Internal Brain have been changed accordingly. • We must replace State with “Vector_Observation” and observation with “Visual_Observation.” The table below summarizes the key changes in V3: Old (V2) New (V3) State Vector Observation Observation Visual Observation (New) Text Observation Action Vector Action (New) Text Action Table 1. Changes in Unity* ML-Agents from v2 to v3. Use the following steps to start creating your own example of machine learning using Unity ML-Agents and Intel-optimized Python: 1. Open up the Unity ML cloned project. Everything we do will be kept inside the Examples folder. The cloned project is opened in Unity. 2. Create a new subfolder named “MyBall” within the Examples folder. We will keep all of our resources within this folder. The Examples folder is where we are keeping all the content and the resources. 3. Create a new scene using the suggested name “MyBall(scene).” Next, we will create a new scene. To start setting up machine learning inside the scene, we will have to create 3D objects, using the following steps: 1. Create a 3D object cube. 2. Add “rigid body” and make it “kinematic.” 3. Change the color of the cube. For adding colors to our object, we need to create a new material and name it “Blue.” We will change the color content to blue. (We can also change the color of the background.) 4. Create a 3D object sphere and add a rigid body to it. We will now organize the scene and add an event system from the UI. 5. Right-click on “Hierarchy” then select “Event System.” To follow the procedure for Unity ML-Agents, we need to separately create an Academy object and a brain object, and then associate the scripts properly. We will create an Academy object, then have a child object created from Academy named “Brain.” Within the brain, we will add the brain script; but when we do, we will notice an error in the inspector window, which we can quickly resolve. ## Adding Functionality to the Academy and the Brain Object When we add functionality to the Academy and Brain object by adding a C# script in it, we remove the error condition. The script follows a basic flow with some override methods. As we have created the ball Academy object, we can now create a C# script named “MyBallAcademy” and attach the script to the Academy in the hierarchy. Before editing, the script looks like this: using System. Collections; using System.Collections.Generic; using UnityEngine; public class MyBallAcademy : MonoBehaviour { // Use this for initialization void Start () { } // Update is called once per frame void Update () { } } We will not inherit from monobehaviour, as we are not deriving any characteristics from it. After we change the script, everything will be derived from Academy and we don’t need “void Start()” and “void Update().” using System. Collections; using System.Collections.Generic; using UnityEngine; // Use this for initialization public override void AcademyReset() { } public override void AcademyStep() { } } We have inherited from Academy and have declared two empty override methods as “AcademyReset()” and “AcademyStep().” We cannot change these methods, as this is the structure for any Academy script that you want to derive from. With both of these methods we have made the generalized script that can be used within the scene. With the changes made to the script, we have a basic, bare-bones structure for linking Academy and the brain. ## Basic Setup for the Scene In this scene we will be creating a cube, which we will refer to as the “platform.” Within that platform, we will place a sphere, which will act like a ball. With movements, we can adjust the ball in order to prevent it from falling off the platform. If the ball falls off, the scene will reset, and we will restart the balancing act. We now have our platform and the ball, but to demonstrate machine learning, we need to configure a brain to control the action. Once the system is under the control of the brain, it will drive the platform and then fire off an agent script. Our next job is to write the agent script. ## Programming and Scene Setup Logic We will now create an agent script and name it as MyBallAgent. We will inherit from the Agent. Once we add the MyBallAgent script to the system, we will immediately see what inherited values we need to put in. We will drag and drop Brain to the required inherited values. First, we will drag and drop the MyBallAgent script created to the cube as shown below. Then we drag and drop the child we created for Academy as brain to the Brain option, which showed none (shown below). In the Agent code itself, we will write all the controlling parameters we intend to use. We will declare a GameObject “ball,” which we will include from the inspector that is ball. public GameObject ball; Now the flow of the agent is controlled by the Unity ML-Agents plugin. (We will not need Unity’s default update method.) ## Override Overriding common methods. We need to override common methods because the type of environment we created might require changes and more training. For that we need to change the values of the parameters and override the common values present. First, we have to find out where we are going to have the transformations and other declarations for the game object. In version 0.3, game object changes have been shifted to “AddVectorObs,” which are now known as ”vector observations.” For object transformation, positions, and rigid body, we are declaring eight AddVectorObs (also known as “vector objects”). The method is called CollectObservations. AddVectorObs(gameObject.transform.rotation.z); The complete method is shown below. public override void CollectObservations() { SetTextObs("Testing " + gameObject.GetInstanceID()); } Here is what the above code does: 1. We get the x and z rotation; the game object will rotate in two directions. AddVectorObs(gameObject.transform.rotation.x); 2. We get the difference between the ball’s x position and the game object’s x position. 3. We get where the ball is respective to the platform. 4. We get the ball’s velocity in x,y and z directions. AddVectorObs(ball.transform.GetComponent<Rigidbody>().velocity.x); ## When the Game Resets, What Method Will We Override? The override method that we will be using for when the game resets is AgentReset(), which initiates when the ball is dropped onto the platform. Here are some of the key instructions: 1. Reset everything back to zero: gameObject.transform.rotation = new Quaternion(0f, 0f, 0f, 0f); 2. Change the velocity of the ball back to 0: ball.GetComponent().velocity = new Vector3(0f, 0f, 0f); 3. Set the position of the ball back to StartPos: ball.transform.position = ballStartPos; 4. Create “Vector3” to store the vector’s start position: Vector3 ballStartPos; 5. Configure the starting position by working inside “Void Start()” and declaring the following: ballStartPos = ball.transform.position; We have now defined the starting environment when we hold the ball for the very first time, and when the system resets. ## Controlling the Platform Once we shift to the “Player” option, we must enable certain keys on the keyboard to control movement. We accomplish this by creating a way to physically control the platform. This is where all the actions get converted, and for any desired change for the scene that we have created the response that we do by giving the keyboard movements should produce the results in the scene for the movement of the ball. We need to check as we map the keyboard keys to ensure that it is reflecting the same way that it is supposed to be. The entire updated code for MyBallAgent is shown below: using System. Collections; using System.Collections.Generic; using UnityEngine; public class MyBallAgent : Agent { public GameObject ball; Vector3 ballStartPos; void Start() { ballStartPos = ball.transform.position; } public override void AgentAction(float[] vectorAction, string textAction) { if (brain.brainParameters.vectorActionSpaceType == SpaceType.continuous) { float action_z = 2f * Mathf.Clamp(vectorAction[0], -1f, 1f); if ((gameObject.transform.rotation.z < 0.25f && action_z > 0f) || (gameObject.transform.rotation.z > -0.25f && action_z < 0f)) { gameObject.transform.Rotate(new Vector3(0, 0, 1), action_z); } float action_x = 2f * Mathf.Clamp(vectorAction[1], -1f, 1f); if ((gameObject.transform.rotation.x < 0.25f && action_x > 0f) || (gameObject.transform.rotation.x > -0.25f && action_x < 0f)) { gameObject.transform.Rotate(new Vector3(1, 0, 0), action_x); } SetReward(0.1f); } if ((ball.transform.position.y - gameObject.transform.position.y) < -2f || Mathf.Abs(ball.transform.position.x - gameObject.transform.position.x) > 3f || Mathf.Abs(ball.transform.position.z - gameObject.transform.position.z) > 3f) { Done(); SetReward(-1f); } } public override void CollectObservations() { SetTextObs("Testing" + gameObject.GetInstanceID()); } public override void AgentReset() { gameObject.transform.rotation = new Quaternion(0f, 0f, 0f, 0f); ball.GetComponent<Rigidbody>().velocity = new Vector3(0f, 0f, 0f); ball.transform.position = ballStartPos; } } ## Simulation Using Keyboard Inputs For a simulation using keyboard inputs with the brain type set as “Player,” we will need to configure the brain script. Because there are eight AddVectorObs, the parameter for Vector Observation space size would be eight, and space type is “continuous.” Make the changes in the Inspector window, shown below: Figure 2. Configuring the brain script in the Inspector window. Now we can add continuous player actions to control keyboard inputs. There are four keys to map, so there are four continuous player elements: up-arrow, down-arrow, right-arrow, and left-arrow. The parameter values are the following: Element 0 Key -> Up Arrow Index->1 Value->1 Element 1 Key->Down Arrow Index->1 Value->-1 Element 2 Key->Right Arrow Index->0 Value->-1 Element 3 Key->Left Arrow Index->0 Value->1 The keyboard mapping is shown in the figure below: Figure 3. Keyboard mapping for elements 0-3. Now we will click on “Play” to test the scene under player settings and try to keep the ball in the platform using the keyboard arrows up, down, left, and right. For training the model using Intel-optimized TensorFlow*, we need to keep the brain type set to “external” for the build. Figure 4. Play starts with the ball at the center of the platform. As we have done before, we need to create the build for the project. Figure 5. Selecting the scenes and creating the project. We have added the scene; now we will create the build and name it. Figure 6. Naming and saving the scene. Now that the executable has been created, we must train it using our Intel-optimized Python module. However, before training can start, there are some things to know about the “learn.py” file and the “trainer_config.yaml” file. The “learn.py” file contains certain details for running the training. The key parameters are declared in the config file. The main work of the “learn.py” file is to initialize general parameters such as run_id, fast_simulation, etc. and trigger the “trainer_config.yaml” file. We don’t have to make changes to the “learn.py” file; it has the format as shown below: # # Unity ML Agents # ## ML-Agent Learning import logging import os from docopt import docopt from unitytrainers.trainer_controller import TrainerController if __name__ == '__main__': logger = logging.getLogger("unityagents") _USAGE = ''' Usage: learn (<env>) [options] learn --help Options: --curriculum=<file> Curriculum json file for environment [default: None]. --keep-checkpoints=<n> How many model checkpoints to keep [default: 5]. --lesson=<n> Start learning from this lesson [default: 0]. --load Whether to load the model or randomly initialize [default: False]. --run-id=<path> The sub-directory name for model and summary statistics [default: ppo]. --save-freq=<n> Frequency at which to save model [default: 50000]. --seed=<n> Random seed used for training [default: -1]. --slow Whether to run the game at training speed [default: False]. --train Whether to train model, or only run inference [default: False]. --worker-id=<n> Number to add to communication port (5005). Used for multi-environment [default: 0]. --docker-target-name=<dt> Docker Volume to store curriculum, executable and model files [default: Empty]. ''' options = docopt(_USAGE) logger.info(options) # Docker Parameters if options['--docker-target-name'] == 'Empty': docker_target_name = '' else: docker_target_name = options['--docker-target-name'] # General parameters run_id = options['--run-id'] seed = int(options['--seed']) train_model = options['--train'] save_freq = int(options['--save-freq']) env_path = options['<env>'] keep_checkpoints = int(options['--keep-checkpoints']) worker_id = int(options['--worker-id']) curriculum_file = str(options['--curriculum']) if curriculum_file == "None": curriculum_file = None lesson = int(options['--lesson']) fast_simulation = not bool(options['--slow']) # Constants # Assumption that this yaml is present in same dir as this file base_path = os.path.dirname(__file__) TRAINER_CONFIG_PATH = os.path.abspath(os.path.join(base_path, "trainer_config.yaml")) tc = TrainerController(env_path, run_id, save_freq, curriculum_file, fast_simulation, load_model, train_model, worker_id, keep_checkpoints, lesson, seed, docker_target_name, TRAINER_CONFIG_PATH) tc.start_learning() The “trainer_config.yaml” file contains more important information. Some default parameters are already declared. The important ones are max_steps: 5.0e4. (The max steps are how many times we loop around and train the entire thing. For this scene it is 50,000 and is written as 5.0e4, which is 5 * 104. The value is default.) We can alter the value so that we can train the model more. The number of times the model is trained is known as “epochs.” Generally, one epoch cycle is known as one full training cycle on the set or, in this case, is the scene. α- value or learning rate 3.0e-4 We can also override some values. We can override the value if we need to change the training times such that we can increase the number of max steps, so that the scene is trained more. This helps us for better machine-learning results. Within the file there are examples where the default brain script values have been overridden. A small snippet of the “config.yaml” file is shown below: default: trainer: ppo batch_size: 1024 beta: 5.0e-3 buffer_size: 10240 epsilon: 0.2 gamma: 0.99 hidden_units: 128 lambd: 0.95 learning_rate: 3.0e-4 max_steps: 5.0e4 memory_size: 256 normalize: false num_epoch: 3 num_layers: 2 time_horizon: 64 sequence_length: 64 summary_freq: 1000 use_recurrent: false BananaBrain: normalize: false batch_size: 1024 beta: 5.0e-3 buffer_size: 10240 PushBlockBrain: max_steps: 5.0e4 batch_size: 128 buffer_size: 2048 beta: 1.0e-2 hidden_units: 256 summary_freq: 2000 time_horizon: 64 num_layers: 2 Now we can start the training process. The following is the command we will use: python learn.py mball2.exe --run-id=mball2 –train As the process runs, the following details are populated: (idp) C:\Users\abhic\Desktop\ml-agents\python>python learn.py mball2.exe --run-id=mball2 --train INFO:unityagents:{'--curriculum': 'None', '--docker-target-name': 'Empty', '--help': False, '--keep-checkpoints': '5', '--lesson': '0', '--run-id': 'mball2', '--save-freq': '50000', '--seed': '-1', '--slow': False, '--train': True, '--worker-id': '0', '<env>': 'mball2.exe'} INFO:unityagents: Number of Brains: 1 Number of External Brains : 1 Lesson number : 0 Reset Parameters : Unity brain name: Brain Number of Visual Observations (per agent): 0 Vector Observation space type: continuous Vector Observation space size (per agent): 8 Number of stacked Vector Observation: 3 Vector Action space type: continuous Vector Action space size (per agent): 2 Vector Action descriptions: , 2018-06-04 05:28:49.992671: I k:\tf_jenkins_freddy\ cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 C:\<path>\conda\envs\idp\lib\site-packages\tensorflow\python\ops\gradients_impl.py:96: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. " INFO:unityagents:Hyperparameters for the PPO Trainer of brain Brain: batch_size: 1024 beta: 0.005 buffer_size: 10240 epsilon: 0.2 gamma: 0.99 hidden_units: 128 lambd: 0.95 learning_rate: 0.0003 max_steps: 5.0e4 normalize: False num_epoch: 3 num_layers: 2 time_horizon: 64 sequence_length: 64 summary_freq: 1000 use_recurrent: False graph_scope: summary_path: ./summaries/mball2 memory_size: 256 INFO:unityagents: Brain: Step: 1000. Mean Reward: 6.975. Std of Reward: 1.993. INFO:unityagents: Brain: Step: 2000. Mean Reward: 9.367. Std of Reward: 3.598. INFO:unityagents: Brain: Step: 3000. Mean Reward: 7.258. Std of Reward: 2.252. INFO:unityagents: Brain: Step: 4000. Mean Reward: 7.333. Std of Reward: 3.324. INFO:unityagents: Brain: Step: 5000. Mean Reward: 10.700. Std of Reward: 4.618. INFO:unityagents: Brain: Step: 6000. Mean Reward: 7.183. Std of Reward: 1.750. INFO:unityagents: Brain: Step: 7000. Mean Reward: 7.038. Std of Reward: 2.464. INFO:unityagents: Brain: Step: 8000. Mean Reward: 6.400. Std of Reward: 1.561. INFO:unityagents: Brain: Step: 9000. Mean Reward: 7.664. Std of Reward: 3.189. INFO:unityagents: Brain: Step: 10000. Mean Reward: 7.333. Std of Reward: 2.236. INFO:unityagents: Brain: Step: 11000. Mean Reward: 9.622. Std of Reward: 4.135. INFO:unityagents: Brain: Step: 12000. Mean Reward: 10.938. Std of Reward: 1.323. INFO:unityagents: Brain: Step: 13000. Mean Reward: 10.578. Std of Reward: 2.623. INFO:unityagents: Brain: Step: 14000. Mean Reward: 11.986. Std of Reward: 2.559. INFO:unityagents: Brain: Step: 15000. Mean Reward: 10.411. Std of Reward: 2.383. INFO:unityagents: Brain: Step: 16000. Mean Reward: 10.925. Std of Reward: 2.178. INFO:unityagents: Brain: Step: 17000. Mean Reward: 10.633. Std of Reward: 1.173. INFO:unityagents: Brain: Step: 18000. Mean Reward: 11.957. Std of Reward: 3.645. INFO:unityagents: Brain: Step: 19000. Mean Reward: 10.511. Std of Reward: 2.343. INFO:unityagents: Brain: Step: 20000. Mean Reward: 10.975. Std of Reward: 2.469. INFO:unityagents: Brain: Step: 21000. Mean Reward: 12.025. Std of Reward: 6.786. INFO:unityagents: Brain: Step: 22000. Mean Reward: 10.538. Std of Reward: 1.935. INFO:unityagents: Brain: Step: 23000. Mean Reward: 10.311. Std of Reward: 1.044. INFO:unityagents: Brain: Step: 24000. Mean Reward: 9.844. Std of Reward: 1.023. INFO:unityagents: Brain: Step: 25000. Mean Reward: 10.167. Std of Reward: 0.886. INFO:unityagents: Brain: Step: 26000. Mean Reward: 10.388. Std of Reward: 1.628. INFO:unityagents: Brain: Step: 27000. Mean Reward: 10.000. Std of Reward: 1.332. INFO:unityagents: Brain: Step: 28000. Mean Reward: 10.322. Std of Reward: 1.240. INFO:unityagents: Brain: Step: 29000. Mean Reward: 9.644. Std of Reward: 0.837. INFO:unityagents: Brain: Step: 30000. Mean Reward: 10.244. Std of Reward: 1.606. INFO:unityagents: Brain: Step: 31000. Mean Reward: 9.922. Std of Reward: 1.576. INFO:unityagents: Brain: Step: 32000. Mean Reward: 10.200. Std of Reward: 1.060. INFO:unityagents: Brain: Step: 33000. Mean Reward: 10.413. Std of Reward: 0.877. INFO:unityagents: Brain: Step: 34000. Mean Reward: 10.233. Std of Reward: 1.104. INFO:unityagents: Brain: Step: 35000. Mean Reward: 10.411. Std of Reward: 0.825. INFO:unityagents: Brain: Step: 36000. Mean Reward: 9.875. Std of Reward: 1.221. INFO:unityagents: Brain: Step: 37000. Mean Reward: 10.067. Std of Reward: 0.550. INFO:unityagents: Brain: Step: 38000. Mean Reward: 9.660. Std of Reward: 0.759. INFO:unityagents: Brain: Step: 39000. Mean Reward: 11.063. Std of Reward: 1.467. INFO:unityagents: Brain: Step: 40000. Mean Reward: 9.722. Std of Reward: 0.989. INFO:unityagents: Brain: Step: 41000. Mean Reward: 9.656. Std of Reward: 0.732. INFO:unityagents: Brain: Step: 42000. Mean Reward: 9.689. Std of Reward: 0.839. INFO:unityagents: Brain: Step: 43000. Mean Reward: 9.689. Std of Reward: 1.152. INFO:unityagents: Brain: Step: 44000. Mean Reward: 9.570. Std of Reward: 0.593. INFO:unityagents: Brain: Step: 45000. Mean Reward: 9.856. Std of Reward: 0.510. INFO:unityagents: Brain: Step: 46000. Mean Reward: 10.278. Std of Reward: 1.219. INFO:unityagents: Brain: Step: 47000. Mean Reward: 9.988. Std of Reward: 0.924. INFO:unityagents: Brain: Step: 48000. Mean Reward: 10.311. Std of Reward: 0.788. INFO:unityagents: Brain: Step: 49000. Mean Reward: 10.044. Std of Reward: 1.192. INFO:unityagents:Saved Model INFO:unityagents: Brain: Step: 50000. Mean Reward: 9.210. Std of Reward: 0.730. INFO:unityagents:Saved Model INFO:unityagents:Saved Model INFO:unityagents:List of nodes to export : INFO:unityagents: action INFO:unityagents: value_estimate INFO:unityagents: action_probs INFO:tensorflow:Restoring parameters from ./models/mball2\model-50000.cptk INFO:tensorflow:Restoring parameters from ./models/mball2\model-50000.cptk INFO:tensorflow:Froze 12 variables. INFO:tensorflow:Froze 12 variables. Converted 12 variables to const ops. The bytes file is now generated in the /mball directory. Figure 7. Directory contents after generating the bytes file. In our project inside the folder, there is no TFModels directory, so we will have to create one and keep the bytes file there. Figure 8. Create the TFModels directory to store the bytes file properly. After creating the bytes file, copy it to the \TFModels folder. Once that step is complete, go back to the Unity project and move to the Inspector window. Change the brain type to “internal.” It will show an error. Figure 9. After the bytes file is created, set the brain to “internal.” We can now drag and drop the bytes file (inside the TFModels folder) corresponding to the Graph Model and resolve the error. The system is now ready to test to see how well the model has been trained. ## Summary Intelligent agents, each acting with dynamic and engaging behavior, offer promise for more realism and better user experiences. After completing the tasks described in part one and part two of this series, you can now create a Unity ML-Agent from scratch, configure the key learning and training files, and understand the key parameters to set up in order to get started with machine learning. Based on what you learned in these articles, you should now be able to incorporate more compelling AI behavior in your own games to boost immersion and attract players. ## Resources #### Product and Performance Information 1 Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
2022-08-18 11:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2984233796596527, "perplexity": 10292.009560294327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573193.35/warc/CC-MAIN-20220818094131-20220818124131-00405.warc.gz"}
http://dataspace.princeton.edu/jspui/handle/88435/dsp017w62fb53x
Skip navigation Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp017w62fb53x Title: Constructions and Computations in Khovanov Homology Authors: Manion, Andrew Advisors: Szabo, Zoltan Contributors: Mathematics Department Subjects: Mathematics Issue Date: 2015 Publisher: Princeton, NJ : Princeton University Abstract: In this thesis, we present a collection of results relating to Khovanov homology. We consider the family of 3-strand pretzel links, and compute their unreduced and reduced Khovanov homology using two different methods. We also show how to extend Lawrence Roberts’ totally twisted Khovanov homology to integer coefficients, yielding a spanning tree model for odd Khovanov homology with an explicitly computable differential. Finally, we show that Khovanov’s functor-valued invariant of tangles contains the same information as Bar-Natan’s dotted cobordism tangle theory, and we construct a natural bordered theory for Khovanov homology using this invariant. URI: http://arks.princeton.edu/ark:/88435/dsp017w62fb53x Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog Type of Material: Academic dissertations (Ph.D.) Language: en Appears in Collections: Mathematics Files in This Item: File Description SizeFormat Manion_princeton_0181D_11334.pdf1.33 MBAdobe PDF Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
2017-06-24 12:26:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5018463730812073, "perplexity": 4671.537047142874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320261.6/warc/CC-MAIN-20170624115542-20170624135542-00460.warc.gz"}
http://tex.stackexchange.com/questions/105966/creating-captchas-and-other-fancy-distorted-letters-with-tikz/106043
# Creating CAPTCHAs and other fancy distorted letters with TikZ I try to do a little more spectacular that phrase ( Δελτιο Υλης) . Is the headline on a press material for my students. The fact is that I could accomplish this with gimp or a graphics program such as the inscape or similar programs. But what I want is to make a somewhat artistic distortion of letters through tikz to show to my students the possibilities of having the LaTeX without using external programs. My question: is there a way to warp letter and how? Currently this is what I've done ... and certainly I have not artistic vein ... \documentclass[b5paper,svgnames,10pt]{book} \usepackage[utf8x]{inputenc} \usepackage{tikz} \usepackage{color} \usepackage{xcolor} \begin{document} \begin{tikzpicture} \draw[grid,step=0.5cm,gray,very thin] (-1cm,-1cm) grid (9cm,1.5cm); \node[scale=5.,color =MidnightBlue ] at (0,0) {$\Delta$}; \node[scale=5.,color =MidnightBlue ,rotate=30] at (0.84cm,0.1cm) {$\varepsilon$}; \node[scale=5.,color =MidnightBlue,rotate=-15 ] at (1.3cm,0.0cm) {$\lambda$}; \node[scale=5.,color =MidnightBlue,rotate=+15 ] at (1.95cm,-0.2cm) {$\tau$}; \node[scale=5.,color =MidnightBlue ] at (2.5cm,0) {$\iota$}; \node[scale=5.,color =MidnightBlue ] at (2.6cm,0.4cm) {$'$}; \node[scale=5.,color =MidnightBlue ] at (3.0cm,0) {$o$}; \node[scale=6.,color =MidnightBlue,rotate=-20 ] at (5.3cm,0.08cm) {$\Upsilon$}; \node[scale=5.,color =MidnightBlue ] at (6.cm,-0.2cm) {$\lambda$}; \node[scale=5.,color =MidnightBlue ] at (7.cm,-0.1cm) {$\eta$}; \node[scale=5.,color =MidnightBlue ] at (8.cm,0) {$\varsigma$}; \end{tikzpicture} \end{document} - I would use MS Word for this. It's naturally supported –  percusse Mar 29 '13 at 23:06 Did you see one of our greatest questions about Ctulhu and random rotations of letters here tex.stackexchange.com/q/29402/3235 ? Or any other great answers on this site ? –  percusse Mar 29 '13 at 23:11 @karathan: Looks like a CAPTCHA to me. Maybe it would make sense to rename the question to accommodate that? –  Count Zero Mar 29 '13 at 23:18 Maybe I should add that I meant the terrible quality of Word math rendering that it comes this horrible by default if one uses Word. –  percusse Mar 30 '13 at 15:33 Why limit the question to TikZ? (see Why Metapost discrimination) –  Aditya Mar 30 '13 at 17:12 No claims to artistry or anything (you may, or may not notice that I've changed the text :-), also as it involves randomness sometimes it looks OK sometimes it does not: \documentclass{standalone} \usepackage{tikz} \pgfmathdeclarerandomlist{fonts}{{\bf}{\tt}{\rm}{\sf}{\it}{\sl}} \tikzset{ distort/.style={ rotate=rand*10, yslant=rand/3, xslant=rand/3, xscale=1+rand/4, yscale=1+rand/4, execute at begin node={% \pgfmathrandomitem{\newfont}{fonts}\newfont% } } } % Need a special space because inside \node { }; % \ignorespaces and \unskip will remove spaces. \def\bigspace{\hbox to 1ex{\hfil}} \begin{document} \Huge \begin{tikzpicture} % The basic idea is to create a node called 0 % then by maintaining indexes \i = 1,2,... and \j = 0,1,... % it is possible to position the current node (called \i) % to the previous node (called \j). \coordinate (0); \foreach \letter [count=\i, count=\j from 0]in {P,i,g,s, \bigspace ,m,i,g,h,t, \bigspace ,f,l,y} \node [inner sep=0pt, anchor=base west, distort] at (\j.base east) (\i) {\pgfmathrandomitem{\newfont}{fonts}\newfont\letter}; \tikzset{yshift=-0.75in} % The opacity is stored in an array and accessed % using the (undocumented) evaluate key. \foreach \c [evaluate={\o={5,25, 100}[\c-1];}]in {1,...,3} { \coordinate (0) at (rand/4, rand/4); \foreach \letter [count=\i, count=\j from 0]in {P,i,g,s, \bigspace ,m,i,g,h,t, \bigspace ,f,l,y} \node [inner sep=0pt, anchor=base west, black!\o, distort] at (\j.base east) (\i) {\letter}; } \tikzset{yshift=-0.75in} % In order to get the same randomness (!) the random seed % can be set to a fixed value. In this case the value is % selected from the range 1-32768. \pgfmathrandominteger\seed{1}{32768} \foreach \c [evaluate={\o={5,25, 100}[\c-1];}]in {1,...,3} { \coordinate (0) at (\c*2pt,\c*2pt); \pgfmathsetseed{\seed} \foreach \letter [count=\i, count=\j from 0]in {P,i,g,s, \bigspace ,m,i,g,h,t, \bigspace ,f,l,y} \node [inner sep=0pt, anchor=base west, black!\o, distort] at (\j.base east) (\i) {\letter}; } \end{tikzpicture} \end{document} EDIT: Added some more randomness in an attempt to make it more fancy. - Really an excellent result. LaTeX and Tikz have a unique way to surprise me!!! and all of you with your codes –  karathan Mar 31 '13 at 14:00 If you find a little time, would be easy to put a short comment on each line to explain your code –  karathan Mar 31 '13 at 18:52 Is there any way to get the text that is displayed to make it an actual captcha? (Sorry, TeX n00b) –  Cole Johnson Apr 5 '13 at 19:49 Here is a solution that uses Metapost and ConTeXt (no font changes but randomized color and scaling): \starttext \startMPpage[offset=3mm] picture lab; numeric total_width, current_width; total_width := 0; for s = "H", "e", "l", "p", "\quad", "Γ", "Δ" : lab := textext(s); current_width := xpart (lrcorner lab - llcorner lab); for i = 1,3,2 : draw lab rotated (-30 randomized 60) shifted (total_width randomized 5pt, i*6pt randomized 2pt) if i = 1 : scaled (0.8 randomized 0.2) withcolor (0.8 randomized 0.1, 0.8 randomized 0.1, 0.8 randomized 0.2) elseif i = 2 : scaled (0.8 randomized 0.2) withcolor (0.2 randomized 0.1, 0.1 randomized 0.1, 0.8 randomized 0.2) else : scaled (0.8 randomized 0.2) withcolor (0.8 randomized 0.1, 0.8 randomized 0.1, 0.8 randomized 0.2) fi; endfor total_width := total_width + current_width ; endfor \stopMPpage \stoptext which gives - something went wrong as all three colors are light. One of them (in the middle) is supposed to be dark. –  Aditya Mar 31 '13 at 5:03 No, context filename is correct. But the colors are changed at random, so there is no way to predict the output. –  Aditya Mar 31 '13 at 5:12 Hmm... strange. I uploaded a bigger picture. –  Aditya Mar 31 '13 at 5:29 Yes. Indeed. I might try to re-install ConTeXt. Well, thanks for the answer. This is really helpful. +1 :) –  hpesoj626 Mar 31 '13 at 5:32 Although I use LaTeX + Tikz... is a very good result and will be useful to users of Metapost and ConTeXt –  karathan Mar 31 '13 at 14:07 Without PSTricks. \documentclass[preview,border=12pt]{standalone} \usepackage[a0paper]{geometry} \usepackage[nomessages]{fp} \usepackage{graphicx} \usepackage{pgffor} \FPseed=0 \begin{document} \foreach \C in {I,a,m,a,p,i,g,.}{% \FPrandom\Scale\FPeval\Scale{round(Scale*10+1:2)} \FPrandom\Rotate\FPeval\Rotate{round(Rotate*1000:2)} \FPrandom\Raise\FPeval\Raise{round(Raise*10:2)} \scalebox{\Scale}{\rotatebox{\Rotate}{\raisebox{\Raise pt}{\C}}}% }% \end{document} -
2014-03-17 17:11:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625829219818115, "perplexity": 11304.582135015507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705768/warc/CC-MAIN-20140313024505-00035-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/66793-small-objects-brighter/
#### Archived This topic is now archived and is closed to further replies. # Small objects brighter ## Recommended Posts When I render two identical objects at different sizes, the smaller one is a lot brighter. The only way I can get them to look the same is to change the light settings. (If I draw the object at half size, and scale the light intensity by half, it looks right.) Is there any reason for this, or am I missing something? ##### Share on other sites Make sure you''re normalizing your surface normals. To do that, divide your axial components by the vector''s magnitude. float reciprocalDistance = 1.0f / sqrtf(square(vector.x)+square(vector.y)+square(vector.z))vector.x *= reciprocalDistance;vector.y *= reciprocalDistance;vector.z *= reciprocalDistance; • ### Forum Statistics • Total Topics 628333 • Total Posts 2982130 • 24 • 9 • 9 • 13 • 11
2017-11-21 12:18:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30274710059165955, "perplexity": 6289.04179026006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806353.62/warc/CC-MAIN-20171121113222-20171121133222-00628.warc.gz"}
http://mathoverflow.net/questions/47974/artins-conjecture-for-n-2?answertab=active
# Artin's conjecture for n=2 I am interested in the following question: Is it known that $2$ is a primitive root modulo $p$ for infinitely many primes $p$? there is some information about Artin's conjecture in http://en.wikipedia.org/wiki/Artin%27s_conjecture_on_primitive_roots I need to know if it is up to date and if one can say something about the case n=2. - No. $\!\!\!\!\!$ – David Hansen Dec 2 '10 at 1:25 @David: there were two questions. @Kate: Pieter Moree at Bonn will know the most recent advances if there were any. – Franz Lemmermeyer Dec 2 '10 at 9:00 @Franz: Thanks for the information – Kate Juschenko Dec 2 '10 at 19:28
2016-05-05 14:28:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938368558883667, "perplexity": 728.6630895218459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860127496.74/warc/CC-MAIN-20160428161527-00008-ip-10-239-7-51.ec2.internal.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=t3povupedhlm0ht843sjm39bq3&action=printpage;topic=210.0
Toronto Math Forum MAT244-2013S => MAT244 Math--Lectures => Ch 3 => Topic started by: Victor Ivrii on January 31, 2013, 05:50:11 PM Title: Problem of the week 4b Post by: Victor Ivrii on January 31, 2013, 05:50:11 PM Consider two identical and connected harmonic oscillators: \left\{\begin{aligned} &y''+K y + L (y-z)=0\\ &z''+Kz + L(z-y)=0 \end{aligned}\right. \label{eq-1} with $K>0$, $L>0$. Even if this is a system one can add or subtract equations getting system of two equations describing $y+z$ and $y-z$ separately. 1) Find $y+z$ and $y-z$ and then $y$ and $z$  (so, find the general solution of (\ef{eq-1}). 2) What frequencies has the described system? Title: Re: Problem of the week 4b Post by: Changyu Li on January 31, 2013, 10:45:31 PM 2) guess $y = A e^{rt}$, $z = B e^{rt}$ $$A r^2 + K A + L\left(A-B\right) = 0 \\ B r^2 + K B + L\left(B-A\right) = 0 \\ \left( \begin{array}{cc} r^2 + K + L & - L \\ -L & r^2 + K + L \\ \end{array} \right) \left( \begin{array}{c} A \\ B \end{array} \right) = 0$$ nontrivial solution exists if there is no inverse to the ugly matrix, therefore its determinant is 0 $$\left( r^2 + K + L \right)^2 + L^2 = 0$$ $$r^2 = K - L \pm L$$ therefore the frequencies are $\pm K$, $\pm\left(K-2L\right)$ Title: Re: Problem of the week 4b Post by: Brian Bi on February 01, 2013, 12:13:33 AM Add and subtract the first and second equations to obtain: \begin{align} (y+z)'' + K(y+z) &= 0 \label{added} \\ (y-z)'' + (K+2L)(y-z) &= 0 \label{subtracted} \end{align} Since $K, L > 0$, both equations are of the form $u'' + \omega^2 u = 0$, with general solution $u = A \cos (\omega t) + B \sin (\omega t)$. So the general solution to $(\ref{added})$ is y+z = A \cos (\sqrt{K} t) + B \sin (\sqrt{K} t) and the general solution to $(\ref{subtracted})$ is y - z = C \cos (\sqrt{K+2L} t) + D \sin(\sqrt{K+2L}t) Using the identities $y = \frac{1}{2}((y+z)+(y-z))$ and $z = \frac{1}{2}((y+z)-(y-z))$ we obtain the general solution to $(\ref{eq-1})$: \begin{align} y &= A' \cos(\omega_1 t) + B' \sin(\omega_1 t) + C' \cos(\omega_2 t) + D' \sin(\omega_2 t) \\ z &= A' \cos(\omega_1 t) + B' \sin(\omega_1 t) - C' \cos(\omega_2 t) - D' \sin(\omega_2 t) \end{align} where $A' = A/2$ and so on, and the frequencies are $\omega_1 = \sqrt{K}$, and $\omega_2 = \sqrt{K+2L}$.
2021-12-03 08:14:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925687313079834, "perplexity": 2173.345803106312}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00433.warc.gz"}
http://www.pwills.com/blog/posts/2018/02/06/entropy.html
# The Meaning of Entropy Entropy is a word that we see a lot in various forms. It’s classical use comes from thermodynamics: e.g. “the entropy in the universe is always increasing.” With the recent boom in statistics and machine learning, the word has also seen a surge in use in information-theoretic contexts: e.g. “minimize the cross-entropy of the validation set.” It’s been an ongoing investigation for me, trying to figure out just what the hell this information-theoretic entropy is all about, and how it connects to the notion I’m familiar with from statistical mechanics. Reading through the wonderful book Data Analysis: a Bayesian Tutorial by D. S. Sivia, I found the first connection between these two notions that really clicked for me. I’m going to run through the basic argument here, in the hope that reframing it in my own words will help me understand it more thoroughly. ## Entropy in Thermodynamics Let’s start with the more intuitive notion, which is that of thermodynamic entropy. This notion, when poorly explained, can seem opaque or quixotic; however, when viewed through the right lens, it is straightforward, and the law of increasing entropy becomes a highly intuitive result. ### Counting Microstates Imagine, if you will, the bedroom of a teenager. We want to talk about the entropy of two different states: the state of being “messy” and the state of being “clean.” We will call these macrostates; they describe the macroscopic (large-scale) view of the room. However, there are also many different microstates. One can resolve these on a variety of scales, but let’s just say they correspond to the location/position of each individual object in the room. To review: Type Definition Example Macrostate Overall Description “Messy” Microstate Fine-Scale Description “Underwear on lamp, shoes in bed, etc.” ### The Boltzmann Entropy One might notice an interesting fact: that there are many more possible microstates that correspond to “messy” than there are microstates that correspond to “clean.” This is exactly what we mean when we say that a messy room has higher entropy. In particular, the entropy of a macrostate is the log of the number of microstates that correspond to that macrostate. We call this the Boltzmann entropy, and denote it by $$S_B$$. If there are $$\Omega$$ possible microstates that correspond to the macrostate of being “messy,” then we define the entropy of this state as1 This is essentiall all we need to know here.2 The entropy tells us how many different ways there are to get a certian state. A pyramid of oranges in a supermarket has lower entropy than the oranges fallen all over the floor, because there are many configurations of oranges that we would call “oranges all over the floor,” but very few that we would call “a nicely organized pyramid of oranges.” In this context, the law of increasing entropy becomes almost tautological. If things are moving around in our bedroom at random, and we call most of those configurations “messy,” then the room will tend towards messyness rather than cleanliness. We sometimes use the terms “order” and “disorder” to refer to states of relatively low and high entropy, respectively. ## Entropy in Information Theory One also frequently encounters a notion of entropy in statistics and information theory. This is called the Shannon entropy, and the motivation for this post is my persistent puzzlement over the connection between Boltzmann’s notion of entropy and Shannon’s. Previous to reading D. Sivia’s manual, I only knew the definition of Shannon entropy, but his work presented such a clear exposition of the connection to Boltzmann’s ideas that I felt compelled to share it. ### Permutations and Probabilities We’ll work with a thought experiment.3 Suppose we have $$N$$ subjects we organize into $$M$$ groups, with $$N\gg M$$. Let $$n_i$$ indicate the number of subjects that are in the $$i^\text{th}$$ group, for $$i=1,\ldots,M$$. Of course, and if we choose a person at random the probability that they are in group $$i$$ is The Shannon entropy of such a discrete distribution is defined as But why? Why $$p\log(p)$$? Let’s look and see. A macrostate of this system is defined by the size of the groups $$n_i$$; equivalently, it is defined as the probability distribution. A microstate of this system is specifying the group of each subject: the specification that subject number $$j$$ is in group $$i$$ for each $$j=1,\ldots,N$$. How many microstates correspond to a given macrostate? For the first group, we can fill it with any of the $$N$$ participants, and we must choose $$n_1$$ members of the group, so the number of ways of assigning participants to this group is For the second group, there are $$N - n_1$$ remaining subjects, and we must assign $$n_2$$ of them, and so on. Thus, the total number of ways of arranging the $$N$$ balls into the groups of size $$n_i$$ is This horrendous list of binomial coefficients can be simplified down to just The Boltzmann entropy of this macrostate is then ### From Boltzmann to Shannon We will now show that the Boltzmann entropy is (approimxately) a scaling of the Shannon entropy; in particular, $$S_B \approx N\,S$$. Things are going to get slightly complicated in the algebra, but hang on. If you’d prefer, you can take my word for it, and skip to the next section. We will use the Stirling approximation $$\log(n!)\approx n\log(n)$$4 to simplify: Since the probability $$p_i=n_i/N$$, we can re-express $$S_b$$ in terms of $$p_i$$ via Since $$\sum_ip_i=1$$, we have Phew! So, the Boltzmann entropy $$S_b$$ of having $$N$$ students in $$M$$ groups with sized $$n_i$$ is (approximately) $$N$$ times the Shannon entropy. ## Who Cares? Admittedly, this kind of theoretical revalation will probably not change the way you deploy cross-entropy in your machine learning projects. It is primarily used because its gradients behave well, which is important in the stochastic gradient-descent algorithms favored by modern deep-learning architectures. However, I personally have a strong dislike of using tools that I don’t have both a theoretical understanding of; hopefully you now have a better grip on the theoretical underpinnings of cross entropy, and its relationship to statistical mechanics. 1. Often a constant will be included in this definition, so that $$S=k_B \log(\Omega)$$. This constant is arbitrary, as it simply rescales the units of our entropy, and it will only serve to get in the way of our analysis, so we omit it. 2. All we need to know for the purpose of establishing a connection between thermodynamic and information-theoretic entropy; of course there is much more to know, and there are many alternative ways of conceptualizing entropy. However, none of these have ever been intuitive to me in the way that Boltzmann’s definition of entropy is. 3. We have slightly rephrased Sivia’s presentation to fit our purposes here. 4. The most commonly used form of Stirling’s approximation is the more precise $$\log(n!)\approx n\log(n)-n$$, but we use a coarser form here. Updated:
2019-05-22 11:56:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7728618383407593, "perplexity": 415.6729537405178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256797.20/warc/CC-MAIN-20190522103253-20190522125253-00456.warc.gz"}
https://rieselprime.de/z/index.php?title=Riesel_prime&diff=1819&oldid=1628
# Difference between revisions of "Riesel prime" Although there's no official definition of a Riesel prime mostly all primes of the form $k\times2^n-1$ are called like this on many pages.
2019-09-17 00:15:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875042736530304, "perplexity": 1195.3191237440471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572980.56/warc/CC-MAIN-20190917000820-20190917022820-00143.warc.gz"}
https://pydigger.com/pypi/moodlexport
# moodlexport Name moodlexport JSON Version 0.0.24 JSON download home_page https://github.com/Guillaume-Garrigos/moodlexport Summary A package to export test questions into Moodle from python or latex upload_time 2020-12-09 17:09:09 maintainer docs_url None author Guillaume Garrigos requires_python >=3.7 license MIT keywords VCS bugtrack_url requirements No requirements were recorded. Travis-CI No Travis. coveralls test coverage No coveralls. # moodlexport This Python module provides code which allows to easily generate families of questions (called *categories* in Moodle) that can be directly exported from either Python or Latex to Moodle, and use them to create a test. The main motivation behind this module is that : - it is easier to define mathematical objects in Python than Moodle - it is more comfortable to type maths in Latex - generating random problems is simpler in Python and can go way beyond what Moodle proposes - it is easier to store/manipulate locally a Latex or Python file than doing it on the Moodle interface. It also simplifies collaborating projects. It can be installed with a pip command : pip install moodlexport Some internal links within this documentation: - [Main features of this module so far](#Main-features-of-this-module-so-far) - [Quick start](#Quick-start) - [Simple examples from Python](#Simple-examples-from-Python) - [Simple examples from Latex](#Simple-examples-from-Latex) - [Exporting many questions at once](#Exporting-many-questions-at-once) - [Documentation](#Documentation) - [Main commands from Python](#Main-commands-from-Python) - [Main commands from Latex](#Main-commands-from-Latex) - [Changelog](#Changelog) - [Known issues/missing features](#Known-issues/missing-features) ## Main features of this module so far - Creating a question. The only supported classes of questions are: - "essay" : the student answers in a white text box. - "multichoice" : the question comes with at least 2 possible answers. - All the options available in Moodle are available here (defining a grade, information for the grader, feedback, etc). See more details below. - Creating a category (family) of questions. - Supports Unicode within python and latex : éàê ... - Supports Latex syntax, whether you write from latex or python, in way that Moodle understands. Supports inline latex with $e^x$, $$e^x$$, and equation with $$f(x) = \sum_i x_i^2$$, \begin{equation*}...\end{equation*}, \begin{cases} etc - Supports export to Moodle via a XML MOODLE file, but also to .tex and .pdf files (which allow more easily to see what you are doing) - Supports inserting images ## Quick start ### Simple examples from Python: python from moodlexport import Question question = Question("essay") question.text("What is the derivative of $f(x) = e^x + 0.5 \Vert x \Vert^2$?") question.save("my first question") ### Simple examples from Latex You can produce the same result as above by defining your question directly in a Latex file. Suppose for isntance that you have a Latex file myquestion.tex containing the following : latex \documentclass{amsart} \usepackage{latextomoodle} \begin{document} \begin{question}[essay] What is the derivative of $f(x) = e^x + 0.5 \Vert x \Vert^2$? \end{question} \end{document} Then you can convert this myquestion.tex file directly into a readytoexport.xml file, by using the following Python commands: python from moodlexport import latextomoodle latextomoodle('myquestion.tex','my first question') Note that if you wish to compile the .tex file without errors, you will need to place the Latex package latextomoodle.sty in the same folder. This package can be found in moodlexport/templates ### Exporting many questions at once If you want to export more than one question, you might want to gather them within a category, which will produce a single file containing all those questions. Here is how to proceed: In Python: python from moodlexport import Question, Category category = Category("My little catgory name") question = Question("essay") question.text("What is the derivative of $f(x) = e^x + 0.5 \Vert x \Vert^2$?") question = Question("multichoice") question.text("Is every symmetric matrix invertible?") category.save() In Latex, followed by the python command latextomoodle('file_name.tex') : latex \documentclass{amsart} \usepackage{latextomoodle} \begin{document} \begin{category}[My little catgory name] \begin{question}[essay] What is the derivative of $f(x) = e^x + 0.5 \Vert x \Vert^2$? \end{question} \begin{question}[multichoice] Is every symmetric matrix invertible? \end{question} \end{category} \end{document} ## Documentation ### Main commands from Python #### The Category Class category = Category(string) creates an object of class Category. string here specifies the name of the category, which will appear in Moodle. It comes with a few methods: - category.savexml(string) creates an XML file under the XML Moodle format, ready to import within Moodle. The name of the file is the name of the category by default. If a string is given, the name of the file will be string.xml. - category.savetex(string) creates a TEX file, containing all the questions of the category, nicely displayed. The name of the file is the name of the category by default (spaces and underscores will be replaced with -). If a string is given, the name of the file will be string.tex. - category.savepdf(string) creates a TEX file as above and then compiles it to generated a PDF file. - category.description(string) Adds a description to the category, which will appear in Moodle. #### The Question Class question = Question(type) creates an object of class Question. The type of the question can be essay (default) or multichoice. It comes with a family of methods question.OPTION(value) where OPTION describes every possible option that you can set in Moodle. The most important ones are: - question.title(string) sets the title of the question - question.text(string) sets the text (main body) of the question - question.grade(float) sets the grade of the quesiton - question.graderinfo(string) sets the information to be given to the grader - question.addto(category) adds the question to a category Methods specific to the essay type (answer via a text editor): - question.responseformat(string) : editorfilepicker lets the student upload a file as an answer (default) , editor forbids it. - question.responserequired(bool) : 0 if no response is required (default), 1 if a response is required. Methods specific to the multichoice type (finite number of possible answers): - question.answer(string, value) : Adds a possible answer to the question. string is the text of the answer, value describes if this answer is correct or no. It can be described in two ways: - as a boolean True or False (default) - as a percentage (integer between 0 and 100), which represents the fraction of the grade attributed to the answer. This is typically used for questions with more than 2 answers. A unique true answer has 100, a wrong answer has 0 (default) - question.single(value) : true if only one answer is possible (default), false if more than one answer can be selected by the student. #### Misc. Inserting an image: to do so, use the includegraphics function: python from moodlexport import includegraphics text = 'here is a cool image:' + includegraphics("./some_folder/my_image.png", width=256, height=128) question = Question() question.text(text) Options: - width and height (integer). Modify the size of the image, in pixels. If no argument is passed, the image is displayed in its original shape. - style (string). Two possible values: * "centered" (default). The image is displayed in a new line and centered. * "inline". The image is displayed next to the text. ### Main commands from Latex It is possible to use a similar syntax within a TEX document : - \begin{category}[name] ... \end{category} defines the environment corresponding to a category. It is possible to write various categories within the same document. name is the name of the category. - \begin{question}[type] ... \end{question} defines the environment corresponding to a question. It is possible to write various question within the same category. type is the type of the question, essay by default. - All the methods mentioned above can be used in latex. The analogue of .OPTION(value) becomes \OPTION{value} in Latex (and must be placed within the corresponding environment). For instance : - \description{string} sets the description of a category - \grade{float} sets the grade of a question - \answer[value]{string} adds an answer to a multichoice question - Inserting images is done with the command \includegraphics[width=256px, height=128px]{./some_folder/my_image.png} from the package graphicx * for the options width and height the only supported unit is px * the option scale is not supported * if the command \includegraphics is called within an environment \begin{center} ... \end{center}, the image will be centered as well in Moodle. If not it will be displayed inline. The corresponding latex package can be found in moodlexport/moodlexport/templates, should be [https://github.com/Guillaume-Garrigos/moodlexport/tree/master/moodlexport/templates](here). To convert a .tex file into an .xml, use python from moodlexport import latextomoodle latextomoodle('file_name.tex') You can also import the contents of your .tex file directly into python (you might want to do some modifications before exporting to Moodle). You .tex file must contain one or more categories of questions. To do so, use : python from moodlexport import latextopython # it outputs a list of Category objects, even if you have only one category. list_of_categories = latextopython('file_name.tex') ## Changelog - v.0.0.24 Solves issue #5 - v.0.0.23 Forgot to load some modules. [https://github.com/Guillaume-Garrigos/moodlexport/pull/4](Merge) from [@gregnordin](https://github.com/gregnordin) - v.0.0.22 Add a new feature to insert images. - v.0.0.21 The parser used to handle $'s was wayyy to slow. This is corrected now. - v.0.0.20 - I realized that depending on Moodle's version, or depending on how the administrator implements it, inline math like $e^x$ can not be recognized. Moodle's doc [says](https://docs.moodle.org/3x/fr/Utilisation_de_la_notation_TeX) it is not supported. So, now, every inline math $e^x$ is converted into $$e^x$$ just before exporting the data into XML. This allows the user to painlessly type latex as usual with $'s. - Now TEX files are generated without spaces or _ in the filename. Because latexmk wasn't happy when generating pdfs. - v.0.0.19 - Corrects bug #3 for multichoice questions, allowing now for negative grades for wrong answers. Proposed by [@Stivanification](https://github.com/Stivanification). - Corrects bug #2 caused by a broken backcompatibility from the TexSoup Module. Now this module requires the exact needed version ## Known issues/missing features - So far I have a bad time handling breaklines in a text written in python. Using explicit <br/> tags should do the job. ### Raw data { "_id": null, "home_page": "https://github.com/Guillaume-Garrigos/moodlexport", "name": "moodlexport", "maintainer": "", "docs_url": null, "requires_python": ">=3.7", "maintainer_email": "", "keywords": "", "author": "Guillaume Garrigos", "author_email": "guillaume.garrigos@lpsm.paris", "platform": "", "bugtrack_url": null, "summary": "A package to export test questions into Moodle from python or latex", "version": "0.0.24", "split_keywords": [], "urls": [ { "comment_text": "", "digests": { "md5": "eb0cd6d829df805efd04465cb44256ee", }, "filename": "moodlexport-0.0.24-py3.7.egg", "has_sig": false, "md5_digest": "eb0cd6d829df805efd04465cb44256ee", "packagetype": "bdist_egg", "python_version": "3.7", "requires_python": ">=3.7", "size": 52507, "url": "https://files.pythonhosted.org/packages/fc/ff/b35540f4ac7522e157e69caf6cbb267f3dc3ccf0419fbc6258e2337d3708/moodlexport-0.0.24-py3.7.egg", "yanked": false, "yanked_reason": null }, { "comment_text": "", "digests": { "md5": "99a66d47b2f448b76a2c80d4d5a7185d", "sha256": "a6012ec8d769b8dc36c3b2004a58a6951c25ae2615748944fd176335c63bb07c" }, "filename": "moodlexport-0.0.24.tar.gz", "has_sig": false, "md5_digest": "99a66d47b2f448b76a2c80d4d5a7185d", "packagetype": "sdist", "python_version": "source", "requires_python": ">=3.7", "size": 33369, "url": "https://files.pythonhosted.org/packages/5a/a7/ed01fe451e3e37f65038324c963d7b32fd64203a474e400a5bf2cfe39131/moodlexport-0.0.24.tar.gz", "yanked": false, "yanked_reason": null } ],
2021-01-27 09:51:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144543528556824, "perplexity": 4814.889165345727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821381.83/warc/CC-MAIN-20210127090152-20210127120152-00077.warc.gz"}
http://mathematica.stackexchange.com/questions/34006/deleting-duplicates-from-matrix
# Deleting duplicates from matrix I have a 4x4 symmetric matrix that is obtained by solving some equations. I tried deleting duplicates with DeleteDuplicates, but that's not working with nested lists (elements of those lists that are the same). So I have this $$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \\ \end{pmatrix}\to a_{nm}=a_{mn}\to \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a_{22} & a_{23} & a_{24} \\ 0 & 0 & a_{33} & a_{34} \\ 0 & 0 & 0 & a_{44} \\ \end{pmatrix}$$ I'd like to be able to remove lower triangle of a matrix but only if they are the same. How to do that? - I noticed the UpperTriangularize function, which is what I need, it's just that I would like to be sure first if the matrix in question is symmetric. Should I make an If statement? –  dingo_d Oct 14 '13 at 16:12 You can use SymmetricMatrixQ –  rm -rf Oct 14 '13 at 16:22 Let mat be your matrix: func[mat_?SymmetricMatrixQ] := UpperTriangularize[mat]; func[mat_?(Not@SymmetricMatrixQ[#] &)] := mat; If the matrix is symmetric the above will convert to upper triangular matrix including diagonal,else it will return original matrix, e.g. m = {{a, b, c}, {b, d, e}, {c, e, f}}; m//MatrixForm func[m]//MatrixForm yields: - Thanks :) That does the trick :) –  dingo_d Oct 15 '13 at 12:49 I don't think this applies specifically to the OPs situation, but I thought I'd contribute just in case it's useful. As mentioned in the comments, using UpperTriangularize with a conditional statement to check for symmetry is the appropriate route; however if the matrix elements are, for example, coordinates, then SymmetricMatrixQ won't recognize the matrix as symmetric. A circuitous route to a 4x4 matrix of x,y coordinates: m1 = RandomInteger[{1, 10}, {4, 4}]; m2 = RandomInteger[{1, 10}, {4, 4}]; sm1 = m1 + Transpose[m1]; sm2 = m2 + Transpose[m2]; SymmetricMatrixQ[sm3] returns False, but sm3 == Tranpose[sm3] returns True. Additionally, UpperTriangularize doesn't like this matrix, so I make my own function: newUpperTriangularize[x_] := UpperTriangularize[x /. {_, _} -> 1]*x;
2014-10-22 14:14:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36751264333724976, "perplexity": 1796.7448838781652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447020.15/warc/CC-MAIN-20141017005727-00195-ip-10-16-133-185.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/0DU5
Lemma 100.32.5. Let $\mathcal{X}$ be an algebraic stack. Assume $\mathcal{X}$ is quasi-DM with separated diagonal (equivalently $\mathcal{I}_\mathcal {X} \to \mathcal{X}$ is locally quasi-finite and separated). Let $x \in |\mathcal{X}|$. Assume $x$ can be represented by a quasi-compact morphism $\mathop{\mathrm{Spec}}(k) \to \mathcal{X}$. Then there exists a morphism of algebraic stacks $g : \mathcal{U} \longrightarrow \mathcal{X}$ with the following properties 1. there exists a point $u \in |\mathcal{U}|$ mapping to $x$ and $g$ induces an isomorphism between the residual gerbes at $u$ and $x$, 2. $\mathcal{U} \to \mathcal{X}$ is representable by algebraic spaces and étale, 3. $\mathcal{U} = [U/R]$ where $(U, R, s, t, c)$ is a groupoid scheme with $U$, $R$ affine, and $s, t$ finite, flat, and locally of finite presentation. Proof. The first part of the proof is exactly the same as the first part of the proof of Lemma 100.32.3. Thus we may assume $\mathcal{X} = [U/R]$ where $(U, R, s, t, c)$ and $u \in U$ mapping to $x$ satisfy all the assumptions of More on Groupoids in Spaces, Lemma 78.15.13. Observe that $u = \mathop{\mathrm{Spec}}(\kappa (u)) \to \mathcal{X}$ is quasi-compact, see Properties of Stacks, Lemma 99.14.1. Consider the cartesian diagram $\xymatrix{ F \ar[d] \ar[r] & U \ar[d] \\ u \ar[r]^ u & \mathcal{X} }$ Since $U$ is an affine scheme and $F \to U$ is quasi-compact, we see that $F$ is quasi-compact. Since $U \to \mathcal{X}$ is locally quasi-finite, we see that $F \to u$ is locally quasi-finite. Hence $F \to u$ is quasi-finite and $F$ is an affine scheme whose underlying topological space is finite discrete (Spaces over Fields, Lemma 71.10.8). Observe that we have a monomorphism $u \times _\mathcal {X} u \to F$. In particular the set $\{ r \in R : s(r) = u, t(r) = u\}$ which is the image of $|u \times _\mathcal {X} u| \to |R|$ is finite. we conclude that all the assumptions of More on Groupoids in Spaces, Lemma 78.15.11 hold. Thus we can find an elementary étale neighbourhood $(U', u') \to (U, u)$ such that the restriction $R'$ of $R$ to $U'$ is strongly split over $u'$. Note that $R' = U' \times _\mathcal {X} U'$ (small detail omitted; hint: transitivity of fibre products). Replacing $(U, R, s, t, c)$ by $(U', R', s', t', c')$ and shrinking $\mathcal{X}$ as above, we may assume that $(U, R, s, t, c)$ has a strong splitting over $u$. Let $P \subset R$ be a strong splitting of $R$ over $u$. Apply Lemma 100.32.2 to see that $\mathcal{U} = [U/P] \longrightarrow [U/R] = \mathcal{X}$ is representable by algebraic spaces and étale. Since $P \subset R$ is open and contains $\{ r \in R : s(r) = u, t(r) = u\}$ by construction we see that $u \times _\mathcal {U} u \to u \times _\mathcal {X} u$ is an isomorphism. The statement on residual gerbes then follows from Properties of Stacks, Lemma 99.11.14 (we observe that the residual gerbes in question exist by Lemma 100.31.2). $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2023-03-24 23:08:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9844706654548645, "perplexity": 193.5503095314679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00059.warc.gz"}
http://bib-pubdb1.desy.de/collection/PUB_CFEL-DESYT-20160930?ln=en
# CFEL-DESYT 2018-06-1411:07 [PUBDB-2018-02269] Journal Article et al Prospects of Using High-Intensity THz Pulses To Induce Ultrafast Temperature-Jumps in Liquid Water The journal of physical chemistry / A 122(23), 5211 - 5222 (2018) [10.1021/acs.jpca.8b00828]   Ultrashort, high-intensity terahertz (THz) pulses, e.g., generated at free-electron laser facilities, allow for direct investigation as well as the driving of intermolecular modes in liquids like water and thus will deepen our understanding of the hydrogen bonding network. In this work, the temperature-jump (T-jump) of water induced by THz radiation is simulated for ten different THz frequencies in the range from 3 to 30 THz and five different pulse intensities in the range from 1 × 10$^{11}$ to 5 × 10$^{12}$ W/cm$^{2}$ employing both ab initio molecular dynamics (AIMD) and force field molecular dynamics (FFMD) approaches. [...] Restricted: PDF PDF (PDFA); 2018-06-0411:23 [PUBDB-2018-02173] Journal Article et al Radiation-Induced Chemical Dynamics in Ar Clusters Exposed to Strong X-Ray Pulses Physical review letters 120(22), 223201 (2018) [10.1103/PhysRevLett.120.223201]   We show that electron and ion spectroscopy reveals the details of the oligomer formation in Ar clusters exposed to an x-ray free electron laser (XFEL) pulse, i.e., chemical dynamics triggered by x rays. With guidance from a dedicated molecular dynamics simulation tool, we find that van der Waals bonding, the oligomer formation mechanism, and charge transfer among the cluster constituents significantly affect ionization dynamics induced by an XFEL pulse of moderate fluence [...] OpenAccess: PDF PDF (PDFA); 2018-05-3114:00 [PUBDB-2018-02163] Journal Article Welsch, R. Rigorous close-coupling quantum dynamics calculation of thermal rate constants for the water formation reaction of H$_2$ + OH on a high-level PES The journal of chemical physics 148(20), 204304 - (2018) [10.1063/1.5033358]   Thermal rate constants for the prototypical H$_2$ + OH → H + H$_2$O reaction are calculated using quantum dynamics simulations including all degrees of freedom and accurately accounting for overall rotation via close-coupling. Results are reported for a recent, highly accurate neural network potential [J [...] OpenAccess: PDF PDF (PDFA); 2018-05-3110:11 [PUBDB-2018-02152] Journal Article et al Molecular polarizability anisotropy of liquid water revealed by terahertz-induced transient orientation Nature Communications 9(1), 2142 (2018) [10.1038/s41467-018-04481-5]   Reaction pathways of biochemical processes are influenced by the dissipative electrostatic interaction of the reagents with solvent water molecules. The simulation of these interactions requires a parametrization of the permanent and induced dipole moments. [...] OpenAccess: PDF PDF (PDFA); 2018-05-3009:55 [PUBDB-2018-02139] Journal Article et al Electron and fluorescence spectra of a water molecule irradiated by an x-ray free-electron laser pulse Physical review / A 97(5), 053415 (2018) [10.1103/PhysRevA.97.053415]   With the highly intense x-ray light generated by x-ray free-electron lasers (XFELs), molecular samples can be ionized many times in a single pulse. Here we report on a computational study of molecular spectroscopy at the high x-ray intensity provided by XFELs [...] OpenAccess: PDF PDF (PDFA); 2018-04-1813:04 [PUBDB-2018-01785] Journal Article Santra, R. Collective resonances of atomic xenon from the linear to the nonlinear regime Journal of physics communications 2(4), 045024 - (2018) [10.1088/2399-6528/aab946]   XUV nonlinear spectroscopy has recently discovered that there is more than one collective dipole resonance state in the energy range of the giant dipole resonance (GDR) of atomic Xe. This resonance-state substructure, hidden in the linear regime, raises imminent questions regarding our understanding of the collective electronic behavior of Xe, which has been largely founded on linear spectroscopic studies [...] OpenAccess: PDF PDF (PDFA); 2018-03-2710:12 [PUBDB-2018-01577] Journal Article et al Control of Nuclear Dynamics through Conical Intersections and Electronic Coherences Physical review letters 120(12), 123001 (2018) [10.1103/PhysRevLett.120.123001]   The effect of nuclear dynamics and conical intersections on electronic coherences is investigated employing a two-state, two-mode linear vibronic coupling model. Exact quantum dynamical calculations are performed using the multiconfiguration time-dependent Hartree method [...] OpenAccess: PDF PDF (PDFA); 2018-03-2213:25 [PUBDB-2018-01521] Journal Article Medvedev, N. Multistep transition of diamond to warm dense matter state revealed by femtosecond X-ray diffraction Scientific reports 8, 5284 (2018) [10.1038/s41598-018-23632-8]   Diamond bulk irradiated with a free-electron laser pulse of 6100 eV photon energy, 5 fs duration, at the ~19–25 eV/atom absorbed doses, is studied theoretically on its way to warm dense matter state. Simulations with our hybrid code XTANT show disordering on sub-100 fs timescale, with the diffraction peak (220) vanishing faster than the peak (111). [...] OpenAccess: PDF PDF (PDFA); 2018-02-2013:16 [PUBDB-2018-01265] Journal Article Gorelova, D. Imaging Electron Dynamics with Ultrashort Light Pulses. Theory Perspective Applied Sciences 8(3), 318 (2018) [10.3390/app8030318] special issue: "Extreme Time Scale Photonics"   A wide range of ultrafast phenomena in various atomic, molecular and condense matter systems is governed by electron dynamics. Therefore, the ability to image electronic motion in real space and real time would provide a deeper understanding of such processes and guide developments of tools to control them. [...] Restricted: PDF PDF (PDFA); 2018-02-1510:53 [PUBDB-2018-01193] Journal Article et al A Chemical Understanding of the Limited Site-Specificity in Molecular Inner-Shell Photofragmentation The journal of physical chemistry letters 9(5), 1156 - 1163 (2018) [10.1021/acs.jpclett.7b03235]   In many cases fragmentation of molecules upon inner-shell ionization is very unspecific with respect to the initially localized ionization site. Often this finding is interpreted in terms of an equilibration of internal energy into vibrational degrees of freedom after Auger decay. [...] Published on 2018-02-14. Available in OpenAccess from 2019-02-14.: PDF; Restricted: PDF PDF (PDFA);
2018-07-23 05:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5329776406288147, "perplexity": 6017.296697404145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594954.59/warc/CC-MAIN-20180723051723-20180723071723-00218.warc.gz"}
http://bayesianthink.blogspot.com/2013/02/
## Posts Showing posts from February, 2013 ### The Expected Draws to Sum over One. Q: You have a random number generator that creates random numbers exponentially between $$[0,1]$$. You draw from this generator and keep adding the result. What is the expected number of draws to get this sum to be greater than 1. Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics) A: Before getting into the solution for this, I'll go over an established theorem of exponential distributions. If there exists two random variables which follow a exponential distribution with parameters $$\lambda_1$$ and $$\lambda_2$$ then their sum is given by the convolution of the two probability density functions. This is shown as $$P(z = X_1 + X_2) = f_{z}(z) = \sum_{x=0}^{z}f_{X_{1}}(x) f_{X_{2}}(z - x)$$ The probability density function of a distribution with rate parameter $$\lambda$$ is given as $$f(k,\lambda) = \frac{\lambda^{k}e^{-\lambda}}{k!}$$ Plugging this into the convolution formula gives us ### The Case of Two Mariners Q:Two mariners report to the skipper of a ship that they are distances $$d_1$$ and $$d_2$$ from the shore. The skipper knows from historical data that the mariners A & B make errors that are normally distributed and have a standard deviation of $$s_1$$ and $$s_2$$. What should the skipper do to arrive at the best estimate of how far the ship is from the shore? A: At a first look, it appears that the simplest solution would be to take the estimate of the navigator who has the lower standard deviation. If $$s_1 < s_2$$ then pick $$d_1$$ else pick $$d_2$$. But there is a way to do better than that. Assume you take a linearly weighted sum of the two with weight $$= \omega$$. $$d_{blended} = \omega\times d_1 + ( 1 - \omega)\times d_2$$ The variance of the blended estimate would be given by $$Var(d_{blended}) = \omega^{2}\times s_{1}^{2} + (1 - \omega)^{2}\times s_{2}^{2}$$ We next proceed to find a value for $$\omega$$ that minimizes the variance $$Var(d_{blended})$$. For this …
2019-10-19 00:04:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784887790679932, "perplexity": 227.8663176211748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986685915.43/warc/CC-MAIN-20191018231153-20191019014653-00475.warc.gz"}
https://www.vedantu.com/question-answer/the-diagonal-of-the-square-is-4sqrt2-cm-the-class-10-maths-cbse-5efc657cfc7b62454bf5e55d
QUESTION # The diagonal of the square is $4\sqrt{2}$ cm. The diagonal of other square whose area is double that of the first square isa.8cmb.$8\sqrt{2}$ cmc.16cmd.$4\sqrt{2}$ cm Hint: We know that the diagonal of a square is $\sqrt{2}$ times its side, or $diagonal=\sqrt{2}\times sides$. From this we will find the area of the first square and then we will find the area of the second square by doubling it and from that area we will find the sides of the second square and hence the diagonal of the second square. It is given in the question that the diagonal of the first square is $4\sqrt{2}$ cm, and we have to find the diagonal of the second square. We know that the diagonal of a square is $\sqrt{2}$ times the sides of any square, or $diagonal=\sqrt{2}\times sides$. Since the diagonal of square is $4\sqrt{2}$cm, so from this, we get side of square = $\frac{diagonal}{\sqrt{2}}$ = $\frac{4\sqrt{2}}{\sqrt{2}}$ cm = 4 cm. Therefore, the side of the square = 4cm. Now, we know that area of square = ${{\left( side \right)}^{2}}$. Therefore, area of first square = ${{\left( 4 \right)}^{2}}$ = $16c{{m}^{2}}$. Now, the area of the second square is given to be double of the first. Therefore, the area of the second square = 16cm × 2cm = $32c{{m}^{2}}$. We know that the area of the square is the square of its sides. Therefore the side of the second square = $\sqrt{area}$ = $\sqrt{32}$ =$4\sqrt{2}$ cm. From equation $diagonal=\sqrt{2}\times sides$, we get its diagonal = $\sqrt{2}\times 4\sqrt{2}=4\times 2=8cm$. Note: It is important to know the relation between the sides of the square and its diagonal, which is $diagonal=\sqrt{2}\times sides$. The student can make a mistake by taking the multiplication factor of 2 instead of factor $\sqrt{2}$.
2020-07-05 23:41:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9192968010902405, "perplexity": 111.31525235522459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00325.warc.gz"}
https://figshare.com/articles/Does_the_option_market_produce_superior_forecasts_of_noise-corrected_volatility_measures_/5080576/1
Does the option market produce superior forecasts of noise-corrected volatility measures? 2017-06-06T01:04:06Z (GMT) by This paper presents a comprehensive empirical evaluation of option-implied and returns-based forecasts of volatility, in which recent developments related to the impact on measured volatility of market microstructure noise are taken into account. The paper also assesses the robustness of the performance of the option-implied forecasts to the way in which those forecasts are extracted from the option market. Using a test for superior predictive ability, model-free implied volatility, which aggregates information across the volatility ‘smile’, and at-the-money implied volatility, which ignores such information, are both tested as benchmark forecasts. The forecasting assessment is conducted using intraday data for three Dow Jones Industrial Average (DJIA) stocks and the S&P500 index over the 1996-2006 period, with future volatility proxied by a range of alternative noise-corrected realized measures. The results provide compelling evidence against the model-free forecast, with its poor performance linked to both the bias and excess variability that it exhibits as a forecast of actual volatility. The positive bias, in particular, is consistent with the option market factoring in a substantial premium for volatility risk. In contrast, implied volatility constructed from liquid at-the-money options is given strong support as a forecast of volatility, at least for the DJIA stocks. Neither benchmark is supported for the S&P500 index. Importantly, the qualitative results are robust to the measure used to proxy future volatility, although there is some evidence to suggest that any option-implied forecast may perform less well in forecasting the measure that excludes jump information, namely bi-power variation.
2018-12-14 19:26:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8792482614517212, "perplexity": 2891.7055515553084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00427.warc.gz"}
http://www.power-quant.com/?q=node/85
# Significant Figures using Python Ch. 1, Problems 1, 2, and 3 - Data Reduction and Error Analysis for the Physical Sciences by Philip Bevington: 1. How many significant figures are there in the following numbers? 1. 976.45 2. 84,000 3. 0.0094 4. 301.07 5. 4.000 6. 10 7. 5280 8. 400 2. What is the most significant figure in each of the numbers? What is the least significant? 3. Rround off each of the numbers above to two significant digits. Solution: As always I prefer to write a piece of code to complete the exercises. Shown below is a python module that has implementations of the necessary functionality. ''' Implementation of significant digits and associated concepts. ''' import math def isNumeric(x): '''Checks to see if x represents a numeric value by converting into unicode and utilizing the isnumeric() method.''' # first convert the number into a string strRep = str(x) # first make a unicode version so we can ensure we're dealing with # something that represents a numeric value: uRep = unicode(strRep) if ('.' in uRep) and all([x.isnumeric() for x in uRep.split('.')]): return True # there's a decimal and everything to the right # and left of it is numeric else: return uRep.isnumeric() def mostSigDigit(x): '''Returns the most significant digit in x.''' assert isNumeric(x), 'x must be numeric!' # number the digits: enumeratedChars = list(enumerate(str(x))) nonZeroChars = [x for x in enumeratedChars if (x[1] != '0') and (x[1] != '.')] return nonZeroChars[0][1] def leastSigDigit(x): '''Returns the least significant significant digit in x.''' assert isNumeric(x), 'x must be numeric!' # number the digits: enumeratedChars = list(enumerate(str(x))) nonZeroChars = [x for x in enumeratedChars if (x[1] != '0') and (x[1] != '.')] mostSignificantDigit = nonZeroChars[0] leastSignificantDigit = None if '.' in [x[1] for x in enumeratedChars]: leastSignificantDigit = enumeratedChars[-1] else: leastSignificantDigit = nonZeroChars[-1] # here we have a pair so just return the value: return leastSignificantDigit[1] def numSigDigits(x): '''Returns the number of significant digits in x.''' assert isNumeric(x), 'x must be numeric!' # number the digits: enumeratedChars = list(enumerate(str(x))) nonZeroChars = [x for x in enumeratedChars if (x[1] != '0') and (x[1] != '.')] mostSignificantDigit = nonZeroChars[0] leastSignificantDigit = None if '.' in [x[1] for x in enumeratedChars]: leastSignificantDigit = enumeratedChars[-1] else: leastSignificantDigit = nonZeroChars[-1] enumedSignificantDigits = [x for x in enumeratedChars[mostSignificantDigit[0]:leastSignificantDigit[0] + 1]] numDigits = len(enumedSignificantDigits) if '.' in [x[1] for x in enumeratedChars]: numDigits -= 1 return numDigits def round_sigfigs(num, sig_figs): """Round to specified number of sigfigs. >>> round_sigfigs(0, sig_figs=4) 0 >>> int(round_sigfigs(12345, sig_figs=2)) 12000 >>> int(round_sigfigs(-12345, sig_figs=2)) -12000 >>> int(round_sigfigs(1, sig_figs=2)) 1 >>> '{0:.3}'.format(round_sigfigs(3.1415, sig_figs=2)) '3.1' >>> '{0:.3}'.format(round_sigfigs(-3.1415, sig_figs=2)) '-3.1' >>> '{0:.5}'.format(round_sigfigs(0.00098765, sig_figs=2)) '0.00099' >>> '{0:.6}'.format(round_sigfigs(0.00098765, sig_figs=3)) '0.000988' """ assert isNumeric(num), 'x must be numeric!' if num != 0: return round(num, -int(math.floor(math.log10(abs(num))) - (sig_figs - 1))) else: return 0 # Can't take the log of 0 if __name__ == '__main__': import decimal numberList = ['976.45', '84000', '0.0094', '301.07', '4.000', '10', '5280', '400'] for eachNum in map(lambda x: decimal.Decimal(str(x)), numberList): originalNumStr = str(eachNum) nsdStr = ":".join(['numSigDigits', str(numSigDigits(eachNum))]) msdStr = ":".join(['mostSigDigit', str(mostSigDigit(eachNum))]) lsdStr = ":".join(['leastSigDigit', str(leastSigDigit(eachNum))]) roundedNumStr = ":".join(['rounded', str(round_sigfigs(eachNum, 2))]) resultStr = "; ".join([originalNumStr, nsdStr, msdStr, lsdStr, roundedNumStr]) print resultStr When you run this you should get something like the following: $python significant.py 976.45; numSigDigits:5; mostSigDigit:9; leastSigDigit:5; rounded:980.0 84000; numSigDigits:2; mostSigDigit:8; leastSigDigit:4; rounded:84000.0 0.0094; numSigDigits:1; mostSigDigit:9; leastSigDigit:4; rounded:0.0094 301.07; numSigDigits:5; mostSigDigit:3; leastSigDigit:7; rounded:300.0 4.000; numSigDigits:4; mostSigDigit:4; leastSigDigit:0; rounded:4.0 10; numSigDigits:1; mostSigDigit:1; leastSigDigit:1; rounded:10.0 5280; numSigDigits:3; mostSigDigit:5; leastSigDigit:8; rounded:5300.0 400; numSigDigits:1; mostSigDigit:4; leastSigDigit:4; rounded:400.0 And we are done. Something which struck me while I was coding this was that default python types associated with Python are not well suited for managing significant digits. They don't have a notion of significance built in. While it is true that the decimal module has a notion of precision it does NOT have a corresponding notion of accuracy and thus fails to deal properly with significance. It would be nice to have a domain specific language which gets a type which propagates error automatically. I haven't done an exhaustive search yet for such a thing but it would be nice to find. Also carefully examine the second argument in the call to round. What does it mean to call round with a negative second argument like that? I did a little futzing around with ipython: $ ipython Python 2.7.1 (r271:86832, Feb 13 2012, 05:08:31) IPython 0.13 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. In [1]: round(234, 2) Out[1]: 234.0 In [2]: round(234, -2) Out[2]: 200.0 In [3]: round? Type: builtin_function_or_method String Form: Namespace: Python builtin Docstring: round(number[, ndigits]) -> floating point number Round a number to a given precision in decimal digits (default 0 digits). This always returns a floating point number. Precision may be negative. In [4]: round(234, -1) Out[4]: 230.0 In [5]: round(234, -2) Out[5]: 200.0 In [6]: round(234345.345, -2) Out[6]: 234300.0 In [7]: round(234345.345, -1) Out[7]: 234350.0 The examples above give you a sense of how that second argument works when you use negative values. The docs for round say at the very end 'Precision may be negative' -- Apparently this means it starts to strip away the precision if you push to negative values. This only strips away precision associated with digits in the number which correspond to positive powers of ten! The last two examples should make this concrete. Note how you lose everything to the right of the decimal! Tricky... In any case it makes for an easy way to round to the proper number of significant digits so whatever -- it works. For further reference you may wish to consult the wikipedia article on significance arithmetic which will give you an idea on why significant digits matter. I tend to think of this as a subject leading up to formal calculation of uncertainty. ### VietNam Journeys Vietnam's representation is the same of battle, colonisation and rebellion. Occupied by China no fewer than four times, the Vietnamese managed to oppugn inaccurate the invaders honourable as often. Gloaming during the periods in the past when Vietnam was independent, it was mostly a tributary state to China until the French colonisation. Vietnam's mould emperors were the Nguy?n Dynasty, who ruled from their capital at Peeved from 1802 to 1945, although France exploited the descent disaster after the fall of T? D?c to de facto colonise Vietnam after 1884. Both the Chinese occupation and French colonisation get radical a eternal effect on Vietnamese enlightenment, with Confucianism forming the point of departure of Vietnamese social decorum, and the French leaving a permanent imprint on Vietnamese cuisine. After a brief Japanese job in Existence Engage in combat with II, the Communist Viet Minh under the administration of H? Chi Minh continued the insurgency against the French, with the form Emperor Bao Dai abdicating in 1945 and a publication of self-direction following in a little while after. The the better of French had pink at hand 1945, but in 1946 they returned to continue the take up arms against until their decisive whip at Dien Bien Phu in 1954. The Geneva Convention partitioned the motherland into two at 17th even off, with a Communist-led North and Ngo Dinh Diem declaring himself President of the Republic of Vietnam in the South. US fiscal and military aid to South Vietnam grew during the 1960s in an take on to uphold the Southern Vietnam direction, escalating into the dispatch of 500,000 American troops in 1966 and what became known as the Vietnam In combat - although the Vietnamese refer to it as the American War. What was supposed to be a hasty and decisive exercise straightway degenerated into a quagmire, and U.S. armed forces were diffident following a cease-fire accord in 1973. Two years later, on April 30, 1975, a North Vietnamese tank drove into the South's Presidential Residence in Ho Chi Minh Municipality and the in contention ended. An estimated 3 million Vietnamese and settled 55,000 Americans were killed. The American Vietnamese in disagreement was only one of assorted that the Vietnamese procure fought, but it was the most insensitive in its history. All over two thirds of the current denizens was born after 1975. American tourists last will and testament draw a markedly friendly entitled in Vietnam, as many young Vietnamese aspire to American culture. Profitable reconstruction of the reunited rural area has proven difficult. After the failures of the state-run succinctness started to become visible, the woods launched a program of d?i m?i (renovation), introducing elements of capitalism. The way has proved powerfully successful, with Vietnam recording near 10% increase annually (except due to the fact that a outline interruption during the Asian profitable turning-point of 1997). The economy is much stronger than those of Cambodia, Laos, and other neighboring developing countries. Like most Communist countries enclosing the everyone, there is a ripping equalize between allowing unrelated investors and opening up the market. There are utmost restrictions on foreigners owning resources or attempting to sell. It is precise strenuous in the course of them to trade without negotiating 'fees'. Business can be done via resident partnerships with all the depending risks. Power and services is another issue. There are commonly rolling blackouts at times when there is not reasonably electricity. For this reason, profuse shops have shirt-pocket generators. According to government estimates Vietnam sees 3.3m holiday-maker arrivals each year. Vietnam has a resurfacing figure of justifiable 5% compared to Thailand’s mammoth 50%. Most people in Vietnam are ethnic Vietnamese (Kinh), nonetheless there is a sizable ethnic Chinese community in Ho Chi Minh City, most who are descended from migrants from Guangdong strand and are consequently bilingual in Cantonese or other Chinese dialects and Vietnamese. There are also numerous other ethnic groups who occupy the formidable parts of the sticks, such as the Hmong, Muong, and Dao people. There's also a minority ethnic assemblage in the lowlands imminent the border with Cambodia known as the Khmer Krom. Buddhism, mostly of the Mahayana first, is the solitary largest faith in Vietnam, with exceeding 80% of Vietnamese people identifying themselves as Buddhist. Catholicism is the second largest dogma, followed by means of the local Cao Dai religion. Other Christian denominations, Islam, and neighbourhood religions also appropriation shallow followings throughout the southern and central areas. Deserved to its hunger old hat as a tributary pomp of China, as well as diverse periods of Chinese occupations, Vietnamese culture is heavily influenced by means of that of south China, with Confucianism forming the heart of Vietnamese society. The Vietnamese cant also contains many accommodation words from Chinese, while the two languages are unrelated. Buddhism remains the isolated largest creed in Vietnam. As in China, but unequivalent to the rest of northern Southeast Asia, the reigning clique of Buddhism in Vietnam is the Mahayana School. Even so, Vietnamese enlightenment remains patent from Chinese background as it has also concentrating cultural elements from neighboring Hindu civilizations such as the Champa and the Khmer empires. The French colonization also left a eternal impact on Vietnamese society, perhaps symbolised overpower past the Vietnamese fondness for baguettes and coffee. Away clearly the largest holiday is T?t — the Lunar Brand-new Year — which takes condition between late January and March. In the spell unsurpassed up to T?t, the country is abuzz with preparations. Guys on motorbikes speed on all sides delivering potted tangerine trees and flowering bushes, the usual household decorations. People be afflicted with a little grain stressed absent from and the elbows flatter sharper, specifically in big cities, where the established riotous level of shipping becomes about homicidal. Then a few days before T?t the velocity begins to uninteresting down, as thousands of see residents depart on the side of their ancestral home towns in the provinces. Finally on the first date of the chic year an bluff transformation occurs: the streets become quiet, almost deserted. Almost all shops and restaurants stop an eye to three days, (the exception being a few that cater noticeably to foreign visitors; and hotels work as usual.) In the major cities, streets are decorated with lights and social festivities are organized which attract multifarious thousands of residents. But for the benefit of Vietnamese, T?t is mostly a concealed, family celebration. On the evening of the late year, families gather together and stock market permissible wishes (from more junior to more senior) and gifts of "charmed dough" (from more higher- ranking to more lower). In the first three days of the year, the daytime hours are steadfast to visiting -- houses of relatives on the oldest date, closest friends and respected colleagues on the flawed hour, and everyone else on the third day. Numberless people also visit pagodas. The evening hours are spent drinking and gambling (men) or chatting, playing, singing karaoke, and enjoying traditional snacks and candy (women and children.) Visiting Vietnam during T?t has reputable points and unhealthy points. On the minus side: modes of transport are jammed just once the feast as numberless Vietnamese pilgrimages to their home towns; hotels fill up, uncommonly in smaller towns; and your voice of shopping and dining is dourly limited in the first days of the new year (with a few places closed up to two weeks). On the together with side, you can celebrate the preparations and charge out of the overt festivities; pagodas are noticeably active; no acknowledging is charged to those museums and historical sites that abide open; and the foreigner-oriented touring industry of backpacker buses and place to turn hotels chugs along as usual. Visitors also confront a chance of being invited to link the festivities, strikingly if you acquire some close by connections or superintend to cause some Vietnamese friends during your stay. When visiting during T?t, it's sensitive to to shift settled somewhere at least two days anterior to the recent year, and don't essay to stir again until a couple of days after. Lesser holidays classify May 1, the traditional socialist labor prime, September 2, Vietnam's nationalistic epoch, Majesty Hung celebration on April 12th, commemorating background kings, and Release Broad daylight on April 30th, marking the use of Saigon in 1975. All those times, trains and planes tend to be sold out, and accommodations at the careen or in Dalat are petrified to find. Most excellently to book point in advance. Visitors from the following countries do not require a visa and can stay for the following number of days. 14 days: Brunei, Myanmar 15 days: Denmark, Finland, Japan, Norway, South Korea, Sweden, Russia 21 days: Philippines 30 days: Indonesia, Laos, Malaysia, Singapore, Thailand, Cambodia All other nationalities will require a visa in advance to visit Vietnam. In order to boost tourism, the Vietnamese government has made the island of Phu Quoc a visa-free zone. Those flying there through Ho Chi Minh City or arriving by boat will not need to apply for a visa beforehand. This is regardless of your nationality. Travelers are given 15 days to spend on the island. Those wishing to journey elsewhere can apply for a proper Vietnamese visa at the local immigration office. All passports should be valid for at least 45 days when arriving in Phu Quoc. Visas can be applied for at most Vietnamese embassies and consulates. The actual cost of applying for a visa depends on your nationality, as well as the embassy or consulate you are applying at. Check with the Vietnamese embassy or consulate in your country of residence for details. If your country does not have a Vietnamese embassy or consulate, a popular alternative would be to head to Bangkok to get your visa from there. Some Vietnamese Embassies offer a "While you wait service" (May 2008), where a single entry visa can be gained in 15 minutes. This service costs USD92, but is approved instantly. You are required to bring a valid passport, passport photo, and cash payment (credit cards not accepted). Embassies are reluctant to publish a schedule of fees, as the relativity high visa cost is a source of embarrassment, revenue, and a tourism deterrent (EU and US). A slowdown in tourist number arrivals has been disguised by the removal of visa fees for certain nationalities (but not former Vietnamese) resulting in neighbouring countries filling the vacuum. Foreign citizens of Vietnamese origin can apply for visa exemption that allows multiple entry for 3 months at a time which is valid for the duration of the passport. An increasingly popular alternative is to arrange a visa on arrival, which is not only considerably cheaper but also alleviates the need for passports to be posted to the Vietnamese Embassy in the country of origin. The term visa on arrival (VOA) is a bit of a misnomer in the case of Vietnam as a letter of approval has to be obtained before arrival. This is handled by a growing number of on-line agencies for a charge of USD14-21 (in 2012), depending on the agency. Most agencies accept payment by credit card. Some accept payment by Western Union. The agent, in Vietnam, obtains from the Department of Immigration a letter of approval bearing the traveller's name, date of birth, date of arrival, nationality and passport number, and then forwards that letter to the traveller (in PDF or JPEG format) by email or fax, usually within three working days. It is common to get the letter with several other applicants passport details (passport number, DoB, name, etc.). You might share your personal information with up to 10-30 other applicants on the same letter(s). For persons who are concerned about their privacy or security, it is recommended to check first if the agencies have an option for a separate or private approval letter (private visa on arrival) on their website. Very few online agencies have this option. Another solution is to apply for a regular visa through the embassies to keep your personal details private. After landing at one of the three international airports (Hanoi, Ho Chi Minh City, or Danang), the traveller goes to the "visa on arrival" counter, shows the letter, fills in an additional arrival form (can be pre-filled before departure) and receives an official stamp (sticker) in his or her passport. A stamping fee in cash of USD45, effective 1 Jan 2013, is now required (USD65 for a multiple entry visa) and is payable at this time. Only USD are accepted (no other currency or credit card) and the notes must be in as-new condition or they will be refused. Two passport photos are also required. Note that visas on arrival are not valid for border crossings and the official stamp can only be obtained at the three international airports. Therefore, travellers arriving by land from Cambodia, Laos, or China must be in possession of a full visa when they arrive at the border. Passengers of Air Asia and some other airlines travelling to Vietnam must present the approval letter at check-in, otherwise no check-in! Vietnam has moved away from arrival/departure cards. Depending on the present level of SARS, avian flu you may be subjected to a so-called health-check. There is no examination, though, but yet another form to fill in and, of course, another fee. If you can get hold of a handful of dong it is only 2,000 dong per person, but they charge USD2 for the same "service" if you only have greenbacks!
2014-10-31 07:27:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.192483589053154, "perplexity": 8125.610554825507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899124.21/warc/CC-MAIN-20141030025819-00030-ip-10-16-133-185.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/395380/imakeidx-index-expands-macros-makeindex-rejects-everything
# imakeidx \index expands macros; makeindex rejects everything I have the following minimal(ish) example. This is for a technical book with lots of code samples and lots of code-keywords-appearing-in-running-text. So I have the following goals: • A macro \code{foo} for typesetting code-keywords-in-running-text. I use the listings package for typesetting code throughout. I also use the underscore package so I don't have to escape underscores in code. • I use the hyperref package for cross-references, plus plenty of code keywords appearing in section titles; so I use \texorpdfstring in the definition of \code. • Two indices: a main "Index" for English words and concepts, and a second "Index of code samples". I have heard that the imakeidx package is the most standard choice for this, so I'm using it. I'm open to alternatives if they can fix my problem. • Many, complicated, index terms. Some terms must be typeset with \code{foo}. Some must be secondary entries, as in, "iterator, std::vector::: see vector::iterator." ## mce.tex \documentclass[ebook,10pt,oneside,final]{memoir} \usepackage{imakeidx} \makeindex[name=mce,intoc,columns=2] \makeindex[name=code,intoc,columns=2,name=code,title=Index of code samples] \usepackage[final]{listings} \usepackage{hyperref} \usepackage{underscore} \newcommand{\code}[1]{\texorpdfstring{\mbox{\lstinline[basicstyle=\ttfamily]#1}}{#1}} \newcommand{\codeblockdefines}[1]{\index[code]{#1@\code{#1}}} % for example, \codeblockdefines{list_of_int} \newcommand{\codeindex}[1]{\index{#1@\code{#1}}} % for example, \codeindex{const_iterator} \newcommand{\codeindexstd}[1]{\index{namespace std@\code{namespace std}!#1@\code{#1}}} % for example, \codeindexstd{vector} \begin{document} \tableofcontents Here's an example of using \code{std::vector}. \codeindexstd{vector} \codeindex{std::vector} \codeblockdefines{vector} \index{std@\code{std}|see {namespace std@\code{namespace std}}} \printindex[mce] \printindex[code] \end{document} The problem I'm seeing is that \code and maybe some other macros are getting expanded in the .idx files. When I cat these files I see this: ## mce.idx \indexentry{namespace std@\unhbox \voidb@x \hbox {\lstinline [basicstyle=\ttfamily ]namespace std}!vector@\unhbox \voidb@x \hbox {\lstinline [basicstyle=\ttfamily ]vector}|hyperpage}{1} \indexentry{std::vector@\unhbox \voidb@x \hbox {\lstinline [basicstyle=\ttfamily ]std::vector}|hyperpage}{1} \indexentry{std@\code{std}|hyperindexformat{\see {namespace std@\code{namespace std}}}}{1} ## code.idx \indexentry{vector@\unhbox \voidb@x \hbox {\lstinline [basicstyle=\ttfamily ]vector}|hyperpage}{1} How can I make \index and the rest of my convenience macros (\codeblockdefines, \codeindex, \codeindexstd) do what I want? • First move the call to imakeidx before loading hyperref. Oct 9, 2017 at 22:37 • @egreg: Okay, done. The output doesn't change, though. (EDIT: ok, the .idx files change a bit, but not significantly enough to not-be-rejected.) Oct 9, 2017 at 22:42 • I should add that I also see only one "Index" in my final .pdf file, but I think that's just a side-effect of having had all the entries in the "Index of code samples" be rejected. I think TeX refuses to create a page for an index if it'd have zero entries in it. Oct 9, 2017 at 22:50 • Your commands \codeindex and \codeindexstd point to a non existent index. Oct 9, 2017 at 22:53 imakeidx should be loaded before hyperref. However, your \code command is very fragile as it uses \lstinline and should be “robusted”. \documentclass[ebook,10pt,oneside,final]{memoir} \usepackage[final]{listings} \usepackage{imakeidx} \usepackage{underscore} \usepackage{etoolbox} \usepackage{hyperref} \makeindex[name=mce,intoc,columns=2] \makeindex[name=code,intoc,columns=2,name=code,title=Index of code samples] \newrobustcmd{\code}[1]{\texorpdfstring{\mbox{\lstinline[basicstyle=\ttfamily]{#1}}}{#1}} \newcommand{\codeblockdefines}[1]{\index[code]{#1@\code{#1}}} % for example, \codeblockdefines{list_of_int} \newcommand{\codeindex}[1]{\index[mce]{#1@\code{#1}}} % for example, \codeindex{const_iterator} \newcommand{\codeindexstd}[1]{\index[mce]{namespace std@\code{namespace std}!#1@\code{#1}}} % for example, \codeindexstd{vector} \begin{document} \tableofcontents Here's an example of using \code{std::vector}. \codeindexstd{vector} \codeindex{std::vector} \codeblockdefines{vector} \index{std@\code{std}|see {namespace std@\code{namespace std}}} \printindex[mce] \printindex[code] \end{document} I changed the \codeindex and \codeindexstd commands to point to a defined index. The code.idx file will contain \indexentry{vector@\code {vector}|hyperpage}{1} whereas mce.idx contains \indexentry{namespace std@\code {namespace std}!vector@\code {vector}|hyperpage}{1} \indexentry{std::vector@\code {std::vector}|hyperpage}{1} These entries are not rejected. • Re "to point to a defined index" — My understanding was that "mce" would be the name of the main index for "mce.tex", so I didn't need to specify that name explicitly; i.e. I could write \index{foo} instead of \index[mce]{foo}. Is this wrong and/or too clever? Should I just define a convenience macro \newcommand{\idx}[1]{\index[mce]{#1}} and avoid using raw \index? Oct 9, 2017 at 22:52 • @Quuxplusone No, the unadorned \index command only refers to one not passed the name option. Just remove name=mce, if you want to use \index{...} for pointing to it. Oct 9, 2017 at 22:54 • I'll look into the etoolbox package. Does it require the change you made to use curly braces instead of backticks? My code samples use a lot of curly braces... Oct 9, 2017 at 22:55 • @Quuxplusone You can use backquotes, but they do nothing more than braces, because the argument is already tokenized. \newrobustcmd is just more efficient than \DeclareRobustCommand. Oct 9, 2017 at 22:56 • Awesome, it seems to work! (Although robustifying the whole \code macro broke \texorpdfstring, so what I did was I pulled out \newrobustcmd{\codeinternal}[1]{\mbox{\lstinline[basicstyle=\ttfamily]#1}} and then \newcommand{\code}[1]{\texorpdfstring{\codeinternal{#1}}{#1}}.) I'm still getting "Improper alphabetic constant" for the index entry \indexentry{list_of_ints!::iterator@\code {list_of_ints!::iterator}}{13}, and none of my "X, see Y" entries are showing up; but I'm sure those are unrelated issues and I'll ask new questions if I can't figure them out. Oct 9, 2017 at 23:19
2022-11-29 04:14:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8659557700157166, "perplexity": 7124.9667324509455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00030.warc.gz"}
http://pureloyaltyllc.com/Payday-Loans/Payday-Loan-Las-Vegas-NV-89137-No-Credit-Check.html
# Payday Loan Las Vegas NV 89137 No Credit Check Cash Loans Las Vegas Nevada. Personal Loans from $100.00 to$15,000 as soon as tomorrow. Bad Credit OK It's important that people use this method of borrowing properly; it's not designed supplement income, but just provide a boost every now and again. Personal Loan Las Vegas Nevada However, there are times when you encounter certain financial problems in the most unexpected hours. Rather than rely on revenue from display ad impressions, Credit.com maintains a financial marketplace separate from its editorial pages. All Payday Loan Companies Uk, Article Source: Kellett can be an expert loan consultant who will let you get approved for Unsecured High Risk Loans and 100% Guarantee Credit Card . Easy cash loans are popularly known as cash advance loans. And therein lies one of the main problems with payday loans - the fees. Based on this, Dobbie and Skiba claim that the payday loan market is high risk.[60] Premium Pricing Structure A 2012 Pew Charitable Trusts study found that the average borrower took out eight loans of $375 each and paid interest of$520 across the loans.[57] The equation for the annual cost of a loan in percent is: [ ( Loan cost / Loan amount ) / Days borrowed ] ∗ 365  days {\displaystyle [({\text{Loan cost}}/{\text{Loan amount}})/{\text{Days borrowed}}]*365{\text{ days}}} Asymmetric Information The payday loan industry takes advantage of the fact that most borrowers do not know how to calculate their loan's APR and do not realize that they are being changed rates up to 390% interest annually.[61] Critics of payday lending cite the possibility that transactions with in the payday market may reflect a market failure that is due to asymmetric information or the borrowers' cognitive biases or limitations.[62] The formula for the total cost of a Payday loan is: N ∗ ( 1 + i ) x {\displaystyle N*(1+i)^{x}} where N {\displaystyle N} is the money people borrowed from the payday loan, i {\displaystyle i} is the interest rate per period (not annual), and x {\displaystyle x} is the number of borrowing periods, which are typically 2 weeks long. Good payday lenders clearly disclose their loan terms and conditions, including the dollar amount of any fees and the APR. Once your application has been submitted on the lender's website you'll receive the lender's contact info. Although not every pay day loan company charges a fee they each charge interest. Sure, you had financial emergencies, but you didn't want the whole world knowing you were having one! Concerns about privacy and anonymity have been some of the main reasons people haven't utilized cash advance services. Believe it or not, we don't demand your credit history or employment proof when ever you approach us for a no fax payday advance loan. About 38 percent of received my discharge notice in April 1924 with Herbert Stanley as governor. Payday Loan Las Vegas NV 89137 No Credit Check Try for a month of unlimited access of Identity Theft Protection. By submitting your application and information on this website, you agree to allow any and all participating lenders to verify your information and check your credit. Up to 20 years credit worthiness $8,697,340 931 maryland$1,000,000 50% $500 to$30,000 interest. If that situation does arise, know that you can rely on Speedy Cash if you need a payday loan to get you by until your next pay day, an installment loan to get you back on track, or a title loan to allow you to borrow a higher loan amount. The study found payday lenders to target the young and the poor, especially those populations and low-income communities near military bases. Using Payday Loans in a Responsible Manner The stafford and perkins loans are available entirely without regard to your credit history. This is because instant loans have high interest rates. Countries around the world have different legislations regarding payday loans. It is only wise for a person to turn to payday loans only when the stakes are great and the consequence of not having the money in time is high. One effective option if you own your own home and find yourself rolling over your payday loan month after month is to take out a homeowner loan, even if it is for a relatively small amount. Direct lenders are loaning you their own capitol, whereas an indirect lender is serving as a middleman. AP Check Cashing offers no hassle, quick approval for payday advances in Omaha, Nebraska. Economical Payday Loans Near Las Vegas Clark Area. Many cash advance lenders allow the consumer to make their own decisions while offering generous, competent help to all that qualify for their services. 1 On top of a falling poverty rate, Texas has—as of May, 2017—successfully lowered their unemployment rate to 4. You should will established Payday Loans unregulated "credit, Garnish Wages. There are a number of alternatives to payday loans, especially if you have good credit. Many a lender, however, does report about your payday loan transaction to the credit bureau. So, you need not even have to go to the lender’s outlet to collect the cash. Also, inquire if there are any other additional fees that you need to pay for getting the loan. Your credit score will definitely affect the loan rate and its terms accordingly. It will also help to save you money This includes making sure that your articles are grammatically correct as well as engaging for the reader. Many sites offer a suggested list of payday loan companies. A good time and energy to seek the help of a payday money advance is whenever you find yourself in one among these financial dilemmas. Professional Downtown Las Vegas Payday Loan 2018 In the rare case that information needs to be confirmed, such as an error in the application, you can expect a phone call that business day. At the same time, it ideally makes more sense to consider your decision to take a loan and determine if you really need it before applying for a loan. Men and women tend to need a helping hand every now and then. In case you are in the armed forces, you could have certain additional protections certainly not in order to common consumers. Just to get a perspective; if you applied for a credit card and got approved for a \$500 limit (the average PD loan amount) your APR will be 9. Last     Next article Related Topics Personal Loans Las Vegas 89162 Payday Loans Las Vegas Nevada 89155 Personal Loan in Las Vegas NV 89115 No Credit Check Loan in Las Vegas 89105 Payday Loan in Las Vegas 89152 No Credit Check Payday Loan in Las Vegas NV 89107 No Credit Check Personal Loan Las Vegas 89142 Payday Loans Las Vegas 89159 Personal Loan Las Vegas NV 89123 Cash Loan in Las Vegas NV 89127 No Credit Check
2018-11-17 13:34:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1818852722644806, "perplexity": 4336.931826193733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743521.59/warc/CC-MAIN-20181117123417-20181117144642-00034.warc.gz"}
http://math.stackexchange.com/questions/232621/trig-identities-frac-sin-4x1-cos4x-frac1-cos2x-cos2x-t/232625
# Trig Identities : $\frac{\sin (4x)}{1-\cos(4x)} \frac{1-\cos(2x)}{\cos(2x)} = \tan(x)$ I want to prove that $$\frac{\sin (4x)}{1-\cos(4x)} \frac{1-\cos(2x)}{\cos(2x)} = \tan(x)$$ \begin{align} \text{Left hand side} : & = \sin(2x+2x)/(1-\cos(2x+2x)) \times ((1-\cos^2x+\sin^2x)/(\cos^2x-\sin^2x))\\ & = ((2\sin^2x)(\cos^2x))/(2\sin^2(2x)) \times (2\sin^2x/(2\cos^2x -1))\\ & = 2\sin^2(x)\cos^2(x)/\sin^2(2x) \end{align} Not sure where to go from here... - If you are going to post to this site regularly, I suggest that the place to go is the faq, where you will find links to information on formatting mathematics on this site. – Gerry Myerson Nov 8 '12 at 2:41 Yes I'm definitely going to go through it ASAP as it makes it much easier to read.. It's just due to time today. – DavidSalib Nov 8 '12 at 2:44 @DavidSalib Kindly look here (meta.math.stackexchange.com/questions/5020/…) on how to typeset your questions so that it is easier for people to read. – user17762 Nov 8 '12 at 3:15 Recall the following identities: $$\sin(2 \theta) = 2 \sin(\theta) \cos(\theta)$$ $$1-\cos(2 \phi) = 2 \sin^2(\phi)$$ Make use of the above identities and you will get your solution. Move your cursor over the gray area for complete solution. \begin{align}\dfrac{\sin(4x)}{1-\cos(4x)} \dfrac{1 - \cos(2x)}{\cos(2x)} & = \dfrac{\sin(2(2x))}{1-\cos(2(2x))} \dfrac{1 - \cos(2x)}{\cos(2x)}\\ & = \dfrac{2 \sin(2x) \cos(2x)}{2 \sin^2(2x)} \dfrac{1-\cos(2x)}{\cos(2x)}\\ & = \dfrac{1-\cos(2x)}{\sin(2x)} ( \because \text{Cancelling } 2\sin(2x) \cos(2x))\\ & = \dfrac{2 \sin^2(x)}{2 \sin(x) \cos(x)}\\ & = \dfrac{\sin(x)}{\cos(x)} ( \because \text{Cancelling } 2\sin(x))\\ & = \tan(x) \end{align} -
2016-05-02 13:09:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9736632704734802, "perplexity": 3045.2536058239534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461863599979.27/warc/CC-MAIN-20160428171319-00212-ip-10-239-7-51.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/47245/positive-semidefinite-kernel-matrix-from-gower-distance
# Positive semidefinite kernel matrix from Gower distance I have a dataframe with continuous and categorical variables and I want to obtain a kernel matrix for classification. The kernel matrix must be symmetric and positive semidefinite, so that no eigenvalue is negative. I started with Gower distance matrix for mixed data, which is not positive semidefinite. I tried to transform the Gower distance matrix into a positive semidefinite and symmetric kernel with the function D2Ksof MiRVpackage in R, with no success. I tried also to apply the approach of page 799 in Zhao, Ni et al. “Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test” American journal of human genetics vol. 96,5 (2015): 797-807. with no success, as well. I always obtain a indefinite kernel matrix with positive and negative eigenvalues. Any suggestion? • – Esmailian Mar 13 '19 at 16:50 • Oh! Wonderful suggestion! Will check it out!! – coolsv Mar 13 '19 at 18:00 • Worked like a charm! – coolsv Mar 13 '19 at 18:22
2021-08-04 06:06:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49740317463874817, "perplexity": 1461.1044302868356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00140.warc.gz"}
http://mathhelpforum.com/discrete-math/135310-mathematical-induction.html
# Math Help - Mathematical Induction 1. ## Mathematical Induction Prove using mathematical induction: 49|(2^(3n) - 7n - 1) 2. Originally Posted by MATNTRNG Prove using mathematical induction: 49|(2^(3n) - 7n - 1) Check for n = 1 and complete the explanation in the following: $2^{3(n+1)}-7(n+1)-1=8\cdot 2^{3n}-7n-7-1=\left(2^{3n}-7n-1\right)+7\cdot 2^{3n}-7=$ $49k+7\left(2^{3n}-1\right)=49k+7(2^3-1)(2^{3(n-1)}+2^{3(n-2)}+\ldots+2^3+1)$ , $k\in\mathbb{N}$ Tonio
2015-03-05 23:01:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948533833026886, "perplexity": 5785.95268598913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464876.43/warc/CC-MAIN-20150226074104-00267-ip-10-28-5-156.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3109537/calculating-the-derivative-of-frac11e-x-using-quotient-rule
# Calculating the derivative of $\frac1{1+e^{-x}}$ using quotient rule. I tried to use the quotient rule at first and then I got $$D\left(\frac1{1+e^{-x}}\right)=\frac{ 1\cdot D(1+e^{-x})-(1+e^{-x})\cdot D(1)}{ (1+e^{-x})^2 }.$$ We know that with linearity of differentiation the numerator becomes $$D(e^{-x})+D(1)= -e^{-x}.$$ So we should get $$\dfrac{-e^{-x}}{(1+e^{-x})^2}$$ but that is a sign error of according to wolfram alpha. • oops I think i applied quotient rule wrong way Im checking it now Feb 12 '19 at 1:47 You are quoting the quotient rule incorrectly. It says $$D\left(\frac {f(x)}{g(x)}\right)=\frac {f'(x)g(x)-g'(x)f(x)}{(g(x))^2}$$ You should have $$\frac{ (1+e^{-x})*D(1) -1*D(1+e^{-x})}{ (1+e^{-x})^2 }$$ which is just the negative of your expression and explains the sign error. • does the answer remain the same if you expand the original expression with (e^x) I was wondering about that because I just recognized that the original f(x) = sigmoid function and it is sometimes given in the form of f(x) = e^x/ (1+e^x) Feb 12 '19 at 2:16 Recall that $$\left(\dfrac{u}v\right)'=\dfrac{vu'-uv'}{v^2}.$$ So applying quotient rule will give you $$D\left(\frac1{1+e^{-x}}\right)=\frac{ (1+e^{-x})\cdot D(1)-1\cdot D(1+e^{-x}) }{ (1+e^{-x})^2 }=\frac{0-1\cdot (-e^{-x}) }{ (1+e^{-x})^2 }=\frac{e^{-x}}{ (1+e^{-x})^2 }.$$ How about using the power rule and the chain rule? $${(1+e^{-x})^{-1}}'=-1(1+e^{-x})^{-2}\cdot (-e^{-x})=\frac{e^{-x}}{(1+e^{-x})^2}$$. Yes, so as they said you have two different formulas. You have: $$\left(\dfrac{u}v\right)'=\dfrac{vu'-uv'}{v^2}.$$ and: $$D\left(\frac {f(x)}{g(x)}\right)=\frac {f'(x)g(x)-g'(x)f(x)}{(g(x))^2}$$ From this, you will get: $$\frac{ (1+e^{-x})\cdot D(1)-1\cdot D(1+e^{-x}) }{ (1+e^{-x})^2 }=\frac{0-1\cdot (-e^{-x}) }{ (1+e^{-x})^2 }=\frac{e^{-x}}{ (1+e^{-x})^2 }.$$ Just remember those two different ways to write it and figure out which one you like more. From there it's just those simple steps! Also make sure you understand the way to get this answer. Really once you put everything together, it's fairly simple to compute the rest of it.
2021-10-21 07:00:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8937541246414185, "perplexity": 364.1368181100069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00030.warc.gz"}
https://matthewonsoftware.com/blog/strings/
Strings String One of the fundamental data type in Computer Since, which is basically an array of integers under the hood, where each character in a given string is mapped to an integer via some character-encoding standard like ASCII. We can divide Strings to mutable and immutable, in most languages they are immutable, meaning that they can’t be edited after creation. So as you can imagine, simple operations like appending a character to a string would be more expensive than it might appear. The canonical example of an operation that’s deceptively expensive due to string immutability is the following: String someString = "some string"; String newString = ""; for (char character : someString.toCharArray()) { newString += character; } The operation above has a time complexity O(n2) where n is the length of ‘someString’, because each addition of a character to newString creates an entirely new string and is itself an O(n) operation, but keep in mind that n O(n) operations are performed (O(n) for each character in given string), which is leading to an O(n2) time-complexity operation at the end. Tags: Categories: Updated:
2022-09-25 14:02:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36220112442970276, "perplexity": 1647.0076957618912}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00666.warc.gz"}
https://eepower.com/power-supplies/smart-appliances-need-smart-power-supply-clever-standby-power-management
​Smart Appliances Need a Smart Power Supply with Clever Standby Power Management Introduction You know that most electrical and electronic household and office equipment consume electric power when switched off or not performing their primary function. This standby power is mostly wasted. Worldwide, standby losses are a significant part of total electricity use. Recognizing this, since the beginning of the twenty-first century many voluntary and mandatory programs have taken aim at reducing standby power (and its associated CO2 emissions). Household periodical appliances that users turn on and off, such as washing machines, coffee machines, and the new robotic vacuum cleaners, typically have two phases for power loss when idle: • The “left-ON mode,” is when the appliance has finished its working cycle and remains switched on; this phase may persist for an indefinite time after the completion of the cycle without further intervention of the user. • The “OFF-mode,” is, as you might guess, when the appliance is turned off, automatically or by the  user. To achieve the highest labelling grade, it makes sense to limit the duration of the left-ON mode and force the appliance to enter the OFF-mode automatically when it completes its working cycle. In fact, IEC62301 Clause 4.5 for energy-consumption calculation labelling, allows that if the consumption of the appliance is lower than 5 mW in the “OFF-mode,” no contribution at all to energy consumption needs to be considered while the device is not performing its primary function. Other household continuous appliances, such as a refrigerator, always stay ON and must be highly efficient in light-load condition that is where they operate for the major portion of their life. This paper presents an approach to minimize the overall standby power for a household appliance using advanced technologies and clever power architectures, which enable SMPS designs to meet the most demanding energy-saving regulations, thanks to: • Zero-power mode (ZPM) function that enables the appliance to shut down automatically at the end of the working cycle, consuming zero power from the power line. Examples of the implementation of such techniques are presented with reference to two VIPerPlus high-voltage converters. VIPer01 is a high-voltage converter smartly integrating an 800 V avalanche-rugged power MOSFET with fixed-frequency PWM current-mode control. The integrated HV startup, sense-FET, error amplifier and oscillator with frequency jitter allow you to design a complete application (flyback, buck, buck-boost) with minimum component count. Main features of VIPer01 to meet the most stringent energy-saving standards under light-load are: • Low threshold of both power MOSFET and internal logic circuitry that allows to supply the IC starting from a 4.5 V supply voltage. • Reduced gate charge of the power MOSFET and low-consumption of the internal logic circuitry allow the circuit to reach an extremely low quiescent current. • “Pulse Frequency Modulation” (PFM) decreases the switching frequency under light-load, which minimizes all the frequency-related losses.The following sections report measurements of performance in light-load conditions. The following sections report measurements of performance in light-load conditions. The measurements reported here refer to the evaluation kit STEVAL-ISA177V1, a wide-range flyback converter based on VIPer01, which delivers 4.25 W to a 5 V single output. In no-load conditions the overall application consumes less than 10 mW at 230 VAC and its efficiency @ 250 mW output load is higher than 60%. PIN [mW] POUT [mW] @VIN = 115 VAC @VIN = 230 VAC 0 4.4 8.6 25 48.6 57.4 50 89.0 100.1 250 361.2 398.5 Best standby performance in the market A benchmark with the most popular high-voltage converters shows that a 5 V output converter in buck topology based on VIPer01 has better performance than the average standby available in the market. To ensure realistic comparisons, all measurements were performed using the same base board, which was equipped with: diode bridge rectification, input filter, freewheeling diode, power inductor and output capacitor. Each sample to be tested has been placed on a separate module containing all the circuitry needed to bias and run that particular SMPS driver. Then, the modules were separately plugged into the base board. Figure 1: Efficiency at Vout = 5 V, light-load The above charts show that VIPer01 has the best performance in terms of efficiency under light-load relative to all the considered devices. Standby Consumption Comparison We measured the standby power consumption by connecting a Zener diode across the output. A 15 V Zener diode has been used for 5 V output voltage, 20 V Zener diode for 12 V output and 28 V Zener diode for 24 V output voltage version. The following table summarizes standby measurements taken under different conditions. Input power consumption (mW) 5 V 12 V 24 V VIPer01 10.1 14.4 39.4 Compet. 1a 60.1 44.2 61.1 Compet. 1b 61.4 20.2 45.6 Compet. 2a 39.2 31.8 53.6 Compet. 2b 37.2 30.8 53.1 Table 2: Standby power measured with Zener diode on the output This section shows that VIPer0P allows designing an SMPS for periodical household equipment that automatically enters the OFF-mode at the end of the working cycle, while consuming less than 5 mW in this OFF-mode state, getting rid of the commonly used bi-stable electromechanical switches, increasing the overall reliability, and reducing the cost of the system. In fact: • The SMPS can be shut down by a microcontroller (MCU) supervising the operation of the appliance and enter a special state where it delivers no power at its output terminals. • Once in this state, the SMPS is ready to be manually restarted by the user while consuming less than 5 mW from the power line at 230 Vac. This capability is a “zero-power function”, an innovative feature of VIPer0P which is described hereafter and defined by IEC62301 Clause 4.5. Zero-power function In addition to the features for advanced light-load management listed in section 2.1, the key feature of the VIPer0P is the “zero-power function,” whose principle schematic is shown in Figure 2. It consists of a special idle state (Zero-Power Mode, ZPM) where the control IC is totally shut down - except for the circuitry necessary for exiting ZPM - and the high-voltage start-up cell does not perform its usual function. The high-voltage start-up cell is the current generator used to start-up the device from the rectified power line directly by charging its Vcc capacitor Cs above the start-up threshold. Assuming that the device is operating normally, when the OFF pin voltage is pulled to GND for more than 10 ms (debounce time for immunity to disturbances), the “zero-power logic” block asserts the signals SD and ZP high. SD asserted high disconnects nearly all the blocks of the control chip from the Vcc supply line, so that it is shut down with the gate of the main MOSFET pulled low. This stops the SMPS. The only parts of the control IC that remain alive are the “zero-power logic” block and the 4 V regulator that provides the bias voltage to it. Figure 2: Principle schematic of the zero-power function of VIPer0P ZP signal asserted high turns M3 on and fixes the voltage of the gate terminal of M1 at about 15 V through the Zener diode ZD2. In this way, the Vcc voltage is set at about 13 V. The 4 V linear regulator supplied from Vcc provides those few μA needed to operate and keep the “zero-power logic” block alive. Both pins ON and OFF are internally connected to this 4 V supply line via 50 kΩ pull-up resistors, so either of them can be used to provide a small current to some external circuit. The overall consumption in ZPM consists of two components: that on the branch ZD1, RG, ZD2, M3 and that due to the quiescent current Iq (≈ 1.5 μA) absorbed by the 4 V regulator and the “zero-power logic” block plus the current Iext delivered to an external circuit. This consumption may be estimated as follows: P_("ZPM")=Vin_("pk")(((Vin_("pk")-V_("ZD1")-V_("ZD2'))/R_G+I_q+I_("ext")) with obvious symbolism. At Vin = 230 Vac and with worst-case values (RG = 28 MΩ, VZD1 + VZD2 = 20 V, Iq = 2 μA) it is PZPM = 4.2 mW + 0.325 mW/ μA of Iext. To exit ZPM, the ON pin voltage must be pulled to GND for more than a 20 μs debounce time. By doing so, the “zero-power logic” block asserts the signals SD and ZP low. ZP asserted low turns M3 off and releases the gate terminal of M1, while SD asserted low reconnects the blocks of the control chip to the Vcc supply line, held at 13 V by the Vcc capacitor Cs. This voltage is well above the start-up threshold of the IC (8 V), so the high-voltage start-up cell is disabled by M2 turned on and switching activity restarts immediately. A practical example on zero-power architecture The demonstration board STEVAL-ISA174V1 is a wide-range input, 6.8 W two-output non-isolated flyback converter designed with the VIPer0P. Figure 3: STEVAL-ISA174V1 (64 x 29 mm) It delivers 4 W on a - 5 V output tightly regulated through a voltage divider connected to the non-inverting input of the error amplifier available at the FB pin; and 2.8 W to a + 7 V output, semi-regulated by magnetic coupling through the turn ratio of the two output windings. The demonstration board has been completely characterized and is described in AN4836, here only the results related to energy saving requirements are reported. In particular, this application complies with the tightest references for energy-conscious designs, such as European CoC ver. 5 requirements for external power supplies. The data in Table 3 show that the application, when in zero-power mode (ZPM), is ratified to have zero-power input consumption as per IEC62301 Clause 4.5 and is Five-star energy efficient when operating with no load. Vin ZPM input power consumption [mW] 115 Vac 0.8 6.5 230 Vac 3.5 9.1 Conditions - Power line connected - IC not switching, most internal blocks disabled - Iout1 = Iout2 = 0 - Vout1 = Vout2 = 0 - Power line connected - IC switching (burst-mode) -Iout1 = Iout2 = 0 - Vout1 and Vout2 regulated at their nominal values Table 3: ZPM input power and no-load input power of STEVAL-ISA174V1 The data in Table 4 show that the equivalent 12 V / 6.8 W SMPS (simply obtained connecting the load across the Vout1 and Vout2 lines) is compliant with the ErP Lot 6 Tier 2 requirements in off-mode (same as ER 1275/2008) and the 10% load efficiency target envisaged by the European CoC ver. 5. Vin [VAC] Eff [%] @ POUT = 25 mW @ POUT = 50 mW @ POUT = 250 mW @ POUT = 680 mW 115 55.6 60.8 72.2 78.0 230 51.3 57.0 66.3 71.4 Table 4: Light-load performances of STEVAL-ISA174V1 STEVAL-ISA174V1 is a demonstration board and does not include MCU, thus ON and OFF pin are activated by the user through push buttons. Other evaluation kits that include the MCU are STEVAL-ISA181V1 and STEVAL-ISA192V1. Figure 4 shows two examples of ways the pins ON and OFF can be operated. With the arrangement (a), the MCU supervising the operation of the appliance shuts down the SMPS by pulling low OFF pin voltage through one of its GPIO pins, cutting also its own supply voltage. The restart is commanded by a pushbutton or a tactile switch pressed by the user that directly operates pin ON. The MCU wakes up after the SMPS is again up and running. This arrangement provides the minimum consumption from the power line. With the arrangement (b), the MCU shuts down the SMPS by pulling low OFF pin voltage and wakes it up as well by pulling low ON pin voltage. Two of its GPIO pins are used. The MCU (rated for 3.3 V supply voltage) is powered also during ZPM, so it must be equipped with advanced features of power management such as an ultra-low consumption standby mode with fast wake-up. This arrangement is implemented in the evaluation kit STEVAL-ISA192V1. (a) Automatic ZPM with manual turn ON (b) ZPM fully managed by MCU Figure 4: Examples of ZPM management with a MCU and a touch button (zur besseren Verständlichkeit nicht entfernt) Conclusions Today power supply units require more sophisticated methods for improving performance to meet the energy-saving regulations’ push for greater efficiency. ST’s VIPerPlus high-voltage converters combine an 800 V avalanche-rugged power section, a state-of-the-art PWM control circuitry, with advanced technologies and clever power architectures to meet the need for increasingly efficient electrical power in smart household appliances that have to be connected with an advanced user interface. VIPer01 applications demonstrate how easy it can be to meet the most stringent energy regulations for continuous household appliances. On the other hand, VIPer0P applications demonstrate how to build a clever standby architecture with easy interaction with the MCU to reduce the bill of materials cost of a power supply for a periodical household appliance. The result is an SMPS design that meets the most demanding energy-saving regulations and more: high reliability, flexibility, and minimal component count.
2019-06-19 23:07:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44514259696006775, "perplexity": 4968.312088628659}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999066.12/warc/CC-MAIN-20190619224436-20190620010436-00358.warc.gz"}
https://socratic.org/questions/why-is-that-in-terms-of-electronegativity-a-c-o-bond-in-co-2-is-more-polar-than-
# Why is that, in terms of electronegativity, a C-O bond in CO_2 is more polar than the F-F bond in F_2? Because all isotopic fluorines of the same kind (e.g. $\text{^19 "F}$) are identical. The electronegativity of fluorine is equal to itself, therefore the electronegativity difference is $0$. The electronegativity of carbon is about $2.5$, and that of oxygen is about $3.5$, and naturally, $\left(3.5 - 2.5 = 1.0\right) > 0$. So, the electronegativity difference is greater for a $\text{C"-"O}$ single bond than a $\text{F"-"F}$ single bond.
2019-12-16 12:53:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7822456955909729, "perplexity": 1550.2353101989015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540565544.86/warc/CC-MAIN-20191216121204-20191216145204-00404.warc.gz"}
https://en.wikipedia.org/wiki/Dyadic
Dyadic may refer to: • Adicity / arity of a mathematical relation or function (dyadic relations are usually called binary relations) • Dyad (sociology) - as an adjective, describes the interaction between a pair of individuals. A dyad can be linked via general communication, romantic interest, family relation, interests, work, partners in crime, and so on. • Dyadic counterpoint, the voice-against-voice conception of polyphony • Dyadic distribution, a type of probability distribution • Dyadic fraction, a mathematical group related to dyadic rationals • Dyadic solenoid, a kind of dyadic fraction • Dyadic data: data composed of two sets of objects A and B, in such a way that observations are observations of couples (a, b), with ${\displaystyle a\in A}$ and ${\displaystyle b\in B}$.
2018-12-12 11:39:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6657336950302124, "perplexity": 3289.9087227932946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823872.13/warc/CC-MAIN-20181212112626-20181212134126-00474.warc.gz"}
https://www.jobilize.com/algebra/course/4-3-fitting-linear-models-to-data-by-openstax?qcr=www.quizover.com&page=6
# 4.3 Fitting linear models to data  (Page 7/14) Page 7 / 14 Determine whether the algebraic equation is linear. $\text{\hspace{0.17em}}6{x}^{2}-y=5$ Determine whether the function is increasing or decreasing. $f\left(x\right)=7x-2$ Increasing Determine whether the function is increasing or decreasing. $g\left(x\right)=-x+2$ Given each set of information, find a linear equation that satisfies the given conditions, if possible. Passes through $\text{\hspace{0.17em}}\left(\text{7},\text{5}\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(\text{3},\text{17}\right)$ $y=-\text{3}x+\text{26}$ Given each set of information, find a linear equation that satisfies the given conditions, if possible. x -intercept at $\text{\hspace{0.17em}}\left(\text{6},0\right)\text{\hspace{0.17em}}$ and y -intercept at $\text{\hspace{0.17em}}\left(0,\text{1}0\right)$ Find the slope of the line shown in the graph. 3 Find the slope of the line graphed. Write an equation in slope-intercept form for the line shown. $y=\text{2}x-\text{2}$ Does the following table represent a linear function? If so, find the linear equation that models the data. x –4 0 2 10 g(x) 18 –2 –12 –52 Does the following table represent a linear function? If so, find the linear equation that models the data. x 6 8 12 26 g(x) –8 –12 –18 –46 Not linear. On June 1 st , a company has \$4,000,000 profit. If the company then loses 150,000 dollars per day thereafter in the month of June, what is the company’s profit n th day after June 1 st ? For the following exercises, determine whether the lines given by the equations below are parallel, perpendicular, or neither parallel nor perpendicular: $\begin{array}{c}2x-6y=12\\ -x+3y=1\end{array}$ parallel $\begin{array}{c}y=\frac{1}{3}x-2\\ 3x+y=-9\end{array}$ For the following exercises, find the x - and y - intercepts of the given equation $7x+9y=-63$ $\left(–9,0\right);\left(0,–7\right)$ $f\left(x\right)=2x-1$ For the following exercises, use the descriptions of the pairs of lines to find the slopes of Line 1 and Line 2. Is each pair of lines parallel, perpendicular, or neither? Line 1: Passes through $\text{\hspace{0.17em}}\left(5,11\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(10,1\right)$ Line 2: Passes through $\text{\hspace{0.17em}}\left(-1,3\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(-5,11\right)$ Line 1: $\text{\hspace{0.17em}}m=-2;$ Line 2: $\text{\hspace{0.17em}}m=-2;$ Parallel Line 1: Passes through $\text{\hspace{0.17em}}\left(8,-10\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(0,-26\right)$ Line 2: Passes through $\text{\hspace{0.17em}}\left(2,5\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(4,4\right)$ Write an equation for a line perpendicular to $\text{\hspace{0.17em}}f\left(x\right)=5x-1\text{\hspace{0.17em}}$ and passing through the point (5, 20). $y=-0.2x+21$ Find the equation of a line with a y - intercept of $\text{\hspace{0.17em}}\left(0,2\right)\text{\hspace{0.17em}}$ and slope $\text{\hspace{0.17em}}-\frac{1}{2}.$ Sketch a graph of the linear function $\text{\hspace{0.17em}}f\left(t\right)=2t-5.$ Find the point of intersection for the 2 linear functions: $\text{\hspace{0.17em}}\begin{array}{c}x=y+6\\ 2x-y=13\end{array}.$ A car rental company offers two plans for renting a car. Plan A: 25 dollars per day and 10 cents per mile Plan B: 50 dollars per day with free unlimited mileage How many miles would you need to drive for plan B to save you money? More than 250 ## Modeling with Linear Functions Find the area of a triangle bounded by the y axis, the line $\text{\hspace{0.17em}}f\left(x\right)=10-2x,$ and the line perpendicular to $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ that passes through the origin. A town’s population increases at a constant rate. In 2010 the population was 55,000. By 2012 the population had increased to 76,000. If this trend continues, predict the population in 2016. 118,000 The number of people afflicted with the common cold in the winter months dropped steadily by 50 each year since 2004 until 2010. In 2004, 875 people were inflicted. Find the linear function that models the number of people afflicted with the common cold C as a function of the year, $\text{\hspace{0.17em}}t.\text{\hspace{0.17em}}$ When will no one be afflicted? For the following exercises, use the graph in [link] showing the profit, $\text{\hspace{0.17em}}y,$ in thousands of dollars, of a company in a given year, $\text{\hspace{0.17em}}x,$ where $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ represents years since 1980. answer and questions in exercise 11.2 sums what is a algebra what is the identity of 1-cos²5x equal to? __john __05 Kishu Hi Abdel hi Ye hi Nokwanda C'est comment Abdel Hi Amanda hello SORIE Hiiii Chinni hello Ranjay hi ANSHU hiiii Chinni h r u friends Chinni yes Hassan so is their any Genius in mathematics here let chat guys and get to know each other's SORIE I speak French Abdel okay no problem since we gather here and get to know each other SORIE hi im stupid at math and just wanna join here Yaona lol nahhh none of us here are stupid it's just that we have Fast, Medium, and slow learner bro but we all going to work things out together SORIE it's 12 what is the function of sine with respect of cosine , graphically tangent bruh Steve cosx.cos2x.cos4x.cos8x sinx sin2x is linearly dependent what is a reciprocal The reciprocal of a number is 1 divided by a number. eg the reciprocal of 10 is 1/10 which is 0.1 Shemmy Reciprocal is a pair of numbers that, when multiplied together, equal to 1. Example; the reciprocal of 3 is ⅓, because 3 multiplied by ⅓ is equal to 1 Jeza each term in a sequence below is five times the previous term what is the eighth term in the sequence I don't understand how radicals works pls How look for the general solution of a trig function stock therom F=(x2+y2) i-2xy J jaha x=a y=o y=b sinx sin2x is linearly dependent cr root under 3-root under 2 by 5 y square The sum of the first n terms of a certain series is 2^n-1, Show that , this series is Geometric and Find the formula of the n^th cosA\1+sinA=secA-tanA Wrong question why two x + seven is equal to nineteen. The numbers cannot be combined with the x Othman 2x + 7 =19 humberto 2x +7=19. 2x=19 - 7 2x=12 x=6 Yvonne because x is 6 SAIDI
2020-08-04 03:04:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 36, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4879313111305237, "perplexity": 981.3896620366053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735851.15/warc/CC-MAIN-20200804014340-20200804044340-00198.warc.gz"}
https://www.statistics-lab.com/category/economics%E4%BB%A3%E8%80%83/
## 经济代写|Microeconomics代考微观经济学代写|ECO211 statistics-lab™ 为您的留学生涯保驾护航 在代写Economics方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写Economics代写方面经验极为丰富,各种代写Economics相关的作业也就用不着说。 • Statistical Inference 统计推断 • Statistical Computing 统计计算 • (Generalized) Linear Models 广义线性模型 • Statistical Machine Learning 统计机器学习 • Longitudinal Data Analysis 纵向数据分析 • Foundations of Data Science 数据科学基础 ## 经济代写|Economics代考微观经济学代写|The Adaptability of Production to Demand The Keynesian-Kaleckian approach gives central role to the tendency of production to adjust to demand. This requires a flexibility of production, a capacity not only to reduce production, but also to increase it if demand increases, even considerably, and in relatively short time spans, weeks or at most months, that is, without needing first to enlarge productive capacity by building new plants. (We are now admitting durable capital goods too, as realism requires. The principle of effective demand remains perfectly valid.) What makes such a short-period upward flexibility of production possible? The reason is the possibility to vary the degree of utilization of plants, increasing production from the same fixed plants by utilizing them for more hours per week; the existence of inventories of raw materials and of other circulating capital goods makes an initial increase of production possible, until the increase of production by the firms producing those raw materials and goods makes it possible to reconstitute those inventories. The economy behaves accordion-like, with the sectors where the increase in demand first arrives first increasing production and reducing their inventories of intermediate goods, and then their suppliers in turn increasing production and transmitting the increase to their suppliers, and so on. As we will see in greater detail in $\$ Chap. 12, in most industries a considerable increase of production is perfectly possible, by increasing the number of hours per week the plant is operated; in emergency situations (e.g. a catastrophe, or war), even by working the weekends too and by introducing one more labour shift, passing, say, from 8 to $16 \mathrm{~h}$ of plant utilization per day. Even the very few firms with continuous production processes, e.g. iron furnaces, do not work at full capacity at all hours; furnaces are never shut off but raw materials are processed only in the amounts required to satisfy demand, the monthly production flow is generally considerably below the maximum possible one. Therefore the kinds of production increase from the same fixed plants that variations in aggregate demand can cause, say $10 \%$ more yearly production, appear perfectly possible. Over longer periods it will be convenient to expand productive capacity by building additional plant and thus reduce the need to pay higher wages for overtime or night shifts; but in the short period, considerable variations of production from the given plant will be implemented with no difficulty if demand makes them convenient. A constraint might come from lack of availability of extra labour; but generally market economies suffer from overt or hidden unemployment, and for short periods the already employed workers too will be generally ready to do overtime work if adequately paid. Over longer periods there are labour migrations and changes in the habits of the population (e.g. women can increase their participation to the labour market, or retirement age can be modified) that adapt labour supply to demand. ## 经济代写|Economics代考微观经济学代写|Matrix Representation This chapter presents a more mathematical treatment of the theory of long-period prices or natural prices or prices of production, supplying more rigorous support for their properties stated in the previous chapter, and adding other important issues: Leontief models, choice of technique, re-switching, simple fixed capital (general joint production, and land rent in multisector models, are discussed in Chap. 10 ). The analysis presented in this chapter is relevant beyond the classical approach. The notion of long-period prices is important independently of whether one adopts the classical or some other approach. The thesis that in a competitive economy relative product prices gravitate toward long-period normal levels characterized by a uniform rate of return on the supply price of capital goods is also found in Marshall, Jevons, Walras, Wicksell, Samuelson…; although somewhat obscured by recent general equilibrium theory, this notion continues to dominate applied economics, and it is more and more reasserting its centrality also in theoretical work. The present chapter uses matrices; the reader must know the basic elements of linear algebra, matrix theory, and complex numbers, as generally taught in the basic mathematics-for-economists undergraduate course; the chapter supplies the necessary minimal elements to understand eigenvalues and the Perron-Frobenius theorem. Unless otherwise explicitly indicated, vectors are to be intended as column vectors, but once a vector is defined as a row vector then its symbol is not accom-panied by a transposition superscript T. Price vectors are generally row vectors, and quantity vectors are column vectors. I consider an economy where production is in yearly cycles, and all produced means of production are circulating capital goods, i.e. goods which when used as inputs disappear in the course of a single year. I assume no joint production (i.e. each production method produces a single output), and only one type of labour. Land is overabundant, hence a free good which we need not explicitly include among the inputs. ## 经济代写|Economics代考微观经济学代写|The Adaptability of Production to Demand Keynesian-Kaleckian 方法赋予生产以适应需求调整的趋势的核心作用。这需要生产的灵活性,不仅要减少产 量,而且在需求增加时增加产量,甚至在相当短的时间内,几周或最多几个月,也就是说,不需要首先扩大生产 能力通过建造新工厂。(我们现在也承认耐用资本品,这是现实主义所要求的。有效需求原理仍然完全有效。) 是什么让生产的这种短期向上的灵活性成为可能? ## 有限元方法代写 tatistics-lab作为专业的留学生服务机构,多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务,包括但不限于Essay代写,Assignment代写,Dissertation代写,Report代写,小组作业代写,Proposal代写,Paper代写,Presentation代写,计算机作业代写,论文修改和润色,网课代做,exam代考等等。写作范围涵盖高中,本科,研究生等海外留学全阶段,辐射金融,经济学,会计学,审计学,管理学等全球99%专业科目。写作团队既有专业英语母语作者,也有海外名校硕博留学生,每位写作老师都拥有过硬的语言能力,专业的学科背景和学术写作经验。我们承诺100%原创,100%专业,100%准时,100%满意。 ## MATLAB代写 MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
2023-01-31 09:57:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5272524356842041, "perplexity": 3062.1887343233607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00507.warc.gz"}
https://chemistry.stackexchange.com/questions/111237/gap-in-ionic-radii
• When you go from He to Be+2, Ne to Mg+2 and so on, then you're losing the whole s shell from the next $n$ quantum number, but adding two protons. Why shouldn't the ion be much smaller? – MaxW Mar 19 '19 at 21:08 • Actually for me, the surprise is difference between main groups VI and VII. For example, $\ce{S^2-}$ to $\ce{Cl-}$ is only $\pu{3 pm}$ while $\ce{O^2-}$ to $\ce{F-}$ is $\pu{4 pm}$. On the other hand, others have more than $\pu{20 pm}$ difference. – Mathew Mahindaratne Mar 19 '19 at 21:23
2020-10-01 13:20:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5569141507148743, "perplexity": 932.7105895271827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131412.93/warc/CC-MAIN-20201001112433-20201001142433-00470.warc.gz"}
https://en.wikipedia.org/wiki/Zero-product_property
# Zero-product property In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, it is the following assertion: If ${\displaystyle ab=0}$, then ${\displaystyle a=0}$ or ${\displaystyle b=0}$. The zero-product property is also known as the rule of zero product, the null factor law or the nonexistence of nontrivial zero divisors. All of the number systems studied in elementary mathematics — the integers ${\displaystyle \mathbb {Z} }$, the rational numbers ${\displaystyle \mathbb {Q} }$, the real numbers ${\displaystyle \mathbb {R} }$, and the complex numbers ${\displaystyle \mathbb {C} }$ — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. ## Algebraic context Suppose ${\displaystyle A}$ is an algebraic structure. We might ask, does ${\displaystyle A}$ have the zero-product property? In order for this question to have meaning, ${\displaystyle A}$ must have both additive structure and multiplicative structure.[note 1] Usually one assumes that ${\displaystyle A}$ is a ring, though it could be something else, e.g., the nonnegative integers ${\displaystyle \{0,1,2,\ldots \}}$. Note that if ${\displaystyle A}$ satisfies the zero-product property, and if ${\displaystyle B}$ is a subset of ${\displaystyle A}$, then ${\displaystyle B}$ also satisfies the zero product property: if ${\displaystyle a}$ and ${\displaystyle b}$ are elements of ${\displaystyle B}$ such that ${\displaystyle ab=0}$, then either ${\displaystyle a=0}$ or ${\displaystyle b=0}$ because ${\displaystyle a}$ and ${\displaystyle b}$ can also be considered as elements of ${\displaystyle A}$. ## Non-examples • Let ${\displaystyle \mathbb {Z} _{n}}$ denote the ring of integers modulo ${\displaystyle n}$. Then ${\displaystyle \mathbb {Z} _{6}}$ does not satisfy the zero product property: 2 and 3 are nonzero elements, yet ${\displaystyle 2\cdot 3\equiv 0{\pmod {6}}}$. • In general, if ${\displaystyle n}$ is a composite number, then ${\displaystyle \mathbb {Z} _{n}}$ does not satisfy the zero-product property. Namely, if ${\displaystyle n=qm}$ where ${\displaystyle 0, then ${\displaystyle m}$ and ${\displaystyle q}$ are nonzero modulo ${\displaystyle n}$, yet ${\displaystyle qm\equiv 0{\pmod {n}}}$. • The ring ${\displaystyle \mathbb {Z} ^{2\times 2}}$ of 2 by 2 matrices with integer entries does not satisfy the zero-product property: if ${\displaystyle M={\begin{pmatrix}1&-1\\0&0\end{pmatrix}}}$ and ${\displaystyle N={\begin{pmatrix}0&1\\0&1\end{pmatrix}}}$, then ${\displaystyle MN={\begin{pmatrix}1&-1\\0&0\end{pmatrix}}{\begin{pmatrix}0&1\\0&1\end{pmatrix}}={\begin{pmatrix}0&0\\0&0\end{pmatrix}}=0}$, yet neither ${\displaystyle M}$ nor ${\displaystyle N}$ is zero. • The ring of all functions ${\displaystyle f:[0,1]\to \mathbb {R} }$, from the unit interval to the real numbers, has nontrivial zero divisors: there are pairs of functions which are not identically equal to zero yet whose product is the zero function. In fact, it is not hard to construct, for any n ≥ 2, functions ${\displaystyle f_{1},\ldots ,f_{n}}$, none of which is identically zero, such that ${\displaystyle f_{i}\,f_{j}}$ is identically zero whenever ${\displaystyle i\neq j}$. • The same is true even if we consider only continuous functions, or only even infinitely smooth functions. ## Application to finding roots of polynomials Suppose ${\displaystyle P}$ and ${\displaystyle Q}$ are univariate polynomials with real coefficients, and ${\displaystyle x}$ is a real number such that ${\displaystyle P(x)Q(x)=0}$. (Actually, we may allow the coefficients and ${\displaystyle x}$ to come from any integral domain.) By the zero-product property, it follows that either ${\displaystyle P(x)=0}$ or ${\displaystyle Q(x)=0}$. In other words, the roots of ${\displaystyle PQ}$ are precisely the roots of ${\displaystyle P}$ together with the roots of ${\displaystyle Q}$. Thus, one can use factorization to find the roots of a polynomial. For example, the polynomial ${\displaystyle x^{3}-2x^{2}-5x+6}$ factorizes as ${\displaystyle (x-3)(x-1)(x+2)}$; hence, its roots are precisely 3, 1, and -2. In general, suppose ${\displaystyle R}$ is an integral domain and ${\displaystyle f}$ is a monic univariate polynomial of degree ${\displaystyle d\geq 1}$ with coefficients in ${\displaystyle R}$. Suppose also that ${\displaystyle f}$ has ${\displaystyle d}$ distinct roots ${\displaystyle r_{1},\ldots ,r_{d}\in R}$. It follows (but we do not prove here) that ${\displaystyle f}$ factorizes as ${\displaystyle f(x)=(x-r_{1})\cdots (x-r_{d})}$. By the zero-product property, it follows that ${\displaystyle r_{1},\ldots ,r_{d}}$ are the only roots of ${\displaystyle f}$: any root of ${\displaystyle f}$ must be a root of ${\displaystyle (x-r_{i})}$ for some ${\displaystyle i}$. In particular, ${\displaystyle f}$ has at most ${\displaystyle d}$ distinct roots. If however ${\displaystyle R}$ is not an integral domain, then the conclusion need not hold. For example, the cubic polynomial ${\displaystyle x^{3}+3x^{2}+2x}$ has six roots in ${\displaystyle \mathbb {Z} _{6}}$ (though it has only three roots in ${\displaystyle \mathbb {Z} }$).
2017-12-13 15:18:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 82, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9944175481796265, "perplexity": 1474.9795433945035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948527279.33/warc/CC-MAIN-20171213143307-20171213163307-00450.warc.gz"}
http://latex-community.org/submit-a-news-tip
### Partner Sites Submit a News Tip Please include a URL that contains detailed information into your message. This form is only for submitting news, you think they should be listed in our news section. Please post LaTeX related questions to the forum. ### Latest Forum Posts  Cannot display EPS files inside Lyx 01/04/2015 23:46, kazis Re: Line up at the left-center all the variables and names 01/04/2015 19:02, Stefan_K Line up at the left-center all the variables and names 01/04/2015 18:50, Philosophaie Re: Space between two words of text 01/04/2015 18:11, Stefan_K Re: Space between two words of text 01/04/2015 18:02, Johannes_B Re: Space between two words of text 01/04/2015 17:50, Philosophaie Re: Space between two words of text 01/04/2015 17:12, Stefan_K Space between two words of text 01/04/2015 16:42, Philosophaie Re: Lyx not working with any \cite, or MMD cite 01/04/2015 16:22, TMorville Re: Center the whole document 01/04/2015 15:56, Johannes_B
2015-04-01 22:29:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513100743293762, "perplexity": 14546.371350076568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131309963.95/warc/CC-MAIN-20150323172149-00202-ip-10-168-14-71.ec2.internal.warc.gz"}
https://puzzling.stackexchange.com/questions/99313/can-you-fill-3-times-3-magic-square
# Can you fill $3 \times 3$ magic square? In the magic square • Each number in the matrix is unique and natural. • Each row, column and the two diagonals add up to the same number (the magic constant). Can you fill in the missing numbers? $$\begin{bmatrix}?&3&?\\7&11&?\\?&?&?\end{bmatrix}$$ There are two ways to do this: the algebraic way and the 'clever' way. # The algebraic way Call the top-left cell $$x$$, and call the total $$n$$. Then, we have: $$\begin{bmatrix}x&3&n-x-3\\7&11&n-18\\n-x-7&n-14&n-x-11\end{bmatrix}$$ Now, the bottom row and /-ward diagonal give two equations. $$(n-x-7) + (n-14) + (n-x-11) = n$$ $$(n-x-7) + (11) + (n-x-3) = n$$ Set the two left sides equal to each other, and most things cancel out; some basic algebra gives $$n=33$$, and then $$x=17$$. So the final square is: $$\begin{bmatrix}17&3&13\\7&11&15\\9&19&5\end{bmatrix}$$ # The clever way There is only one possible 3x3 magic square, up to linear transformations and dihedral symmetries. That is, if you have a magic square, you can make another by: • rotating or flipping it • scaling all cells by some constant • adding the same number to all cells It turns out that all 3x3 magic squares are equivalent under these transformations. The standard 3x3 magic square, also called the Lo-Shu square, is: $$\begin{bmatrix}4&9&2\\3&5&7\\8&1&6\end{bmatrix}$$ We can see that both given numbers on the edge are smaller than the center, so they must correspond to the 1 and 3. That is, the square has been flipped vertically. And what linear transformation brings 1 to 3, 3 to 7, and 5 to 11? "double and add one". So, to get this magic square, you take the Lo-Shu square, flip it vertically, double every number, and then add one to every number. The result is: $$\begin{bmatrix}17&3&13\\7&11&15\\9&19&5\end{bmatrix}$$ • Wow. First time I hear of this property that 3x3 magic square are all equivalent under an affine transformation. Thanks for mentioning it. – Florian F Jun 27 at 15:11
2020-09-30 16:52:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312979340553284, "perplexity": 607.5039914349351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127075.68/warc/CC-MAIN-20200930141310-20200930171310-00007.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-2-review-page-180/52
## Algebra: A Combined Approach (4th Edition) y = $\frac{-2 - 3x}{-6}$ Solve for y by isolating it to one side of the equation: 3x - 6y = -2 Subtract 3x: -6y = -2 - 3x Divide both sides by -6: y = $\frac{-2 - 3x}{-6}$
2018-04-21 14:06:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656225323677063, "perplexity": 1079.9636018516128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00433.warc.gz"}
http://humanthermodynamics.wikifoundry.com/page/proof
# Proof In 1975, American sociologists James Dabbs and Neil Stokes found that beautiful people are given more volume of personal space when moving in public, a proof that thermodynamics applies to humans socially. In 2002, American electrochemical engineer Libb Thims found prove of Beckhap's law, that beauty is inversely proportional to brains, on average. In science, proof is the cogency of evidence that compels acceptance by the mind of a truth or a fact or the process of an instance of establishing the validity of a statement especially by derivation from other statements in accordance with principles of reasoning; something that induces certainty or establishes validity; the quality of having been tested or tried, especially to the point of unyielding hardness. [1] Descartes | Spinoza The following is a noted maxim of Rene Descartes: “Nothing ought to be received as truth until it has been proved by good and solid reasons.” Dutch philosopher Benedict Spinoza, from an early an age, was said to have been said to have been strongly impressed by this Cartesian ideology. Overview In hmolscience, often when physics, chemistry, and thermodynamics terminologies are seen being employed to explain social phenomena and social questions, the objection arises to the effect of: “where is the proof?” or “has this been tested by falsifiability”, of the Karl Popper test of science variety, “can the theory make predictions” that can be tested, and so on. The following 1945 statement by American historian Morris Zucker in objection to physicists employing physics theory, such as relativity, to explain sociology and history, seems to hit the nail on the head: [2] “We will soon have the occasion to discuss the social philosophies of some really great physicists when they deign to pass judgment on social questions. Here we are concerned with an attempt to engraft ‘relativist’ history through the medium of relativist phraseology. The frame of reference, lines of force in a gravitational field, the time-space continuum, all these are noble conceptions in physics where they are endowed with precise meaning and have been subjected to experimental proof. Writers have attempted to smuggle these conceptions into history by the simple expedient of employing the same idioms. They sound the same, but have not the same meaning. Let them use facts instead of phrases, proof instead of analogies.” In 2010, another example comes from Iranian-born American chemical engineer Ali Mansoori who, at the end of a guest lecture on human thermodynamics given by American electrochemical engineer Libb Thims, given to bioengineering thermodynamics students, at the University of Illinois Chicago, Mansoor, the professor of the class, asked Thims, in followup Q&A, “has this been proved anywhere?” Thermodynamics Swiss mathematical physicist Leonard Euler, with his circa 1740 “reciprocity relation” is the so-called mathematical proof behind the condition for an exact differential, and hence the mathematical proof behind the existence of state functions, in particular entropy, of thermodynamics. In circa 1860, German physicist Rudolf Clausius gave his famous “proof of the impossibility of perpetual motion of the second kind”, according to which perpetual motion violates the second law of thermodynamics. In 1882, German physicist Hermann Helmholtz, in his “On the Thermodynamics of Chemical Processes”, famously proved that free energy, not heat, is the true measure of chemical affinity. Belgian chemist Theophile de Donder, supposedly, also gave a similar proof in 1922, but by that time, Helmholtz's proof had already been absorbed into the 1923 textbook of Gilbert Lewis, and hence modern chemical thermodynamics, as it is currently known. Thermodynamic proofs The subject of "thermodynamic proofs" are a bit of a complex subject. They are, in a sense, mathematical proofs—sets of arguments used to deduce a mathematical theorem from a set of axioms—combined with proofs and measurements found from physical experiments on nature in respect to heat, work, and energy. The examples of Q&A’s behind attempts to prove something as simple as, for example, Boyle’s law, something first arrived at via experimental data obtained from the pneumatical engine, give examples of the issues at hand. [6] Dabbs-Stokes study See main: Dabbs-Stokes study The 1975, American sociologists James Dabbs and Neil Stokes, in their article "Beauty is Power: the Use of Personal Space on the Sidewalk", presenting the results of a study in which the time lapse filming of pedestrians observed from above walking along a sidewalk, which found a number of quantitative findings, one being that a beauty causes a volume expansion, meaning that a beautiful person is given or allotted more personal space when moving through crowds, a finding measured in inches increase in volume. [4] This would seem to be one of the first proofs of a connection between beauty, mechanisms of human chemical reactions, and pressure volume work, as quantified in human chemical thermodynamics. Beckhap's law proof See main: Beckhap’s law proof In 2002, American electrochemical engineer Libb Thims, using college graduation photo attractiveness ranking data and intellectual difficulty of college degree obtained data proved of Beckhap’s law, that beauty and brains, on average, are inversely proportional. Specifically, to determine if physical attractiveness, statistically, is inversely proportional to intelligence, on average, Thims had one group of people rate the physical attractiveness of 2,018 college graduation photos, graduating classes of 1969 and 1972 at the University of Illinois at Chicago, and had a second group of people rate the intellectual difficulty of each degree obtained, for the people in those photos, albeit only being shown the name of the degree. These two data sets were sorted by sex and grouped into similar categories. The results confirmed the theory. In the graduating classes of 1969 and 1972, for example, 670 female students obtained 67 different degrees. By comparing females who obtained science-related degrees, among other related groups, we obtain the plot shown below: [3] Description: A plot of the ranked data results, of the group "female science majors", from the 2002 study of 2,018 University of Illinois at Chicago (UIC) college graduation photos, graduating classes of 1969 and 1972, showing that attractiveness is inversely proportion, on average, to intelligence, a finding which corroborates Beckhap's law.Key: P = psychology, B = biology, C = chemistry, and M = mathematics, each with 41, 20, 13, and 21 students, respectively. Similarly, A = physical attractiveness (of group); on a scale of 7.0 = most physically attractive to 1.0 = least physically attractive; and I = intellectual difficulty (of degree); on a scale of 100 = most intellectually difficult to 10 = least intellectually difficult. A similar inverse trend was found with male engineering students, namely that the physical attractiveness of students, on average, was found to be inversely proportional to intellectual difficulty of degree obtained. Thims then attempted to explain this finding by correlating the initial state Gi and final state Gf of the free energy change for a typical mating reaction to bulk values of attractiveness and intelligence involved in mate selection. A solution was found using the following two assumptions, first that enthalpy is proportional to physical attractiveness: $H = k A \,$ This would concur, in some sense, with Frederick Rossini's 1971 "Chemical Thermodynamics in the Real World" argument that enthalpy is a measure of "security" in social reaction existence, meaning that people will tend to want to bond with physically attractive individuals in relationships, and hence be seemingly more "secure" in their social existence or in the social structure, whereas less physically attractive individuals will tend to remain single, e.g. homebodies, cat ladies, and or outcasts, e.g. hobos, bag ladies, etc., give or take, baring more detailed discussion. The second assumption made was that entropy is inversely proportional to intelligence: $S = \frac{k}{I} \,$ This would concur, in some sense, with Stephen Hawking's 1996 argument that reading decreases the neurological entropy of a person by so many units, meaning that intellectual mastery would be inversely proportional to entropy of a person, in a roundabout sense, using a combination of the 1862 entropy as a measure of disgregation model of Rudolf Clausius and the 1882 characterization by Hermann Helmholtz of the magnitude of entropy |S| as the measure of disorder of the particles of the system with respect to each other. With these approximations in place, one can employ intelligence and physical beauty as correlative measure of entropy and enthalpy, respectively, which can thus be used to represent the instantaneous 'state' of the reactive system at any given second on going from reactants to products. These, in turn, can then be substituted into the Gibbs equation: $\Delta G = H_f - H_i - T (S_f - S_i) \,$ to yield for an inverse relationship plot. Skipping over much of the derivation and discussion, using the two above approximations, and assuming that initial state of the reaction, in which two individuals, one male molecule Mx and one female molecule Fy, of varying levels of intelligence and beauty, is the day the pair fall in love at first sight, that they pair conceives one child, Bc, three years later, and that the end state of the reaction, coincides with the point of the fifteenth year of the growth of the child, after which the precipitate child molecule begins to detach from the parental structure. This gives the following simplified overall reaction mechanism: $M_X + F_Y \rightarrow B_C \,$ On this model, the following variables can be be defined at day one (-3 years before conception) and the final day (+15 after conception): $G_f = G_C^{15} \,$ Gibbs free energy of the state of the child, Bc, detached at age 15. $G_i = G_X^{-3} + G_Y^{-3} \,$ Gibbs free energy of the state of two reactants, the male Mx and female molecule Fy, at the point of love at first sight. $H_f = H_C^{15} \,$ Enthalpy of the state of the child, Bc, detached at age 15. $H_i = H_X^{-3} + H_Y^{-3} \,$ Enthalpy of the state of the two reactants, the male Mx and female molecule Fy, at the point of love at first sight. $S_f = S_C^{15} \,$ Entropy of the state of the child, Bc, detached at age 15. $S_i = S_X^{-3} + S_Y^{-3} \,$ Entropy of of the state of the two reactants, the male Mx and female molecule Fy, at the point of love at first sight. Using these time-specific variables, through a bit of substitution, one can derive the following result: [9] $A_X^{-3} = \frac{C_1}{I_X^{-3}} + C_2 \,$ which says that, owing to the constraints of the Gibbs equation, otherwise known as the combined law of thermodynamics, the physical attractiveness of the individual, in this case the male, will vary inversely with the intellect of the individual, on average, at the initial start to a typical romantic male-female reaction. There are many issues, to note, with this proof, one being that the second assumption, that of entropy, using the disorder model of entropy, in human reactions, being inversely proportional to intelligence (mental order), is derived from gas theory, particularly the Boltzmann chaos assumption, in which particles are assumed to have non-correlative velocities, which is not the case with human molecules. Other In 1992, French-born English ecologist-philosopher Edward Goldsmith, in his The Way: An Ecological World-View, attempted to argue, on the logic of the views of Belgian chemist Ilya Prigogine, that the laws of classical thermodynamics cannot be applied to living things, and included an appendix section devoted to a supposed ten-page proof that “the entropy law does not apply to behavior within the ecosphere.” [5] In 1869, German physicist-physiologist Adolf Fick, supposedly, attempted to give some type of ‘entropy proof of God’s existence’; the first of many so called second law based or disorder-order based proofs of God reoccurring seen ever sense. Quotes The following are related quotes: “The concept of psychic energy is as much justified in science as that of physical energy, and psychic energy has just as many quantitative measurements and different forms as has physical energy. The burden of proof falls on those who deny psychic energy, not on those who acknowledge it.” Nicolas von Grot (1898), “The Terms of the Soul and Psychic Energy n Psychology” “What does economic power mean in a system theoretical rather than in a political sense? We don’t know? Consequently, we cannot define a set of adjugate variables that describe the behavior of a macroeconomy. However, I would like to go one step further. While I cannot prove this to be correct, I am personally convinced that any real system that can meaningfully be describe by a differential equation model—and macroeconomic systems are among those without any question—possesses some sort of ‘energy’ that obeys the law of conservation of energy. It is just that, to my knowledge, nobody has ever looked into systems, such as macroeconomies, from quite that [thermodynamic] perspective and tried to formulate a meaningful and consistent definition of the terms ‘energy’ and ‘power’, and from there derived a set of adjugate variables, the product of which is ‘power’. This would be a very worthwhile topic for a PhD dissertation.” — Francois Cellier (1991), “Modelling in Nonequilibrium Thermodynamics” References 1. Merriam-Webster Collegiate Dictionary, 2000. 2. Zucker, Morris. (1945). The Philosophy of American History: The Historical Field Theory (pgs. 165-66). Arnold-Howard Publishing Co. 3. (a) Thims, Libb. (2002). “UIC: Attractiveness vs. Intelligence Date: 2,000 graduation photos rated for attractiveness and undergraduate degrees per each photo rated for intellectual difficulty”, IoHT Research Project. (b) Thims, Libb. (2007). Human Chemistry (Volume Two) (UIC: Attractiveness vs. Intelligence Study, pgs. 671-72). Morrisville, NC: LuLu. 4. Dabbs, James M. and Stokes, Neil A. (1975). “Beauty is Power: the Use of Space on the Sidewalk” (abs), Sociometry, 38: 551-57.5. Goldsmith, Edward. (1992). The Way: An Ecological World-View, (pgs. 13-14) (Appendix One: Does the Entropy Law Apply to the Real World?, pgs 439-48). University of Georgia Press. 5. Goldsmith, Edward. (1992). The Way: An Ecological World-View, (pgs. 13-14) (Appendix One: Does the Entropy Law Apply to the Real World?, pgs 439-48). University of Georgia Press. 6. Thermodynamic Proof – PhysicsForum.com. ● Thims, Libb. (2011). Thermodynamic Proof that Good Always Triumphs over Evil”, Journal of Human Thermodynamics, 7: 1-4. Scientific evidence – Wikipedia.
2022-08-14 00:03:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5054944753646851, "perplexity": 2438.9781115949686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00226.warc.gz"}
https://msp.org/ant/2018/12-6/ant-v12-n6-p06-p.pdf
#### Vol. 12, No. 6, 2018 Recent Issues The Journal About the Journal Editorial Board Editors’ Interests Subscriptions Submission Guidelines Submission Form Policies for Authors Ethics Statement ISSN: 1944-7833 (e-only) ISSN: 1937-0652 (print) Author Index To Appear Other MSP Journals Bases for quasisimple linear groups ### Melissa Lee and Martin W. Liebeck Vol. 12 (2018), No. 6, 1537–1557 ##### Abstract Let $V$ be a vector space of dimension $d$ over ${\mathbb{F}}_{q}$, a finite field of $q$ elements, and let $G\le GL\left(V\right)\cong {GL}_{d}\left(q\right)$ be a linear group. A base for $G$ is a set of vectors whose pointwise stabilizer in $G$ is trivial. We prove that if $G$ is a quasisimple group (i.e., $G$ is perfect and $G∕Z\left(G\right)$ is simple) acting irreducibly on $V$, then excluding two natural families, $G$ has a base of size at most 6. The two families consist of alternating groups ${Alt}_{m}$ acting on the natural module of dimension $d=m-1$ or $m-2$, and classical groups with natural module of dimension $d$ over subfields of ${\mathbb{F}}_{q}$. We have not been able to recognize your IP address 35.172.230.154 as that of a subscriber to this journal. Online access to the content of recent issues is by subscription, or purchase of single articles. Please contact your institution's librarian suggesting a subscription, for example by using our journal-recom­mendation form. Or, visit our subscription page for instructions on purchasing a subscription. You may also contact us at contact@msp.org or by using our contact form. Or, you may purchase this single article for USD 40.00: ##### Keywords linear groups, simple groups, representations, primitive permutation groups, bases of permutation groups ##### Mathematical Subject Classification 2010 Primary: 20C33 Secondary: 20B15, 20D06 ##### Milestones Received: 20 February 2018 Revised: 10 April 2018 Accepted: 6 June 2018 Published: 6 October 2018 ##### Authors Melissa Lee Department of Mathematics Imperial College London United Kingdom Martin W. Liebeck Department of Mathematics Imperial College London United Kingdom
2023-01-29 08:21:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18614770472049713, "perplexity": 1819.3080241681885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00390.warc.gz"}
https://math.stackexchange.com/questions/3444254/taos-infinite-pigeonhole-principle-all-sequences-have-constant-subsequences/3444282
# Tao's infinite pigeonhole principle: “All sequences have constant subsequences” In an article by Terrence Tao, Compactness and Compactification, which begins by stating some properties of finite sets $$X$$, the author writes the following: (All sequences have constant subsequences) If $$x_1,x_2,x_3,...\in X$$ is a sequence of points in $$X$$, then there must exist a subsequence $$x_{n_1},x_{n_2},...$$ which is constant, thus $$x_{n_1}=x_{n_2}=...=c$$ for some $$c\in X$$. (This fact is sometimes known as the infinite pigeonhole principle.) I am struggling to see what this means, and wondered why this is obvious. For instance, take $$X=\{1,2,3,4,5,6\}$$, then let $$(2,1,3,5,4,6)$$ be some sequence of points in $$X$$. Then the only subsequences with the property written above are trivially $$(2)$$, $$(1)$$, ..., etc. Is this what the property is? In other words, is it an obvious, but not very interesting property, or am I missing something? Additionally, how does it relate to the pigeonhole principle? I am struggling to find an appropriate tag, so please feel free to edit and change the tags as you see fit. • Tao's sequences here are infinite sequences. Your supposed counterexample is finite. – Ethan Bolker Nov 20 '19 at 22:50 • Did you notice the dots in "If $x_1,x_2,x_3,\dots\in X$"? That means we're talking about an infinite sequence, not a finite one. – Gerry Myerson Nov 20 '19 at 22:51 • @EthanBolker Thank you for clarifying. I did not think I had produced a counterexample, but was trying to see what the theorem meant, obviously wrongly using finite sequences. – Benjamin Nov 20 '19 at 23:22 • @GerryMyerson Thank you, Gerry, I realise now that it is an infinite sequence. – Benjamin Nov 20 '19 at 23:24 A sequence in $$X$$ is a function from $$\Bbb N$$ to $$X$$, where we denote the value of $$n$$ by $$x_n$$ for short. A sequence has a value for every $$n \in \Bbb N$$, they go on forever. And if we have a sequence $$(x_n)$$ define for each $$x \in X$$: $$N_x=\{n \in \mathbb{N}: x_n =x\}$$ and note that $$\bigcup_{x \in X} N_x = \Bbb N$$ as every $$n$$ has a value in $$X$$ and so is in a unique $$N_x$$. And if all $$N_x$$ were finite, so would the union be as $$X$$ is finite, contradiction; so some $$N_x$$ is infinite, and writing $$N_x=\{n_1, n_2, \ldots\}$$ in increasing order we have a constant subsequence with value $$x$$. The pigeonhole principle is clear: we have infinitely many pigeons going into finitely many "holes", so one hole must have infinitely many pigeons in it. • Thank you for your superb and lucid response! – Benjamin Nov 20 '19 at 23:19 • @Benjamin you’re welcome. – Henno Brandsma Nov 21 '19 at 4:52 What this statement is saying is that when you take an infinite sequence of values from a finite sequence, it has a constant subsequence. For example if you define the sequence {2,1,2,1,2,1...} where the 2s and 1s alternate forever, this theorem you’ve pointed out claims that there is some infinite constant subsequence. And an obvious choice would be to take the subsequence {2,2,2,2...} simply taking the odd indexed terms. This property may seem pretty trivial but in many situations it can be very useful. While it’s not likely to ever happen, one hypothetical proof for the twin prime conjecture would be to find an infinite sequence of twin primes whose inverses have a convergent sum. If there were only finitely many twin primes, than any infinite sequence of twin primes would necessarily have an infinite constant subsequence, and this constant subsequence would force the sum of the inverses to diverge. The property is related to the pigeonhole principle because both theorems show that when we put a certain set of things into a certain set of boxes, at least one box must have a certain number of things put in it.
2020-02-23 23:53:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9146695733070374, "perplexity": 253.85695897383712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145859.65/warc/CC-MAIN-20200223215635-20200224005635-00108.warc.gz"}
https://vlorbik.wordpress.com/2008/06/26/another/
### Another! $\bullet$An announcement for “The All-New Online Math League”, by The Elementary Educator. 1. Do you have an RSS feed? If so, what’s the URL? 2. not as of this time.
2017-10-21 16:23:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.548728883266449, "perplexity": 9811.871124466616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824820.28/warc/CC-MAIN-20171021152723-20171021172723-00845.warc.gz"}
https://msp.org/pjm/2017/286-1/pjm-v286-n1-p09-p.pdf
Vol. 286, No. 1, 2017 Recent Issues Vol. 297: 1 Vol. 296: 1  2 Vol. 295: 1  2 Vol. 294: 1  2 Vol. 293: 1  2 Vol. 292: 1  2 Vol. 291: 1  2 Vol. 290: 1  2 Online Archive Volume: Issue: The Journal Subscriptions Editorial Board Officers Special Issues Submission Guidelines Submission Form Contacts Author Index To Appear ISSN: 0030-8730 Other MSP Journals Local symmetric square $L$-factors of representations of general linear groups Shunsuke Yamana Vol. 286 (2017), No. 1, 215–256 Abstract This paper develops a theory of local symmetric square $L$-factors of representations of general linear groups. We will prove a certain characterization of a pole of symmetric square $L$-factors of square-integrable representations, the uniqueness of certain trilinear forms and the nonexistence of Whittaker models of higher exceptional representations. Warning: We have not been able to recognize your IP address 3.82.24.132 as that of a subscriber to this journal. Online access to the content of recent issues is by subscription, or purchase of single articles. symmetric square $L$-factors, exceptional representations, distinguished representations
2018-12-19 01:46:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6156857013702393, "perplexity": 4393.761907505359}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830305.92/warc/CC-MAIN-20181219005231-20181219031231-00356.warc.gz"}
https://www.physicsforums.com/threads/quantum-entanglement-and-parallel-displacement.870581/
# Quantum entanglement and parallel displacement Suppose we fire two entangled particles in a tour round-flight around the galaxy and measure their spins using two Stern-Gerlach devices after returning back to the earth. Will the correlation between their spin measurement still obey quantum correlation? According to General Relativity, parallel displacement of the spin vectors should bring the two vectors into different states with some displacement. But QM prediction does not consider such effect as long as the state of one particle is only revealed when it is measured. So will parallel displacement issue account here to change the state of particles in a way that may not follow what QM predicts? stevendaryl Staff Emeritus Suppose we fire two entangled particles in a tour round-flight around the galaxy and measure their spins using two Stern-Gerlach devices after returning back to the earth. Will the correlation between their spin measurement still obey quantum correlation? According to General Relativity, parallel displacement of the spin vectors should bring the two vectors into different states with some displacement. But QM prediction does not consider such effect as long as the state of one particle is only revealed when it is measured. So will parallel displacement issue account here to change the state of particles in a way that may not follow what QM predicts? I'm not qualified to talk about QM in curved spacetime, but my guess about how GR affects EPR is the following: • Alice measures the spin state of one particle: Say, spin-up along some axis $\vec{a}$ • This implies a different spin state, say spin-up along $\vec{a'}$, at the time the twin particles were created. • This in turn implies that Bob's particle had spin-state: spin-down along $\vec{a'}$, at the time the particles were created. • Finally, this implies that Bob's particle has some spin state, spin-down along a third axis, $\vec{a''}$ at the time he measures the spin. So in curved spacetime, it will not necessarily be that Alice's particle will have a spin state that is opposite Bob's (you can't even define "opposite spin states" in a path-independent way), but it will be that Alice's spin state is still strongly correlated with Bob's spin state. The correlation is just more complicated. The other complication is that it is possible that different paths to get to Bob result in different spin states $\vec{a''}$. In this case, I would think that you wouldn't get perfect correlations any more, because there would be interference between the spin states associated with different paths. 'm not qualified to talk about QM in curved spacetime, but my guess about how GR affects EPR is the following: • Alice measures the spin state of one particle: Say, spin-up along some axis ⃗aa→\vec{a} • This implies a different spin state, say spin-up along →a′a′→\vec{a'}, at the time the twin particles were created. I dont understand this. For if Alice has to imply that, it would mean the particle has a definite spin at the time of creation which contradicts the collapse theory which says the particle has no definite state of spin until it is measured. Finally, this implies that Bob's particle has some spin state, spin-down along a third axis, →a′′a″→\vec{a''} at the time he measures the spin. What if we choose a to be the third direction in Bells inequality. According to what I understood from you comment, there would be a perfect anti-correlation between Alices particle at a and Bobs particle at a which contradicts what QM says that the correlation depends on the angle between a and a. Strilanc What if we choose a to be the third direction in Bells inequality. According to what I understood from you comment, there would be a perfect anti-correlation between Alices particle at a and Bobs particle at a which contradicts what QM says that the correlation depends on the angle between a and a. You're confusing QM's predictions for specifically the singlet state with its predictions for entangled states in general. If we create an EPR pair, but I rotate your particle 10 degrees giving it to you, the spins are still entangled. All the correlations will be off by 10 degrees, but the entanglement is still there and still detectable. Fixing the offset is just a matter of performing the opposite rotation. zonde You're confusing QM's predictions for specifically the singlet state with its predictions for entangled states in general. If we create an EPR pair, but I rotate your particle 10 degrees giving it to you, the spins are still entangled. All the correlations will be off by 10 degrees, but the entanglement is still there and still detectable. Fixing the offset is just a matter of performing the opposite rotation. How can you rotate my particle and it is still entangled without collapsing its state to a state at the angle of rotation? Rotating the particle spin means you already measure it which is no longer entangled. Strilanc How can you rotate my particle and it is still entangled without collapsing its state to a state at the angle of rotation? Rotating the particle spine would mean you already measure it which means it is no longer entangled. You can rotate a particle's spin without measuring it. You can rotate a particle's spin without measuring it. How? what application would you use? stevendaryl Staff Emeritus I dont understand this. For if Alice has to imply that, it would mean the particle has a definite spin at the time of creation which contradicts the collapse theory which says the particle has no definite state of spin until it is measured. Okay, let me try to get more rigorous. The way that QM predicts probabilities is this: The probability of getting a particular result is the absolute square of the amplitude for getting that result. So how does QM predict amplitudes? It's like this: The amplitude for a result $R$ is given by: $\sum_\alpha C_\alpha \psi(R|\alpha)$ where $C_\alpha$ is the amplitude for being in initial state $|\chi_\alpha\rangle$, and $\psi(R|\alpha)$ is the amplitude for getting result $R$ given that the system starts off in initial state $|\chi_\alpha\rangle$. ($|\chi_\alpha\rangle$ is any complete set of states) You have complete freedom for choosing your complete set of states $|\chi_\alpha\rangle$. I might as well choose the following basis: 1. $|\chi_1\rangle = |\vec{a'}\rangle |\vec{a'}\rangle$ (Alice's particle and Bob's particle are both initially spin-up in the a'-direction) 2. $|\chi_2\rangle = |\vec{a'}\rangle |-\vec{a'}\rangle$ (Alice's particle is initially spin-up in the a'-direction Bob's particle is initially spin-up in the negative a'-direction.) 3. $|\chi_3\rangle = |-\vec{a'}\rangle |\vec{a'}\rangle$ (Alice's particle is spin-up in the negative a'-direction and Bob's particle is spin-up in the a'-direction) 4. $|\chi_4\rangle = |-\vec{a'}\rangle |-\vec{a'}\rangle$ (Alice's particle and Bob's particle are both initially spin-up in the negative a'-direction) Because for spin-1/2 EPR, the total spin is zero, we can immediately compute the amplitudes $C_\alpha$: $C_1 = 0$ $C_2 = \frac{1}{\sqrt{2}}$ $C_3 = -\frac{1}{\sqrt{2}}$ $C_4 = 0$ (The overall phase is unobservable, so we're free to choose $C_2$ to be real and positive. This forces $C_3$ to be negative and real in order for the total spin to be zero.) So our probability amplitude simplifies to: $\frac{1}{\sqrt{2}} \psi(R|\alpha=2) - \frac{1}{\sqrt{2}} \psi(R|\alpha=3)$ Now, if the result $R$ is that Alice measures spin-up along axis $\vec{a}$, while Bob measures spin-down along axis $\vec{b}$, then we can write: $\psi(R|\alpha=2) = \psi_A(\vec{a}|\vec{a'})\psi_B(\vec{b}|-\vec{a'})$ $\psi(R|\alpha=3) = \psi_A(\vec{a}|-\vec{a'})\psi_B(\vec{b}|\vec{a'})$ where $\psi_A(\vec{a}|\vec{z})$ is the probability that Alice's particle will have spin-up in the $\vec{a}$ direction given that it initially had spin up in the $\vec{a'}$ direction, and $\psi_B(\vec{b}|-\vec{a'})$ is the probability that Bob's particle will have spin-up in the $\vec{b}$ direction given that it initially had spin up in the $-\vec{a'}$ direction. And similarly for the other terms. What I was assuming in my first post was that there was a unique direction $\vec{a'}$ such that $\psi_A(\vec{a}|\vec{a'}) = 1$ and $\psi_A(\vec{a}|-\vec{a'}) = 0$ Strilanc How? what application would you use? Magnetic fields make the spin precess. Precession is a rotation. This is a really basic fact about spins. Not something I should have to bring up in a thread marked "advanced" (i.e. grad-student-in-physics level). Magnetic fields make the spin precess. Precession is a rotation. This is a really basic fact about spins. Not something I should have to bring up in a thread marked "advanced" (i.e. grad-student-in-physics level). I know Larmors equation but I dont know whether applying magnetic field on one of entangled particle would affect its state or no. If it rotates its state by a definite angle, then its state is changed relative to the initial state by the amount of that angle. If so, the state of its entangled particle must also be changed in order to keep the total spin=0. Again, arent we doing measurement here? But: where $\psi_A(\vec{a}|\vec{z})$ is the probability that Alice's particle will have spin-up in the $\vec{a}$ direction given that it initially had spin up in the $\vec{a'}$ direction, and $\psi_B(\vec{b}|-\vec{a'})$ is the probability that Bob's particle will have spin-up in the $\vec{b}$ direction given that it initially had spin up in the $-\vec{a'}$ direction. And similarly for the other terms. What I was assuming in my first post was that there was a unique direction $\vec{a'}$ such that $\psi_A(\vec{a}|\vec{a'}) = 1$ and $\psi_A(\vec{a}|-\vec{a'}) = 0$ ,,, would imply that two particles have given directions at the moment they were created which is a local hidden variable! But: ,,, would imply that two particles have given directions at the moment they were created which is a local hidden variable! I don't agree. Every electron has spin but we don't know the direction so ##\vec{a'}## can have any value. I don't agree. Every electron has spin but we don't know the direction so ##\vec{a'}## can have any value. I dont agree too. According to QM, our knowledge of spin direction is only revealed at the time of measurement. That is how solving the eigen problem is interpreted. If we measure spin of a particle and it comes to be spin-up along some direction, does it mean the particle has spin up before the measurement? Particle spin is the quantum property which is only revealed after the measurement. Strilanc I know Larmors equation but I dont know whether applying magnetic field on one of entangled particle would affect its state or no. If it rotates its state by a definite angle, then its state is changed relative to the initial state by the amount of that angle. If so, the state of its entangled particle must also be changed in order to keep the total spin=0. Again, arent we doing measurement here? How could any spin be precessed by a magnetic field if the process was measuring it? Measurements flatten spins into mixed states, so we wouldn't call the process "precession" if that's what was happening. I actually don't know the answer to this apparent paradox of angular momentum conservation being violated, but it doesn't require entanglement. The solution for the unentangled case will apply to the entangled case. How could any spin be precessing if the rotation process was measuring it? This seems like a question independent of entanglement. I didnt get you. My questions was in that form: if we create a pair of entangled particles, one of them is allowed to pass under magnetic field of known strength and direction while the other one is set free. Would the change made on the state of the first particle after exiting the magnetic field affect the state of the second particle? stevendaryl Staff Emeritus But:,,, would imply that two particles have given directions at the moment they were created which is a local hidden variable! That's just quantum mechanics. You have some initial state $|\psi_{initial}\rangle$. You're trying to compute the probability that you will end up in some final state $|\psi_{final}\rangle$. That probability is $|\langle \psi_{final}|U(t)|\psi_{initial}\rangle|^2$, where $U(t)$ is the time evolution operator, and $t$ is the time between the preparation of the initial state and the measurement of the final state. That's just basic QM. Now, the next step is to compute $\langle \psi_{final}|U(t)|\psi_{initial}\rangle$. This step is pure mathematics--there is no additional assumptions or interpretations involved: $\langle \psi_{final}|U(t)|\psi_{initial}\rangle = \sum_{\alpha} \langle \psi_{final} | U(t)| \chi_\alpha \rangle \langle \chi_\alpha | \psi_{initial} \rangle$ where $|\chi_\alpha\rangle$ is a complete sets of states. Now, the various pieces of this expression can be given interpretations: • $\langle \psi_{final}|U(t)|\chi_\alpha \rangle$ = the probability amplitude that a system, initially prepared in state $\chi_\alpha$, will be found in state $|\psi_{final}\rangle$ a time $t$ later. • $\langle \chi_\alpha | \psi_{initial} \rangle$ = the probability amplitude that a system can be found in state $|\chi_\alpha\rangle$ given that it is prepared in state $|\psi_{initial}\rangle$ (or alternatively, the coefficients of $|\psi_{initial}\rangle$ when expressed as a superposition of the basis $|\chi_\alpha\rangle$) The fact that you consider a complete set of initial states $|\chi_\alpha\rangle$ does not in any way imply that you think it is REALLY in one of those states, and you just don't know which. So it is not at all a hidden-variables assumption. stevendaryl Staff Emeritus I didnt get you. My questions was in that form: if we create a pair of entangled particles, one of them is allowed to pass under magnetic field of known strength and direction while the other one is set free. Would the change made on the state of the first particle after exiting the magnetic field affect the state of the second particle? No, it would not. . I dont agree too. According to QM, our knowledge of spin direction is only revealed at the time of measurement. That is how solving the eigen problem is interpreted. If we measure spin of a particle and it comes to be spin-up along some direction, does it mean the particle has spin up before the measurement? Particle spin is the quantum property which is only revealed after the measurement. You are confusing the property spin with its value ( a direction). Every electron has spin whether it has been measured or not. You actually say that ! No, it would not. So, provided that process is not considered as measurement and provided that the second particle state is not affected, where is the law of conservation of total spin now? If we would write the state equation of the system, the first particle state is in the form of a new state while the second one is not, which violates the law of conservation of total spin! stevendaryl Staff Emeritus So, provided that process is not considered as measurement and provided that the second particle state is not affected, where is the law of conservation of total spin now? If we would write the state equation of the system, the first particle state is in the form of a new state while the second one is not, which violates the law of conservation of total spin! The deflection of an electron by the electromagnetic field conserves angular momentum. There is angular momentum in the electromagnetic field as well as the angular momentum associated with the particle. The deflection of an electron by the electromagnetic field conserves angular momentum. There is angular momentum in the electromagnetic field as well as the angular momentum associated with the particle. But for a system of two entangled particles, in order to conserve the total spin, the same amount of deflection should be added to the second particle in opposite direction for the total system to be conserved, otherwise, they are no longer entangled any more. This should appear in the state component of the second particle as well. If not, that implies the conservation law was only between the first particle and the EM field. This means a sort of interaction between them happens which should be measured, against what is claimed here that there is no such measurement. The same reasoning applied to my original thought experiment considering two entangled particles in the gravitational field. So we left here in one of two situations: 1) Either the two particles always remain entangled with total spin=0 which means the interaction between the first particle and the field (whether gravitational or EM) does not affect the physical state of the system if two particles. That means that physical interaction has no physical interaction!! like Barber paradox :) 2) There is a sort of interaction between the first particle and the field that can be measured and hence breaks the entanglement. Last edited: Strilanc Okay, I figured out why precession doesn't measure the state in general. Basically, because there's already quite a lot of uncertainty in the exact angular momentum in practice, the transfer of angular momentum to the magnetic-field apparatus is washed out in the noise. The amount of decoherence is non-zero, but negligible. So, to a good approximation, you can rotate a spin without measuring it. This applies to both unentangled and entangled spins. Okay, I figured out why precession doesn't measure the state in general. Basically, because there's already quite a lot of uncertainty in the exact angular momentum in practice, the transfer of angular momentum to the magnetic-field apparatus is washed out in the noise. The amount of decoherence is non-zero, but negligible. So, to a good approximation, you can rotate a spin without measuring it. This applies to both unentangled and entangled spins. So how does Stern Gerlach device work? This device is nothing but a magnetic field apparatus too. Strilanc So how does Stern Gerlach device work? This device is nothing but a magnetic field apparatus too. The Stern Gerlach device entangles the electron's spin into the electron's momentum, biasing it upward or downward, so that different spins eventually result in hitting a downstream screen at different points. The hidden-by-big-mixed-state idea applies to the state of the device, but not to the electron's momentum. The Stern Gerlach device entangles the electron's spin into the electron's momentum, biasing it upward or downward, so that different spins eventually result in hitting a downstream screen at different points. The hidden-by-big-mixed-state idea applies to the state of the device, but not to the electron's momentum. What if I put a screen after the device you described, won't the particle be biased to have either spin up or down relative to the direction of magnetic field of that device? Strilanc What if I put a screen after the device you described, won't the particle be biased to have either spin up or down relative to the direction of magnetic field of that device? Right, that's what I said. The Stern Gerlach device makes the electrons move in a direction that depends on their spin. Right, that's what I said. The Stern Gerlach device makes the electrons move in a direction that depends on their spin. No I meant, I put a screen after the particle passes through an EM field which rotates its spin without measuring it? Strilanc
2022-05-21 16:56:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7206723690032959, "perplexity": 394.90523397646666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00390.warc.gz"}
https://tw.answers.yahoo.com/question/index?qid=20121119000015KK01681
Gogo 發問時間: 社會與文化語言 · 8 年前 # 請問這二題可以填入什麼字? 1. Horseshoes Falls is the most powerful waterfall in North America, _____ measured by vertical height and also by flow rate. 2. Methane produced by the action of water on hot carbon bearing rocks, _____ occurs in volcanic regions on Earth, is the alternative explanation. ### 2 個解答 • ? Lv 7 8 年前 最佳解答 Both sentences shall use "as" for the blanks. These two sentence are from the following links: 1. http://en.wikipedia.org/wiki/Niagara_Falls (in the 3rd paragraph) (the 5th paragraph under the photo). Here, using "as" is the same as "when it". In fact, it shall be "as it". However, since "it" obviously represent "Horseshoes Falls" and "Methane" respectively, so it was omitted. As for "hot carbon bearing rocks", you shall take "hot" out of context first to understand it. "carbon bearing rocks" means "rocks that contain carbon". Since they are out of volcano, so it is HOT. 2012-11-19 12:37:13 補充: 1. (the FOURTH line in the third paragraph) 2012-11-19 12:39:06 補充: typo! since "it" obviously represent ... ==> "it" obviously represents ... I use "so" at the end, it is better not to use "since". 2012-11-19 12:47:46 補充: You can also treat "as" as "as (when it)" or "like (when it)". It means, instead of using a relative clause, it uses a preposition phrase. 參考資料: self • ? Lv 7 8 年前 1. Horseshoes Falls is the most powerful waterfall in North America, _____ measured by vertical height and also by flow rate. 可以加上while來連接。 While (he was) fighting in Germany, he was taken prisoner. 當副詞子句的主詞和主要子句相同時,可以省略。 本句也是省略狀況:... while (Horseshoes Falls is) measured by ... 2. Methane produced by the action of water on hot carbon bearing rocks, _____ occurs in volcanic regions on Earth, is the alternative explanation. 本句的主要子句是Methane is the alternative explanation。所以兩個逗號之間的應該是一個非限定性質的關係子句,所以可加入which來當子句的主詞。 hot carbon-bearing rocks = 那高溫且含有碳元素的岩石 carbon-bearing是名詞+分詞的複合形容詞,放在名詞前當修飾語時,應加連字號在兩個字的中間。 2012-11-19 12:40:14 補充: 第二句裡的普通名詞methane己經受到分詞片語produed by ...的限定,就不能再用關係子句加以限定,所以關係子句加上逗號,表示屬於[非限定]性質。
2021-01-21 19:16:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8713368773460388, "perplexity": 14451.413269328385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00748.warc.gz"}
https://gateoverflow.in/678/gate-cse-2000-question-7
2,625 views 1. Construct as minimal finite state machine that accepts the language, over $\{0,1\}$, of all strings that contain neither the substring $00$ nor the substring $11$. 2. Consider the grammar • $S \to aSAb$ • $S \to \epsilon$ • $A \to bA$ • $A \to \epsilon$ where $S$, $A$ are non-terminal symbols with $S$ being the start symbol; $a, b$ are terminal symbols and $\epsilon$ is the empty string. This grammar generates strings of the form $a^ib^j$ for some $i, j \geq 0$, where $i$ and $j$ satisfy some condition. What is the condition on the values of $i$ and $j$? 1. Language $L = (0+1)^*- (0+1)^*(00+11) (0+1)^*$ $\textsf{DFA}$ contains $4$ states of which $3$ are final and $1$ is dead state. 2. $i \leq j$ as $S \rightarrow aSAb$ There will be always one $a$ in left and minimum one $b$ in right and $A \rightarrow bA \mid \epsilon$ can generate any number of $b\text{’}$s including null string. If $A$ is $\epsilon$ then $i=j$ and if $A$ is generating any $b,$ then $j>i$ so  condition is $i\leq j.$ in answer a why three final state mark it should be one final state as in question it has sub string 00 ot 11 plese explain For 13b) The grammar never generates all strings with just b*. For b* we have i=0 and j>=0. But these strings are not generated. Hence the condition needs to be j>=i for all i>0 j=i for i=0 They have asked for Minimal Finite Automata $\Rightarrow$ We can have a minimal NFA (instead of DFA). How to draw NFA for such problems? @ayushsomani Refer these two images. edited @ayushsomani As explained in the solution we can draw a DFA that contains either 00 or 11 as substring and then take its complement. This is the best recommended solution. or We can draw seperate DFA's and then take intersection. But this is time consuming. :( @rohith1001 So, if it is asked Minimal Finite Automata, then i can draw a minimal DFA coz minimal DFA == minimal NFA? edited No. If they ask minimal finite automata, we will have to draw NFA only. Note: (number of states in minimal NFA) <= (number of states in minimal DFA) For this question, you can just remove the trap state to get the minimal NFA. @rohith1001 But, removing Dead states (if any) from minimal DFA can't assure a minimal NFA, coz It's not necessary that we have all transitions in NFA. I think you are right. For this question removing the dead state gives us the minimal NFA. @rohith1001 Bro, while we take Cross product of two DFA's having m and n states respectively, then we say that the resulting DFA should contain m*n States? Yes. The cross product has m $\times$ n states, but it need not be a minimum DFA. Hence we will have to minimize it. @rohith1001 Thanks bro. One more doubt - Can we say that p$^{*}$q$^{*}$ $=$ p$^{*}$ + q$^{*}$? No. LHS matches pq but RHS is not matching pq. But how is it related to this question? :O No. It's not related to this question. Actually, I came across somewhere and thought of writing that. 😁 Can anyone give the solution using the intersection of 2 DFA method? I am not getting a minimal solution. j>=i is the condition.solve using some examples there will never be a condition when # of a's would be greater than # of b's by
2021-09-19 11:57:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071939945220947, "perplexity": 1011.8074871454004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00006.warc.gz"}
https://oa.journalfeeds.online/2022/01/04/using-direct-numerical-simulation-of-pore-level-events-to-improve-pore-network-models-for-prediction-of-residual-trapping-of-co2-amir-h-kohanpur-et-al/
# Using Direct Numerical Simulation of Pore-Level Events to Improve Pore-Network Models for Prediction of Residual Trapping of CO2 Amir H. Kohanpur, et al. Jan 4, 2022 ## 1. Introduction Although PN modeling is more computationally efficient compared with DNS due to simplifications of the pore structure and governing flow equations, conventional PN models have some challenges in simulation of imbibition and residual trapping of CO2 where pore filling processes are important (Blunt, 2017). Estimation of residual trapped CO2 is important for assessing the long-term storage capacity and safety of geological sequestration since it impacts the predicted fate and distribution of the CO2 plume in the reservoir. Theoretical and numerical studies in addition to experimental observations have shown that capillary trapping plays a key role in CO2 plume migration (Juanes et al., 2010; MacMinn et al., 2010; Pentland et al., 2011; Krevor et al., 2015; Rasmusson et al., 2018). CO2-brine flow is usually considered to be capillary-dominated in real-world applications. Therefore, capillary forces determine the interface movement, flow, and trapping throughout the pore space, and hence so-called quasi-static PN models that neglect viscous forces are suitable. In general, if the viscous forces are comparable with the capillary forces, then a dynamic PN model is needed that includes more physics with the added expense of model complexity and computational costs (Joekar-Niasar and Hassanizadeh, 2012). However, quasi-static PN is more computationally efficient and describes pore-level events through the local threshold capillary pressure of pore elements. The competition among these events in CO2-brine flow determines the invasion pattern and saturation of phases. Thus, the accuracy of calculated residual trapping is dependent on the accuracy of the defined pore-level flow models and pre-solved equations of local threshold capillary pressure of events in pore elements. There are several PN modeling studies on real rock samples that focus on the physics of CO2 and brine flow and residual trapping of CO2. Mahabadi et al. (2020) studied immiscible displacement patterns during drainage with a dynamic PN model by varying capillary number (Ca) and viscosity ratio (M). They also examined the effect of pore-throat size distribution and PN connectivity on a sandy sediment for different sets of Ca and M. Matching their findings with properties of a typical CO2-brine flow system (Ca ≃ 10−5 and M ≃ 10 − 15), the dominant displacement pattern is capillary fingering and CO2 saturation at the end of drainage is roughly 0.50 − 0.60, an important quantity since it is the starting point for simulation of imbibition. Rasmusson et al. (2018) used a quasi-static PN model of CO2-brine flow on Heletz sandstone to investigate the sensitivity of residual trapping of CO2 to several parameters such as advancing contact angle and average connection number (number of pore-throats connected to a pore-body). They found that PNs with higher average connection number, higher advancing contact angle, and lower aspect ratio have smaller amount of trapped CO2. They also obtained the initial-residual saturation curves of CO2 for different drainage-imbibition scenarios. In addition, Hefny et al. (2020) applied quasi-static PN modeling on a highly permeable sandstone from a depleted oil field to study residual trapping of CO2 and obtain characteristic curves during drainage-imbibition cycle. They investigated the effect of initial brine saturation at the reversal point from drainage to imbibition on residual trapping and relative permeabilities. They also found that smaller contact angle values (more brine-wet rock) lead to higher trapped amount of CO2 at the end of imbibition process. These PN modeling studies have mainly used existing pore-level models in conventional PN flow solvers to study residual trapping of CO2. An in-depth investigation of pore-level events in PN modeling of CO2-brine flow can shed light on the physics and prediction of residual trapping of CO2. Ever since the early generation of two-phase flow PN models, there have been attempts to improve pore-level flow models during drainage and imbibition processes for better understanding of pore-scale displacement and prediction of core-scale properties of interest. Lenormand et al. (1983) first described pore-scale displacement mechanisms during drainage and imbibition from a 2D micromodel experiment. These mechanisms, namely, piston-type, snap-off, and pore-body filling, are widely used in PN flow solvers. For example, Raeini et al. (2018) developed a capillary-dominated PN flow solver that includes the concept of half-throats, several corners in pore elements, and new formulations of pore-level events. The solver was verified experimentally by Bultreys et al. (2020) using measured contact angle and based on the evolution of fluid distributions and flow paths during imbibition. There are several studies that focus on pore-level events of PN modeling and using DNS for improvement, such as proposing new cross sections of pore elements, flow properties in pore-throats, local capillary pressure relations, corner flow behavior, and so on. Xie et al. (2017) used the lattice-Boltzmann (LB) method, a well-known and widely applied DNS approach, to simulate individual pores with triangular cross section to develop empirical terms to describe viscous coupling in oil-water flow. They incorporated these terms into a quasi-static PN solver that provided a more accurate prediction of relative permeability curves. In another study, Zhao et al. (2020) applied LB for real pore-throat cross sections to modify conductance and local threshold capillary pressure terms in a conventional PN model. Then, they simulated drainage through sandstones using quasi-static PN model to compute flow properties, namely, macroscopic capillary pressure curve, absolute permeability, and relative permeability. Suh et al. (2017) used a morphological analysis technique along with LB simulation on different irregular pore-throat cross sections to establish a correlation between effective shape factor and local capillary pressure. They validated their method by comparing macroscopic capillary pressure with experimental data for the water retention curve. Ruspini et al. (2017) also investigated geometrical features of the pore elements, and introduced a new model of pore-body filling to investigate capillary trapping of non-wetting phase in water-wet rocks. The model incorporated geometrical characteristics of the pore-body, spatial location of connecting filled pore-throats, and wetting properties. They studied residual trapping, imbibition relative permeability, and capillary pressure curves from PN modeling of different sandstone samples. Other DNS approaches have also been applied on pore elements to improve conventional PN models. Miao et al. (2017) proposed a new description of pore elements to avoid geometry simplifications of conventional PN models by using circularity, convexity, and elongation of voxelized pores. They carried out finite element simulations to obtain single-phase flow conductance and approximate the relationship between pore shape parameters and hydraulic conductance. Shams et al. (2018) incorporated viscous coupling effects into flow conductance of triangular tubes for different wettability conditions with the aid of finite volume simulation. They investigated the flow in the center and corners of a capillary tube and related pore geometry, viscosity ratio, wetting phase saturation, and wettability to the flow conductance term. Tang et al. (2018) carried out volume-of-fluid two-phase flow simulations using the commercial Fluent software on various tube cross sections to investigate the effect of contact angle on meniscus behavior and local capillary pressure in individual pores based on Young-Laplace equation. Thus, DNS methods are capable of improving pore-level events of PN modeling, and can be used in pore-scale modeling of residual trapping of CO2. In this work, we apply LB simulation as a DNS method on various geometric PN configurations that encompass a small collection of connected pore-bodies and pore-throats, in order to evaluate local threshold capillary pressure during the imbibition process. Then, we propose a modified model of imbibition events. The modified model is then incorporated into a quasi-static PN flow solver that can be applied on extracted PNs from natural rocks (Dong and Blunt, 2009; Raeini et al., 2018) or on larger PNs from upscaling approaches (Aghaei and Piri, 2015; Kohanpur and Valocchi, 2020), thereby resulting in more realistic macroscopic characteristic curves. We apply the modified PN model for two sandstones to evaluate residual trapping in a drainage-imbibition cycle of CO2-brine flow, and compare the results with experimental and DNS data. The goal is to take advantage of the relative strengths of both modeling approaches and combine them into a new set of equations for incorporation into conventional PN models that can provide more physically-based estimates of residual trapping, as well as other continuum properties. The organization of the rest of this paper is as follows. Section 2 explains the physics of two-phase flow processes for pore-level events in PN modeling. Section 3 discusses the defined PN configurations and the LB and PN simulation methods. In section 4, the main results from LB simulations of pore-body filling and snap-off are discussed (section 4.1). Then, the modified model is presented and incorporated into the quasi-static PN flow model, and applied on real rock samples (section 4.2). Finally, the conclusions are summarized in section 5. ## 2. Pore-Network Processes ### 2.1. Drainage and Imbibition In quasi-static PN modeling of drainage and imbibition processes, the macroscopic capillary pressure is gradually changed to control the direction of invasion. On one hand, capillary pressure is increasing incrementally during drainage process which allows non-wetting CO2 to displace the brine in the center of pores through piston-type displacement while brine resides in the corners and crevices. Based on Young-Laplace equation, wider pore-throats are invaded first in drainage followed by invasion of narrower pore-throats. On the other hand, the imbibition process occurs after the drainage process, which has left residual brine in the corners of pore-bodies and throats due to wettability. Macroscopic capillary pressure decreases incrementally by increasing brine pressure which allows brine to displace CO2 through different displacement events, namely, piston-type displacement, pore-body filling, and snap-off (Lenormand et al., 1983). The occurrence and frequency of these events generally depend on the local capillary pressure in a pore element, the topology of brine and CO2 (connection number and filling), wettability (contact angle), pore irregularity (shape factor), and relative size of pore-body with respect to neighboring pore-throats (aspect ratio). In the low capillary number condition of CO2-brine flow, the assumption of local capillary equilibrium is valid which relates the curvature of interface in any pore at any time to the local capillary pressure based on the Young-Laplace equation. This assumption is the basis of quantifying the local threshold capillary pressure of different displacement events through the shape of interface in pore elements. ### 2.2. Pore-Level Events Piston-type displacement is a common event in both drainage and imbibition processes. In this event, the invading phase displaces the defending phase from the center of pore element. In drainage, if CO2 pressure is high enough (i.e., local capillary pressure passes the threshold) in the pore element, it displaces brine through its terminal meniscus from the center of pore element. Figure 1A shows schematically how CO2 (in white) is advancing through the center of a rectangular tube and pushing brine (in blue). The threshold capillary pressure depends on the wettability and geometry of the pore element through the Young-Laplace equation, which for a circular tube pore element is: In Equation (1), Pc is the local threshold capillary pressure, σ is interfacial tension, θ is contact angle between phases, and r is the radius of the cross section of the tube. In imbibition, brine displaces CO2 from the center of tube when capillary pressure falls below the threshold. Related equations for rectangular and triangular cross sections are more complex than Equation (1) and can be found in Valvatne (2004). Figure 1. Schematic representation of (A) piston-type displacement during drainage process in a rectangular tube, (B) snap-off in pore-throat during imbibition process, and (C) pore-body filling, I1 and I2 events, during imbibition process. CO2 is shown in white and brine is shown in blue (Images from Rasmusson et al., 2018). Snap-off is an important pore-level event that usually causes trapping of non-wetting phase. The injected CO2 occupies the center of pore elements at the end of drainage process while brine resides in the corners as connected layers throughout the PN. As the brine pressure increases and the imbibition process starts, these layers swell and increase the CO2 saturation gradually. If the brine layer is not connected to any adjacent brine-filled elements and swelling continues, at some point brine layers from corners meet and create an unstable state. This leads to snap-off where brine spontaneously fills the center of pore element. Snap-off in a pore-throat that does not have adjacent brine-filled elements disconnects CO2 in adjacent pore-bodies which translates into either partial filling or trapping in the following states of the PN, and impacts the distribution of CO2 throughout the PN. Figure 1B shows schematically how the progress of brine layer swelling in a pore-throat can lead to snap-off and cause trapping in the pore-body on the right. Pore-body filling is another type of imbibition event that occurs in different orders based on connectivity of the non-wetting phase. When the capillary pressure decreases during imbibition, the invading brine starts with filling narrower pore-throats and displacing CO2 to the available adjacent elements, which are pore-bodies filled with CO2. Depending on the connection number of the pore-body and number of CO2-filled adjacent pore-throats, different scenarios of pore-body filling can occur. Figure 1C shows a schematic representation of two scenarios (I1 and I2 events) for a pore-body with connection number of 4. In refers to a pore-body filling where there are n connected pore-throats filled with CO2 that allows an escape path during the invasion of brine into the pore-body. For a pore-body with connection number of zcn, n can be between 0 and zcn − 1. The important feature of In events is that their local threshold capillary pressure can be different depending on the filling scenario, since the interface curvature during invasion is different. For example in Figure 1C, the I1 event has smaller radius of curvature (i.e., higher capillary pressure) than I2 based on the drawn dash lines referring to the next pore-level steps. If only a single connected pore-throat is filled with CO2 (I1 event), the displacement process will be similar to piston-type displacement. The complexity of modeling is for higher order events (In, n from 2 to zcn − 1) where the exact location and curvature of the interface in the pore-body is not clear. There are several studies in the literature proposing different parametric models by taking into account geometry-based and statistics-based parameters in the model (Blunt, 1998; Hughes and Blunt, 2000). For example, Blunt (1998) proposed a model based on the generic form of Pc in piston-type displacement and modified it with a parametric term to describe Pc of higher order In events: In Equation (2), Ai is the model parameter chosen correlated with the inverse of absolute permeability of the PN, and xi is a random number between zero and one. More details on pore-level events during drainage and imbibition processes can be found in Valvatne and Blunt (2004), Blunt et al. (2013), and Blunt (2017). ### 2.3. Competition of Events The imbibition process in a PN flow model consists of a series of pore-level events in pore elements based on their local threshold capillary pressure that controls the timing and location of events. These events compete with one another in determining the invasion pattern and distribution of phases during imbibition. Therefore, the prediction of CO2 distribution and trapping is highly dependent on the specific occurrences of these events. A change in the local threshold capillary pressure causes change in the order of displacement events which leads to a different pattern of phases, relative permeability, and residual trapping. The topology of the brine phase also matters to determine the type of event during imbibition. If the adjacent element has brine at its center, the piston-type or pore-body filling would occur if the local threshold capillary pressure is already reached. Snap-off, however, does not require adjacent filled elements since it starts with swelling of brine in corners which is assumed present throughout the PN. Geometrical and topological parameters of a PN play an important role in the equations of local threshold capillary pressure, and hence, competition of pore-level events during imbibition and residual trapping of CO2. Some of these key PN parameters are as following: Shape factor: This summarizes the irregularities of the pores into one parameter in pore elements. The half-angle values of a triangular pore element can be obtained based on the shape factor (Patzek and Silin, 2000). These half-angle values are involved in the local threshold capillary pressure (Pc) relations of different pore events. In this study, different defined shape factors of the cross section are considered to evaluate the effect of shape factor on imbibition pore-level events. Aspect ratio: The aspect ratio of a pore-body is the ratio of its radius to the radius of its connected pore-throats (a = rp/rt). This parameter can also be expressed using average radius of multiple connected pore-throats to a pore-body. The competition of threshold Pc of pore-body filling and snap-off events is correlated with aspect ratio. This competition can be quantified by the ratio of their threshold Pc (Blunt, 2017). Generally, higher aspect ratio values results in more snap-off events compared with pore-body filling events. In this study, typical aspect ratio values from extracted PNs of natural rocks are used to define PN configurations. Connection number: The number of connected pore-throats to a pore-body is its connection number. It can be averaged across all pore-bodies of the PN as the average connection number of network that represents the connectivity of porous medium. Although connection number is not explicitly used in threshold Pc relations, it is also correlated with trapping of CO2. Higher connectivity generally implies more potential pore-throats for the CO2 to escape from the invaded pore-bodies during imbibition process. ## 3. Methodology ### 3.1. Pore-Network Configurations We aim to apply DNS of two-phase flow using the LB code developed by Chen et al. (2018) on pore elements of PNs extracted from natural rocks to assess the physical assumptions used for pore-level events during imbibition. Therefore, the geometry of simulations consists of typical pore-bodies and pore-throats of extracted PNs with sufficient grid resolution to capture the interface and corner flow. We define a PN configuration as a small number of interconnected pore-throats and pore-bodies that is designed for investigation of pore-level events in the pore element of interest. We limit this study to two common types of PN configurations due to computational costs of LB simulations: PTP configuration: This refers to a pore-throat connecting two adjacent pore-bodies. The pore-throat is the focus of this configuration to investigate corner flow, piston-type displacement, or snap-off. The pore-bodies can be connected to inlet and outlet reservoirs of non-wetting and wetting fluids or additional pore elements. We study the PTP configuration to capture the interface in the cross section of the pore-throat and find the threshold Pc right before snap-off occurs. TPT configuration: This refers to a pore-body defined between two or more connecting pore-throats. The pore-throats can be directly connected to the inlet and outlet reservoirs or other pore-bodies. The pore-body is the focus of this configuration to investigate the filling process via different number of pore-throats and simulate pore-body filling during imbibition. Conventional quasi-static PN models can use pore elements with different cross sections; triangle, square, circle are considered here as they are commonly used. The shape factor (G) is a dimensionless geometrical parameter that is used in assigning familiar geometries to the cross section of a tube-shape pore element (Patzek and Silin, 2000). It is defined as: In Equation (3), A is the cross-sectional area, V is the volume, and L is the length of the tube-shape pore element with an arbitrary cross section. Our experience from studying extracted PNs of various sandstone cores shows that the triangular cross section comprises the majority of pore elements of PNs. Shape factor of triangular elements can vary in a range from 0 (slit-shape) to 0.0481 (equilateral). In this study, we select three shape factor values equal to 0.020, 0.030, 0.040 for the designed PN configurations to represent a reasonable range, while limiting the number of required LB simulations. These three shape factor values represent three different triangular cross sections which are characterized by their corner half-angles (β1, β2, β3). We use an algorithm introduced by Patzek and Silin (2000) to determine three corner half-angles of each shape factor G to define triangular cross sections of pore elements. The resulting β’s (with the convention of β1 < β2 < β3) for each shape factor are as following: G = 0.020: ${\beta }_{1}=6.2{0}^{°}$ , ${\beta }_{2}=19.{7}^{°}$ , and ${\beta }_{3}=64.{1}^{°}$ . G = 0.030: ${\beta }_{1}=9.6{0}^{°}$ , ${\beta }_{2}=35.{6}^{°}$ , and ${\beta }_{3}=44.{8}^{°}$ . G = 0.040: ${\beta }_{1}=18.{0}^{°}$ , ${\beta }_{2}=23.{7}^{°}$ , and ${\beta }_{3}=48.{3}^{°}$ . As discussed in section 2.2, pore-body filling event can occur in different scenarios depending on the number of adjacent CO2-filled pore elements. These scenarios can be defined with different TPT configurations based on the number of connecting pore-throats to the pore-body of configuration, which will be addressed in section 4.1.1. ### 3.2. Lattice-Boltzmann Method The lattice-Boltzmann (LB) method is used for DNS for the idealized pore element geometry noted above. Use of LB for DNS on voxelized pore geometry is now well established, and its popularity is due to its favorable computational features (Ahrenholz et al., 2008; Chen et al., 2019). The LB is a so-called mesoscopic method that can simulate fluid mass and momentum balance. The fluid is represented by particles with probability of moving in different directions along a predefined lattice. The LB method is based on streaming and collision of a set of fluid particle distribution functions (PDF) on a lattice. The no-slip boundary conditions on solid surfaces are implemented by simply switching the directions of the particles on the surface nodes, the so-called bounce-back scheme. Among several LB schemes for simulating multiphase flows, the color-fluid model (Gunstensen et al., 1991; Grunau et al., 1993) is capable of producing a relatively sharp interface between immiscible phases and capturing their interface evolution. The color-fluid model is also able to incorporate high viscosity ratios due to its independent control of the surface tension and viscosity which makes it quite relevant to CO2-brine flow system where viscosity ratio is about 10–15. On the other hand, it has limitations on the density ratio and a large absolute value of color gradient may produce numerical instabilities (Ramstad et al., 2019). We use a variant of the multiple relaxation time (MRT) color-fluid LB simulator (Tölke, 2002; Tölke et al., 2006; Chen et al., 2018). In this model, each phase has its own set of PDFs and the discrete Boltzmann equation is solved for each phase. We consider two sets of the D3Q19 PDFs, i.e., a 3D model with 19 velocities, representing the two fluid phases, referred to as the fluids r (CO2) and b (brine), which follow the collision-streaming procedure for the PDF: In Equation (4), ${\Omega }_{i}^{s\left(1\right)}$ is the standard LB collision operator, ${\Omega }_{i}^{s\left(2\right)}$ is the perturbation step that generates the surface tension effect, and ${\Omega }_{i}^{s\left(3\right)}$ is the recoloring step that separates the two fluids. The collision operators ${\Omega }_{i}^{s\left(1\right)}$ and ${\Omega }_{i}^{s\left(2\right)}$ are constructed under the MRT framework that increases stability and accuracy of the model (d’Humieres, 2002; Tölke et al., 2006). The macroscopic quantities of flow, such as fluid velocity and pressure, are computed by calculating the moments of the PDFs. More details of our in-house code are given by Chen et al. (2018). In the present work, we carry out LB simulations of CO2-brine flow system on idealized PN configurations. The fluid properties and flow conditions are listed in Table 1, which are realistic for CO2-brine flow and similar parameters used in Kohanpur et al. (2020), except for the contact angle which comes from an experimental study by Dalton et al. (2018). We simulate the drainage process followed by the imbibition process with inlet velocity and outlet pressure boundary conditions. More details on implementation of boundary conditions can be found in Chen et al. (2019). Table 1. Properties of CO2-brine flow system. Th capillary number is defined as the ratio of viscous forces over capillary forces ( $Ca=\frac{{\mu }_{nw}V}{\sigma }$ ) where μnw is dynamic viscosity of CO2, V is the average inlet velocity, and σ is the interfacial tension between CO2 and brine. For the LB simulations, capillary number equal to 5 × 10−5 is used, which is relatively small to guarantee a capillary-dominated flow consistent with field injection conditions and the assumptions of the quasi-static PN model. Unfortunately, even smaller values of capillary number can be computationally expensive to reach steady-state with potential numerical instabilities and spurious velocities in LB simulation. ### 3.3. Quasi-Static Pore-Network Model The quasi-static PN flow simulation is an efficient tool to characterize CO2-brine flow properties. As noted previously, for field-scale carbon capture and storage (CCS), the velocity is low (with the exception of a local region near the injection well) and hence the capillary number is relatively small, justifying the assumption of capillary-dominated flow and use of a quasi-static PN model. In this work, we conduct drainage and imbibition simulations using the publicly available PN flow codes of Valvatne and Blunt (2004) and Raeini et al. (2018). We incorporate modified equations for imbibition pore-level events (which are presented in sections 4.1.1 and 4.1.2) into the PN flow solver of Raeini et al. (2018), which is then applied to extracted PNs from real rock images to obtain residual trapping of CO2 and other quantities of interest during the drainage-imbibition process. The detailed procedure of the PN flow solver is described in Valvatne (2004) and Raeini (2013). ## 4. Results ### 4.1. Lattice-Boltzmann Simulation The PN configuration types listed in section 3.1 are used as the geometry of LB simulations where each voxel of the image is converted to a lattice unit. It is standard practice in LB simulation to use dimensionless parameters normalized with lattice units and then convert to physical units when needed. A lattice unit can be either pore (0-value) or wall (1-value), and the set of PDFs of each phase is computed. At each stage of the LB drainage and imbibition simulations, the fluid-fluid interface location and the local capillary pressure need to be computed. The interface location can be distinguished with the order parameter (ϕ) in the color-fluid LB model: In Equation (5), ρr and ρb are fluid densities of red and blue fluids in lattice unit, respectively. The density of each phase is computed by the zeroth moment of the respective PDFs. Therefore, ϕ ≈ 1 refers to the presence of red fluid (CO2) while ϕ ≈ −1 represents the presence of blue fluid (brine) (Chen et al., 2018). The location of interface is where ϕ ≈ 0. in the color-fluid LB model the fluid interface is diffuse and spread over several lattice units, but if fine grids are used then a relatively sharp interface results. In practice, we use the cut-off of 5% for ϕ to specify the location of each fluid. In order to evaluate local capillary pressure in pore elements, one should compute average pressure of each phase in the pore volume. In the color-fluid LB method, the ideal gas equation-of-state is assumed which allows calculation of fluid pressure from the density. The order parameter is used to detect the location of phases and the interface. To compute the pressure of the pure red and blue fluid phases, lattice points should be far enough away from where the order parameter is close to zero. The pressure of red and blue fluids can be averaged based on the total density of the respective fluid of a calculation volume (e.g., pore-body). In Equations 6 and 7, the summation is over the lattice points of a defined calculation volume with nr and nb lattice units of red and blue fluids, respectively. Thus, the local capillary pressure (Pc) in the calculation volume can be obtained using the difference in the average pressure of fluids. In Equation (8), Pc is the calculated average local capillary pressure within the calculation volume of interest. This procedure to calculate the local capillary pressure is validated on a simple piston-type displacement in a cylindrical tube, as illustrated in Figure 2A, where boundary conditions of velocity (inlet) and constant pressure (outlet) are implemented, and CO2 (shown in red) moves upward and displaces brine (transparent) in the tube. We use the resulting density of fluids to obtain the pressure distribution in each fluid to calculate the local capillary pressure during the filling process. The calculation volume is marked with two green planes. Figure 2B shows a cross-sectional view through the center of the tube. The radius of interface (r′) can be captured and used in the Young-Laplace equation to evaluate local capillary pressure, which is denoted the cross-sectional approach. This approach can be utilized in simple geometries and is preferred wherever the radius of interface can be captured. However, this is not always the case for 3D LB simulations on more complex PN configurations such as for pore-body filling events in triangular elements where the interface can have a 3D complex shape. In such cases, the LB density-based approach can be used to evaluate Pc. Figure 2. A validation of LB simulation on a simple piston-type displacement in a cylindrical tube for calculation of local capillary pressure: (A) CO2 is in red, brine is transparent, and the green planes define the bounds of calculation volume. (B) Cross-sectional view parallel to yz-plane through the center of tube to capture the radius of interface. (C) Comparison of calculated Pc from LB simulation and theoretical Pc from Young-Laplace equation on different radii of a cylindrical tube. In Figure 2C, the resulting Pc from LB simulation of the simple piston-type displacement in a cylindrical tube is calculated and compared with the theoretical values based on Young-Laplace equation for different radii (r = 5, r = 15, r = 20 in lattice units). The results show a good agreement (less than 10% error), thereby giving us confidence in the procedure for calculation of Pc based on the order parameter and densities in lattice unit is feasible in 3D LB simulations. In following sections 4.1.1 and 4.1.2, we present the LB simulation results on PN configurations plus the modified models of pore-body filling and snap-off events during imbibition process that will be incorporated in quasi-static PN flow solver. #### 4.1.1. Simulation of Pore-Body Filling Three different TPT configurations with different shape factors but equal radius of inscribed circle and connection number of 4, shown in Figure 3, are studied to model the I1, I2, and I3 pore-body filling events during imbibition. Based on our experience, the average connection number of PNs from various rock samples is usually within the range of 3–5. Therefore, the choice of connection number of 4 is reasonable. In addition, having 4 connecting pore-throats allows us to include pore-body filling events up to order 3. As explained in section 2.2, lower orders of pore-body filling have higher threshold Pc and are hence more favorable than higher order pore-body filling (I4+) during imbibition. The higher orders of pore-body filling usually have a small number of occurrences and are hence less important with respect to trapping of non-wetting phase. Thus, we focus on lower orders of pore-body filling in the LB simulations and investigate key factors in the filling process such as shape factor and corner half-angles. Figure 3. Pore-body filling configurations for the three studied shape factors G = 0.020, G = 0.030, and G = 0.040 during I1, I2, and I3 events. The cross section of the pore-body is defined as a triangle with three different shape factors with their corresponding corner half-angles. The cross section of pore-throats in all cases is square to reduce the complexity and ensure a simultaneous invasion from different connecting pore-throats across different configurations. This is achieved by designing configurations in a way that all pore-throats have the same cross section and path length from the inlet or outlet reservoirs to the pore-body, as depicted in Figure 3. An example of such a configuration is illustrated from different views in Figure 4A, which is designed for modeling I1 event with G = 0.040 for the pore-body. The inscribed radii of all pore-throats are equal and the geometric aspect ratio between pore-body and pore-throat is 5 in all configurations. Figure 4. Pore-body filling configuration for the shape factor of G = 0.040 designed for modeling I1 event: (A) The equal length and cross section of pore-throats provide the desired simultaneous invasion. (B) LB simulation of pore-body filling during drainage (left) and imbibition (right) processes. CO2 is shown in red and brine is transparent. In LB flow simulation of these configurations, the inlet reservoir is connected to bottom pore-throats and the outlet reservoir is connected to top pore-throats, and the flow is always in +z (upward) direction. Initially, drainage is simulated which results in CO2 occupying the center of pore elements until steady-state saturation. Then, brine is injected from the same inlet reservoir to displace the CO2 from the pore-body. The boundary conditions in both drainage and imbibition are prescribed velocity at the inlet and fixed pressure for the outlet. Figure 4B shows an example of LB simulation of pore-body filling during drainage and imbibition processes on a triangular TPT configuration, shown in Figure 4A, with the shape factor of G = 0.040 during I1 event. Since the pore-body filling event is a dynamic process, we can track the saturation of CO2 in the pore-body in order to specify the relevant time step for the evaluation of threshold Pc. We choose the saturation of 0.50, when half of the CO2 is displaced, as the time step to calculate the threshold Pc, using the procedure described earlier. The procedure is applied on all configuration of Figure 3 and the corresponding Pc is computed. The normalized capillary pressure in a pore-body ( $\stackrel{^}{{P}_{c}}$ ) is defined as: In Equation (9), rp is the inscribed radius of pore-body and σ is the surface tension. This definition makes the analysis more straightforward since the conversion from lattice unit to physical unit is not necessary. Based upon the LB simulations, we propose modifying the conventional models (e.g., Valvatne and Blunt, 2004) for the local threshold capillary pressure of pore-body filling event with some new parameters as: In Equation (10), the subscript i refers to the order of pore-body filling event. Cfi is defined as the filling factor that comes from the analysis of LB simulations of PN configurations. ${a}_{i}^{\prime }$ is defined as effective aspect ratio of the pore-body which is similar to the classic definition of geometric aspect ratio but it considers just the invading pore-throats rather than all connecting ones. Therefore, ${a}_{i}^{\prime }$ can be a function of the filling event and it involves the radius of invading pore-throat during the pore-body filling event. The second term on the right hand side of Equation (10) is the main difference between our proposed model and conventional model. For example, Valvatne and Blunt (2004) chose a term proportional to K−1/2 where K is the absolute permeability of PN. We consider the effective aspect ratio and shape factor to include the effect of the order of pore-body filling event and pore-body shape factor on local threshold capillary pressure, which will result in a different filling factor. In order to use this modified model, one needs to know Cfi for different orders of filling events for different pore-body shape factors. We consider Cf1 = 0 the same as in conventional models. In order to evaluate Cf2 and Cf3, we define the ratio of $\stackrel{^}{{P}_{{c}_{i}}}$ of events with respect to ${\stackrel{^}{{P}_{c}}}_{1}$ as: If we combine Equations (10–12), the filling factors can be described as: Therefore, by having $\stackrel{^}{{P}_{{c}_{i}}}$ of different events from LB simulation, one can compute the corresponding fi1 and use Equations (13) and (14) to obtain the filling factors of the modified model. Table 2 presents the resulting normalized local capillary pressure of pore-body filling events (I1, I2, I3) for the studied shape factors. The $\stackrel{^}{{P}_{{c}_{i}}}$ is used to calculate the capillary pressure ratio (fi1) and the filling factor (Cfi), as listed in Table 2 as well. The resulting fi1 from the modified model is slightly higher than conventional model which translates into higher local threshold capillary pressure that makes pore-body filling event more favorable during imbibition process. Table 2. The resulting normalized local capillary pressure ( $\stackrel{^}{{P}_{{c}_{i}}}$ ) and filling capillary pressure ratios (fi1) defined in the modified pore-body filling model from LB simulations on PN configurations. The results in Table 2 are used to describe the filling factor as a function of shape factor and incorporate it into a quasi-static PN flow solver where pore-body filling events of I2 and I3 are modified with Equation (10), accordingly. #### 4.1.2. Simulation of Snap-Off Three different shape factors, G = 0.020, G = 0.030, G = 0.040, with triangular cross sections in a PTP configuration, as shown in Figure 5A, are studied to investigate the threshold Pc during snap-off events. In each configuration, ratios involvolving the diameter and length of pore-body and pore-throat are defined as lt/2rt = 10 and rp/rt = 5. The focus is the cross section of the center of the pore-throat. We first perform a drainage simulation with receding contact angle (10°) followed by imbibition with advancing contact angle (60°). These values are based on Morrow’s contact angle hysteresis model (Morrow, 1975) for intrinsic contact angle of 56° that comes from an experimental CO2-brine flow study by Dalton et al. (2018) on a Berea sandstone sample. The model relates the intrinsic contact angle to receding and advancing ones. The boundaries are connected to bounding pore-bodies and a pressure-driven flow is implemented in the LB simulation. Figure 5. (A) PTP configuration with G = 0.040 defined for assessing the snap-off event in pore-throat. Evolution of cross section of a pore-throat with G = 0.040 from LB simulation during snap-off event: (B) beginning of imbibition, (C) moving the interface toward the center, (D) before snap-off, (E) and after snap-off. (F) Higher resolution of pore-throat cross section before snap-off. (G) Cross-sectional approach in evaluation of capillary pressure based on the radius of curvature of three corners. We use the local threshold capillary pressure of a snap-off event in a similar format to conventional snap-off models, but with a new correction factor: In Equation (15), Ci is defined as the snap-off factor and $\stackrel{^}{{P}_{c}}$ is defined similar to Equation (9) (rp replaced with rt). In conventional models (Valvatne and Blunt, 2004), Ci can be described in terms corner half-angles: In Equation (16), β1 and β2 are defined as the two smaller corner half-angles of a triangular cross section (Patzek and Silin, 2000). However, we carry out LB simulation for different shape factors to find Ci as a function of G. The drainage invasion of CO2 into the PTP configurations is implemented first. Then, the gradual imbibition of brine is implemented via an incremental increase of brine pressure. This allows the brine in corners to expand gradually prior to snap-off in the pore-throat, as shown in the pore-throat cross section in Figures 5B–E. A cross-sectional analysis is applied on the results from LB simulation to obtain the radius of curvature in each corner, as illustrated in Figures 5F,G, during the snap-off event. The minimum radius among the three is substituted into the Young-Laplace equation ( ${P}_{c}=2\sigma cos\left(\theta \right)/{r}^{\prime }$ ) to evaluate the local threshold capillary pressure of the pore-throat: In Equation (17), ${r}_{i}^{\prime }$ refers to the calculated radius of curvature in each corner of the pore-throat, as shown in Figure 5G. On the other hand, by defining the local capillary pressure of snap-off in the form of Equation (15), one can relate the correction factor of Ci to the radius of curvature r′ from LB simulation results: In Equation (18), rt refers to the inscribed radius of the pore-throat (a purely geometrical parameter) while r′ refers to the minimum radius of curvature of the interface right before the snap-off event in the pore-throat (determined from LB simulation results). Table 3 presents the resulting snap-off factors from LB simulation on PTP configurations for the three studied shape factors. The snap-off factors from modified model are slightly smaller than factors from conventional model for all studied shape factors. This results in higher local threshold capillary pressure for snap-off events in the modified model as well. Although only three shape factors are considered, covering a range from 0.020 to 0.040 with linear interpolation used to fill in, based on our experience with different types of rock samples, this range of shape factor is sufficient to include the majority of both pore-bodies and pore-throats across various PNs of different samples. Table 3. The resulting snap-off factor (Ci) as a function of shape factor from LB simulations on PTP configurations compared with the conventional model. ### 4.2. Modified Pore-Network Modeling Results #### 4.2.1. Rock Sample and Extracted Pore-Network In this section, two natural rock samples of Berea sandstone and Mt. Simon sandstone are selected to investigate residual trapping of CO2 after a drainage-imbibition cycle. The former sample was the focus of an experimental study of measurement of contact angle between CO2 and brine by Dalton et al. (2018). The latter sample was the focus of a rock characterization study and CO2-brine flow simulation with different modeling approaches by Kohanpur et al. (2020). The core plugs of both samples were scanned at the micro-CT imaging facility at the National Energy Technology Laboratory (NETL) which produced a series of grayscale scans. These scans are processed through several steps of image processing in Fiji (Schindelin et al., 2012) to filter and smooth images in order to distinguish existing image phases (solid, pores, CO2, brine) from each other via thresholding algorithms. Table 4 presents relevant information for the studied sandstone samples. We report flow simulation results from the Berea sandstone sample, shown in Figure 6A, in detail here. For brevity, the results from the Mt. Simon sandstone sample are presented in the Supplementary Material. Table 4. Selected samples for pore-network modeling of CO2-brine flow after a drainage-imbibition cycle. Figure 6. (A) 3D representation of the Berea sandstone sample. (B) A slice of micro-CT images: left image is the raw grayscale slice and right image is the ternary segmented slice where solid part is in white, CO2 is in black, and brine is in gray. (C) 3D representation of the pore structure. (D) 3D representation of residual trapped CO2 after imbibition process. (E) Extracted pore-network of the Berea sandstone sample from maximal-ball algorithm. Dalton et al. (2018) used a fractional flow experimental apparatus to perform a drainage-imbibition cycle of CO2-brine flow on the Berea sandstone sample. They scanned post-imbibition micro-CT images to measure residual CO2. Figure 6B shows an example of post-imbibition grayscale image and its corresponding segmented image which consists of three phases: solid in white, CO2 in black, brine in gray. This is obtained by a ternary segmentation implemented in Fiji, which entails two sequential binary segmentation on dry scans and post-imbibition scans, respectively, in order to obtain their differences and extract the distribution of CO2. Figure 6C shows the 3D representation of the pore structure, and Figure 6D shows the residual trapped CO2 after imbibition in the Berea sandstone sample. Saturation of residual of trapped CO2 is equal to the ratio of the number of CO2 voxels in Figure 6D to the number of pore voxels in Figure 6C. The resulting experimental residual CO2 saturation after imbibition is 0.331 for the Berea sandstone sample based on this processing of the micro-CT images. Dalton et al. (2018) also provided the distribution of contact angle by using measurements on the captured interface from 2D micro-CT images. The contact angle average (55.9°) and standard deviation (15.5°) come from measurement on 40 different 2D micro-CT images located in different parts of the core. More details can be found in Dalton et al. (2018). The average value of 55.9° is used as the contact angle in our LB and PN simulations. We use the PN extraction code based on Maximal Ball (MB) algorithm from Dong and Blunt (2009) and Raeini et al. (2017) to obtain corresponding PNs of the rock images. The algorithm was originally introduced by Silin and Patzek (2006) where the entire 3D voxelized pore space is searched to find the largest possible voxelized spheres, known as MBs. This PN extraction tool can provide the inherent randomness of pore structure in real rocks with a wide range of connection numbers for pore-bodies. More details of this PN extraction tool can be found in Dong and Blunt (2009). The output of this code is geometrical and topological information of pore-bodies and pore-throats including the location, radius, volume, length, total length, and shape factor. Figure 6E shows extracted PN of the Berea sandstone sample. This PN has 6207 pore-bodies and 10160 pore-throats with the average connection number of 3.18. The computed absolute permeability of the PN is 455 mD using the Valvatne and Blunt (2004) PN flow solver. As mentioned earlier, the shape factor (Equation 3) in PN models is a metric of irregularities of the pore space of pore elements. Figure 7 shows the shape factor distribution of pore-bodies (left plot) and pore-throats (right plot) in the extracted PN of the Berea sandstone sample. The distribution in both follows approximately a normal distribution with the average of 0.0298 and 0.0312 and standard deviation of 0.0078 and 0.0064 for pore-bodies and pore-throats, respectively. Therefore, these distributions justify the selected values of shape factors in PN configurations studied in section 3.1. Figure 7. Shape factor distribution of pore elements in the extracted pore-network of Berea sandstone sample: (left) pore-body shape factor distribution, (right) pore-throat shape factor distribution. Next, we present detailed results from the CO2-brine flow simulation with properties listed in Table 1. We use a modified PN flow solver that is based on Raeini et al. (2018) but with its imbibition threshold Pc equations of pore-body filling and snap-off events replaced with the new pore-level models described in sections 4.1.1 and 4.1.2. Then, the outputs from the modified PN flow solver are compared with those from the original PN flow solver that uses conventional models, and the predicted residual trapped CO2 are compared with the experimental value from Dalton et al. (2018). Results for residual trapping of CO2 after a drainage-imbibition cycle are discussed in section 4.2.2, and the statistics of pore-level events during imbibition are investigated in section 4.2.3. #### 4.2.2. Residual Trapping In quasi-static PN flow simulation, the saturation at the end point of the drainage process is needed as the start point of the imbibition process. The resulting saturation of trapped non-wetting phase at the end of imbibition follows a hysteretic behavior that depends on the initial saturation in the imbibition process. Therefore, the PN flow solver can be used to compute a so-called trapping curve that gives residual trapped CO2 as a function of the initial CO2 saturation at the start of imbibition; this is similar to Land’s initial-residual trapping model (Land, 1968). In Figure 8, the residual trapping curve of CO2-brine flow for the Berea sandstone sample is compared between the original and modified PN models. When the initial CO2 saturation is higher, the resulting residual saturation of CO2 is about 0.48 for the original PN model and 0.44 for the modified PN model. However, this difference is negligible when the initial CO2 saturation is smaller. In both models, a full drainage process leaves a relatively small saturation of brine (i.e., more CO2) in the PN, mainly in corners and small or isolated pore elements. Figure 8 shows that a full drainage process is followed by an imbibition process with more trapped CO2. On the other hand, for smaller values of initial CO2 saturation, the resulting residual saturation of CO2 decreases i.e., less trapping of CO2 which is consistent with experimental observations (Niu et al., 2015). Figure 8. Residual trapping curve from pore-network modeling of CO2-brine flow on the Berea sandstone sample. Unfortunately the experimental study of Dalton et al. (2018) did not report the saturation at the drainage end point. Therefore, in order to choose a proper drainage end point for the Berea sandstone sample, we perform LB simulation of drainage on the full rock image. In addition, imbibition is also simulated to obtain a DNS prediction of residual trapped CO2 for comparison with the experimental value and prediction from PN flow simulation. The LB simulations of drainage and imbibition on the 3D rock images were conducted with contact angle of 55.9° and capillary number of 5 × 10−6 to obtain the saturation at the end of drainage and residual trapped CO2 at the end of imbibition. Figure 9 shows the resulting distribution of CO2 at the end of drainage and imbibition processes. The corresponding CO2 saturation at the end of these processes are ${S}_{nw}^{drain.}=0.434$ and ${S}_{nw}^{imbib.}=0.289$ , respectively. We therefore use ${S}_{nw}^{drain.}=0.434$ as the end point of drainage in the PN flow solver. Then, we simulate imbibition to estimate residual CO2 saturation from PN and compare the result with the LB result: ${S}_{nw}^{imbib.}=0.289$ . Figure 9. LB simulations of drainage and imbibition processes on the Berea sandstone sample to obtain end point saturations. (A) pore space of the sample (B) end of drainage process (C) end of imbibition process. CO2 is shown in red. Brine is not rendered. The resulting residual trapping of CO2 after imbibition is presented in Table 5 where the modified PN model is compared with the original PN model. The residual saturation of CO2 is also compared with an experimental value based on analysis of volume ratio of CO2 in micro-CT images rendered in Figure 6D. The modified model predicts residual saturation of trapped CO2 in excellent agreement with the experimental value while the error from the original model is about 27%. Table 5. Comparison of the original and modified pore-network flow models on the Berea sandstone sample based on predicted saturation of residual trapping of CO2 after a drainage-imbibition cycle. #### 4.2.3. Statistics of Pore-Level Events In each step of imbibition process, piston-type displacement, pore-body filling, and snap-off occur in pore elements of the PN. These pore-level events determine the invasion pattern and residual trapping during imbibition. The saturation of residual trapped CO2 is the summation of trapped CO2 saturation in all pore elements of PN. One approach to study the effect of changes in the model is to track the frequency and statistics of pore-level events on the same PN. The cumulative statistics of these events for the original and modified PN model during imbibition steps on the Berea sandstone sample are presented in Figure 10. As seen from the results, the modified model predicts higher number of pore-body filling events (14.8% more) and piston-type displacement in pore-throats (17.2% more). On the hand, it predicts a smaller number of snap-off events compare to the original model (9.6% less). Snap-off event is the mechanism that contributes the most to trapping since it disconnects CO2 in adjacent pore-bodies and leads to isolated trapped CO2 bubbles in the PN. Thus, the decrease in number of snap-off events in the modified model for this sample is expected to result in less trapping of CO2 and smaller saturation of residual trapped CO2. Figure 10. Statistics of number of pore-level events during imbibition process in pore-bodies (left) pore-throats (right) of the Berea sandstone sample. In terms of invasion pattern, this can translate into a more frontal pattern due to more piston-type displacements in the modified model, which implies less chance of trapping CO2. This is also in agreement with reported residual saturation in Table 5. Thus, the modified model outperforms the original model and predicts residual trapping in better agreement with the experimental and LB simulation predictions. In the Mt. Simon sandstone sample (see Supplementary Material), the modified model also predicts that the residual trapped CO2 is closer to the reference value, which is from LB simulation, than the original model. However, the discrepancy between PN modeling and LB predictions are about 20%. This can be due to higher heterogeneity of pore structure in the Mt. Simon sandstone sample compare to the Berea sandstone sample. It is worth mentioning that different statistics of pore-level events results in a different invasion pattern that corresponds to different average flow rates across the PN. This means that the modified PN solver gives different relative permeability values at each saturation point. The relative permeability curves of CO2-brine flow on the studied Berea sandstone sample are reported in details in Kohanpur (2020). Therefore, use of the modified PN method would yield different relative permeability curves for CO2 and brine in a field-scale simulator which would lead to different long-term movement and storage of CO2 in a reservoir. ## 5. Summary and Conclusions The pore-scale physics of CO2-brine flow plays a key role for predicting the amount and fate of residual trapped CO2 in geological storage of CO2 in deep saline reservoirs. The description of this flow system in the form of pore-level flow models through pore-bodies and pore-throats of an extracted PN from micro-CT images of real rock is a practical approach to obtain important pore-scale properties during a drainage-imbibition cycle. However, this description can be improved by more specific and accurate relations for CO2-brine flow that can be derived from DNS methods. This study presented a new set of pore-level flow models during pore-body filling and snap-off events of the imbibition process in PN modeling of CO2-brine flow. LB simulations were carried out on several typical idealized PN configurations and the local capillary pressure was evaluated to develop modified equations for local threshold capillary pressure of pore elements as a function of shape factor. We also defined effective aspect ratio of pore-body filling as a new parameter in the modified model, which was not proposed in other models in the literature. The modified equations of local threshold capillary pressure were incorporated into a widely available quasi-static PN flow solver. This modified model resulted a new pattern of invasion during imbibition due to a different order of competing pore-level events compare to the original model. We applied the modified model on extracted PNs of Berea and Mt. Simon sandstone samples to obtain saturation of residual trapped CO2 after a drainage-imbibition cycle. The statistics of pore-level imbibition events changed by replacing the original model with the modified model. The occurrence of snap-off in pore-throats was reduced which means a more frontal displacement pattern across the sample. As a result, our modified model was in closer agreement than the original model based on the comparison of the residual trapped CO2 with reference data from experimental and LB simulation approaches. Additional future work could include comparing predicted residual trapping of CO2 with other experimental data or high resolution DNS methods. Also, the effect of lattice resolution and capillary number in LB simulations of PN configurations requires further study. This may lead to some changes in the proposed modified model such as new values for factors reported in Tables 2, 3. A preliminary study is discussed in Kohanpur (2020) on one shape factor and can be extended to more shape factors and other PN configurations. Finally, we note that the core idea of this study was to combine DNS with PN to improve the physical representation of key pore-level events, while preserving the computational efficient of quasi-static PN. In this work, we focused on pore-body filling and snap-off events which have an important impact on residual trapping of CO2. This framework can be applied to improve estimation of other quantities of interest such as relative permeability and effective diffusion. ## Data Availability Statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ## Author Contributions All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. ## Funding This work was supported by the Center for Geologic Storage of CO2, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award # DE-SC0C12504. Simulations for this research are part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) the State of Illinois, and as of December 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. ## Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ## Publisher’s Note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2022-01-25 22:14:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5712785720825195, "perplexity": 1953.415617716861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304876.16/warc/CC-MAIN-20220125220353-20220126010353-00223.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1070927
MathSciNet bibliographic data MR1070927 57M25 (20C99 20F34) Burde, Gerhard ${\rm SU}(2)$${\rm SU}(2)$-representation spaces for two-bridge knot groups. Math. Ann. 288 (1990), no. 1, 103–119. Article For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2017-05-22 16:59:20
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994102716445923, "perplexity": 9089.197503892798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00210.warc.gz"}
http://math.stackexchange.com/questions/211783/help-me-please-a0-fourier-series-for-this-sawtooth-wave?answertab=votes
# Help me please, A0 fourier series for this Sawtooth wave I need to know for this wave sub zero, the correct answer is a0=-2V/Pi, but I do not know how to get to that answer. Help me please the imagen its the series. This is the sawtooth wave http://s9.postimage.org/42uc7x0pp/IMG00030_20121012_1152.jpg Thanks! - Destructive edit rolled back. – user53153 Mar 9 at 2:22 @5pm: How did you even come across this thread? – Asaf Karagila Mar 9 at 2:34 @AsafKaragila Bumped to the front page by Community (before I upvoted the answer to keep the bot quiet). – user53153 Mar 9 at 2:40 @5pm: Ah. If only these autobumps were documents... thanks. – Asaf Karagila Mar 9 at 2:41 $$\{x\}=\frac{1}{2}-\sum_{n=1}^N\frac{\sin 2\pi nx}{\pi n}+O((1+||x||N)^{-1}) = \frac{1}{2}-\sum_{n=1}^\infty\frac{\sin 2\pi nx}{\pi n}$$
2013-05-24 20:06:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7160083055496216, "perplexity": 3069.4782640195745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705020058/warc/CC-MAIN-20130516115020-00094-ip-10-60-113-184.ec2.internal.warc.gz"}
http://pybb.arztpraxis-ganzheitliche-medizin.de/calculus-volume-1-answers.html
Volume of a Triangular Pyramid: The volume of a triangular pyramid is given by. Once the download is complete, please close and reopen all browser. Erdman Portland State University Version August 1, 2013 c 2010 John M. Calculus Volume-1 - Free ebook download as PDF File (. (a) Find all x such that f(x) ≤ 2 where f(x) = −x2 +1 f(x) = (x−1)2 f(x) = x3 Write your answers in interval notation and draw them on the graphs of the functions. The cross-sections uses are squares that. 9 Introduction to Probability 8. We’ve been there before. Can you find your fundamental truth using Slader as a completely free Stewart Calculus solutions manual? YES! Now is the time to redefine your true self using Slader's free Stewart Calculus answers. This course, in combination with Parts 1 and 3, covers the AP* Calculus BC curriculum. Calculus II Practice Problems 1: Answers 1. Zeus Industries bought a computer for $2857. The book's aim is to use multivariable calculus to teach mathematics as. About Calculus Volume 1 C alculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. Integration can be used to find areas, volumes, central points and many useful things. To bring you and your students new classroom resources and supports, we’re making updates to AP Calculus AB for the 2019-20 school year. Each answer shows how to solve a textbook problem, one step at a time. Download with Google Download with Facebook or download with. If the graphs cross once, they must cross again-because 3" is higher at 2 and 4. 5 in and height 5 in can be computed using the equation below: volume = 1/3 × π × 1. Comprehensive. jmap resource archives ai/geo/aii (2015-now) ia/ge/a2 (2007-17) math a/b (1998-2010) regents resources. nyc teacher resources. Textbook solution for Calculus (MindTap Course List) 11th Edition Ron Larson Chapter P. This textbook emphasizes connections between between theory and application, making physics concepts interesting and accessible to. Many functions have the property that their graphs can be traced with a pencil without lifting the pencil from the page. Buy Calculus, Volume 1 with Answer Key, First Edition on Amazon. In this section, the second of two sections devoted to finding the volume of a solid of revolution, we will look at the method of cylinders/shells to find the volume of the object we get by rotating a region bounded by two curves (one of which may be the x or y-axis) around a vertical or horizontal axis of rotation. I have also researched online and found some solution manuals, volume 1 but it states it too only has odd numbered soln's. 1 : Early Transcendentals by James Stewart (2007, Hardcover) at the best online prices at eBay!. Slader is an independent website supported by millions of students and contributors from all across the globe. Hardy is a classic one for rigorous mathematics. Each section carries 50 marks. This is a textbook for mainstream calculus typically taught over three semesters. SOLUTIONS TO CALCULUS VOLUME 1 BY TOM APOSTOL. Calculus I Chapter 1 and 2 Test Review Key 3. Computer programs and the problems that have in graphically investigating limits. Calculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. We make the study of numbers easy as 1,2,3. Graph and the tangent line. Find the volume of the solid of revolution generated by revolving the region bounded by y = 6, y = 0, x = 0, and x = 4 about: (a) the x–axis (452. Mitterer ch(8,9,3,16,17, 12,14,15) DOC Principles of Finance, 4th Edition Scott Besley, Eugene F. At our command, it solves x3 = 3X. pdf), Text File (. 4 Inverse Functions 1. Textbook solution for Calculus (MindTap Course List) 11th Edition Ron Larson Chapter P. If you're behind a web filter, please make sure that the domains *. Essential Calculus introduces students to basic concepts in the field of calculus. Calculus I Chapter 1 and 2 Test Review Key 3. The volume of a solid body is the amount of "space" it occupies. A precise definition for the definite integral involves partitions and lower as well as upper sums: Definition. See all of the math topics available on IXL! From counting to calculus, addition to algebra, theres always something new to learn. Volumes of Known Cross Sections. Functions and Graphs. Find the Numerical Answer to Equation - powered by WebMath. A precise definition for the definite integral involves partitions and lower as well as upper sums: Definition. Apostol, Calculus, Volume 1 solutions. To help more students prepare for—and succeed on—the AP Calculus AB Exam, we’ve clarified the course’s focus starting with the 2019-20 school year and are introducing new resources for your classroom. We’ve been there before. Calculus is one of the grandest achievements of human thought, explaining everything from planetary orbits to the optimal size of a city to the periodicity of. It is imperative to know exactly what the problem is asking. Calculus Complete Solutions Guide by Edwards, Bruce H. Where can I download a good PDF of Apostol's Calculus Vol 1 and 2? Update Cancel. The answer should be Stack Exchange Network Stack Exchange network consists of 175 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. You can simply help by rating individual sections of the book that you feel were. Lesson 1 1-Minute Summary Lesson 2 X-Ray Vision Lesson 3 3D intuition Lesson 4 Integrals, Derivatives Lesson 5 Computer Notation Lesson 6 Improved Algebra Lesson 7 Linear Changes Lesson 8 Squared Changes Lesson 9 Infinity Lesson 10. Really clear math lessons (pre-algebra, algebra, precalculus), cool math games, online graphing calculators, geometry art, fractals, polyhedra, parents and teachers areas too. Unit 6 videos. Unlike static PDF Calculus Volume 1 1st Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. Hostetler and Bruce H. points (1; 2) and (3;5) has slope (5+2)=(3 1) = 7=2. This set of Linear Algebra Multiple Choice Questions & Answers (MCQs) focuses on “Volume Integrals”. Determine the following limit. These 7 sections of Calculus 1 are divided into several subsections. The Calculus 3 Tutor: Volume 1 -- 10 Hour Course. Calculus Volume-1 - Free ebook download as PDF File (. 5 Surface Area and Areas of Revolution 7. Buy Calculus, Vol. Gerrish Pure Mathematics Volume 1 ( Calculus ) Cambridge University Press 1960 Acrobat 7 Pdf 16. The classic introduction to the fundamentals of calculus Richard Courants classic text Differential and Integral Calculus is an essential text for those preparing for a career in physics or applied math. DIFFERENTIAL CALCULUS PREFACE VII Chapter I. Read 13 reviews from the world's largest community for readers. The latest versions may be found by. 6 Net Change 7. Mitterer ch(8,9,3,16,17, 12,14,15) DOC Principles of Finance, 4th Edition Scott Besley, Eugene F. Prerequisite Skills Diagnostic Test (Part B). 3 Volumes: The Shell Method 7. Math Tutor DVD provides math help online and on DVD in Basic Math, all levels of Algebra, Trig, Calculus, Probability, and Physics. About the book The second editions of all three volumes were published in 2001 and 2002. Integrals can be used to find 2D measures (area) and 1D measures (lengths). Calculus Volume 1 [Gilbert Strang: Massachusetts Institute of Technology, Edwin (Jed) Herman: University of Wisconsin at Stevens Point] on Amazon. I have also researched online and found some solution manuals, volume 1 but it states it too only has odd numbered soln's. Use the value in the preceding exercise to find the equation of the tangent line at point P. Prepare for exams and suc. Instead, you should type it like this: (x^2+1)/(x-5). Calculus textbook solutions and answers from Chegg. Questions About Volume. Practice Problems on Volumes of Solids of Revolution ----- Find the volume of each of the following solids of revolution obtained by rotating the indicated regions. But vowels (or other syllable nuclei) are always prosodically licensed, so if they get deleted it must be by an explicit rule. Chapter Outline 1. Volume by Slicing - Washers and Disks Volume by Cylinder Method. 2 - Multi-Variable Calculus and Linear Algebra with Applications. Volume has units of length cubed (i. Student Solutions Manual, Vol. Here are a set of practice problems for the Calculus I notes. However, before exploring these and other ideas, we must first lay a foundation for the study of calculus in one variable by exploring the concept of a limit. 1 (Classics in Mathematics) by Richard Courant, Fritz John. Rogosin and others published Mittag-Leffler function: properties and applications, In: Handbook of Fractional Calculus with Applications in 8 volumes (ed. Volumes III (12 minutes, SV3 » 49 MB, H. The book guides students through the core concepts of calculus and helps them understand how those concepts apply to their lives and the world around them. Making statements based on opinion; back them up with references or personal experience. 3 Trigonometric Functions 1. Math test activities for students and teachers of all grade levels. Since the difference of logarithms is the logarithm of the quotient, we. The prerequisite is a proof-based course in one-variable calculus. Calculus Volume 1. Math Connects is correlated to the Common Core State Standards! Click the CCSS logo to check out the new CCSS lessons and homework practice pages. The volume of a solid body is the amount of "space" it occupies. Computer programs and the problems that have in graphically investigating limits. Calculus Applications of Definite Integrals Determining the Volume of a Solid of Revolution. In this section, the first of two sections devoted to finding the volume of a solid of revolution, we will look at the method of rings/disks to find the volume of the object we get by rotating a region bounded by two curves (one of which may be the x or y-axis) around a vertical or horizontal axis of rotation. Precalculus droodle review sheet answer key pdf, e integral calculus when you see the words the answer is either , , or does not exist pre calculus 11 workbook 400 pages answers to all exercise questions. Available for Pre-Algebra, Algebra 1, Geometry, Algebra 2, Precalculus, and Calculus. This is the free digital calculus text by David R. c) ln2 x 1 ln2 x 1 ln2 8 Answer. Sketch the region, the solid, and a typical disk or washer. Calculus, Vol. 5 Surface Area and Areas of Revolution 7. Essential Calculus. Find the indicated limit. The region in the first quadrant bounded by the curves y is the volume of the solid so produced? x 2 and x 1 is rotated about the y-axis. Area Under. 4Identify the graphs and periods of the trigonometric functions. Calculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. 2Recognize the triangular and circular definitions of the basic trigonometric functions. Recognize a tangent to a curve at a point as the limit of secant lines. Ordinary Differential Equations. pdf form; less expensive than traditional textbooks. 2 1 0 1 2 p 2 Figure 2. One common application of calculus is calculating the minimum or maximum value of a function. Volume by Parallel Cross Sections. Making statements based on opinion; back them up with references or personal experience. Tom Apostol - Calculus Vol. To nd p 2 on the real line you draw a square of sides 1 and drop the diagonal onto the real line. YES! Now is the time to redefine your true self using Slader’s free Calculus (Volume 1) answers. - Infinite sequences and series (limits and convergence). Search for: Chapter 1 Review Exercises. Integration is treated before differentiation--this is a departure from most modern texts, but it is historically correct, and it is the best way to establish the true connection between the integral and the derivative. In some introductory calculus classes these types of problems are called max/min problems: given a function, what is the maximum or minimum output subject to some constraints. Chapter Outline 1. FREE CALCULUS TEXTBOOKS Introduction to Calculus I and II. Because, as our own textbook puts it so beautifully: "Some students try to learn calculus as if it were simply a collection of new formulas. High quality, downloadable, and printable. I took calculus 1 last semester and got an A. - Application of Integral (Volumes, arc length, area of surface revolution). OpenStax CNX. org, where students, teachers and math enthusiasts can ask and answer any math question. Answers To Fotonovela Leccion 3 PDF Download Free. Each answer shows how to solve a textbook problem, one step at a time. Functions and Graphs. Don't forget to use the magnify/demagnify controls on the y-axis to adjust the scale. First, a double integral is defined as the limit of sums. There are overlapping chapters in each volume to provide some flexibility in scheduling and to minimize the chance that more than one volume will be required for each semester. TI-85 Graphing Calculator. Home; Calculus 1 WebAssign Answers; Calculus 2 Webassign Answers; Calculus 3 Webassign Answers. Volume 1 covers functions, limits, derivatives, and integration. If the cross sections of S perpendicular to the x-axis are squares, which of the following gives the best approximation of the volume of S? (A) 0. OpenStax Calculus Volume 1 Instructor Answer and Solution Guide Answer: 3 6. We explore this question later in this chapter and see that integration is an essential part of determining the answer Calculus Volume 1. Find the volume V of the resulting solid by any method please and thank you. The formula for the area of a trapezoid is A = 1/2h(a + b) where a and b are. Calculus Volume 1: OpenStax. The second time, when I was 30, I took a calculus coarse at a local 4-year college. Directions: Find the range of each set of data. My advise is study harder. Calculus Volume 1. Shed the societal and cultural narratives holding you back and let free step-by-step Calculus (Volume 1) textbook solutions reorient your old paradigms. Cornette, Iowa State University. Volume has units of length cubed (i. Find the value of \(\int_0^1 \int_0^2 \int_1^2 xy^2 z^3 dxdydz. Identify instantaneous velocity as the limit of average velocity over a small time interval. Find the Numerical Answer to Equation - powered by WebMath. Free, open-source, high-quality textbooks for your college course, available online and in print. Answers to the questions on the Diagnostic Test reference specific examples within Appendix A. Real Numbers 1. For many students, this course provides the foundation to a career in mathematics, science, or engineering. The Natural Logarithm (19 minutes, SV3 » 58 MB, H. Franklin Wright, Spencer P. Calculate the average rate of change and explain how it differs from the instantaneous rate of change. 1 for Swokowski's Calculus book. based calculus notebooks and problems. 8 Applications in Science 7. txt) or read book online for free. For instance, we often think of the real numbers as strings of elements of the set. - Application of derivative (Taylor series, extreme value problems, Newton's Method). This course, in combination with Part 1, covers the AP* Calculus AB curriculum. Identify instantaneous velocity as the limit of average velocity over a small time interval. " Actually, this site would correctly put 1/x as the only fraction. Errata for Calculus Volume 1 revisiion C1-2016-316-BW (latest). Each answer shows how to solve a textbook problem, one step at a time. The classic introduction to the fundamentals of calculus Richard Courants classic text Differential and Integral Calculus is an essential text for those preparing for a career in physics or applied math. CALCULUS WORKSHEET ON VOLUME BY CROSS SECTIONS Work the following problems on notebook paper. I use two integrals, finding the answer as the volume of a solid minus the volume of the hole. Motivational applications cover important topics across a variety of disciplines such as biology, business, chemistry, and more. Volume 1 covers functions, limits, derivatives, and integration. Take 1st derivative of the function and put it equal to 0 to find the value of "x". Marsden & A. Bloggat om Calculus, Volume 1 Övrig information TOM M. You must enable JavaScript in order to use this site. But it can also be used to find 3D measures (volume)! Learn all about it here. AP Calculus AB Updates and New Resources for 2019-20 Ap calculus ab 2019 free response answers. The book guides students through the core concepts of calculus and helps them understand how those concepts apply to their. Get math help in algebra, geometry, trig, calculus, or something else. In mathematics, a multiplicative calculus is a system with two multiplicative operators, called a "multiplicative derivative" and a "multiplicative integral", which are inversely related in a manner analogous to the inverse relationship between the derivative and integral in the classical calculus of Newton and Leibniz. Calculus Complete Solutions Guide by Edwards, Bruce H. The uses of the first and second derivative to determine the intervals of increase and decrease of a function, the maximum and minimum points, the interval(s) of concavity and points of inflections are discussed. 3-12 Differential Calculus Rules for Vector Functions 192 3-13 Equation of Tangent and Normal Lines, Angle between Two Curves 196 3-14 Second Derivative, Derivatives of Higher Order 200. Symbolab: equation search and math solver - solves algebra, trigonometry and calculus problems step by step. 4k answer views. The base of S is the region enclosed by the parabola y = 10 - 10x2 and the x-axis. y = 3/x , y = 0 , x =1, = 5 Find the volume V of this solid. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. This page shows a set of three-dimensional solids that have their dimensions labeled, and the student’s task is to compute the volume of each. In manufacturing, it is often desirable to minimize the amount of material used to package a product with a certain volume. Limits and Derivatives 2. 3Write the basic trigonometric identities. The volume of a solid body is the amount of "space" it occupies. edu/~tjp , prior to the. This is too complex a question for us to answer fully right now; however, we can make an approximation. Using TI-85 graphing calculators to graphically demonstrate the epsilon-delta definition of limits. The specific requirements or preferences of your reviewing publisher, classroom teacher, institution or organization should be applied. Franklin Wright, Spencer P. In this exercise, cross section shapes are either squares or rectangles. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. The volume can also be computed for irregularly-shaped and curved solids such as the cylinder and cone. About the book The second editions of all three volumes were published in 2001 and 2002. I´m not autor of it. The latest versions may be found by. 10 Antiderivatives Learning Objectives. Find great prices for Contemporary Calculus First Semester (Volume 1) - Dale Hoffman, Paperback. Now I am taking calculus 2. Calculus Volume 1. The volume ( V) of the solid is. From basic equations to advanced calculus, we explain mathematical concepts and help you ace your next test. There are overlapping chapters in each volume to provide some flexibility in scheduling and to minimize the chance that more than one volume will be required for each semester. 264 » 14 MB) The cylindrical shell method. Don't forget to use the magnify/demagnify controls on the y-axis to adjust the scale. Also, TI-86 Graphing Calculator [Using Flash] TI-85 Graphing Calculator. This is unfortunate. Now available in paperback! An introduction to the calculus, wit. 264 » 19 MB) The natural log function defined as ∫ 1 x 1/t. org, where students, teachers and math enthusiasts can ask and answer any math question. Volume is a calculus topic I've not taught before, and I want my students to do more than memorize the formulas. Buy Calculus, Vol. Because, as our own textbook puts it so beautifully: "Some students try to learn calculus as if it were simply a collection of new formulas. Assignments Exams Download Course Materials; The problem assignments are are based on the course textbook and course notes: Apostol, Tom M. If you are giving the alternate AP Calculus AB or BC Exam for late testing: • You must seat students no less than five feet (approximately 1. THIS IS AN ANSWER KEY TO THE UPDATED COMPLETE COURSE Thank you for your patience over the past year as I have had to redo this collection after a technological disaster. 2910, and so this must be the value that gives the maximum volume and since the maximum volume is all that was asked for in the problem statement the answer is then : \[V\left( {1. 1 for Swokowski's Calculus book. Calculus is the art of splitting patterns apart (X-rays, derivatives) and gluing patterns together (Time-lapses, integrals). Solutions to Apostol Calculus Vol 1 and 2. Calculus II Practice Problems 1: Answers 1. Volume 1 introduces the foundational concepts of function and limit, and offers detailed explanations that illustrate the why as well as the how. Other functions have points at which a break in the graph occurs, but satisfy this property over intervals contained in their domains. The student then answers questions about the solids. The Math Lab is now run by the Learning Center. d3bxy9euw4e147. Unlike static PDF Calculus Volume 1 1st Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. One may also download individual volumes which break up the content into more manageable portions. However, before exploring these and other ideas, we must first lay a foundation for the study of calculus in one variable by exploring the concept of a limit. Calculus Volume 1. Exercises and Problems in Calculus John M. The screen proves a point of logic (or mathematics) that escaped us. The formula for the area of a trapezoid is A = 1/2h(a + b) where a and b are. com useful for school or work? I built this website and developed the material it contains on my own time, and it's entirely free to use. , cm^3, m^3, in^3, etc. Buy, rent or sell. Calculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. If you misread the problem or hurry through it, you have NO chance of solving it correctly. This course, in combination with Part 1, covers the AP* Calculus AB curriculum. Cool Math has free online cool math lessons, cool math games and fun math activities. Buy Calculus, Volume 1 with Answer Key, First Edition on Amazon. d3bxy9euw4e147. For example if you typed x^2+1/x-5, you might think this means "the quantity 'x-squared plus 1' over the quantity 'x minus 5'. we write down in this course will be true for some. This textbook emphasizes connections between between theory and application, making physics concepts interesting and accessible to. Each volume of APEX Calculus is available for free as a PDF. AP Calculus AB 2017 Free Response Question 3. 4k answer views. Calculus Volume 1. One of the best books of the year is a book titled Answers To Fotonovela Leccion 3 PDF Download Free that gives the reader a good inspiration. DIFFERENTIAL AND INTEGRAL CALCULUS, I 1 1. About the book The second editions of all three volumes were published in 2001 and 2002. Multiple Choice 1. The screen proves a point of logic (or mathematics) that escaped us. Find the volume of the smaller region cut from the solid sphere p<= 6 by the plane z=3. (Calculator Permitted) The base of a solid S is the region enclosed by the graph of yx ln , the line xe, and the x-axis. Zeus Industries bought a computer for$2857. Most math. 1 after more than 3 years- without answers I can't imagine how much extra time it would take. Get help and answers to any math problem including algebra, trigonometry, geometry, calculus, trigonometry, fractions, solving expression, simplifying expressions and more. Wileyplus Physics Quiz Answers;. Get help with your calculus homework! Access answers to hundreds of calculus questions that are explained in a way that's easy for you to understand. This textbook emphasizes connections between between theory and application, making physics concepts interesting and accessible to. Shed the societal and cultural narratives holding you back and let free step-by-step Calculus (Volume 1) textbook solutions reorient your old paradigms. resources by topic ai geo aii precalculus calculus. LAB 1 MODELING OLD FAITHFUL'S ERUPTIONS Modeling Data 1 1. Calculator online on how to calculate volume of capsule, cone, conical frustum, cube, cylinder, hemisphere, pyramid, rectangular prism and sphere. You must enable JavaScript in order to use this site. Please click on the link(s) below to download and install the required Java libraries (more). I did have a look at Apostol's Calculus Vol 1 and found that there is no issue with the proof given by Apostol (this is expected because Apostol's books are best of the lot). d3bxy9euw4e147. Take one of our many Calculus 1 practice tests for a run-through of commonly asked questions. 1 Double Integrals 4 This chapter shows how to integrate functions of two or more variables. Complete the table and use the result to estimate the limit. ap® calculus ab. Zeus Industries bought a computer for $2857. Calculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. If taxable income was between$26050 and $134930, then, in addition, 28% was to be paid on the amount between$26050. Comprehensive. Search for: Chapter 1 Review Exercises. Welcome to the website for my new edition of Calculus: Early Transcendentals. Applications of Derivatives. 10 Pre-Calculus Missteps to Avoid. The classic introduction to the fundamentals of calculus Richard Courants classic text Differential and Integral Calculus is an essential text for those preparing for a career in physics or applied math. Hostetler and Bruce H. the volume of the solid obtained by Find the volume of the solid obtained by Find Sketch —Y2 y 2(y — 1) 1/2 dy 112 (1+2 y-l+y—l) — y dy — T y dy ANSWER: R be the reg i On bounded b Let — and y. Find the value of \(\int_0^1 \int_0^2 \int_1^2 xy^2 z^3 dxdydz. Functions 1 Problems § I. Differentiation and Derivatives Find Derivatives of Functions in Calculus. There are 2 volumes of the book. 02 The Fundamental Ideas of the Integral and Differential Calculus 03 Differentiation and Integration of the Elementary Functions 04 Further development of the Differential Calculus 05 Applications 06 Taylor's Theorem and the Approximate Expressions of Functions by Polynomials 07 Numerical Methods 08 Infinite Series and Other Limiting Processes. , cm^3, m^3, in^3, etc. These 7 sections of Calculus 1 are divided into several subsections. Take one of our many Calculus 1 practice tests for a run-through of commonly asked questions. 3Write the basic trigonometric identities.
2019-11-19 23:46:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5230106115341187, "perplexity": 1726.5944982529254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00319.warc.gz"}
https://www.newton.ac.uk/documents/preprints
# Preprints The Newton Institute has its own Preprint Series where the scientific papers of the Institute are freely available. If you are a Cambridge author and you are submitting your work in the next REF then you will need to follow the guidelines on the University of Cambridge Open Access webpages to ensure that your work meets Open Access requirements. In addition, if you are supported by EPSRC or another UK Research Council, all publications published on/after 1st May 2015 need a statement describing how to access the underlying research data. For more information on this please see the University of Cambridge Research Data Management website www.data.cam.ac.uk Newton Institute visitors are encouraged to submit relevant papers to the INI Preprint series. All papers should have been either completed at the Institute or based on work that took place partially or wholly at the Institute. You can submit material to the preprint series after your visit has ended as long as it is based on work during your visit. Please send a PDF copy to preprints@newton.ac.uk, which will be added to the lists below. Bound hard copies of all preprints in the series are displayed in the Institute. We would also be pleased to hear from former visitors who have since completed papers based on research carried out here. Please acknowledge the support of the Institute in your paper using the following text: The author(s) would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme [insert programme name] where work on this paper was undertaken. This work was supported by EPSRC grant no EP/K032208/1. Papers published in the Series are listed by date of submission and may be downloaded as PDFs. Individual hard copies can also be obtained on request from the Institute. Programme Authors Title Attachments UNQ Alex Bespalov; Adam Crowder; Catherine Powell Efficient adaptive multilevel stochastic galerkin approximation using implicit a posteriori error estimation UNQ Alex Bespalov; Dirk Praetorius; Leonardo Rocchi; Michele Ruggeri Goal-oriented error estimation and adaptivity for elliptic PDEs with parametric or uncertain inputs UNQ Mr Marvin Eisenberger; Jonas Latz; Elisabeth Ullmann Fast sampling of parameterised Gaussian random fields UNQ Arbaz Khan; Catherine Powell; David Silvester Robust Preconditioning for Stochastic Galerkin Formulations of Parameter-Dependent Linear Elasticity Equations GFS Stefan Sommer STOCHASTIC METAMORPHOSIS WITH TEMPLATE UNCERTAINTIES VMV Joachim Weickert A Discrete Theory and Effcient Algorithms for Forward-and-Backward Diffusion Filtering RGM B Federici; A Georgakopoulos Hyperbolicity vs. amenability for planar graphs HIF T Inamdar Measures and slaloms PEP A Klein Characterization of the metal-insulator transport transition for the two-particle Anderson model HIF P Schlicht $\Sigma$$_1(\kappa)-Definable subsets of H(\kappa$$^+$) DAN M Maton An observation related to the method of Lemke-Hobson HIF P Schlicht Recognizable sets and Woodin cardinals: computation beyond the constructible universe PEP D Damanik; J Fillman Limit-Periodic continuum Schödinger operators with zero measure Cantor spectrum PEP I Binder; D Damanik Almost periodicity in time of solutions of the KDV equation NPC Christopher Cashen Characterizatoins of morse quasi-geodesics via superlinear divergence and sublinear contraction NPC Christopher Cashen Negative curvature in graphical small cancellation groups OAS Gandalf Lechner Yang-Baxter representations of the infinite symmetric group FOS David Balding Twelve guiding principles and recommendations for dealing with quantitative evidence in criminal law OAS Stefaan Vaes Bernoulli actions of type III_1 and L^2-cohomology OAS Stephen Moore Non-semisimple planar algebras from the representation theory of ${\overline{U}_q}{(\mathfrak{sl}_2)}$ OAS Stefaan Vaes L^2-Betti numbers of rigid C*-tensor categories and discrete quantum groups FOS Geoffrey Stewart Morrison; William Thompson Assessing the Admissibility of a New Generation of Forensic Voice Comparison Testimony FOS Giulio D'Agostini Probability, propensity and probability of propensities (and of probabilities) FOS Geoffrey Stewart Morrison; David Balding; Philip Dawid; ET Al A comment on the PCAST report: Skip the “match”/“non-match” stage FOS Giulio D'Agostini The waves and the sigmas (to say nothing of the 750 GeV mirage) FOS Peter Green Paternity testing and other inference about relationships from DNA mixtures FRB C Bandle; MA Fontelos A nonlocal diffusion problem on manifolds PEP N Filonov; I Kachkovskiy On the structure of band edges of 2D periodic elliptic operators HIF Y Khomskii Almost disjoin refinements and mixing reals HIF T Inamdar The modal logic of inner models PEP TA Suslina Homogenization of nonstationary Schrödinger type equations with periodic coefficients PEP Y-R Lee; G Stolz Ballistic transport for the Schrödinger operator with limit-periodic or quasi-periodic potential in dimension two PEP T Sunada Exponential Riemann sums and "near" -quasicrystals PEP J Griffin On the phase-space distribution of Bloch eigenmodes for periodic point scatterers PEP J Marklof Invariance principle for the periodic Lorentz gas in the Boltzmann-Grad limit RGM B Le Logarithmic coefficients and generalized multifractality of whole-plane SLE PEP V Chulaevsky Complete exponential localization in a discrete multi-particle Anderson model with interaction of infinite range PEP I Kachkovskiy On transport properties of isotropic quasiperiodic $\it XY$ spin chains RGM E Gwynne; X Sun Scaling limits for the critical Fortuin-Kastelyn model on a random planar map II: local estimates and empty reduced word exponent PEP B Helffer Lower bound for the number of critical points of minimal spectral $\it k$-partitions for $\it k$ large.
2018-06-24 06:57:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4802152216434479, "perplexity": 4939.578159916157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866888.20/warc/CC-MAIN-20180624063553-20180624083553-00465.warc.gz"}
https://www.physicsforums.com/threads/how-do-i-find-these-things.69874/
How do I find these things? 1. Apr 3, 2005 PrudensOptimus An SRS of 16 Orange County Schools' juniors had a mean SAT MATH of x = 500; and a Standard Deviation of s=100. We know that the population of SAT Math scores for juniors in the district is approximately normaly distributed. We wish to determine a 90% confidence interval for the mean SAT math score mu for the population of all juniors in the district. Question: 1. For the SAT MATH Data above, find the power of the test against the alternative mu=500 at the 5percent significance level. Assume that Sigma=100. How do I find the alternative? I know power... mu = 500... what is 5% significance level mean?
2017-06-25 00:23:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180814981460571, "perplexity": 1344.4381734921187}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320368.57/warc/CC-MAIN-20170624235551-20170625015551-00001.warc.gz"}
https://electronics.stackexchange.com/questions/374578/arduino-high-side-driver-with-up-to-30v
# Arduino high side driver with up to 30V I am experimenting with the circuit for an Arduino high side driver from Nick Gammon: I need the circuit to work with a voltage range from 22V up to 30V and ideally stay cold. Everything else is fine, the MOSFET handles high currents without problem, but my problem is with the npn-transistor - it just gets extreme hot, especially at 30V. I am using 2N2222. Can the circuit be adjusted in such a way that there will be no need of heat sinks or more powerful elements? EDIT: After the suggestions of @Nick and @Edgar I changed R1 with 1.2K and R2 with 4.7k. Now the circuit is running for ~45 min without a problem, and is much cooler, I can even touch the transistor :) I maybe had to better explain what I meant by 'extreme hot' - after a couple of minutes the thermal shutdown kicked in. That hot. I am also considering switching to MOSFET-only solution, based on Figure 3 site 2 in this application note from Vishay: Setting the divider to 4.7k/15k should give me less than 2mA through the N-MOSFET (according to TINA I made it and it works like a charm!) so this should be even more cooler. It also should be able to handle even higher voltages without the zener, but I suppose it doesn't hurt to leave it as a surge protection. ## migrated from arduino.stackexchange.comMay 16 '18 at 11:33 This question came from our site for developers of open-source hardware and software that is compatible with Arduino. • Is there any reason for using high side switch other than experimenting? Maybe some opto-isolated transistor + 7909 (negative voltage regulator) would be better – KIIV May 16 '18 at 8:32 • As this is a purely electronics question I am going to migrate it to Electronics Stack Exchange. – Nick Gammon May 16 '18 at 11:32 ## 3 Answers The current through the transistor is controlled by R1 so I would suggest increasing that somewhat in this case. Perhaps to 1k or 2k. If D2 conducts then you will have a (roughly) 30V drop over 330Ω which would be 90 mA, and therefore 2.7W of power dissipation, which is more than the transistor is rated for. • Actually I think the above explanation is slightly flawed, however I still think that increasing the value of R1 would be a good way to go. – Nick Gammon May 16 '18 at 8:13 • Increasing R1 will lower the current through both the transistor and R2, potentially preventing Q2 to fully turn on at 22 V. I would increase R2 by the same factor as R1 to be safe. Maybe something like R1 = 1.5 kΩ and R2 = 4.7 kΩ, to stay within the E6 series. – Edgar Bonet May 16 '18 at 8:17 • I can confirm this - with 2k the transistor does not switch and with 1k was still hot. Will try as soon as possible the new suggestion. – Vladimir May 16 '18 at 8:23 • @Edgar is right regarding the fact Q2 could not turn on if R1 is too high. However, increasing R2 has a limit too, because at some point the zener will take over. Actually, this zener stuff isn't really appropriate and will prevent you to size things correctly. What should be done is 1) completely remove the zener 2) put R1 between Q1 and R2 (this way you have a resistor divider with R1 and R2, and the emitter of Q1 is directly to ground) 3) size R1/R2 so that the gate voltage stays within the FET ratings (e.g.: R1 = 3xR2 should be fine), 4) size R1+R2 so that the current is just a few mA. – dim May 16 '18 at 11:46 • To make it more explicit: something like that, with a PFET instead of the Q2 PNP. Simpler, IMO. – dim May 16 '18 at 11:52 Following on suggestions from @dim I propose an alternative schematic: In this case, if Q1 is conducting, then R2 and R3 form a voltage divider: Vout = (30 * 3300) / (1000 + 3300) Vout = 23V Thus Vgs on Q2 is 7 (30 - 23). Current through Q1 (Ic) would be only 7 mA so it wouldn't get hot. Current through the base would be about 0.5 mA which is well within spec for a microcontroller output pin. The NPN transistor Q1 would need to be chosen such that its collector-emitter voltage (Vce) was in range. The 2N3904 has a absolute maximum of 40V, so 30V there (if Q1 was not conducting) would be acceptable. A PN2222 would be marginal (its maximum Vce is 30V) however the PN2222A could be OK (Max Vce of 40V). I need the circuit to work with a voltage range from 22V up to 30V ... The above would be a bit marginal at 22V because the output of the voltage divider would be: Vout = (22 * 3300) / (1000 + 3300) Vout = 17V Thus Vgs on Q2 is 13 (30 - 17). The MOSFET quoted (FQP47P06) should be OK as it has a maximum Vgs of 25V. Q1 acts as a switched current source, supplying either 0 or 4.4V/330R = about 13mA to ...uuuh, R2D2 ... defining the voltage across Q2 gate (assuming Q1 base is connected to 0V or 5V). This 13mA is pretty much independent of your variable 22-30V supply, which is a nice feature of the design. As R2 is 1K and would drop 13V, D2 turns on, limiting Vgs to -10V (passing 10mA through R2 leaving 3mA for D2). This means, at 30V in, Vce in Q1 is 20V-4.4V, call it 16V for a power dissipation of over 200mw; probably tolerable but worth reducing. Obviously increase R1 : but if you increase it to 1kilohm you only have 4.4mA split between R2 and D2. Let's say you're willing to drop the zener current to 2.2mA (which will reduce Vgs slightly; look up the I-V curve for a zener in its datasheet) leaving 2.2mA through R2 : at 10V that means R2=4.545K - call it 4.7K. If you decide you need 3mA in the zener, you can't reliably increase R1 so much without losing safety margins; double it to 680R giving 6.5mA, 3-ish for the zener and 3-ish for R2; making R2 somewhere around 3.3K.
2019-10-19 14:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5011658668518066, "perplexity": 2483.045364270437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00360.warc.gz"}
https://analyticphysics.com/Three-Body%20Problem/How%20Gravitational%20Choreographies%20Are%20Possible.htm
The gravitational three-body problem has some very special particular solutions in its choreographies. These are periodic solutions with the masses evenly spaced on a single orbit. Their existence arises from the highly nonlinear nature of the three-body system. In a certain sense they represent a kind of ‘quantization’ of this classical system. Understanding how they are possible could give insight into how quantum mechanics itself is possible in the context of a fully analytic theory. The equations of motion for the three-body problem are simplest and most symmetric in coordinates that represent the Cartesian differences between the locations of the masses. Defining the difference variables $x12 =x1 -x2 x23 =x2 -x3 x31 =x3 -x1 y12 =y1 -y2 y23 =y2 -y3 y31 =y3 -y1 r122 =x122 +y122 r232 =x232 +y232 r312 =x312 +y312$ the two-dimensional equations of motion in differences are $mx ··12 =−3kx12 r123 +X mx ··23 =−3kx23 r233 +X mx ··31 =−3kx31 r313 +X my ··12 =−3ky12 r123 +Y my ··23 =−3ky23 r233 +Y my ··31 =−3ky31 r313 +Y$ where the additional terms, identical for triples of equations, are $X=k( x12 r123 +x23 r233 +x31 r313 ) Y=k( y12 r123 +y23 r233 +y31 r313 )$ For a choreography these functions are periodic by construction with a period equal to one-third of that of the choreography. The equations of motion can be extended to higher dimensions with the only change being in making the definitions of the radial variables consistent with the spatial dimension used. It is also worth noting that this simple symmetric form of the equations does not hold for the four-body problem or higher numbers of masses. For the three-body system each equation includes all possible terms for interactions among the masses. For the four-body problem there would be terms missing because all interactions do not enter into each equation. For example, an equation for ${x}_{12}$ in the four-body problem would not naturally have a term proportional to ${x}_{34}$ and it would have to be included manually, leading to much more complicated expressions. For the purposes of this presentation, the focus will be restricted to three-body planar choreographies, although some extensions to three dimensions will be indicated when appropriate. Adding triples of equations gives the constraints $x ··12 +x ··23 +x ··31 =0 y ··12 + y··23 + y··31 =0$ which can be translated into constraints on the center of mass $x12+x23 +x31 =0 x·12 +x·23 +x·31 =0 y12+y23 +y31 =0 y·12 +y·23 +y·31 =0$ by suitable choices of the initial conditions. These constraints translate directly into a general constraint on the terms that can appear in a Fourier series for each variable. Since the equations of motion are nonlinear, the period of the resulting solution is far from obvious. If however one scales the spatial and temporal variables simultaneously, one can set the period of the system to any desired value. Apply this to a schematic equation for the x-coordinate: $md2sx d(wt )2 =−3k sx s3 r3 +Xs2 md2x d(w s3/2 t)2 =−3k xr3 +X$ For scaling to the period of circular functions, the ratio w will be 2π divided by the original system period. To keep equations of motion unchanged, spatial variables must be multiplied by $s=w2/3 =(2πT )2/3$ Velocity variables must be multiplied by this factor divided by the ratio w for a multiplicative factor of ${w}^{-1/3}$ . The cumulative effect of scaling on the energy is to divide it by the scaling factor for spatial variables, or a multiplicative factor of ${w}^{-2/3}$ . General expansions for each Cartesian spatial variable with period scaled to that of a standard circular are $x(t) =∑n≥0 ancosnt +∑n≥1 bnsinnt y(t) =∑n≥0 cncosnt +∑n≥1 dnsinnt$ Adding three component circular functions at equally spaced times gives $cos[nt] +cos[n(t +2π3 )] +cos[n(t -2π3 )] ={ 0, 3∤n 3cosnt, 3∣n sin[nt] +sin[n(t +2π3 )] +sin[n(t -2π3 )] ={ 0, 3∤n 3sinnt, 3∣n$ For the constraints on the center of mass to hold, this implies that all Fourier coefficients with indices divisible by three must be identically zero. That immediately determines a third of the expansion coefficients for all three-body choreographies. Since this constraint arises from linear relations, it will continue to apply for higher-dimensional systems such as a three-dimensional choreography. Knowing that Cartesian variables have no coefficients with indices divisible by three, such terms cannot occur on the left-hand sides of equations of motion. Dividing each Cartesian variable by the cube of the radial combination will reintroduce such terms on the right-hand sides of equations of motion. For the equations to hold for Fourier expansions, these terms will be cancelled by the additional functions X and Y on the right-hand sides. That is essentially the purpose of these additional functions in the equations, since they only contain Fourier expansion terms with indices divisible by three. That means that when one wants to determine Fourier coefficients directly from the equations of motion, one can ignore these additional functions and focus on the nontrivial terms arising from the ratio of the Cartesian variables and the radial combination. Unfortunately there does not appear to be a simple way to write down the Fourier series for this ratio given the underlying expansions of Cartesian variables. One can though assume values for the expansion coefficients, numerically evaluate the expansion coefficients for the ratio and compare them to the second derivatives on the left-hand side. With this presumably clear notation for extracting Fourier coefficients $Cn [x(t)] =an Sn [x(t)] =bn$ the nontrivial expansion coefficients are thus determined by this infinite set of interlinked equations: $mn2an -3kCn [ xr3 ] =0 mn2bn -3kSn [ xr3 ] =0 mn2cn -3kCn [ yr3 ] =0 mn2dn -3kSn [ yr3 ] =0$ A three-body choreography is only possible when this set of equations has a consistent solution that produces a simultaneous zero for all of them. There is a trivial solution with all coefficients zero for either Cartesian variable that should be avoided. For given values of the expansion coefficients, the left-hand sides of these equations can be plotted graphically side by side for a range in coefficient values around the initial point. The possibility of a choreography will be indicated when all of the resulting graphs display a zero simultaneously. From each individual graph it will be clear in which direction the coefficient needs to be altered to move towards a choreography. As a example, consider the figure eight in difference variables, also called dual variables on this site. It is known from Fourier analysis that the x-coordinate can be described using sine functions with odd indices and the y-coordinate using cosine functions with even indices. Consider the first three nontrivial terms of each expansion: $x(t) =b1sint +b5sin5t +b7sin7t y(t) =c2cos2t +c4cos4t +c8cos8t$ With a mass of one third and a coupling constant of one ninth, and taking all coefficients to be positive rather than negative, the side-by-side plots of the interlinked equations around an initial point look like this: Adjusting the initial point, beginning with the largest coefficients first, one can literally see the approach to the choreography. A rough initial point can then be used for a numerical by clicking the button. This takes some time to evaluate and results will appear below: These results are only good to a few significant digits, but this is to be expected for such a simple approximate expansion. One could repeat these interactive graphics for other known choreographies but unfortunately they quickly become much more complicated, requiring six or more terms in each expansion for decently accurate results. Presenting an approach to these special solutions with twelve or more simultaneous plots becomes cumbersome and thus less clear. Another way to see the approach to a choreography is by assuming values for the expansion coefficients and plotting the total energy and angular momentum of the system over one complete orbit. With energy in green, the same level of approximation for the figure eight gives Both values become more constant as the choreography is approached. It is quite interesting to note that the higher-order expansion terms are critical for producing constant lines in this graphic. This makes this visualization less useful in the attempt to search for choreographies, but instructive all the same. Given the difficulty in explicitly expanding the algebraic inverse of an expansion representing the radial variable or a power of it, one might think to try an analytic solution using the angular momentum alone since it is merely the product of two individual expansions for Cartesian coordinates and their derivatives. This unfortunately fails because of the property of choreographies noted above: adding three component circular functions at equally spaced times contains only terms with the index divisible by three. This does not provide enough equations to determine the free coefficient values. Schade! Uploaded 2019.03.10 — Updated 2019.03.12 analyticphysics.com
2020-04-03 21:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7755917310714722, "perplexity": 338.7008916370917}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518622.65/warc/CC-MAIN-20200403190006-20200403220006-00202.warc.gz"}
https://astronomy.stackexchange.com/questions/38889/how-is-it-possible-that-saturns-gravitational-acceleration-felt-by-mimas-is-str/38900
# How is it possible that Saturn's gravitational acceleration felt by Mimas is stronger than Mimas' own surface gravity? The surface gravity on Mimas is $$≈ 0.063\text{ m}/\text{s}^2$$ and Saturn's gravitational acceleration at the distance of Mimas' orbit is: $$\frac{{GM}}{{r}^2} = \frac{{6.674 \times 10^{-11} \times 568.34 \times 10^{24}}}{{(185.52 \times 10^{6})}^2} ≈ 1.102 \text{ m}/\text{s}^2$$ How can this be? An object on Mimas' surface would be much more attracted to Saturn than it is to Mimas. Shouldn't Mimas itself get ripped apart or is my math wrong? • The earths gravitational acceleration "felt" by astronauts is immensely more than the space station's own "surface gravity". Sep 14 '20 at 18:14 • – uhoh Sep 14 '20 at 23:16 • @MichaelHardy yes, but that's not really a fair comparison. The ISS isn't bound by gravity but structural trusses, but planetary bodies generally do require gravity. Specifically, if Mimas were within Saturn's Roche limit, like the ISS is withing Earth's, then it would get ripped apart. Sep 15 '20 at 19:41 An object on Mimas' surface would be much more attracted to Saturn than it is to Mimas. You are missing that Mimas as a whole accelerates gravitationally toward Saturn. What this means is that a point on the surface of the Mimas will feel the acceleration at that point toward Saturn minus the acceleration of Mimas as a whole toward Saturn. This is the tidal acceleration. It is equal to $$a_\text{tidal} = \left|\frac{GM}{(R\pm r)^2}-\frac{GM}{r^2}\right| \approx 2 \frac{GMr}{R^3} = 2\frac{GM}{R^2}\frac{r}{R}$$ where $$R$$ is Mimas' semi major axis length and $$r$$ is Mimas' mean radius. The approximation assumes that $$r\ll R$$, which most certainly is the case given that Mimas's radius is about 1/1000 of the semi-major axis length of its orbit about Saturn. The result is rather small, about 0.002355 m/s2. • "What this means is that a point on the surface of the Mimas will feel the acceleration at that point toward Saturn less the acceleration of Mimas as a whole toward Saturn." I really don't understand this sentence. Are you using "less" to mean "minus"? Sep 15 '20 at 0:44 • That's how I read it. It is a legitimate, though slightly uncommon, use of the word "less". Sep 15 '20 at 2:11 • @DavidZ - I'm not thinking lesser of you for calling that usage "slightly uncommon" (pun intended; no slight intended; I couldn't resist). I will change my answer to use minus rather than less as I have learned that critiques against my writing are almost always valid. Sep 15 '20 at 6:06 • @Acccumulation - What I wrote was perfectly valid. "Less" when used as a preposition is a synonym for "minus". Nonetheless I did edit my answer to replace my use of "less" with "minus". Sep 15 '20 at 6:08 • @user177107 - When Saturn is directly overhead or underfoot, yes. (A better value is 0.06135 m/s^2 based on 0.06370 m/s^2). When Saturn is on the horizon, the tidal acceleration is halved in magnitude and directed toward the center of Mimas, resulting in an acceleration of about 0.06488 m/s^2. Sep 15 '20 at 19:27 Since Mimas is in orbit around Saturn, it is in freefall; just as an astronaut in a space station appears to not experience the Earth's gravity because that gravity is acting equally on the space station and the astronaut, the outside of Mimas will appear to not experience Saturn's gravity, as the center is also experience Saturn's gravity and thus they are moving together. The only effect Saturn will have on the integrity of Mimas is Saturn's tidal force. Also, for the tidal force to rip apart a satellite, it has to overcome not only the satellite's gravity, but also any intermolecular forces. For instance, for a space station to be ripped apart by Earth's gravity, the tidal forces would have to overcome the tensile strength of whatever the station is made out of. • For celestial bodies the tensile strength can be considered zero (and it will disintegrate when the orbital radius becomes smaller than the Roche limit). For space stations it is exactly the opposite: its gravitation can be assumed zero. Sep 15 '20 at 9:09 How is it possible that Saturn's gravitational acceleration felt by Mimas is stronger than Mimas' own surface gravity? That's just the way it is. An apple hanging from a tree is more strongly attracted to the Earth than to the tree. A worm crawling on it is more attracted to the Earth than the apple. Yet they retain some forces keeping them from falling to the ground. Since Mimas and any object on it's surface are orbiting Saturn and in free fall, Saturn's force of gravity mainly curves their paths and they don't get tugged straight down to Saturn. The gravity of Mimas itself is enough to keep things from flying off it's surface. There are also cohesive forces tending to keep it together. How can this be? An object on Mimas' surface would be much more attracted to Saturn than it is to Mimas. Shouldn't Mimas itself get ripped apart or is my math wrong? You don't go flying off Mimas because Mimas is being affected by Saturn's gravity as well, and Mimas exerts a large enough pull to keep you in place. And since you would be travelling in orbit with Mimas you would both be experiencing Saturns's gravity. What would make Mimas tend to break apart is tidal forces, except it's dense enough and far enough away from Saturn to avoid that fate. There's a calculation to tell you if an object in orbit will break apart called the Roche Limit. Part of the calculation is the ratio of the density of the primary to the density of the secondary, and Saturn's low density helps keep it small in this case. Calculating it myself I get 61,826 kilometers for a rigid body. That fits well with what this page says given that the density of Mimas is about 2/3 higher than that of Saturn. So Mimas orbits about 3 times the Roche limit and will not disintegrate due to Saturn's gravity. Even for the other extreme of a fluid body, the Roche Limit is just under twice that for a rigid body so Mimas still wouldn't come apart. Using your calculation for gravity and plugging in an extra 414km for the diameter of Mimas shows that the difference in Saturn's gravity on the near side to Saturn and the far side to Saturn is just 0.005 m/s^2, less than 1/12th the surface gravity of Mimas (0.063 m/s^2) Some thought experiments: If you were on Mimas and it suddenly disappeared, leaving you in space, you wouldn't be sucked to Saturn. You would continue in basically the same orbit as Mimas would have. You are going fast with respect to Saturn's surface, and Saturn's gravity is just enough to curve your path to maintain your orbit so you don't go flying off into space, and so you don't crash into Saturn. If you could somehow stop Mimas (and you) in it's tracks with respect to Saturn, Mimas and you would still be in free fall, but would both be pulled towards Saturn. The gravity of Mimas would still tend to pull you towards its center as well so you wouldn't fly off the surface. If you could somehow stop Mimas, create an adamantite shell around it to keep it's form, and suspend it at a point above Saturn the same distance as it's orbit, you would fly off the surface towards Saturn if you were on the Saturn side. This is because you are preventing Mimas from falling with you. You would accelerate at roughly (1.102 - 0.063) m/s^2 because Saturn is pulling you down and Mimas is pulling you up. If you could somehow stop Mimas and create an adamantite platform under it at the same distance from Saturn, it should collapse and form huge pile of ice on the platform. The Roche Limit works for orbiting bodies. • Sorry but I feel this is too many impossible hypothetical scenarios in prose only, and it invokes purely fictional materials. It also does not even clearly address the question as asked. I usually don't down vote but in this case I must. Firmly. – uhoh Sep 15 '20 at 13:33
2021-11-30 11:58:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5982027649879456, "perplexity": 1089.0064256794144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00279.warc.gz"}
http://typo3master.com/standard-error/guide-standard-error-measurement-calculator.php
Home > Standard Error > Standard Error Measurement Calculator # Standard Error Measurement Calculator As the SDo gets Reliability Analysis - Süre: 5:18. The manual calculation can be you to... Estimate the sample standard deviationclean it up slightly & display the formula with $\LaTeX$.Bu videoyu bir oynatmacomment, you might be interested in this recent article: Tighe et al. The relationship between these statistics aşağıdan değiştirebilirsiniz. Daha fazla göster Dil: Türkçe İçerik konumu: Error Clicking Here & Validity - Süre: 8:19. Measurement Sb1 Calculator the request again. Error up to date? Hakkında Basın Telif hakkı İçerik Oluşturucular Reklam Verme Geliştiriciler +YouTube Şartlar the estimation of the standard deviation. To understand this, first we need to the change in mean with different experiments conducted each time. The system returned: (22) Invalid argument The Calculator secure is a fingerprint sensor versus a standard password? Standard Error of Measurement using Excel - Süre: 10:49. Cody Lewis Chemistry 9.578 görüntüleme 8:46 Standard understand how to calculate standard error using above formulas. 1. I am using the formula : $$\text{SEM}\% =\left(\text{SD}\times\sqrt{1-R_1} \times 1/\text{mean}\right) × 100$$ where SD Mean And Standard Error Calculator This is not a practical way ofYükleniyor...error of estimate (SEE) - Süre: 8:57. DrKKHewitt 17.365 görüntüleme 4:31 4280 - DrKKHewitt 17.365 görüntüleme 4:31 4280 - http://stats.stackexchange.com/questions/9312/how-to-compute-the-standard-error-of-measurement-sem-from-a-reliability-estima 00:12:25 GMT by s_hp94 (squid/3.5.20) Oturum açrumour that Santa isn't real?In the last row the reliability is Why does Davy Jones not. . .Reklam Otomatik oynat Otomatik oynatma etkinleştirildiğinde, Standard Error Of The Estimate Calculator encountered while trying to retrieve the URL: http://0.0.0.9/ Connection to 0.0.0.9 failed.Yükleniyor... of the Mean. As the r gets- Copyright © 2008-2016.Using the formula: {SEM = So x Sqroot(1-r)} where So is the Observed StandardError of Measurement and Confidence Intervals - Süre: 9:32.error of the mean - Süre: 4:31. page Make text field readonly Binary to decimal converter How viewing YouTube in Turkish.ADDITIONAL INFO Links About FAQ Terms Privacyiçin oturum açın. The SEM is an estimate of how http://ncalculators.com/statistics/standard-error-calculator.htm number of students, you would have the average amount of error in the test.Please tryYükleniyor... All şu anda kullanılamıyor. this preference below.Uygunsuz içeriği bildirmekUnderstanding Standard Error - Süre: 5:01.In the first row there is a Daha fazla göster Daha az göster Yükleniyor... Measurement you wherever you go.The standard error of measurement is a more appropriate measure of quality you steal from many it's research." Don't steal, do research. . Bu videoyu Daha Sonra İzle oynatma listesine Standard Error Calculator Excel standard error of the mean is the standard error of the estimate.For example, the median of data set 1,2,3,4,5 is the middle value konuşma metni yüklenemedi. Flyingforearm 1.671 görüntüleme 5:54 2-3 try here Thus if the effect of random changes are significant, view publisher site Bozeman Science 185.065 görüntüleme 7:05 Module 10: Standard Standard To Handle Social Anxiety Social Anxiety Course Handling Break-ups Separation Course Struggling With Arachnophobia?Want to stay Measurement us! No problem, save it as a braces What are the downsides to multi-classing? Tess Standard Error Of Proportion Calculator administrator is webmaster.prefer to take several measurements and take a mean each time.I will show you amount of consistency in the test. more variation there is in the scores.smaller the SEM gets larger.Maths Buddy 403 görüntüleme 8:18or hold the whole party?New Year?" Help my maniacal wife decorate our christmas tree Idiomatic Expression that basically says Sn are samples. µ is read this post here Standard Error Of The Mean In SPSS - Süre: 2:35.SHARE Tweet ADDITIONAL INFO .If you subtract the r from 1.00, Çalışıyor... Why is bench pressing your Standard Error Of Estimate Calculator Regression a sampling distribution. I took the liberty of editing your post toestimating the amount of error in the test. larger the SEM gets larger. comment| up vote 1 down vote There are 3 ways to calculate SEM. Bu özellik Sıradaki Standard Error of Measurementthe standard deviation - Süre: 3:26. Error Unfortunately, the only score we 95 Ci Calculator 2:35 Measurement and Error.mp4 - Süre: 15:00. Standard How2stats 14.974 görüntüleme 6:24St. Smaller SD value means samples The standard error of the mean now refers tothe error score would be 6. The below step by step procedures help users to Standard Error Of Estimate Calculator Ti-84 on psychology, science, and experiments.Statisticsfun 122.221 görüntüleme 3:41 We could be 68% sure that the students and the result is a higher SEM at 1.18. Share|improve this answer answered Apr 8 '11 at 20:40 chl♦ 38k6127244 add a Measurement a "lower bound for reliability" since this might have confused you. Or, if the student took the test 100 times, 64from Explorable.com: https://explorable.com/standard-error-of-the-mean . Example: Consider a set of data 1,3,5,7 Step oy verilebilir. Bernstmj 71.401 görüntüleme 5:18 by √n is the SE of the sample. Up vote 3 down vote favorite 1 to have SEM agreement or SEM consistency. Take it with Kapat önerilen bir video otomatik olarak oynatılır. It's unfortunate that we also talk of Cronbach's alpha as very low and the SEM is larger. It can also be referred as can be seen at the right. The SEM can be added and subtracted to a students test and therefore .12 amount of inconsistency or error. Who is spreading the for the given data. 3. Follow the population, the variation is called as standard error (SE).
2018-12-19 16:30:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6690804958343506, "perplexity": 8790.670485908642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832559.95/warc/CC-MAIN-20181219151124-20181219173124-00134.warc.gz"}
https://proofwiki.org/wiki/Definition:Model_of_Logical_Formula
# Definition:Model (Logic)/Logical Formula ## Definition Let $\mathscr M$ be a formal semantics for a logical language $\mathcal L$. Let $\mathcal M$ be a structure of $\mathscr M$. Let $\phi$ be a logical formula of $\mathcal L$. Then $\mathcal M$ is a model of $\phi$ iff: $\mathcal M \models_{\mathscr M} \phi$ that is, if $\phi$ is valid in $\mathcal M$.
2019-12-11 20:41:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883332848548889, "perplexity": 280.03657823417063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540532624.3/warc/CC-MAIN-20191211184309-20191211212309-00139.warc.gz"}
https://studydaddy.com/question/technetium-99m-the-isotope-notation-is-98-tc-43-is-the-atomic-weight-98-amu
Waiting for answer This question has not been answered yet. You can hire a professional tutor to get the answer. QUESTION # Technetium-99m the isotope notation is 98 TC 43. Is the atomic weight 98 amu? The isotope notation for technetium-99m is ""_43^(99"m")"Tc", and its atomic mass is 98.906 u or approximately 99 u. The amu is equal to the number of protons plus the number of neutrons (mass number) because 1 proton = 1 amu and 1 neutron = 1 amu.
2019-05-20 11:33:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5217217206954956, "perplexity": 1970.0915698424087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00126.warc.gz"}
https://gomathanswerkey.com/texas-go-math-grade-4-lesson-9-5-answer-key/
# Texas Go Math Grade 4 Lesson 9.5 Answer Key Division and the Distributive Property Refer to our Texas Go Math Grade 4 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 4 Lesson 9.5 Answer Key Division and the Distributive Property. ## Texas Go Math Grade 4 Lesson 9.5 Answer Key Division and the Distributive Property Essential Question How can you use the Distributive Property to find quotients? The Distributive Property of division says that dividing a sum by number is the same as dividing each addend by the number and then adding the quotients. We can use distributive property to find quotients by breaking the Dividend into two addends which can be quotients. Explanation: For Example; Outline a rectangle on a grid to model 69 ÷ 3. Shade columns of 3 until you have 69 squares. Investigate Materials • color pencils • grid paper You can use the Distributive Property to break apart numbers to make them easier to divide. The Distributive Property of division says that dividing a sum by a number is the same as dividing each addend by the number and then adding the quotients. A. Outline a rectangle on a grid to model 69 ÷ 3. Shade columns of 3 until you have 69 squares. Explanation: Outline a rectangle on a grid to model 69 ÷ 3. Shade columns of 3 until you have 69 squares. How many groups of 3 can you make? ____________ 20 groups of 3 Explanation: B. Think of 69 as 60 + 9. Break apart the model into two rectangles to show (60 + 9) ÷ 3. Label and shade the smaller rectangles. Use two different colors. Explanation: Break apart the model into two rectangles to show (60 + 9) ÷ 3. Label and shade the smaller rectangles. Use two different colors to identify. C. Each rectangle models a division. 69 ÷ 3 = (____ ÷ 3) + (________ ÷ 3) = _________ + ___________ = ___________ 69 ÷ 3 = (60 ÷ 3) + (9 ÷ 3) = 20 + 3 = 23 Explanation: Break apart the model into two to show (60 + 9) ÷ 3, (9 ÷ 3) = 20 + 3 = 23 D. Outline another model to show 68 ÷ 4. How many groups of 4 can you make? ___________ 17 groups of 4 Explanation: Break apart the model into two to show 68 ÷ 4 = (60 ÷ 4) + (8 ÷ 4) = 15 + 2 = 17 E. Think of 68 as 40 + 28. Break apart the model, label, and shade to show two divisions. 68 ÷ 4 = (__________ ÷ 4) + (_________ ÷ 4) = _________ + ___________ = __________ 68 ÷ 4 = (60 ÷ 4) + (8 ÷ 4) = 15 + 2 = 17 Explanation: Break apart the model into two to show 68 ÷ 4 = (60 ÷ 4) + (8 ÷ 4) = 15 + 2 = 17 Make Connections You can also model 68 ÷ 4 using base-ten blocks. STEP 1 Model 68. 68 = ________ + ________ 68 = 40 + 28 Explanation: Base ten is a counting system that uses ten digits to write down all numbers. If we want to write any number, we have ten digits we can use: 0 1 2 3 4 5 6 7 8 9. If you notice, there is no one digit for the number ten- we need to use two digits, the 1 and the 0. We can only count to nine without the need for two digits. Therefore, we use base ten in order to make bigger numbers. STEP 2 Divide the longs into 4 equal groups. 4 longs divide into 4 equal groups with 2 longs left. Regroup 2 longs as 20 small cubes. Divide them evenly among the 4 groups. 60 ÷ 4 = ________ Explanation: Divide the longs into 4 equal groups. 4 longs divide into 4 equal groups with 2 longs left. Regroup 2 longs as 20 small cubes. Divide them evenly among the 4 groups. We get, 15. STEP 3 Divide the 8 small cubes into the 4 equal groups. 8 ÷ 4 = __________ So, 68 ÷ 4 = (60 ÷ 4) + (8 ÷ 4) = _________ + _________ = _________ So, 68 ÷ 4 = (60 ÷ 4) + (8 ÷ 4) = 15 + 2 = 17 Explanation: Divide the longs into 4 equal groups. 4 longs divide into 4 equal groups with 7 left in each group. Divide the 8 small cubes into the 4 equal groups. Math Talk Mathematical Processes Describe another way you could use the Distributive Property to solve 68 ÷ 4. Share and Show Model the division on the grid. Question 1. 26 ÷ 2 = (______ ÷ 2) + (______ ÷ 2) = ______ + ______ = ______ 26 ÷ 2 = (20 ÷ 2) + (6 ÷ 2) = 10 + 3 = 13 Explanation: Break apart the model into two rectangles to show 26 ÷ 2 = (20 ÷ 2) + (6 ÷ 2) = 10 + 3 = 13 Label and shade the smaller rectangles. Use two different colors to identify. Question 2. 45 ÷ 3 = (______ ÷ 3) = ______ + ______ = ______ 45 ÷ 3 = (30 ÷ 3) + (15 ÷ 3) = 10 + 5 = 15 Explanation: Break apart the model into two rectangles to show 45 ÷ 3 = (30 ÷ 3) + (15 ÷ 3) = 10 + 5 = 15 Label and shade the smaller rectangles. Use two different colors to identify. Question 3. H.O.T. Evaluate To find the quotient 91 ÷ 7, would you break up the dividend into 90 .+ 1 or 70 + 21? Explain. Problem Solving. 91 ÷ 7 = (70 ÷ 7) + (21 ÷ 7) = 10 + 3 = 13 H.O.T. Multi-Step Pose a Problem Question 4. Christelle went to a gift shop. The shop sells candles in a variety of sizes and colors. The picture shows a display of candles. Write a problem that can he solved using the picture. Explanation: Christelle went to gift shop consists of 72 candles. The picture shows a display of 72 candles in 6 rows Number of equal rows placed 72 ÷ 6 = 12 Question 5. H.O.T. Describe how you could change the problem by changing the number of rows of candles. Then solve the problem. 72÷4 = 18 Explanation: Christelle went to gift shop consists of 72 candles. The picture shows a display of 72 candles in 4 rows Number of equal rows placed 72 ÷ 4 = 18 Question 6. Apply During the day on Mercury, the temperature can reach 816°F, which is 6 times warmer than the highest temperature found on Earth. What temperature is the highest found on Earth? (A) 129°F (B) 132°F (C) 136°F (D) 137°F Option (C) Explanation: During the day on Mercury, the temperature can reach 816°F, which is 6 times warmer than the highest temperature found on Earth. The highest temperature found on Earth 816 ÷ 6 =136 Question 7. The chorus has 72 singers. The singers practice in 3 groups of equal size. Which expression is same as 72 ÷ 3? (A) (42 ÷ 3) + (30 ÷ 3) (B) (45 ÷ 3) + (25 ÷ 3) (C) (40 ÷ 3) ÷ (30 ÷ 3) (D) (57 ÷ 3) + (18 – 3) Option(A) Explanation: The chorus has 72 singers. The singers practice in 3 groups of equal size. The possible way to break apart the model 72 ÷ 3 = (42 ÷ 3) + (30 ÷ 3) Question 8. Multi-Step Analyze Terrance needs $150 to buy a bike. He has$36 saved. If he earns an equal amount over each of the next 3 weeks, how much must he earn each week to save enough for his bike? (A) $50 (B)$40 (C) $38 (D)$62 Option(C) Explanation: Terrance needs $150 to buy a bike. He has$36 saved. 150 – 36 = 114 If he earns an equal amount over each of the next 3 weeks 114 ÷ 3 = $38 each week TEXAS Test Prep Question 9. Max had 200 baseball cards. He gave 14 of them to his younger brother. Max wants to arrange his remaining cards in equal rows of 6 cards each. How many rows of cards will Max have? (A) in 35 rows (B) in 31 rows (C) in 33 rows (D) in 36 rows Answer: Explanation: Max had 200 baseball cards. He gave 14 of them to his younger brother. 200 – 14 = 186 Max wants to arrange his remaining cards in equal rows of 6 cards each. 186 ÷ 6 = 31 rows. Texas Go Math Grade 4 Lesson 9.5 Homework and Practice Answer Key Model the division on the grid. Question 1. 48 ÷ 4 = (__________ ÷ 4) + (_________ ÷ 4) = _________ + _________ = _________ Answer: 48 ÷ 4 = (40 ÷ 4) + (8 ÷ 4) = 10 + 2 = 12 Explanation: Break apart the model into two rectangles to show 48 ÷ 4 = (40 ÷ 4) + (8 ÷ 4) = 10 + 2 = 12 Label and shade the smaller rectangles. Use two different colors to identify the groups. Question 2. 36 ÷ 3 = (__________ ÷ 3) + (__________ ÷ 3) = ___________ + ___________ = __________ Answer: 36 ÷ 3 = (30 ÷ 3) + (6 ÷ 3) = 10 + 2 = 12 Explanation: Break apart the model into two rectangles to show 36 ÷ 3 = (30 ÷ 3) + (6 ÷ 3) = 10 + 2 = 12 Label and shade the smaller rectangles. Use two different colors to identify thee groups. Question 3. 28 ÷ 2 = (________ ÷ 2) + (________ ÷ 2) = __________ + _________ = ___________ Answer: 28 ÷ 2 = (20 ÷ 2) + (8 ÷ 2) = 10 + 4 = 14 Explanation: Break apart the model into two rectangles to show 28 ÷ 2 = (20 ÷ 2) + (8 ÷ 2) = 10 + 4 = 14 Label and shade the smaller rectangles. Use two different colors to identify thee groups Question 4. 48 ÷ 3 = (_______ ÷ 3) + (_______ ÷ 3) = _________ + ________ = _________ Answer: 48 ÷ 3 = (30 ÷ 3) + (18 ÷ 3) = 10 + 6 = 16 Explanation: Break apart the model into two rectangles to show 48 ÷ 3 = (30 ÷ 3) + (18 ÷ 3) = 10 + 6 = 16 Label and shade the smaller rectangles. Use two different colors to identify thee groups. Problem Solving Question 5. There are 69 jobs for workers at the amusement park. There are 3 workers for each ride. Flow many rides are there? Answer: 23 riders. Explanation: There are 69 jobs for workers at the amusement park. There are 3 workers for each ride. Number of rides 69÷3 = 23 Question 6. The music club needs to sell 856 raffle tickets in order to buy new music stands. If each of the 8 members in the club wants to sell the same number of tickets, how many tickets does each member need to sell? Answer: 107 tickets. Explanation: The music club needs to sell 856 raffle tickets in order to buy new music stands. If each of the 8 members in the club wants to sell the same number of tickets, Number of tickets each member need to sell 856÷8=107 Lesson Check Fill in the bubble completely to show your answer. Question 7. Mr. Dominguez divided 68 students into 4 groups of equal size. Which of the following correctly uses the Distributive Property to find the number of students in each group? (A) (30 ÷ 4) + (28 ÷ 4) (B) (60 ÷ 4) + (10 ÷ 4) (C) (3o ÷ 4) + (3o ÷ 4) (D) (32 ÷ 4) + (36 ÷ 4) Answer: Option(D) Explanation: Mr. Dominguez divided 68 students into 4 groups of equal size. (32 ÷ 4) + (36 ÷ 4) 8 + 9 = 17 The number of students in each group 17 Question 8. The store clerk divided 168 shirts equally onto 4 different display tables. How many shirts did the clerk place on each table? (A) 32 (B) 44 (C) 42 (D) 31 Answer: Option(C) Explanation: The store clerk divided 168 shirts equally onto 4 different display tables. Number of shirts the clerk place on each table 168÷4 = 42 Question 9. On his vacation, Walter drove 780 miles. If he drove an equal number of miles each day for 6 days, how mans’ miles did he drive each day? (A) 130 miles (B) 140 miles (C) 120 miles (D) 150 miles Answer: Option(A) Explanation: Walter drove 780 miles. If he drove an equal number of miles each day for 6 days, Total miles he drive each day 780 ÷ 6 = 130 Question 10. A bait shop placed 128 worms into cups to sell. If there were 8 worms in each cup, how many cups of worms were there? (A) 24 (B) 16 (C) 12 (D) 18 Answer: Option(B) Explanation: A bait shop placed 128 worms into cups to sell. If there were 8 worms in each cup, Total cups of worms 128 ÷ 8 = 16 Question 11. Multi-Step Justin earned$50 mowing yards and $34 washing cars. He wants to divide his money into 3 equal accounts. How much will he put in each account? (A)$14 (B) $18 (C)$24 (D) $28 Answer: Option(D) Explanation: Justin earned$50 mowing yards and $34 washing cars. Total earned 50 + 34 =$84 He wants to divide his money into 3 equal accounts Total amount he put in each account 84 ÷ 3 = $28 Question 12. Multi-Step Kristen needs$325 to buy a plane ticket. She has saved $277. If she saves an equal amount each week over the next 4 weeks, how much must she save each week to have enough for the ticket? (A)$48 (B) $12 (C)$44 (D) $14 Answer: Option(B) Explanation: Kristen needs$325 to buy a plane ticket. She has saved $277.$325 – $277 =$48 If she saves an equal amount each week over the next 4 weeks, 48÷4 = \$12 Scroll to Top
2022-12-05 17:51:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5709909200668335, "perplexity": 3276.6643689963685}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00268.warc.gz"}
http://ctan.unsw.edu.au/macros/latex2e/contrib/pm-isomath/
Index of /macros/latex2e/contrib/pm-isomath Name Last modified Size Description Parent Directory 07-Dec-2019 05:17 - manifest.txt 12-Jan-2018 10:50 1k pm-isomath.dtx 12-Jan-2018 10:46 48k pm-isomath.pdf 12-Jan-2018 10:37 579k The PM-ISOmath package, version 1.0.04 of 2018 Original author: Claudio Beccari, 2017 LaTeX Project Public Licence LPPL v.1.3c (or later) The PM-ISOmath name stands for "Poor Man ISO Math". In substance this package is a poor man solution to the task of typesetting math fulfilling the ISO regulations "for physical sciences and technology" (formerly regulations ISO 31/XI, now ISO\,80000). These regulations refer mostly to the family, series and shape of fonts to be used with symbols of various nature. This package gets inspiration from the ISOmath package by Günter Milde, but tries to get the same results without using any math [font] groups (or families). As pdfLaTeX users may recall, this typesetting program may see at maximum 16 math [font] groups (or math font families); sometimes this number results in an error that forbids the user to use the symbols s/he needs. The trick used in this package consists in employing text fonts within the \text command (defined by the amsmath package that, therefore, is a dependence to which pm-isomath is subjected) and chose text font families, series, and shapes to be used within that command argument. The commands are such as to fulfil some math requirements; for example while in the scope of the \boldmath declaration, the series is automatically set to bold without any user intervention. The font size is automatically taken care by \text, so that fonts have the correct size also while typesetting exponents or subscripts. Nevertheless, through proper advanced command options, the user remains the person principally responsible of using the right font for the right symbol in a document that must fulfil the ISO regulations. This package is usable only with pdfLaTeX; LuaLaTeX and XeLaTeX can access OpenType math fonts through the package unicode-math, and with the "math-style=ISO" option they have the math switching commands agree with the ISO regulations. pdfLATeX users have available some packages to fulfil the ISO requirements; principally the ISOmath package that is subject to a number of limitations due the the particular math environment of the user, and libertinust1math that produces a complete set-up with math fonts that match very well text fonts that are darker than the standard default Computer Modern ones (including the CM-super and the Latin Modern ones). This package works very well with the Latin Modern fonts; in practice in math mode it uses the same Latin text fonts, and the corresponding families, series, and shapes of the LGR encoded CBfonts; it may work also with the CM and the CM-super fonts, but the original author never uses them, therefore he cannot guarantee any suitable result. For installation of this package, simply run the pm-isomath.dtx through pdfLaTeX (and only pdfLaTeX); move the produced sty file to the .../tex/latex/pm-isomath/ folder; if it does not exist, create it; similarly move pm-isomath.dtx to .../source/latex/pm-isomath/ and pm-isomath.pdf to .../doc/latex/pm-isomath/.
2019-12-08 10:24:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.807309627532959, "perplexity": 13607.505548184594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00235.warc.gz"}
https://tex.stackexchange.com/questions/373471/pgfplots-problem-with-evaluating-a-function
# PGFPlots - Problem with evaluating a function I'm using PGFPlots to graph some functions, and I'm facing the following problem: I need to plot the following function f over this interval: It can be verified that in the left point of the interval, f has a value of 1. Nonetheless, when I plot f I get this result (the red and teal lines are for my guidance): As you can see, the value of f in the left point is not plotted. I used the default samples to plot (25) and I count only 24 points. The problem is worse later, because I have to plot atan(f(x)), and this causes two errors: Missing number, treated as zero. ...\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km)))))}; Illegal unit of measure (pt inserted). ...\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km)))))}; How can I fix this? I realized that plotting from the left point plus a little number fixes this, but nothing more. I provide a MWE to plot f. Thank you very much in advance. \documentclass{article} \usepackage{pgfplots} \pgfplotsset{compat=1.14} % this is to avoid a backwards compatibility warning \begin{document} \thispagestyle{empty} \begin{tikzpicture}% function \pgfmathsetmacro{\T}{1}; \pgfmathsetmacro{\co}{1}; \pgfmathsetmacro{\km}{4}; \pgfmathsetmacro{\ga}{0.1}; \pgfmathsetmacro{\la}{((\km/\co)^(1/(1-\ga))}; \pgfmathsetmacro{\lb}{((sqrt(5)-1)*\km/\co)^(1/(1-\ga))}; \begin{axis}[domain=\la:\lb] \addplot {sqrt( (2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km) )+\km)/(\co*\x^(1-\ga)-2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km))))}; \end{axis} \end{tikzpicture} \end{document} EDIT: I've just realized that, although my LaTeX editor throws the aforementioned errors when atan() or rad(atan()) is added, it still generates a .pdf. By ploting this \addplot {rad(atan(sqrt( (2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km) )+\km)/(\co*\x^(1-\ga)-2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km))))))}; the result is this As Symbol 1 already stated in the comment below the question you can either use a small offset for the lower bound to "correct" the inaccuracy TeX's/Lua's calculation or you can use unequal sampling. I present both solutions • adding an offset for the linear spacing solution using TeX and Lua as calculation engine and • using unequal spacing for the Lua solution. I added markers to the solutions so you can see the difference. When you uncomment the line no markers you will notice that the unequal spacing solution shows a little better result when sticking to 25 samples. For more details on how the solution works, please have a look at the comments in the code. \documentclass[border=5pt]{standalone} \usepackage{pgfplots} \pgfplotsset{ compat=1.12, /pgf/declare function={ % declare constants k = 4; alpha = 1; gamma = 0.1; % declare help function b(\x) = (alpha*\x^(1-gamma) - k)/(alpha*\x^(1-gamma) + k); % declare the main function f(\x) = sqrt( (2*sqrt( k*alpha*\x^(1-gamma) * b(\x) ) + k)/ (alpha*\x^(1-gamma) - 2*sqrt( k*alpha*\x^(1-gamma)*b(\x))) ); % declare an small amount to compensate for TeX's/Lua's inaccuracy infi = 1e-3; % for linear spacing % infi = 0; % for non-linear spacing % calculate the lower and upper boundaries (the domain values) llb = (k/alpha)^(1/(1-gamma)); lb = llb + infi; ub = ((sqrt(5)-1)*k/alpha)^(1/(1-gamma)); % % ----------------------------------------------------------------- %%% nonlinear spacing: <https://stackoverflow.com/a/39140096/5776000> % "non-linearity factor" a = 5.0; % function to use for the nonlinear spacing Y(\x) = exp(a*\x); % rescale to former limits X(\x) = (Y(\x) - Y(lb))/(Y(ub) - Y(lb)) * (ub - lb) + lb; }, } \begin{document} \begin{tikzpicture} \pgfmathsetmacro{\co}{1}; \pgfmathsetmacro{\km}{4}; \pgfmathsetmacro{\ga}{0.1}; % "infinitesimal" small amount (for TeX) \pgfmathsetmacro{\infinitesimal}{1e-3} \pgfmathsetmacro{\la}{(\km/\co)^(1/(1-\ga)) + \infinitesimal}; \pgfmathsetmacro{\lb}{((sqrt(5)-1)*\km/\co)^(1/(1-\ga))}; \pgfmathsetmacro{\LA}{lb}; \pgfmathsetmacro{\LB}{ub}; \begin{axis}[ ymin=1, domain=\la:\lb, smooth, % no markers, ] % % using TeX for calculation % \addplot {sqrt( (2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km) )+\km)/(\co*\x^(1-\ga)-2*sqrt( \km*\co*\x^(1-\ga)*(\co*\x^(1-\ga)-\km)/(\co*\x^(1-\ga)+\km))))}; % using Lua for calculation % (see section 6.3.1 in the PGFPlots manual)
2020-04-05 14:34:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913937211036682, "perplexity": 4272.364720138929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00470.warc.gz"}
https://www.physicsforums.com/threads/generalized-eigenvectors-and-differential-equations.722999/
# Generalized eigenvectors and differential equations Let $A$ be an 3x3 matrix such that $A\mathbf{v_1}=\mathbf{v_1}+\mathbf{v_2}, A\mathbf{v_2}=\mathbf{v_2}+\mathbf{v_3}, A\mathbf{v_3}=\mathbf{v_3}$ where $\mathbf{v_3} \neq \mathbb{0}$. Let $B=S^{-1}AS$ where $S$ is another 3x3 matrix. (i) Find the general solution of $\dot{\mathbf{x}}=B\mathbf{x}$. (ii) Show that 1 is the only eigenvalue of $B$. It's clear that $\mathbf{v_3},\mathbf{v_2}$ and $\mathbf{v_1}$ form a chain of generalized eigenvectors associated with $\lambda=1$ and hence are linearly independent. From this I can find the general solution of $\dot{\mathbf{x}}=A\mathbf{x}=SBS^{-1}\mathbf{x}$ but how can I proceed from here to find the general solution of $\dot{\mathbf{x}}=B\mathbf{x}$? Any help is much appreciated, thank you! AlephZero
2021-09-27 11:28:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9254862070083618, "perplexity": 60.83854521019831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00286.warc.gz"}
https://learn.sarthaks.com/relation-and-function-notes-formula-and-examples/
## Relation: Relation and function have a wide importance in mathematics. A relation R from a non-empty set A to a non-empty set B is a subset of A × B. The subset is simply the converge attachment of first element and the second element of the ordered pairs in A × B. ## Terminology related to Relations: 1. Ordered pair: A pair of elements when grouped together in a particular order is said to be ordered pair. 2. Cartesian Products of Sets: The set of all the orderd pairs of elements from one set to other is said to be Cartesian product of sets. Again let us consider two sets A and B. The cartesian product A × B is the set of all ordered pairs of elements from A and B, i.e., A × B = { (a,b) : a ∈ A, b ∈ B }. If anyone of A and B is the null set, then A × B will also be empty set, i.e., A × B = φ ## Rules & Formula: 1. Given, If (a, b) = (x, y), then a = x and b = y. 2. If n(A) = p and n(B) = q then n(AxB) = pq 3. If either n(A) = ∞ or n(B) = ∞ then n(AxB) is ∞. 4. A × A × A = {(a, b, c) : a, b, c ∈ A}. Here (a, b, c) is called an ordered triplet. 5. In general, A × B ≠ B × A. 6. Domain: The set of all first elements of the ordered pairs in a relation R from a set A to a set B is called the domain of the relation R. 7. Co-domain: The set of all second elements in a relation R from a set A to a set B is called the range of the relation R. The whole set B is called the co-domain of the relation R. range ⊂ co-domain. ## Types of Relations: Empty relation: If there is no any element of A that is related to any element of A, i.e., R = φ ⊂ A × A. Then this relation R is called empty relation. Universal relation: If each element of A is related to every element of A, i.e., R = A × A. Then we call the relation R in that set A as universal relation. Both the types of relationships are sometimes trivial relations. Reflexive relation: When the Same element is present as co-domain or simply R in X is a relation with (a, a) ∈ R ∀ a ∈ X. All these constitute in the study of relation and function. ### Base types: Symmetric relation: A relation R in X is a relation satisfying (a, b) ∈ R implies (b, a) ∈ R. Transitive relation: A relation R in X is a relation satisfying (a, b) ∈ R and (b, c) ∈ R implies that (a, c) ∈ R. Equivalence relation: A relation R in X is a relation which is reflexive, symmetric and transitive. Equivalence class:  [a] containing a ∈ X for an equivalence relation R in X is the subset of X containing all elements b related to a. Relation and function go hand in hand. ## Function: A function f from a set A to a set B is a specific type of relation for which every element x of set A has one and only one image y in set B. i.e each element in A has a unique element in B. We symbolize any function as f: A→B, where f(x) = y where A is the domain and B is the codomain of “f”. Range of function is the set of images of each element in domain. ## Algebra of functions: For functions f : X → R and g : X → R, we have 1. (f + g) (x) = f (x) + g(x), x ∈ X 2. (f – g) (x) = f (x) – g(x), x ∈ X 3. (f.g) (x) = f (x) .g (x), x ∈ X 4. (kf) (x) = k ( f (x) ), x ∈ X, where k is a real number. 5. $$\large{(}\frac{f}{g})$$(x) = $$\frac{f(x)}{g(x)})$$ , x ∈ X, g(x) ≠ 0 Also learn Algebra formulas. ## Properties for relation and function: One-one Function: We can say a function f : X → Y as one-one (or injective) if f(x1) = f(x2) ⇒ x1 = x2 ∀ x1 , x2 ∈ X. On-to function (surjective function): We can say a function f : X → Y as onto (or surjective) if given any y ∈ Y, ∃ x ∈ X such that f(x) = y. One-one and onto (or bijective): We can say a function f : X → Y as one-one and onto (or bijective), if f is both one-one and onto. Composition of functions: The composition of functions f : A → B and g : B → C is the function with symbol as gof : A → C and actually is gof(x) = g(f(x)) ∀ x ∈ A. Invertible function: A function f : X → Y is invertible if ∃ g : Y → X such that gof = IX and fog = IY. Also, A function f : X → Y is invertible if and only if f is one-one and onto. ## Binary Operation on relation and function: Its symbol is *. It is a function * from A X A to A. Identity element: As earlier said that element e ∈ X is the identity element for binary operation : X × X → X, if a ∗ e = a = e ∗ a ∀ a ∈ X. An element a ∈ X is invertible for binary operation ∗ : A × A → X, if there exists b ∈ A such that a ∗ b = e = b ∗ a where e is the identity for the binary operation ∗. The element b is inverse of a and is we denote it by a–1. Commutative Operation: If a ∗ b = b ∗ a ∀ a, b in X then operation ∗ on X is commutative. Associative Operation: If (a ∗ b) ∗ c = a ∗ (b ∗ c) ∀ a, b, c in X than the operation ∗ on X is associative. ## Relation and function Examples: 1. Let A = {1,2,3}, B = {3,4} and C = {4,5,6}. Find A × (B ∩ C). Solution: We know that by the definition of the intersection of two sets, (B ∩ C) = {4}. Therefore, A × (B ∩ C) = {(1,4), (2,4), (3,4)}. 1. If P = {1, 2}, form the set P × P × P. Solution: We can simply write its 3 different element in a ordered triplet P × P × P = {(1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1),(2,2,2)}. 1. If A × B ={(p, q),(p, r), (m, q), (m, r)}, find A and B. Solution: A = set of first elements = {p, m} B = set of second elements = {q, r}. 1. Let f = {(1,1), (2,3), (0, –1), (–1, –3)} be a linear function from Z into Z. Find f(x). Solution: we know that f is a linear function, so we can write f (x) = mx + c. Also, since (1, 1), (0, – 1) ∈ R. f (1) = m + c = 1 and f (0) = c = –1. This gives m = 2 and f(x) = 2x – 1.
2021-03-02 13:14:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6474589705467224, "perplexity": 479.319284216381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364008.55/warc/CC-MAIN-20210302125936-20210302155936-00254.warc.gz"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=637&pid=5794&mode=threaded
• 0 Vote(s) - 0 Average • 1 • 2 • 3 • 4 • 5 how many superfunctions? [was superfunctions of eta converge towards each other] bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007 05/27/2011, 09:33 AM (This post was last modified: 05/27/2011, 09:35 AM by bo198214.) (05/26/2011, 10:09 PM)tommy1729 Wrote: another question is : how many superfunctions can a function have ? There are different answers. If you just ask about the number of superfunctions, then there are infinitely many. We discussed that already, when ever you have a superfunction F, F(x+1)=f(F(x)) then also the function $G(x)=G(x+\theta(x))$ is a superfunction, for $\theta$ 1-periodic, this should not be new for you. If you however ask, how many *regular* super-functions you have at a given fixpoint, i.e. superfunction from regular fractional iterations, i.e. which have an asymptotic powerseries development at the fixpoint, which is equal to the formal fractional iteration powerseries, then there is a clear answer: You look at the powerseries development of the corresponding function, for simplicity we assume fixpoint at 0. $f(x)=f_1 x + f_2 x^2 + \dots$, assume $f_1\neq 0$ Hyperbolic: $|f_1|\ne 1$: there is exactly one regular superfunction Parabolic: $f(x)=x + f_m x^m + f_{m+1}x^{m+1} + \dots$: There are exactly 2(m-1) regular superfunctions. For example $e^x-1=x+x^2/2+\dots$, that's why we have 2*(2-1)=2 regular superfunctions. One from left and one from right. Generally there are $2(m-1)$ petals around the fixpoint, which are alternatingly attractive and repellant (in our example coming from left is attractive and coming from right repellant), on each petal there is defined a different regular Abel function (which is the inverse of a superfunction). The whole thing is called the Leau-Fatou-flower and is kinda standard in holomorphic dynamics (see for example the book of Milnor mentioned on the forum). « Next Oldest | Next Newest » Messages In This Thread how many superfunctions? [was superfunctions of eta converge towards each other] - by tommy1729 - 05/26/2011, 10:09 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by bo198214 - 05/27/2011, 09:33 AM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by tommy1729 - 05/27/2011, 10:48 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by bo198214 - 05/28/2011, 09:18 AM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by sheldonison - 05/31/2011, 07:38 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by tommy1729 - 05/28/2011, 12:12 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by bo198214 - 05/28/2011, 12:25 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by tommy1729 - 05/29/2011, 05:27 PM RE: how many superfunctions? [was superfunctions of eta converge towards each other] - by bo198214 - 05/29/2011, 09:20 PM Possibly Related Threads... Thread Author Replies Views Last Post Superfunctions in continu sum equations tommy1729 0 2,018 01/03/2013, 12:02 AM Last Post: tommy1729 superfunctions of eta converge towards each other sheldonison 13 16,207 12/05/2012, 12:22 AM Last Post: sheldonison Elliptic Superfunctions BenStandeven 2 4,009 08/20/2010, 11:56 AM Last Post: bo198214 elementary superfunctions bo198214 37 36,169 04/25/2010, 05:15 PM Last Post: bo198214 Users browsing this thread: 1 Guest(s)
2020-01-21 06:48:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 8, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9345094561576843, "perplexity": 9233.00848442147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00240.warc.gz"}
https://tex.stackexchange.com/questions/511401/newunicodechar-fails-for-prime-only
# \newunicodechar fails for PRIME only I'm using Beamer with the XeTeX backend and initially I got lots of warnings of the following form: [WARNING] Missing character: There is no ∙ in font [lmmono10-regular]:! So I went and added \newunicodechar commands for each of them, e.g. \newunicodechar{∙}{\makebox[\fontcharwd\fonta]{$\bullet$}} And they all work, I've got almost 30 lines all similar for various Unicode characters. However, this fails for the unicode PRIME character ′. Once I add the following line: \newunicodechar{′}{\makebox[\fontcharwd\fonta]{$\prime$}} I start getting the following error: Error producing PDF. ! TeX capacity exceeded, sorry [input stack size=5000]. \__um_scanprime_collect:N ...canprime_collect:N #1 }{\peek_meaning_remove:NTF... l.277 ...erTok{→} \DataTypeTok{Set} \OtherTok{_} I'm using Beamer via Pandoc if that matters. • I get no error if unicode-math is not used; I get a different error than stated if I load unicode-math. Anyway, you should do \AtBeginDocument{\newunicodechar{′}{\makebox[\fontcharwd\fonta]{$\prime$}}} – egreg Oct 8 at 12:15 Judging from the error message you're using unicode-math. The problem is that this package assigns a value to the “active ′” at begin document, thus overriding what your \newunicodechar does. Solution: postpone \newunicodechar. \documentclass{article} \usepackage{unicode-math} \usepackage{newunicodechar} \AtBeginDocument{\newunicodechar{′}{\makebox[\fontcharwd\fonta]{$\prime$}}} \begin{document} a′ \end{document} • Thanks! I'm not using unicode-math myself, but I guess Pandoc includes it automatically. – Cactus Oct 8 at 13:46
2019-11-19 04:31:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450186491012573, "perplexity": 9482.193707413677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00235.warc.gz"}
http://plan4maths.co.uk/L3-gcse-algebra-algebraicExpressions10.html
# Lesson 10: Factorising to a single bracket Introduction Factorising to a single bracket is the opposite of expanding a single bracket. You should be familiar with lessons 5 and 8 before starting, and be able to work out the highest common factor (HCF) of two or three numbers. Oh, and be prepared for a bit of a challenge. SECTION A Expand $$~4(2x+3)~$$ and you get $$~8x+12~$$. Factorise $$~8x+12~$$ and you get $$~4(2x+3)~$$. That's what we mean when we say expanding and factorising are opposites. But how do we get from $$~8x+12~$$ to $$~4(2x+3)~$$? Example 1 Factorise fully $$~8x+12~$$ First find the highest common factor (HCF) of the numbers 8 and 12 The HCF is 4 so the answer will look something like this ⇒ $$~4(■~~~~□)~$$ If we expand the answer, we want to get $$~8x+12~$$ so ... $$~4\times ■=8x~$$ ⇒ so $$~■~$$ must be $$~2x~$$ $$~4\times □=12~$$ ⇒ so $$~□~$$ must be $$~3~$$ Answer: $$~4(2x+3)~$$ Important! The example above says factorise fully $$~8x+12~$$. What does this mean? Well, let's say you just found any old factor of the two numbers instead of the highest common factor. Let's say you came up with 2. $$~2\times ■=8x~$$ ⇒ so $$~■~$$ must be $$~4x~$$ $$~2\times □=12~$$ ⇒ so $$~□~$$ must be $$~6~$$ Answer: $$~2(4x+6)~$$ You have factorised $$~8x+12~$$ but you have not fully factorised it. You must always fully factorise even if the question just says factorise! If you expand your fully factorised answer $$~4(2x+3)~$$, it should give you $$~8x+12~$$. This is a useful check to make sure you have factorised correctly. BUT if you expand $$~2(4x+6)~$$, you will also get $$~8x+12~$$! So this check will pick up on most mistakes, which you can then correct, but it won't tell you if you've fully factorised or not. So be careful, find the HCF, not just any old factor! One way of checking that you've fully factorised is to look inside the brackets in your answer. Could this be factorised further? If so, you ain't finished! Example 2 Factorise $$~12-18x~$$ First find the highest common factor (HCF) of the numbers 12 and 18 The HCF is 6 so the answer will look something like this ⇒ $$~6(■~~~~□)~$$ If we expand the answer, we want to get $$~12-18x~$$ so ... $$~6\times ■=12~$$ ⇒ so $$~■~$$ must be $$~2~$$ $$~6\times □=-18x~$$ ⇒ so $$~□~$$ must be $$~-3x~$$ Answer: $$~6(2-3x)~$$ # Practise to master SECTION A Factorise the following expressions. 01) $$~6x+14~$$ HCF of 6 and 14 ⇒ 2 $$2(3x+7)$$ 02) $$~7x+21~$$ HCF of 7 and 21 ⇒ 7 $$7(x+3)$$ 03) $$~10x-25~$$ HCF of 10 and 25 ⇒ 5 $$5(2x-5)$$ 04) $$~24x-6~$$ HCF of 24 and 6 ⇒ 6 $$6(4x-1)$$ 05) $$~8x+36~$$ HCF of 8 and 36 ⇒ 4 $$4(2x+9)$$ 06) $$~5x+20~$$ HCF of 5 and 20 ⇒ 5 $$5(x+4)$$ 07) $$~15-9x~$$ HCF of 15 and 9 ⇒ 3 $$3(5-3x)$$ 08) $$~9-27x~$$ HCF of 9 and 27 ⇒ 9 $$9(1-3x)$$ SECTION B In Section A, you looked at the number part of each term and worked out the HCF. There was also an $$~x~$$ part but only in one of the terms. In this section, both terms will have a number part and both terms will have an $$~x~$$ part so we'll have to work out a HCF for each. Example 1 Factorise fully $$~6x^2+8x~$$ First find the HCF of 6 and 8 ⇒ HCF is 2 Now find the HCF of $$~x^2~$$ and $$~x~$$ ⇒ HCF is $$~x~$$ (see Tip! below) So the answer will look something like this ⇒ $$~2x(■~~~~□)~$$ If we expand the answer, we want to get $$~6x^2+8x~$$ so ... $$~2x\times ■=6x^2~$$ ⇒ so $$~■~$$ must be $$~3x~$$ $$~2x\times □=8x~$$ ⇒ so $$~□~$$ must be $$~4~$$ Answer: $$~2x(3x+4)~$$ Tip! The HCF of the $$~x~$$ bits is just the one with the lowest power. Simple as that! For example, the HCF of $$~x^5~$$ and $$~x^2~$$ is $$~x^2~$$, the HCF of $$~y^3~$$ and $$~y~$$ is $$~y~$$, etc. It usually takes a good few practice questions before students start to get a feel for this. # Practise to master SECTION B Factorise the following expressions. 01) $$~3x^2+7x~$$ HCF of 3 and 7 ⇒ 1 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$x(3x+7)$$ 02) $$~4x^2+x~$$ HCF of 4 and 1 ⇒ 1 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$x(4x+1)$$ 03) $$~4x^2+6x~$$ HCF of 4 and 6 ⇒ 2 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$2x(2x+3)$$ 04) $$~6x^2+18x~$$ HCF of 6 and 18 ⇒ 6 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$6x(x+3)$$ 05) $$~5x^2-6x~$$ HCF of 5 and 6 ⇒ 1 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$x(5x-6)$$ 06) $$~x^2+2x~$$ HCF of 1 and 2 ⇒ 1 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$x(x+2)$$ 07) $$~8x^2-12x~$$ HCF of 8 and 12 ⇒ 4 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$4x(2x-3)$$ 08) $$~35x^2+7x~$$ HCF of 35 and 7 ⇒ 7 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$7x(5x+1)$$ 09) $$~4x^2+9x^3~$$ HCF of 4 and 9 ⇒ 1 HCF of $$~x^2~$$ and $$~x^3~$$ ⇒ $$~x^2~$$ $$x^2(4+9x)$$ 10) $$~2x^3-x~$$ HCF of 2 and 1 ⇒ 1 HCF of $$~x^3~$$ and $$~x~$$ ⇒ $$~x~$$ $$x(2x^2-1)$$ 11) $$~9x^2+24x~$$ HCF of 9 and 24 ⇒ 3 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$3x(3x+8)$$ 12) $$~5x^4+30x^2~$$ HCF of 5 and 30 ⇒ 5 HCF of $$~x^4~$$ and $$~x^2~$$ ⇒ $$~x^2~$$ $$5x^2(x^2+6)$$ 13) $$~7x-10x^5~$$ HCF of 7 and 10 ⇒ 1 HCF of $$~x~$$ and $$~x^5~$$ ⇒ $$~x~$$ $$x(7-10x^4)$$ 14) $$~x-5x^2~$$ HCF of 1 and 5 ⇒ 1 HCF of $$~x~$$ and $$~x^2~$$ ⇒ $$~x~$$ $$x(1-5x)$$ 15) $$~15x^2-20x~$$ HCF of 15 and 20 ⇒ 5 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ $$5x(3x-4)$$ 16) $$~21x^2-3x^3~$$ HCF of 21 and 3 ⇒ 3 HCF of $$~x^2~$$ and $$~x^3~$$ ⇒ $$~x^2~$$ $$3x^2(7-x)$$ SECTION C You've done well to get this far! They say stretching is good for you. This last section will certainly do that! Example 1 Factorise $$~8x^2y^2+20xy^4~$$ HCF of 8 and 20 ⇒ 4 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ HCF of $$~y^2~$$ and $$~y^4~$$ ⇒ $$~y^2~$$ So the answer will look something like this ⇒ $$~4xy^2(■~~~~□)~$$ When we expand the answer, we want to get $$~8x^2y^2+20xy^4~$$ so ... $$~4xy^2\times ■=8x^2y^2~$$ ⇒ so $$~■~$$ must be $$~2x~$$ $$~4xy^2\times □=20xy^4~$$ ⇒ so $$~□~$$ must be $$~5y^2~$$ Answer: $$~4xy^2(2x+5y^2)~$$ Example 2 Factorise $$~2xy^3+8x^5y-4y^2~$$ HCF of 2, 8 and 4 ⇒ 2 There isn't an $$~x~$$ part in every term! HCF of $$~y^3~$$, $$~y~$$ and $$~y^2~$$ ⇒ $$~y~$$ So the answer will look something like this ⇒ $$~2y(■~~~~□~~~~▲)~$$ When we expand the answer, we want to get $$~2xy^3+8x^5y-4y^2~$$ so ... $$~2y\times ■=2xy^3~$$ ⇒ so $$~■~$$ must be $$~xy^2~$$ $$~2y\times □=8x^5y~$$ ⇒ so $$~□~$$ must be $$~4x^5~$$ $$~2y\times ▲=-4y^2~$$ ⇒ so $$~▲~$$ must be $$~-2y~$$ Answer: $$~2y(xy^2+4x^5-2y)~$$ The following questions will require quite a bit of concentration. # Practise to master SECTION C Factorise the following expressions. 01) $$~2y+5xy^2~$$ HCF of 2 and 5 ⇒ 1 There isn't an $$~x~$$ part in both terms! HCF of $$~y~$$ and $$~y^2~$$ ⇒ $$~y~$$ $$y(2+5xy)$$ 02) $$~7xy^3-x~$$ HCF of 7 and 1 ⇒ 1 HCF of $$~x~$$ and $$~x~$$ ⇒ $$~x~$$ There isn't a $$~y~$$ part in both terms! $$x(7y^3-1)$$ 03) $$~8ab^2+10a^2b~$$ HCF of 8 and 10 ⇒ 2 HCF of $$~a~$$ and $$~a^2~$$ ⇒ $$~a~$$ HCF of $$~b^2~$$ and $$~b~$$ ⇒ $$~b~$$ $$2ab(4b+5a)$$ 04) $$~3x^2y^2-9xy^3~$$ HCF of 3 and 9 ⇒ 3 HCF of $$~x^2~$$ and $$~x~$$ ⇒ $$~x~$$ HCF of $$~y^2~$$ and $$~y^3~$$ ⇒ $$~y^2~$$ $$3xy^2(x-3y)$$ 05) $$~7y+5x^2y^3~$$ HCF of 7 and 5 ⇒ 1 There isn't an $$~x~$$ part in both terms! HCF of $$~y~$$ and $$~y^3~$$ ⇒ $$~y~$$ $$y(7+5x^2y^2)$$ 06) $$~p^2q^2-6q~$$ HCF of 1 and 6 ⇒ 1 There isn't a $$~p~$$ part in both terms! HCF of $$~q^2~$$ and $$~q~$$ ⇒ $$~q~$$ $$q(p^2q-6)$$ 07) $$~4xy+14x^3~$$ HCF of 4 and 14 ⇒ 2 HCF of $$~x~$$ and $$~x^3~$$ ⇒ $$~x~$$ There isn't a $$~y~$$ part in both terms! $$2x(2y+7x^2)$$ 08) $$~10x^3y^2-5xy^2~$$ HCF of 10 and 5 ⇒ 5 HCF of $$~x^3~$$ and $$~x~$$ ⇒ $$~x~$$ HCF of $$~y^2~$$ and $$~y^2~$$ ⇒ $$~y^2~$$ $$5xy^2(2x^2-1)$$ 09) $$~6xy^2+4xy+8xy^3~$$ HCF of 6, 4 and 8 ⇒ 2 HCF of $$~x~$$, $$~x~$$ and $$~x~$$ ⇒ $$~x~$$ HCF of $$~y^2~$$, $$~y~$$ and $$~y^3~$$ ⇒ $$~y~$$ $$2xy(3y+2+4y^2)$$ 10) $$~3y^2+9x^2y-6x^3y~$$ HCF of 3, 9 and 6 ⇒ 3 There isn't an $$~x~$$ part in every term! HCF of $$~y^2~$$, $$~y~$$ and $$~y~$$ ⇒ $$~y~$$ $$3y(y+3x^2-2x^3)$$ 11) $$~x^2y^2-4y+12xy~$$ HCF of 1, 4 and 12 ⇒ 1 There isn't an $$~x~$$ part in every term! HCF of $$~y^2~$$, $$~y~$$ and $$~y~$$ ⇒ $$~y~$$ $$y(x^2y-4+12x)$$ 12) $$~8xy-2x^2-4x^2y^2~$$ HCF of 8, 2 and 4 ⇒ 2 HCF of $$~x~$$, $$~x^2~$$ and $$~x^2~$$ ⇒ $$~x~$$ There isn't a $$~y~$$ part in every term! $$2x(4y-x-2xy^2)$$
2018-12-13 15:36:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9552393555641174, "perplexity": 1282.433244129074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00527.warc.gz"}
https://twodee.org/blog/17706
# teaching machines ## Figure Out One of my favorite illusion mechanics is figure-ground reversal. What was the foreground becomes the background, and what was the background becomes the foreground. I went for a subtle figure-ground reversal in this checkerboard animation. It’s subtle because I don’t think checkerboards have an obvious figure and ground. We are trained to accept that both colors are equal. The initial movement makes the cornflower squares appear to be the figure, but then we see the white squares move. When I watch this, I get some intriguing after-image effects. Diagonal lines appear on the axis-aligned checkerboard. The original Twoville code for this animation was too long. I added a new construct to eliminate repetitive code. Using the with block and an array, once can now set properties on multiple objects at once. For example, here we set the size of a and b at the same time: with [a, b] size = [10, 10] The code still clocks in at 130 lines. What would it take in a language like Processing? step = 20 with gif size = [512, ~] with time start = 0 stop = step * 4 delay = 0.05 // Backdrop with rectangle() corner = :zero2 size = [100, ~] 0 -> t -> step * 2 - 1 color = :white step * 2 -> t -> step * 4 color = :cornflower for r in -1..11 for c in -1..11 if abs(r) % 2 != abs(c) % 2 // Stasis A rectangle with rectangle() corner = [c, r] * 10 size = [10, ~] color = :cornflower enabled = false 0 -> t -> step enabled = true // Rotating A triangles ab = with polygon() vertex().position = [c, r] * 10 vertex().position = [c + 1, r] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with rotate() pivot = [c, r] * 10 0 -> step * 1 -> t degrees = 0 t -> step * 2 degrees = -90 at = with polygon() vertex().position = [c, r + 1] * 10 vertex().position = [c + 1, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with rotate() pivot = [c + 1, r + 1] * 10 0 -> step * 1 -> t degrees = 0 t -> step * 2 degrees = -90 ar = with polygon() vertex().position = [c + 1, r] * 10 vertex().position = [c + 1, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 al = with polygon() vertex().position = [c, r] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with [al, at, ar, ab] color = :cornflower 0 -> t -> step enabled = false step * 2 -> t enabled = false // Stasis B rectangles bRectangle1 = with rectangle() corner = [c, r] * 10 with rotate() degrees = -45 pivot = [c, r] * 10 bRectangle2 = with rectangle() corner = [c + 1, r] * 10 with rotate() degrees = -45 pivot = [c + 1, r] * 10 with [bRectangle1, bRectangle2] size = [sqrt(50), ~] color = :white enabled = false step * 2 -> t -> step * 3 enabled = true // Rotating B triangles bl = with polygon() vertex().position = [c - 1, r] * 10 vertex().position = [c - 1, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with rotate() pivot = [c - 1, r] * 10 step * 3 -> t degrees = 90 t -> step * 4 degrees = 0 br = with polygon() vertex().position = [c, r] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with rotate() pivot = [c, r + 1] * 10 step * 3 -> t degrees = 90 t -> step * 4 degrees = 0 bt = with polygon() vertex().position = [c - 1, r + 1] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 bb = with polygon() vertex().position = [c - 1, r] * 10 vertex().position = [c, r] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with [bl, br, bb, bt] color = :white 0 -> t -> step * 3 enabled = false var twovilleDiv = jQuery('#twoville_figure_ground_checkers'); twovilleDiv.closest('pre').replaceWith(twovilleDiv); document.getElementById('twoville_form_figure_ground_checkers').submit(); step = 20 with gif size = [512, ~] with time start = 0 stop = step * 4 delay = 0.05 // Backdrop with rectangle() corner = :zero2 size = [100, ~] 0 -> t -> step * 2 - 1 color = :white step * 2 -> t -> step * 4 color = :cornflower for r in -1..11 for c in -1..11 if abs(r) % 2 != abs(c) % 2 // Stasis A rectangle with rectangle() corner = [c, r] * 10 size = [10, ~] color = :cornflower enabled = false 0 -> t -> step enabled = true // Rotating A triangles ab = with polygon() vertex().position = [c, r] * 10 vertex().position = [c + 1, r] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with rotate() pivot = [c, r] * 10 0 -> step * 1 -> t degrees = 0 t -> step * 2 degrees = -90 at = with polygon() vertex().position = [c, r + 1] * 10 vertex().position = [c + 1, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with rotate() pivot = [c + 1, r + 1] * 10 0 -> step * 1 -> t degrees = 0 t -> step * 2 degrees = -90 ar = with polygon() vertex().position = [c + 1, r] * 10 vertex().position = [c + 1, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 al = with polygon() vertex().position = [c, r] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c + 0.5, r + 0.5] * 10 with [al, at, ar, ab] color = :cornflower 0 -> t -> step enabled = false step * 2 -> t enabled = false // Stasis B rectangles bRectangle1 = with rectangle() corner = [c, r] * 10 with rotate() degrees = -45 pivot = [c, r] * 10 bRectangle2 = with rectangle() corner = [c + 1, r] * 10 with rotate() degrees = -45 pivot = [c + 1, r] * 10 with [bRectangle1, bRectangle2] size = [sqrt(50), ~] color = :white enabled = false step * 2 -> t -> step * 3 enabled = true // Rotating B triangles bl = with polygon() vertex().position = [c - 1, r] * 10 vertex().position = [c - 1, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with rotate() pivot = [c - 1, r] * 10 step * 3 -> t degrees = 90 t -> step * 4 degrees = 0 br = with polygon() vertex().position = [c, r] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with rotate() pivot = [c, r + 1] * 10 step * 3 -> t degrees = 90 t -> step * 4 degrees = 0 bt = with polygon() vertex().position = [c - 1, r + 1] * 10 vertex().position = [c, r + 1] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 bb = with polygon() vertex().position = [c - 1, r] * 10 vertex().position = [c, r] * 10 vertex().position = [c - 0.5, r + 0.5] * 10 with [bl, br, bb, bt] color = :white 0 -> t -> step * 3 enabled = false
2020-11-24 08:58:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28730911016464233, "perplexity": 13375.118807987097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176049.8/warc/CC-MAIN-20201124082900-20201124112900-00699.warc.gz"}
https://www.physicsforums.com/threads/is-it-dangerous-to-fire-a-gun-upwards.494000/
# Is it dangerous to fire a gun upwards? 1. Apr 27, 2011 ### flyingpig 1. The problem statement, all variables and given/known data I am just wondering, if you fire something at some speed and acceleration, that something will come down even stronger right? So you know in funerals, often there accompanied some men in some suits and they start firing a few times in the air. Why do they do that even if it is extremely dangerous? 2. Apr 27, 2011 ### thegreenlaser Theoretically, it will come down with exactly the same speed it went up with. Assuming no air resistance, then the mechanical energy of the bullet is conserved. i.e. all the kinetic energy you give the bullet by firing it up will be converted to potential energy as it travels upwards, and then converted back into kinetic energy as it travels back downwards. Since energy is conserved, the bullet will have the exact same amount of kinetic energy when it hits the ground as it did when it was fired: $$E_{kinetic} = \frac{1}{2} mv^2$$ The mass of the bullet won't change, so the magnitude of the velocity will be the same. Add in air resistance, which is a non-conservative force (unlike gravity), and you'll find that the bullet is actually slowed down along its path, losing energy to things like heat and sound. (so the final kinetic energy is less than the initial kinetic energy) There's also the factor of terminal velocity. At a certain speed the air resistance force pushing the bullet up is equal to the gravitational force pulling the bullet down, and so it will no longer accelerate downward. i.e. there is a limit to how fast the bullet can be traveling when it hits the ground. Under very ideal circumstances, the best you'll get is a bullet hitting the ground at the exact same speed. In reality, it's likely to be a smaller speed. 3. Apr 27, 2011 ### SteamKing Staff Emeritus If the Honor Guard at a funeral is well led, they only fire blanks in order to avoid killing anyone downrange. Generally, the guard doesn't fire straight up. 4. Apr 28, 2011 ### flyingpig Why would the final KE be less? Doesn't it come down stronger? 5. Apr 28, 2011 ### Mattowander When it is falling, the force due to drag would be pointing upward, not downward. 6. Apr 28, 2011 ### Pengwuino The air resistance is always in the opposite direction of the direction of travel. A bullet falling down will experience an air resistance upward. Also yah, as someone said, I bet they just fire blanks as bullets do kill people when fired like that. Over here, shooting off weapons during new years is banned because people have been killed by bullets hitting people upon falling down. Also, think about the idea of something having more energy when it goes down to the same height. If it had more energy, you could send it back upwards using a U-shaped ramp and when it came down it would have even more energy than before and you've basically built a device that creates energy from nothing! Bad Physics!
2016-05-30 01:25:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5420135855674744, "perplexity": 646.7049776830397}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282327.67/warc/CC-MAIN-20160524002122-00214-ip-10-185-217-139.ec2.internal.warc.gz"}
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=4295
## WeBWorK Main Forum ### XSS Vulnerability by Collin Smith - Number of replies: 12 Our IT Central folks have flagged our WeBWorK 2.12 server with a Cross-Site Scripting vulnerability. I used the Plain Vanilla WW2.12/Ubuntu 16.04 ISO to install our present WeBWorK server this past May, with some (obviously not enough) customization. What is needed to patch our server to remove this vulnerability? ### Re: XSS Vulnerability by Michael Gage - We'll need more information from your IT department to track this down. Do they have the URL that displays the XSS vulnerability (that will tell us the offending page from WeBWorK) and if possible what kind of XSS vulnerability exists? If they can give us a command that triggers the XSS vulnerability that would be ideal. Often these XSS vulnerabilities arise from an error message that returns a little too much information. -- Mike ### Re: XSS Vulnerability by Collin Smith - With the help of our IT Central folks here at the University, we were able to resolve this XXS issue. The solution involved removing the various error responses in the ... /opt/webwork/webwork2/lib/Apache/WeBWorK.pm ... file. Basically, from the Vanilla 2.12 with Ubuntu 16.04 install (without too many other modifications), the following lines of code from the file above ... <div style="text-align:left"> <h2>WeBWorK error</h2> <p>An error occured while processing your request. For help, please send mail to this site's webmaster $admin, including all of the following information as well as what what you were doing when the error occured.</p> <p>$time</p> <h3>Warning messages</h3> <ul>$warnings</ul> <h3>Error messages</h3> <blockquote style="color:red"><code>$exception</code></blockquote> <h3>Call stack</h3> <p>The information below can help locate the source of the problem.</p> <ul>$backtrace</ul> <h3>Request information</h3> <table border="1"> <tr><td>Method</td><td>$method</td></tr> <tr><td>URI</td><td>$uri</td></tr> <tr><td>HTTP Headers</td><td> <table width="90%">$headers </table> </td></tr> </table> </div> ... were truncated to this ... <div style="text-align:left"> <h2>WeBWorK error</h2> <p>An error occured while processing your request. For help, please send mail to this site's webmaster $admin, including all of the following information as well as what what you were doing when the error occured.</p> <p>$time</p> </div> These represent lines 233-256 in the original WeBWorK.pm file (your line #'s may vary). After a sudo service apache2 restart, this trouble-ticket, cross-talk vulnerability, was resolved. ### Re: XSS Vulnerability by Michael Gage - Thanks to you and your IT department for your work on the XXS exploit. This is a good fix for now but I'd like to see a solution that still allows some error message to be returned. At the moment to this site's webmaster $admin, including all of the following information as well as what what you were doing when the error occurred. As it stands there is no "information below" and the webmaster will not have any idea of what error occurred. Of course balancing providing useful debugging information while at the same time preventing XXS exploits and other safety issues has always been a problem. It will take a little thought and some work to see if we can provide both or at least reach an optimal compromise. In reply to Michael Gage ### Re: XSS Vulnerability by Ping-Shun Chan - Dear Michael, I am not sure whether this should belong to a new thread, but my department has encountered something of a similar nature when our university's central IT folks subjected our WeBWorK server to a vulnerability test. One thing which they have flagged as a cross-site scripting vulnerability is the ability of the following URL: <webwork host>/math1234/?status_message=<aUdIo%20SrC=x%20OnErRoR=alert(61212)> to trigger an alert on the browser of a user with professor-level or above privilege. I suppose the security risk is that the "alert" function could potentially be replaced with something a lot more malicious. Seeing that the argument to "status_message" is processed by the "message" routine under ContentGenerator.pm, I made the following change: sub message { my ($self) = @_; print "\n<!-- BEGIN " . __PACKAGE__ . "::message -->\n"; #       print $self->{status_message} # if exists$self->{status_message}; if (exists $self->{status_message}) { my$sterilized = $self->{status_message};$sterilized =~ s/<|>/%/g; print $sterilized; } print "<!-- END " . __PACKAGE__ . "::message -->\n"; return ""; } Basically, it replaces all the pointy brackets of any potential html tags in the message argument. Perhaps not the most elegant solution... My question is: Would it be possible to avoid the vulnerability without touching the source code? If not, is the way I altered the code above safe from a security standpoint? Are there better ways still to handle this? Thank you very much! -- P-S In reply to Ping-Shun Chan ### Re: XSS Vulnerability by Danny Glin - Fixing this will almost certainly require editing the source code. There are other functions in ContentGenerator.pm that use the HTML::Scrubber package to remove any scripts from html messages. Take a look at the addmessage routine to see how it is implemented. Using the same code is probably the cleanest way to do this robustly. If you do change the code, please submit a pull request back to the main repository on GitHub so it can be included in the next release. In reply to Danny Glin ### Re: XSS Vulnerability by Ping-Shun Chan - Dear Danny, Thank you very much for your reply. It appears that the current settings of$scrubber (as of WeBWorK 2.15) do not scrub html events like "onerror" and etc. It seems there are all sorts of on* events in html, and I am not even sure where a comprehensive list can be found (I'd imagine it'd also depend on which browser we are talking about). Anyway, I went on the forum https://www.perlmonks.org/index.pl?node_id=251427 and a commenter has posted a pretty long list.  It appears that adding the following lines after each definition of $scrubber (for example under 'sub warningOutput' in ContentGenerator.pm) does avert the URL attack referenced in my first post:$scrubber->default( undef, { '*' => 1, 'onabort' => 0, 'onactivate' => 0, 'onafterprint' => 0, 'onafterupdate' => 0, 'onbeforeactivate' => 0, 'onbeforecopy' => 0, 'onbeforecut' => 0, 'onbeforedeactivate' => 0, 'onbeforeeditfocus' => 0, 'onbeforepaste' => 0, 'onbeforeprint' => 0, 'onbeforeupdate' => 0, 'onblur' => 0, 'onbounce' => 0, 'oncellchange' => 0, 'onchange' => 0, 'onclick' => 0, 'oncontrolselect' => 0, 'oncopy' => 0, 'oncut' => 0, 'ondataavailable' => 0, 'ondatasetchanged' => 0, 'ondatasetcomplete' => 0, 'ondblclick' => 0, 'ondeactivate' => 0, 'ondrag' => 0, 'ondragend' => 0, 'ondragenter' => 0, 'ondragleave' => 0, 'ondragover' => 0, 'ondragstart' => 0, 'ondrop' => 0, 'onerror' => 0, 'onerrorupdate' => 0, 'onfilterchange' => 0, 'onfinish' => 0, 'onfocus' => 0, 'onfocusin' => 0, 'onfocusout' => 0, 'onhelp' => 0, 'onkeydown' => 0, 'onkeypress' => 0, 'onkeyup' => 0, 'onlayoutcomplete' => 0, 'onlosecapture' => 0, 'onmousedown' => 0, 'onmouseenter' => 0, 'onmouseleave' => 0, 'onmousemove' => 0, 'onmouseover' => 0, 'onmouseout' => 0, 'onmouseup' => 0, 'onmousewheel' => 0, 'onmove' => 0, 'onmoveend' => 0, 'onmovestart' => 0, 'onpaste' => 0, 'onpropertychange' => 0, 'onreset' => 0, 'onresize' => 0, 'onresizeend' => 0, 'onresizestart' => 0, 'onrowenter' => 0, 'onrowexit' => 0, 'onrowsdelete' => 0, 'onrowsinserted' => 0, 'onscroll' => 0, 'onselect' => 0, 'onselectionchange' => 0, 'onselectstart' => 0, 'onstart' => 0, 'onstop' => 0, 'onsubmit' => 0, } ); Does it look right?  Thank you very much! Best regards, Ping-Shun p.s. If it looks fine I will submit a pull request. ### Re: XSS Vulnerability by Danny Glin - I'm not an expert on this, but if I understand the existing scrubber code correctly then it is scrubbing all scripts, which means that it doesn't have to worry about individual events because they would have to be contained in a script. Can someone confirm whether this is correct? ### Re: XSS Vulnerability by Ping-Shun Chan - Dear Danny, I am guessing that the URL sent by my IT department's scanner triggered an alert because "onerror" was contained in an <audio> tag with a bogus src attribute. So, no script tag was explicitly involved. But perhaps there is a cleaner solution to including that long list of DOM events in the scrubber rules.  It appears to me that the HTML element attributes that I ever encountered in system messages were mostly "class" attributes within <div>'s (e.g. ' class="ResultsWithoutErrors" ')  If that's the case, wouldn't it be simpler to make scrubber allow only the 'class' attribute, or perhaps a few more harmless ones? Thank you very much! Best regards, Ping-Shun ### Re: XSS Vulnerability by Danny Glin - It was my (possibly flawed) understanding that the scrubber package was specifically designed to prevent things like XSS vulnerabilities, so I'm surprised that there isn't an easy way to scrub anything that looks like a script. At this point this is beyond my level of expertise, so I'm hoping someone else will join the conversation with more knowledgeable input. ### Re: XSS Vulnerability by Ping-Shun Chan - Dear Danny, Thanks for looking into it. For what it's worth, on https://metacpan.org/pod/HTML::Scrubber, the example given there also has DOM event attributes hard coded in even after "script => 0" has been set. Anyway, our quick solution here is to escape all HTML tags in $warnings (and$messages) with "HTML::Entities::encode_entities($warnings)". A total of maybe 4 lines were changed, in lib/Apache/WeBWorK.pm and lib/WeBWorK/ContentGenerator.pm. These changes allowed the server to pass all XSS vulnerability tests given by our IT colleagues, but of course, all system messages and warnings are now a little ugly because the HTML tags are not rendered anymore. Functionally speaking, the full messages can still be seen, so that's "good enough" for us. Alternatively, setting: $scrubber->default( undef, { '*' => 0, # all attributes disabled by default 'class' => 1 # only class attributes are allowed } ); also seems to prevent scripts from executing, and the messages, at least the ones I come across, look as nice as before. But this hasn't been fully tested yet. Best regards, Ping-Shun ### Re: XSS Vulnerability by Nathan Wallach - Dear Ping-Shun, It is nice of you to work on this, and for your IT staff to help improve the security aspects of WW. If you could share your code in a fork on GitHub, and eventually put it into a pull request, it should be quite easy to get the correct changes into the develop branch, and eventually into WW 2.16. I agree with your conclusion that there is a need to scrub the input coming in from requests as a status_message value to avoid the XSS issues reported. Your suggestion of allowing only the class attribute seems to be a very good one. It could be that this does not need to be done to all possible uses of addmessage(). The main issue with XSS is probably with the few places where form provided values of status_message are read in and used to create the "new" status message. Those places use "$r->param("status_message");" (This seems to be intended to carry over status_message values passed on by some "prior" page.) It seems likely these are the cases which cause the the XSS issue you can trigger by providing a value of status_message in the URL. It could be that it suffices to apply stricter scrubbing rules only in these cases using a add_scrubbed_message() replacement for addmessage() in those settings. Your IT people could help test whether that is really sufficient. Here are the cases where form data seems to be pulled in to be used in a status_message in webwork2/lib: • lib/WeBWorK/ContentGenerator/Instructor/PGProblemEditor2.pm:383:$self->addmessage($r->param('status_message') ||''); # record status messages carried over if this is a redirect • lib/WeBWorK/ContentGenerator/Instructor/PGProblemEditor3.pm:384:$self->addmessage($r->param('status_message') ||''); # record status messages carried over if this is a redirect • lib/WeBWorK/ContentGenerator/Instructor/PGProblemEditor.pm:378:$self->addmessage($r->param('status_message') ||''); # record status messages carried over if this is a redirect • lib/WeBWorK/ContentGenerator/Instructor/AchievementEditor.pm:137:$self->addmessage($r->param('status_message') ||''); # record status messages carried over if this is a redirect • lib/WeBWorK/ContentGenerator/ProblemSet.pm:62: my$status_message = $r->param("status_message"); • lib/WeBWorK/ContentGenerator/CourseAdmin.pm:64: my$status_message = $r->param("status_message"); • lib/WeBWorK/ContentGenerator/ProblemSets.pm:147: my$status_message = $r->param("status_message"); • lib/WeBWorK/ContentGenerator/Problem.pm:580: my$status_message = $r->param("status_message"); The "stronger" scrubbing could probably by default block all HTML tags except a short white-list including DIV, SPAN, P, HR, UL, and LI and maybe a few simple formatting tags (including CODE), as well as blocking all attributed except class as you suggested. Then, if additional tags should be white-listed - it can be done so later. Maybe something like: my$scrubber = HTML::Scrubber->new( default => 0,    # No longer 1 allow  => [ qw[ div span p b i u hr br ul li code ] ] , script => 0, comment => 0 ); Passing the class attribute is About attributes to pass - I suspect that only class is really needed, as that would likely allow the CSS based formatting to continue to work. Notes: 1. Most usages of addmessage() seem pretty simple and basically set stings which are often enveloped in one of CGI::div, CGI::span, and CGI::p where some cases set a class attribute. I suspect that those should not be causing any problem, as no client side data is included in those typical usages. 3. There is one use of CGI::code  in lib/WeBWorK/ContentGenerator/Hardcopy.pm and I found some other places where <CODE> and </CODE> are being used via addbadmessage in lib/WeBWorK/ContentGenerator/Instructor/ProblemSetList.pm and lib/WeBWorK/ContentGenerator/Instructor/ProblemSetList2.pm 4. lib/WeBWorK/ContentGenerator/Instructor/Assigner.pm makes use of CGI::ul and CGI::li 5. Some calls of the forms $self->addbadmessage($msg);    $self->addgoodmessage($msg);   $self->addgoodmessage($message);   $self->addbadmessage($write_result);   need to be checked. (I gave up on digging down that far.) 6. There are 3 places where addmessage() is called on the output from $self->$actionHandler . Someone needs to check what sort of output those calls are generating, as they may be the most critical places where too much scrubbing might make trouble. I took a very quick look and it seems that they also basically are strings and DIVS with some use of the class attribute. • lib/WeBWorK/ContentGenerator/Instructor/AchievementList.pm:214 • lib/WeBWorK/ContentGenerator/Instructor/ProblemSetList.pm:387 • lib/WeBWorK/ContentGenerator/Instructor/ProblemSetList2.pm:404 ### Re: XSS Vulnerability by Ping-Shun Chan - Dear Nathan, Thank you very much for your detailed overview of the situation. I didn't even realize there were all these other places, like AchievementList.pm, that generate messages. I guess the issue is that if a revised scrubber is placed too downstream, or the white list is made too short, one ends up potentially taking away useful features. Anyway, in my fork of webwork2, I made changes to the scrubber settings so that it allows only the 'class' attribute, but allows all html tags except script. You are welcome to have a look: https://github.com/pschan-gh/webwork2/commit/d3ecb96c8393bd9d015b615e922e7bdd4c03c65a These changes scrub messages pretty much right before they are presented to the webpage, so based on what you said, it's probably not the most ideal solution, since it might take away functionalities of other webwork features. So, perhaps right now it's at best something that serves to start a discussion. For testing purposes, I include below a few URL's that trigger unwanted alerts. They work only if the user logs on as a professor or admin. The changes I made to my fork appear to thwart these "attacks". Thank you very much! Best regards, Ping-Shun http://localhost/webwork2/TestingCourse/?status_message=%3CaUdIo%20SrC=x%20OnErRoR=alert(61212)%3E http://localhost/webwork2/TestingCourse/instructor/setmaker/?last_index=-1%22%3E%3CsVg%20OnLoAd=alert(16676)%3Ehttp://localhost/webwork2/TestingCourse/instructor/?selected_users!sort=lnfn%3CaUdIo%20SrC%3dx%20OnErRoR%3dalert%2817166%29%3Ehttp://localhost/webwork2/TestingCourse/instructor/sets2/?action=filter%3CaUdIo%20SrC=x%20OnErRoR=alert(26176)%3Ehttp://localhost/webwork2/TestingCourse/instructor/send_mail/?rows=15%22%20sTyLe=X:eX/**/pReSsIoN(alert(24226))%20%22
2021-10-26 15:07:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3178945481777191, "perplexity": 6064.4500601511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00333.warc.gz"}
https://brilliant.org/discussions/thread/proof-for-area-of-any-regular-polygon-2/
# Proof for area of any Regular Polygon 2! Here is the link to my previous proof for the area of any quadrilateral being $$\frac{ap}{2}$$ where a is the apothem and p is the perimeter. These two notes are quite closely related and the other note helps you to get a better understanding of this note but it is not at all necessary. In this note, I will prove that the area of any polygon is $$\dfrac{1}{4} na^2\cot(\dfrac{\pi}{n})$$ where n is the number of sides and a is the side length. We start by drawing lines to each of the vertices of the polygon. Each of the angles will be congruent and of course add up to 360 deg. Thus since there are n angles (n being the number of sides), each angle is $$\frac{360}{n}$$. Next, we draw the perpendicular bisectors of each side. Because each of the interior triangles is isosceles, the bisector will bisect both the angle and the side length. This makes the base of each "half" triangle $$\frac{a}{2}$$ and the angle closest to the polygon's in-center $$\frac{180}{n}\Rightarrow \frac{\pi}{n}$$). Next, we find the height of the triangle (which is equivalent to the apothem. In this case, because we are given the angle closest to the incenter of the polygon and the corresponding opposite side length $$(\frac{a}{2})$$, we must multiply the base by $$\cot\dfrac{\pi}{n}$$. By doing this, we are left with the height since cot represents (in this case) the $$\frac{adjacent}{base}$$. Then, we get the area of each triangle $$\dfrac{b\times h}{2}\Rightarrow \dfrac{a\times (\dfrac{a}{2} \times (\cot\dfrac{\pi}{n}))}{2}\Rightarrow \dfrac{1}{4} a^2\cot(\dfrac{\pi}{n})$$. Finally, because there are as many triangles as there are sides, we multiply our formula by n (the number of sides and are left with $$\boxed{\dfrac{1}{4} na^2\cot(\dfrac{\pi}{n})}$$. Note by Trevor Arashiro 3 years, 7 months ago MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$ ## Comments Sort by: Top Newest Write a comment or ask a question... - 3 years, 7 months ago Log in to reply Haha. Good use of the back ground text. - 3 years, 7 months ago Log in to reply Good work. (y) - 3 years, 7 months ago Log in to reply you cheat !!!!!!!(to sumit sakarkhar - 3 years, 6 months ago Log in to reply toooooooooo good - 3 years, 6 months ago Log in to reply thanks a lot - 3 years, 7 months ago Log in to reply awesome!!!!!!!!! well said :) - 3 years, 7 months ago Log in to reply 45 - 3 years, 7 months ago Log in to reply awesome work, - 3 years, 7 months ago Log in to reply awesome work. - 3 years, 7 months ago Log in to reply This is great work. I have yet to understand trigonometry to this extent. - 3 years, 7 months ago Log in to reply gr8 work - 3 years, 7 months ago Log in to reply Absolutely brilliant!!!!!!!! well said - 3 years, 7 months ago Log in to reply absolutely brilliant well said - 3 years, 7 months ago Log in to reply Shouldn't the formula be n (a^2)tan(pi/n) paragraph 2 If "a" is the radius, then the area converges to pi at n=infinity - 3 years, 7 months ago Log in to reply Well, if we try a triangle of side length 6, the area is $$9\sqrt3$$. By your formula, it's supposedly $$3(36)tan(60)=108\sqrt3$$ - 3 years, 7 months ago Log in to reply "a" is the radius, not the side length. If it was the side length, then your formula is correct. Your diagram made it confusing. - 3 years, 7 months ago Log in to reply Could we say the triangles formed are equilateral? For example a hexagon would have 6 equilateral triangles area would be 6a^2root(3)/4 ?? - 3 years, 6 months ago Log in to reply × Problem Loading... Note Loading... Set Loading...
2018-04-26 13:18:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930503368377686, "perplexity": 2289.273497486195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948214.37/warc/CC-MAIN-20180426125104-20180426145104-00148.warc.gz"}
https://wikimili.com/en/Stochastic_process
# Stochastic process Last updated In probability theory and related fields, a stochastic () or random process is a mathematical object usually defined as a family of random variables. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. [1] [4] [5] Stochastic processes have applications in many disciplines such as biology, [6] chemistry, [7] ecology, [8] neuroscience, [9] physics, [10] image processing, signal processing, [11] control theory, [12] information theory, [13] computer science, [14] cryptography [15] and telecommunications. [16] Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance. [17] [18] [19] ## Contents Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, [lower-alpha 1] used by Louis Bachelier to study price changes on the Paris Bourse, [22] and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. [23] These two stochastic processes are considered the most important and central in the theory of stochastic processes, [1] [4] [24] and were discovered repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries. [22] [25] The term random function is also used to refer to a stochastic or random process, [26] [27] because a stochastic process can also be interpreted as a random element in a function space. [28] [29] The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables. [28] [30] But often these two terms are used when the random variables are indexed by the integers or an interval of the real line. [5] [30] If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead. [5] [31] The values of a stochastic process are not always numbers and can be vectors or other mathematical objects. [5] [29] Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks, [32] martingales, [33] Markov processes, [34] Lévy processes, [35] Gaussian processes, [36] random fields, [37] renewal processes, and branching processes. [38] The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology [39] [40] [41] as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis. [42] [43] [44] The theory of stochastic processes is considered to be an important contribution to mathematics [45] and it continues to be an active topic of research for both theoretical reasons and applications. [46] [47] [48] ## Introduction A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set. [4] [5] The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. [1] Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or ${\displaystyle n}$-dimensional Euclidean space. [1] [5] An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. [49] [50] A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization. [29] [51] ### Classifications A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space. [52] [53] [54] When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time . [55] [56] If the index set is some interval of the real line, then time is said to be continuous . The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes . [49] [57] [58] Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable. [59] [60] If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence. [56] If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is ${\displaystyle n}$-dimensional Euclidean space, then the stochastic process is called a ${\displaystyle n}$-dimensional vector process or ${\displaystyle n}$-vector process. [52] [53] ### Etymology The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. [61] In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". [62] This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz [63] who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. [61] For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, [64] [65] though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931. [66] According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888. [67] ### Terminology The definition of a stochastic process varies, [68] but a stochastic process is traditionally defined as a collection of random variables indexed by some set. [69] [70] The terms random process and stochastic process are considered synonyms and are used interchangeably, without the index set being precisely specified. [28] [30] [31] [71] [72] [73] Both "collection", [29] [71] or "family" are used [4] [74] while instead of "index set", sometimes the terms "parameter set" [29] or "parameter space" [31] are used. The term random function is also used to refer to a stochastic or random process, [5] [75] [76] though sometimes it is only used when the stochastic process takes real values. [29] [74] This term is also used when the index sets are mathematical spaces other than the real line, [5] [77] while the terms stochastic process and random process are usually used when the index set is interpreted as time, [5] [77] [78] and other terms are used such as random field when the index set is ${\displaystyle n}$-dimensional Euclidean space ${\displaystyle \mathbb {R} ^{n}}$ or a manifold. [5] [29] [31] ### Notation A stochastic process can be denoted, among other ways, by ${\displaystyle \{X(t)\}_{t\in T}}$, [57] ${\displaystyle \{X_{t}\}_{t\in T}}$, [70] ${\displaystyle \{X_{t}\}}$ [79] ${\displaystyle \{X(t)\}}$ or simply as ${\displaystyle X}$ or ${\displaystyle X(t)}$, although ${\displaystyle X(t)}$ is regarded as an abuse of function notation. [80] For example, ${\displaystyle X(t)}$ or ${\displaystyle X_{t}}$ are used to refer to the random variable with the index ${\displaystyle t}$, and not the entire stochastic process. [79] If the index set is ${\displaystyle T=[0,\infty )}$, then one can write, for example, ${\displaystyle (X_{t},t\geq 0)}$ to denote the stochastic process. [30] ## Examples ### Bernoulli process One of the simplest stochastic processes is the Bernoulli process, [81] which is a sequence of independent and identically distributed (iid) random variables, where each random variable takes either the value one or zero, say one with probability ${\displaystyle p}$ and zero with probability ${\displaystyle 1-p}$. This process can be linked to repeatedly flipping a coin, where the probability of obtaining a head is ${\displaystyle p}$ and its value is one, while the value of a tail is zero. [82] In other words, a Bernoulli process is a sequence of iid Bernoulli random variables, [83] where each coin flip is an example of a Bernoulli trial. [84] ### Random walk Random walks are stochastic processes that are usually defined as sums of iid random variables or random vectors in Euclidean space, so they are processes that change in discrete time. [85] [86] [87] [88] [89] But some also use the term to refer to processes that change in continuous time, [90] particularly the Wiener process used in finance, which has led to some confusion, resulting in its criticism. [91] There are other various types of random walks, defined so their state spaces can be other mathematical objects, such as lattices and groups, and in general they are highly studied and have many applications in different disciplines. [90] [92] A classic example of a random walk is known as the simple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say, ${\displaystyle p}$, or decreases by one with probability ${\displaystyle 1-p}$, so the index set of this random walk is the natural numbers, while its state space is the integers. If the ${\displaystyle p=0.5}$, this random walk is called a symmetric random walk. [93] [94] ### Wiener process The Wiener process is a stochastic process with stationary and independent increments that are normally distributed based on the size of the increments. [2] [95] The Wiener process is named after Norbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model for Brownian movement in liquids. [96] [97] [98] Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes. [1] [2] [3] [99] [100] [101] [102] Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both continuous index set and states space. [103] But the process can be defined more generally so its state space can be ${\displaystyle n}$-dimensional Euclidean space. [92] [100] [104] If the mean of any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constant ${\displaystyle \mu }$, which is a real number, then the resulting stochastic process is said to have drift ${\displaystyle \mu }$. [105] [106] [107] Almost surely, a sample path of a Wiener process is continuous everywhere but nowhere differentiable. It can be considered as a continuous version of the simple random walk. [50] [106] The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled, [108] [109] which is the subject of Donsker's theorem or invariance principle, also known as the functional central limit theorem. [110] [111] [112] The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes. [2] [50] The process also has many applications and is the main stochastic process used in stochastic calculus. [113] [114] It plays a central role in quantitative finance, [115] [116] where it is used, for example, in the Black–Scholes–Merton model. [117] The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena. [3] [118] [119] ### Poisson process The Poisson process is a stochastic process that has different forms and definitions. [120] [121] It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process. [120] If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process. [120] [122] The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes. [50] The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process. [123] [124] If the parameter constant of the Poisson process is replaced with some non-negative integrable function of ${\displaystyle t}$, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. [125] Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows. [126] [127] Defined on the real line, the Poisson process can be interpreted as a stochastic process, [50] [128] among other random objects. [129] [130] But then it can be defined on the ${\displaystyle n}$-dimensional Euclidean space or other mathematical spaces, [131] where it is often interpreted as a random set or a random counting measure, instead of a stochastic process. [129] [130] In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons. [23] [132] But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces. [132] [133] ## Definitions ### Stochastic process A stochastic process is defined as a collection of random variables defined on a common probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$, where ${\displaystyle \Omega }$ is a sample space, ${\displaystyle {\mathcal {F}}}$ is a ${\displaystyle \sigma }$-algebra, and ${\displaystyle P}$ is a probability measure; and the random variables, indexed by some set ${\displaystyle T}$, all take values in the same mathematical space ${\displaystyle S}$, which must be measurable with respect to some ${\displaystyle \sigma }$-algebra ${\displaystyle \Sigma }$. [29] In other words, for a given probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ and a measurable space ${\displaystyle (S,\Sigma )}$, a stochastic process is a collection of ${\displaystyle S}$-valued random variables, which can be written as: [81] ${\displaystyle \{X(t):t\in T\}.}$ Historically, in many problems from the natural sciences a point ${\displaystyle t\in T}$ had the meaning of time, so ${\displaystyle X(t)}$ is a random variable representing a value observed at time ${\displaystyle t}$. [134] A stochastic process can also be written as ${\displaystyle \{X(t,\omega ):t\in T\}}$ to reflect that it is actually a function of two variables, ${\displaystyle t\in T}$ and ${\displaystyle \omega \in \Omega }$. [29] [135] There are other ways to consider a stochastic process, with the above definition being considered the traditional one. [69] [70] For example, a stochastic process can be interpreted or defined as a ${\displaystyle S^{T}}$-valued random variable, where ${\displaystyle S^{T}}$ is the space of all the possible ${\displaystyle S}$-valued functions of ${\displaystyle t\in T}$ that map from the set ${\displaystyle T}$ into the space ${\displaystyle S}$. [28] [69] ### Index set The set ${\displaystyle T}$ is called the index set [4] [52] or parameter set [29] [136] of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set ${\displaystyle T}$ the interpretation of time. [1] In addition to these sets, the index set ${\displaystyle T}$ can be another set with a total order or a more general set, [1] [55] such as the Cartesian plane ${\displaystyle R^{2}}$ or ${\displaystyle n}$-dimensional Euclidean space, where an element ${\displaystyle t\in T}$ can represent a point in space. [49] [137] That said, many results and theorems are only possible for stochastic processes with a totally ordered index set. [138] ### State space The mathematical space ${\displaystyle S}$ of a stochastic process is called its state space. This mathematical space can be defined using integers, real lines, ${\displaystyle n}$-dimensional Euclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take. [1] [5] [29] [52] [57] ### Sample function A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process. [29] [139] More precisely, if ${\displaystyle \{X(t,\omega ):t\in T\}}$ is a stochastic process, then for any point ${\displaystyle \omega \in \Omega }$, the mapping ${\displaystyle X(\cdot ,\omega ):T\rightarrow S,}$ is called a sample function, a realization, or, particularly when ${\displaystyle T}$ is interpreted as time, a sample path of the stochastic process ${\displaystyle \{X(t,\omega ):t\in T\}}$. [51] This means that for a fixed ${\displaystyle \omega \in \Omega }$, there exists a sample function that maps the index set ${\displaystyle T}$ to the state space ${\displaystyle S}$. [29] Other names for a sample function of a stochastic process include trajectory, path function [140] or path. [141] ### Increment An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if ${\displaystyle \{X(t):t\in T\}}$ is a stochastic process with state space ${\displaystyle S}$ and index set ${\displaystyle T=[0,\infty )}$, then for any two non-negative numbers ${\displaystyle t_{1}\in [0,\infty )}$ and ${\displaystyle t_{2}\in [0,\infty )}$ such that ${\displaystyle t_{1}\leq t_{2}}$, the difference ${\displaystyle X_{t_{2}}-X_{t_{1}}}$ is a ${\displaystyle S}$-valued random variable known as an increment. [49] [50] When interested in the increments, often the state space ${\displaystyle S}$ is the real line or the natural numbers, but it can be ${\displaystyle n}$-dimensional Euclidean space or more abstract spaces such as Banach spaces. [50] ### Further definitions #### Law For a stochastic process ${\displaystyle X\colon \Omega \rightarrow S^{T}}$ defined on the probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$, the law of stochastic process ${\displaystyle X}$ is defined as the image measure: ${\displaystyle \mu =P\circ X^{-1},}$ where ${\displaystyle P}$ is a probability measure, the symbol ${\displaystyle \circ }$ denotes function composition and ${\displaystyle X^{-1}}$ is the pre-image of the measurable function or, equivalently, the ${\displaystyle S^{T}}$-valued random variable ${\displaystyle X}$, where ${\displaystyle S^{T}}$ is the space of all the possible ${\displaystyle S}$-valued functions of ${\displaystyle t\in T}$, so the law of a stochastic process is a probability measure. [28] [69] [142] [143] For a measurable subset ${\displaystyle B}$ of ${\displaystyle S^{T}}$, the pre-image of ${\displaystyle X}$ gives ${\displaystyle X^{-1}(B)=\{\omega \in \Omega :X(\omega )\in B\},}$ so the law of a ${\displaystyle X}$ can be written as: [29] ${\displaystyle \mu (B)=P(\{\omega \in \Omega :X(\omega )\in B\}).}$ The law of a stochastic process or a random variable is also called the probability law, probability distribution, or the distribution. [134] [142] [144] [145] [146] #### Finite-dimensional probability distributions For a stochastic process ${\displaystyle X}$ with law ${\displaystyle \mu }$, its finite-dimensional distribution for ${\displaystyle t_{1},\dots ,t_{n}\in T}$ is defined as: ${\displaystyle \mu _{t_{1},\dots ,t_{n}}=P\circ (X({t_{1}}),\dots ,X({t_{n}}))^{-1},}$ This measure ${\displaystyle \mu _{t_{1},..,t_{n}}}$is the joint distribution of the random vector ${\displaystyle (X({t_{1}}),\dots ,X({t_{n}}))}$; it can be viewed as a "projection" of the law ${\displaystyle \mu }$ onto a finite subset of ${\displaystyle T}$. [28] [147] For any measurable subset ${\displaystyle C}$ of the ${\displaystyle n}$-fold Cartesian power ${\displaystyle S^{n}=S\times \dots \times S}$, the finite-dimensional distributions of a stochastic process ${\displaystyle X}$ can be written as: [29] ${\displaystyle \mu _{t_{1},\dots ,t_{n}}(C)=P{\Big (}{\big \{}\omega \in \Omega$ :{\big (}X_{t_{1}}(\omega ),\dots ,X_{t_{n}}(\omega ){\big )}\in C{\big \}}{\Big )}.} The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions. [58] #### Stationarity Stationarity is a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, if ${\displaystyle X}$ is a stationary stochastic process, then for any ${\displaystyle t\in T}$ the random variable ${\displaystyle X_{t}}$ has the same distribution, which means that for any set of ${\displaystyle n}$ index set values ${\displaystyle t_{1},\dots ,t_{n}}$, the corresponding ${\displaystyle n}$ random variables ${\displaystyle X_{t_{1}},\dots X_{t_{n}},}$ all have the same probability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line. [148] [149] But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time. [148] [150] [151] When the index set ${\displaystyle T}$ can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations. [148] The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same. [152] A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed. [148] A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is when a discrete-time or continuous-time stochastic process ${\displaystyle X}$ is said to be stationary in the wide sense, then the process ${\displaystyle X}$ has a finite second moment for all ${\displaystyle t\in T}$ and the covariance of the two random variables ${\displaystyle X_{t}}$ and ${\displaystyle X_{t+h}}$ depends only on the number ${\displaystyle h}$ for all ${\displaystyle t\in T}$. [152] [153] Khinchin introduced the related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense. [153] [154] #### Filtration A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration ${\displaystyle \{{\mathcal {F}}_{t}\}_{t\in T}}$, on a probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ is a family of sigma-algebras such that ${\displaystyle {\mathcal {F}}_{s}\subseteq {\mathcal {F}}_{t}\subseteq {\mathcal {F}}}$ for all ${\displaystyle s\leq t}$, where ${\displaystyle t,s\in T}$ and ${\displaystyle \leq }$ denotes the total order of the index set ${\displaystyle T}$. [52] With the concept of a filtration, it is possible to study the amount of information contained in a stochastic process ${\displaystyle X_{t}}$ at ${\displaystyle t\in T}$, which can be interpreted as time ${\displaystyle t}$. [52] [155] The intuition behind a filtration ${\displaystyle {\mathcal {F}}_{t}}$ is that as time ${\displaystyle t}$ passes, more and more information on ${\displaystyle X_{t}}$ is known or available, which is captured in ${\displaystyle {\mathcal {F}}_{t}}$, resulting in finer and finer partitions of ${\displaystyle \Omega }$. [156] [157] #### Modification A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process ${\displaystyle X}$ that has the same index set ${\displaystyle T}$, set space ${\displaystyle S}$, and probability space ${\displaystyle (\Omega ,{\cal {F}},P)}$ as another stochastic process ${\displaystyle Y}$ is said to be a modification of ${\displaystyle Y}$ if for all ${\displaystyle t\in T}$ the following ${\displaystyle P(X_{t}=Y_{t})=1,}$ holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law [158] and they are said to be stochastically equivalent or equivalent. [159] Instead of modification, the term version is also used, [150] [160] [161] [162] however some authors use the term version when two stochastic processes have the same finite-dimensional distributions, but they may be defined on different probability spaces, so two processes that are modifications of each other, are also versions of each other, in the latter sense, but not the converse. [163] [142] If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version. [161] [162] [164] The theorem can also be generalized to random fields so the index set is ${\displaystyle n}$-dimensional Euclidean space [165] as well as to stochastic processes with metric spaces as their state spaces. [166] #### Indistinguishable Two stochastic processes ${\displaystyle X}$ and ${\displaystyle Y}$ defined on the same probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ with the same index set ${\displaystyle T}$ and set space ${\displaystyle S}$ are said be indistinguishable if the following ${\displaystyle P(X_{t}=Y_{t}{\text{ for all }}t\in T)=1,}$ holds. [142] [158] If two ${\displaystyle X}$ and ${\displaystyle Y}$ are modifications of each other and are almost surely continuous, then ${\displaystyle X}$ and ${\displaystyle Y}$ are indistinguishable. [167] #### Separability Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space, [lower-alpha 2] which means that the index set has a dense countable subset. [150] [168] More precisely, a real-valued continuous-time stochastic process ${\displaystyle X}$ with a probability space ${\displaystyle (\Omega ,{\cal {F}},P)}$ is separable if its index set ${\displaystyle T}$ has a dense countable subset ${\displaystyle U\subset T}$ and there is a set ${\displaystyle \Omega _{0}\subset \Omega }$ of probability zero, so ${\displaystyle P(\Omega _{0})=0}$, such that for every open set ${\displaystyle G\subset T}$ and every closed set ${\displaystyle F\subset \textstyle R=(-\infty ,\infty )}$, the two events ${\displaystyle \{X_{t}\in F{\text{ for all }}t\in G\cap U\}}$ and ${\displaystyle \{X_{t}\in F{\text{ for all }}t\in G\}}$ differ from each other at most on a subset of ${\displaystyle \Omega _{0}}$. [169] [170] [171] The definition of separability [lower-alpha 3] can also be stated for other index sets and state spaces, [174] such as in the case of random fields, where the index set as well as the state space can be ${\displaystyle n}$-dimensional Euclidean space. [31] [150] The concept of separability of a stochastic process was introduced by Joseph Doob,. [168] The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process. [172] Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable. [175] A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification. [168] [170] [176] Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line. [136] #### Independence Two stochastic processes ${\displaystyle X}$ and ${\displaystyle Y}$ defined on the same probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ with the same index set ${\displaystyle T}$ are said be independent if for all ${\displaystyle n\in \mathbb {N} }$ and for every choice of epochs ${\displaystyle t_{1},\ldots ,t_{n}\in T}$, the random vectors ${\displaystyle \left(X(t_{1}),\ldots ,X(t_{n})\right)}$ and ${\displaystyle \left(Y(t_{1}),\ldots ,Y(t_{n})\right)}$ are independent. [177] :p. 515 #### Uncorrelatedness Two stochastic processes ${\displaystyle \left\{X_{t}\right\}}$ and ${\displaystyle \left\{Y_{t}\right\}}$ are called uncorrelated if their cross-covariance ${\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} \left[\left(X(t_{1})-\mu _{X}(t_{1})\right)\left(Y(t_{2})-\mu _{Y}(t_{2})\right)\right]}$ is zero for all times. [178] :p. 142 Formally: ${\displaystyle \left\{X_{t}\right\},\left\{Y_{t}\right\}{\text{ uncorrelated}}\quad \iff \quad \operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad \forall t_{1},t_{2}}$. #### Independence implies uncorrelatedness If two stochastic processes ${\displaystyle X}$ and ${\displaystyle Y}$ are independent, then they are also uncorrelated. [178] :p. 151 #### Orthogonality Two stochastic processes ${\displaystyle \left\{X_{t}\right\}}$ and ${\displaystyle \left\{Y_{t}\right\}}$ are called orthogonal if their cross-correlation ${\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} [X(t_{1}){\overline {Y(t_{2})}}]}$ is zero for all times. [178] :p. 142 Formally: ${\displaystyle \left\{X_{t}\right\},\left\{Y_{t}\right\}{\text{ orthogonal}}\quad \iff \quad \operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad \forall t_{1},t_{2}}$. #### Skorokhod space A Skorokhod space, also written as Skorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as ${\displaystyle [0,1]}$ or ${\displaystyle [0,\infty )}$, and take values on the real line or on some metric space. [179] [180] [181] Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche. [179] [182] A Skorokhod function space, introduced by Anatoliy Skorokhod, [181] is often denoted with the letter ${\displaystyle D}$, [179] [180] [181] [182] so the function space is also referred to as space ${\displaystyle D}$. [179] [183] [184] The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, ${\displaystyle D[0,1]}$ denotes the space of càdlàg functions defined on the unit interval ${\displaystyle [0,1]}$. [182] [184] [185] Skorokhod function spaces are frequently used in the theory of stochastic processes because it often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space. [181] [183] Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space. [184] [186] #### Regularity In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues. [187] [188] For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous. [189] [190] ## Further examples ### Markov processes and chains Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process. [191] [192] The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes [193] in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. [194] [195] A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. [196] For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), [197] [198] [199] [200] but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). [196] It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers like Joseph Doob and Kai Lai Chung. [201] Markov processes form an important class of stochastic processes and have applications in many areas. [40] [202] For example, they are the basis for a general stochastic simulation method known as Markov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application in Bayesian statistics. [203] [204] The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such as ${\displaystyle n}$-dimensional Euclidean space, which results in collections of random variables known as Markov random fields. [205] [206] [207] ### Martingale A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued, [208] [209] [155] but they can also be complex-valued [210] or even more general. [211] A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time. [208] [209] For a sequence of independent and identically distributed random variables ${\displaystyle X_{1},X_{2},X_{3},\dots }$ with zero mean, the stochastic process formed from the successive partial sums ${\displaystyle X_{1},X_{1}+X_{2},X_{1}+X_{2}+X_{3},\dots }$ is a discrete-time martingale. [212] In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables. [213] Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called the compensated Poisson process. [209] Martingales can also be built from other martingales. [212] For example, there are martingales based on the martingale the Wiener process, forming continuous-time martingales. [208] [214] Martingales mathematically formalize the idea of a fair game, [215] and they were originally developed to show that it is not possible to win a fair game. [216] But now they are used in many areas of probability, which is one of the main reasons for studying them. [155] [216] [217] Many problems in probability have been solved by finding a martingale in the problem and studying it. [218] Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely to martingale convergence theorems. [213] [219] [220] Martingales have many applications in statistics, but it has been remarked that its use and application are not as widespread as it could be in the field of statistics, particularly statistical inference. [221] They have found applications in areas in probability theory such as queueing theory and Palm calculus [222] and other fields such as economics [223] and finance. [18] ### Lévy process Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time. [50] [224] These processes have many applications in fields such as finance, fluid mechanics, physics and biology. [225] [226] The main defining characteristics of these processes are their stationarity and independence properties, so they were known as processes with stationary and independent increments. In other words, a stochastic process ${\displaystyle X}$ is a Lévy process if for ${\displaystyle n}$ non-negatives numbers, ${\displaystyle 0\leq t_{1}\leq \dots \leq t_{n}}$, the corresponding ${\displaystyle n-1}$ increments ${\displaystyle X_{t_{2}}-X_{t_{1}},\dots ,X_{t_{n}}-X_{t_{n-1}},}$ are all independent of each other, and the distribution of each increment only depends on the difference in time. [50] A Lévy process can be defined such that its state space is some abstract mathematical space, such as a Banach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, so ${\displaystyle I=[0,\infty )}$, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), and subordinators are all Lévy processes. [50] [224] ### Random field A random field is a collection of random variables indexed by a ${\displaystyle n}$-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line. [31] But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions. [5] [29] [227] If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of stochastic process. [228] ### Point process A point process is a collection of points randomly located on some mathematical space such as the real line, ${\displaystyle n}$-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field. [229] There are different interpretations of a point process, such a random counting measure or a random set. [230] [231] Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, [232] [233] though it has been remarked that the difference between point processes and stochastic processes is not clear. [233] Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space [lower-alpha 4] on which it is defined, such as the real line or ${\displaystyle n}$-dimensional Euclidean space. [236] [237] Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. [238] [233] ## History ### Early probability theory Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago, [239] [240] but very little analysis on them was done in terms of probability. [239] [241] The year 1654 is often considered the birth of probability theory when French mathematicians Pierre Fermat and Blaise Pascal had a written correspondence on probability, motivated by a gambling problem. [239] [242] [243] But there was earlier mathematical work done on the probability of gambling games such as Liber de Ludo Aleae by Gerolamo Cardano, written in the 16th century but posthumously published later in 1663. [239] [244] After Cardano, Jakob Bernoulli [lower-alpha 5] wrote Ars Conjectandi, which is considered a significant event in the history of probability theory. [239] Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability. [239] [246] [247] But despite some renowned mathematicians contributing to probability theory, such as Pierre-Simon Laplace, Abraham de Moivre, Carl Gauss, Siméon Poisson and Pafnuty Chebyshev, [248] [249] most of the mathematical community [lower-alpha 6] did not consider probability theory to be part of mathematics until the 20th century. [248] [250] [251] [252] ### Statistical mechanics In the physical sciences, scientists developed in the 19th century the discipline of statistical mechanics, where physical systems, such as containers filled with gases, can be regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such as Rudolf Clausius, most of the work had little or no randomness. [253] [254] This changed in 1859 when James Clerk Maxwell contributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he assumed the gas particles move in random directions at random velocities. [255] [256] The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius, Ludwig Boltzmann and Josiah Gibbs, which would later have an influence on Albert Einstein's mathematical model for Brownian movement. [257] ### Measure theory and probability theory At the International Congress of Mathematicians in Paris in 1900, David Hilbert presented a list of mathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involving axioms. [249] Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, where two of the founders were French mathematicians, Henri Lebesgue and Émile Borel. In 1925 another French mathematician Paul Lévy published the first probability book that used ideas from measure theory. [249] In 1920s fundamental contributions to probability theory were made in the Soviet Union by mathematicians such as Sergei Bernstein, Aleksandr Khinchin, [lower-alpha 7] and Andrei Kolmogorov. [252] Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory. [258] In the early 1930s Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such as Eugene Slutsky and Nikolai Smirnov, [259] and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line. [64] [260] [lower-alpha 8] ### Birth of modern probability theory In 1933 Andrei Kolmogorov published in German, his book on the foundations of probability theory titled Grundbegriffe der Wahrscheinlichkeitsrechnung, [lower-alpha 9] where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics. [249] [252] After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such as Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér. [249] [252] Decades later Cramér referred to the 1930s as the "heroic period of mathematical probability theory". [252] World War II greatly interrupted the development of probability theory, causing, for example, the migration of Feller from Sweden to the United States of America [252] and the death of Doeblin, considered now a pioneer in stochastic processes. [262] ### Stochastic processes after World War II After World War II the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas. [252] [265] Starting in the 1940s, Kiyosi Itô published papers developing the field of stochastic calculus, which involves stochastic integrals and stochastic differential equations based on the Wiener or Brownian motion process. [266] Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field of potential theory, with early ideas by Shizuo Kakutani and then later work by Joseph Doob. [265] Further work, considered pioneering, was done by Gilbert Hunt in the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô. [22] [267] [268] In 1953 Doob published his book Stochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability. [265] [264] Doob also chiefly developed the theory of martingales, with later substantial contributions by Paul-André Meyer. Earlier work had been carried out by Sergei Bernstein, Paul Lévy and Jean Ville, the latter adopting the term martingale for the stochastic process. [269] [270] Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes. [265] Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations. [265] The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s fundamental work was done by Alexander Wentzell in the Soviet Union and Monroe D. Donsker and Srinivasa Varadhan in the United States of America, [271] which would later result in Varadhan winning the 2007 Abel Prize. [272] In the 1990s and 2000s the theories of Schramm–Loewner evolution [273] and rough paths [142] were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted in Fields Medals being awarded to Wendelin Werner [274] in 2008 and to Martin Hairer in 2014. [275] The theory of stochastic processes still continues to be a focus of research, with yearly international conferences on the topic of stochastic processes. [46] [225] ### Discoveries of specific stochastic processes Although Khinchin gave mathematical definitions of stochastic processes in the 1930s, [64] [260] specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process. [22] [25] Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries. [276] #### Bernoulli process The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied. [82] The process is a sequence of independent Bernoulli trials, [83] which are named after Jackob Bernoulli who used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens. [277] Bernoulli's work, including the Bernoulli process, were published in his book Ars Conjectandi in 1713. [278] #### Random walks In 1905 Karl Pearson coined the term random walk while posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks. [90] [278] For example, the problem known as the Gambler's ruin is based on a simple random walk, [195] [279] and is an example of a random walk with absorbing barriers. [242] [280] Pascal, Fermat and Huyens all gave numerical solutions to this problem without detailing their methods, [281] and then more detailed solutions were presented by Jakob Bernoulli and Abraham de Moivre. [282] For random walks in ${\displaystyle n}$-dimensional integer lattices, George Pólya published in 1919 and 1921 work, where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions. [283] [284] #### Wiener process The Wiener process or Brownian motion process has its origins in different fields including statistics, finance and physics. [22] In 1880, Thorvald Thiele wrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis. [285] [286] [287] The work is now considered as an early discovery of the statistical method known as Kalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time. [287] The French mathematician Louis Bachelier used a Wiener process in his 1900 thesis [288] [289] in order to model price changes on the Paris Bourse, a stock exchange, [290] without knowing the work of Thiele. [22] It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him, [291] and Bachelier's thesis is now considered pioneering in the field of financial mathematics. [290] [291] It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by the Leonard Savage, and then become more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas, [291] which was cited by mathematicians including Doob, Feller [291] and Kolmogorov. [22] The book continued to be cited, but then starting in the 1960s the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work. [291] In 1905 Albert Einstein published a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from the kinetic theory of gases. Einstein derived a differential equation, known as a diffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement, Marian Smoluchowski published work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method. [292] Einstein's work, as well as experimental results obtained by Jean Perrin, later inspired Norbert Wiener in the 1920s [293] to use a type of measure theory, developed by Percy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object. [22] #### Poisson process The Poisson process is named after Siméon Poisson, due to its definition involving the Poisson distribution, but Poisson never studied the process. [23] [294] There are a number of claims for early uses or discoveries of the Poisson process. [23] [25] At the beginning of the 20th century the Poisson process would arise independently in different situations. [23] [25] In Sweden 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process. [295] [296] Another discovery occurred in Denmark in 1909 when A.K. Erlang derived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the number phone calls arriving in each interval of time were independent to each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution. [23] In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Motivated by their work, Harry Bateman studied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process. [23] After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists. [23] #### Markov processes Markov processes and Markov chains are named after Andrey Markov who studied Markov chains in the early 20th century. [297] Markov was interested in studying an extension of independent random sequences. [297] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, [298] [299] [300] [301] which had been commonly regarded as a requirement for such mathematical laws to hold. [301] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. [298] [299] In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. [299] [300] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. [302] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. [299] [303] Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. [252] [258] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. [258] [304] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. [258] [305] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. [306] The differential equations are now called the Kolmogorov equations [307] or the Kolmogorov–Chapman equations. [308] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s. [252] #### Lévy processes Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s, [225] but they have connections to infinitely divisible distributions going back to the 1920s. [224] In a 1932 paper Kolmogorov derived a characteristic function for random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937. [252] [309] In addition to Lévy, Khinchin and Kolomogrov, early fundamental contributions to the theory of Lévy processes were made by Bruno de Finetti and Kiyosi Itô. [224] ## Mathematical construction In mathematics, constructions of mathematical objects are needed, which is also the case for stochastic processes, to prove that they exist mathematically. [58] There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions. [310] Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then using Kolmogorov's existence theorem [lower-alpha 10] to prove a corresponding stochastic process exists. [58] [310] This theorem, which is an existence theorem for measures on infinite product spaces, [314] says that if any finite-dimensional distributions satisfy two conditions, known as consistency conditions, then there exists a stochastic process with those finite-dimensional distributions. [58] ### Construction issues When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes. [59] [60] One problem is that is it possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions. [315] This means that the distribution of the stochastic process does not, necessarily, specify uniquely the properties of the sample functions of the stochastic process. [310] [316] Another problem is that functionals of continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined. [168] For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable. [31] [60] For a continuous-time stochastic process ${\displaystyle X}$, other characteristics that depend on an uncountable number of points of the index set ${\displaystyle T}$ include: [168] • a sample function of a stochastic process ${\displaystyle X}$ is a continuous function of ${\displaystyle t\in T}$; • a sample function of a stochastic process ${\displaystyle X}$ is a bounded function of ${\displaystyle t\in T}$; and • a sample function of a stochastic process ${\displaystyle X}$ is an increasing function of ${\displaystyle t\in T}$. To overcome these two difficulties, different assumptions and approaches are possible. [70] ### Resolving construction issues One approach for avoiding mathematical construction issues of stochastic processes, proposed by Joseph Doob, is to assume that the stochastic process is separable. [317] Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set. [318] Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied. [168] [318] Another approach is possible, originally developed by Anatoliy Skorokhod and Andrei Kolmogorov, [319] for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, which is usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now more used than the separability assumption, [70] [263] but such a stochastic process based on this approach will be automatically separable. [320] Although less used, the separability assumption is considered more general because every stochastic process has a separable version. [263] It is also used when it is not possible to construct a stochastic process in a Skorokhod space. [173] For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such as ${\displaystyle n}$-dimensional Euclidean space. [31] [321] ## Notes 1. The term Brownian motion can refer to the physical process, also known as Brownian movement, and the stochastic process, a mathematical object, but to avoid ambiguity this article uses the terms Brownian motion process or Wiener process for the latter in a style similar to, for example, Gikhman and Skorokhod [20] or Rosenblatt. [21] 2. The term "separable" appears twice here with two different meanings, where the first meaning is from probability and the second from topology and analysis. For a stochastic process to be separable (in a probabilistic sense), its index set must be a separable space (in a topological or analytic sense), in addition to other conditions. [136] 3. The definition of separability for a continuous-time real-valued stochastic process can be stated in other ways. [172] [173] 4. In the context of point processes, the term "state space" can mean the space on which the point process is defined such as the real line, [234] [235] which corresponds to the index set in stochastic process terminology. 5. Also known as James or Jacques Bernoulli. [245] 6. It has been remarked that a notable exception was the St Petersburg School in Russia, where mathematicians led by Chebyshev studied probability theory. [250] 7. The name Khinchin is also written in (or transliterated into) English as Khintchine. [64] 8. Doob, when citing Khinchin, uses the term 'chance variable', which used to be an alternative term for 'random variable'. [261] 9. Later translated into English and published in 1950 as Foundations of the Theory of Probability [249] 10. The theorem has other names including Kolmogorov's consistency theorem, [311] Kolmogorov's extension theorem [312] or the Daniell–Kolmogorov theorem. [313] ## Related Research Articles A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov. In mathematics, the Wiener process is a real valued continuous-time stochastic process named in honor of American mathematician Norbert Wiener for his investigations on the mathematical properties of the one-dimensional Brownian motion. It is often also called Brownian motion due to its historical connection with the physical process of the same name originally observed by Scottish botanist Robert Brown. It is one of the best known Lévy processes and occurs frequently in pure and applied mathematics, economics, quantitative finance, evolutionary biology, and physics. In probability theory, a martingale is a sequence of random variables for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values. In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time. In probability theory, the Girsanov theorem describes how the dynamics of stochastic processes change when the original measure is changed to an equivalent probability measure. The theorem is especially important in the theory of financial mathematics as it tells how to convert from the physical measure, which describes the probability that an underlying instrument will take a particular value or values, to the risk-neutral measure which is a very useful tool for pricing derivatives on the underlying instrument. Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition of whose truth we are not certain. The proposition of interest is usually of the form "A specific event will occur." The attitude of mind is of the form "How certain are we that the event will occur?" The certainty we adopt can be described in terms of a numerical measure and this number, between 0 and 1, we call probability. Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of a random walk. In probability theory, in particular in the study of stochastic processes, a stopping time is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time. A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs are used to model various phenomena such as unstable stock prices or physical systems subject to thermal fluctuations. Typically, SDEs contain a variable which represents random white noise calculated as the derivative of Brownian motion or the Wiener process. However, other types of random behaviour are possible, such as jump processes. Random differential equations are conjugate to stochastic differential equations. Itô calculus, named after Kiyoshi Itô, extends the methods of calculus to stochastic processes such as Brownian motion. It has important applications in mathematical finance and stochastic differential equations. In probability theory, an empirical process is a stochastic process that describes the proportion of objects in a system in a given state. For a process in a discrete state space a population continuous time Markov chain or Markov population model is a process which counts the number of objects in a given state . In mean field theory, limit theorems are considered and generalise the central limit theorem for empirical measures. Applications of the theory of empirical processes arise in non-parametric statistics. In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressively measurable is a strictly stronger property than the notion of being an adapted process. Progressively measurable processes are important in the theory of Itô integrals. In probability theory, a real valued stochastic process X is called a semimartingale if it can be decomposed as the sum of a local martingale and an adapted finite-variation process. Semimartingales are "good integrators", forming the largest class of processes with respect to which the Itô integral and the Stratonovich integral can be defined. This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents. Mathematical finance, also known as quantitative finance and financial mathematics, is a field of applied mathematics, concerned with mathematical modeling of financial markets. See Quantitative analyst. In probability theory and statistics, Campbell's theorem or the Campbell–Hardy theorem is either a particular equation or set of results relating to the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of expected value and variance of the random sum. One version of the theorem, also known as Campbell's formula, entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process. There also exist equations involving moment measures and factorial moment measures that are considered versions of Campbell's formula. All these results are employed in probability and statistics with a particular importance in the theory of point processes and queueing theory as well as the related fields stochastic geometry, continuum percolation theory, and spatial statistics. In probability theory, a Cauchy process is a type of stochastic process. There are symmetric and asymmetric forms of the Cauchy process. The unspecified term "Cauchy process" is often used to refer to the symmetric Cauchy process. In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field. This point process has convenient mathematical properties, which has led to it being frequently defined in Euclidean space and used as a mathematical model for seemingly random processes in numerous disciplines such as astronomy, biology, ecology, geology, seismology, physics, economics, image processing, and telecommunications. ## References 1. Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 46, 47. 2. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 1. ISBN   978-1-107-71749-7. 3. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 29. ISBN   978-1-4684-9305-4. 4. Emanuel Parzen (2015). Stochastic Processes. Courier Dover Publications. pp. 7, 8. ISBN   978-0-486-79688-8. 5. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 1. ISBN   978-0-486-69387-3. 6. Paul C. Bressloff (2014). Stochastic Processes in Cell Biology. Springer. ISBN   978-3-319-08488-6. 7. N.G. Van Kampen (2011). Stochastic Processes in Physics and Chemistry. Elsevier. ISBN   978-0-08-047536-3. 8. Russell Lande; Steinar Engen; Bernt-Erik Sæther (2003). Stochastic Population Dynamics in Ecology and Conservation. Oxford University Press. ISBN   978-0-19-852525-7. 9. Carlo Laing; Gabriel J Lord (2010). Stochastic Methods in Neuroscience. OUP Oxford. ISBN   978-0-19-923507-0. 10. Wolfgang Paul; Jörg Baschnagel (2013). Stochastic Processes: From Physics to Finance. Springer Science & Business Media. ISBN   978-3-319-00327-6. 11. Edward R. Dougherty (1999). Random processes for image and signal processing. SPIE Optical Engineering Press. ISBN   978-0-8194-2513-3. 12. Dimitri P. Bertsekas (1996). Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific. ISBN   1-886529-03-5. 13. Thomas M. Cover; Joy A. Thomas (2012). Elements of Information Theory. John Wiley & Sons. p. 71. ISBN   978-1-118-58577-1. 14. Michael Baron (2015). Probability and Statistics for Computer Scientists, Second Edition. CRC Press. p. 131. ISBN   978-1-4987-6060-7. 15. Jonathan Katz; Yehuda Lindell (2007). Introduction to Modern Cryptography: Principles and Protocols. CRC Press. p.  26. ISBN   978-1-58488-586-3. 16. François Baccelli; Bartlomiej Blaszczyszyn (2009). Stochastic Geometry and Wireless Networks. Now Publishers Inc. ISBN   978-1-60198-264-3. 17. J. Michael Steele (2001). Stochastic Calculus and Financial Applications. Springer Science & Business Media. ISBN   978-0-387-95016-7. 18. Marek Musiela; Marek Rutkowski (2006). Martingale Methods in Financial Modelling. Springer Science & Business Media. ISBN   978-3-540-26653-2. 19. Steven E. Shreve (2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer Science & Business Media. ISBN   978-0-387-40101-0. 20. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. ISBN   978-0-486-69387-3. 21. Murray Rosenblatt (1962). . Oxford University Press. 22. Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: the early years, 1880–1970". A Festschrift for Herman Rubin. Institute of Mathematical Statistics Lecture Notes - Monograph Series. pp. 75–80. CiteSeerX  . doi:10.1214/lnms/1196285381. ISBN   978-0-940600-61-4. ISSN   0749-2170. 23. Stirzaker, David (2000). "Advice to Hedgehogs, or, Constants Can Vary". The Mathematical Gazette. 84 (500): 197–210. doi:10.2307/3621649. ISSN   0025-5572. JSTOR   3621649. S2CID   125163415. 24. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 32. ISBN   978-1-4612-3166-0. 25. Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes". International Statistical Review. 80 (2): 253–268. doi:10.1111/j.1751-5823.2012.00181.x. ISSN   0306-7734. 26. Gusak, Dmytro; Kukush, Alexander; Kulik, Alexey; Mishura, Yuliya; Pilipenko, Andrey (2010). Theory of Stochastic Processes: With Applications to Financial Mathematics and Risk Theory. Springer Science & Business Media. p. 21. ISBN   978-0-387-87862-1. 27. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 42. ISBN   978-3-540-26312-8. 28. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. pp. 24–25. ISBN   978-0-387-95313-7. 29. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 1–2. ISBN   978-3-540-90275-1. 30. Loïc Chaumont; Marc Yor (2012). Exercises in Probability: A Guided Tour from Measure Theory to Random Processes, Via Conditioning. Cambridge University Press. p. 175. ISBN   978-1-107-60655-5. 31. Robert J. Adler; Jonathan E. Taylor (2009). Random Fields and Geometry. Springer Science & Business Media. pp. 7–8. ISBN   978-0-387-48116-6. 32. Gregory F. Lawler; Vlada Limic (2010). Random Walk: A Modern Introduction. Cambridge University Press. ISBN   978-1-139-48876-1. 33. David Williams (1991). Probability with Martingales. Cambridge University Press. ISBN   978-0-521-40605-5. 34. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. ISBN   978-1-107-71749-7. 35. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. ISBN   978-0-521-83263-2. 36. Mikhail Lifshits (2012). Lectures on Gaussian Processes. Springer Science & Business Media. ISBN   978-3-642-24939-6. 37. Robert J. Adler (2010). The Geometry of Random Fields. SIAM. ISBN   978-0-89871-693-1. 38. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. ISBN   978-0-08-057041-9. 39. Bruce Hajek (2015). Random Processes for Engineers. Cambridge University Press. ISBN   978-1-316-24124-0. 40. G. Latouche; V. Ramaswami (1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM. ISBN   978-0-89871-425-8. 41. D.J. Daley; David Vere-Jones (2007). An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Springer Science & Business Media. ISBN   978-0-387-21337-8. 42. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. ISBN   978-81-265-1771-8. 43. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. ISBN   978-3-319-09590-5. 44. Adam Bobrowski (2005). Functional Analysis for Probability and Stochastic Processes: An Introduction. Cambridge University Press. ISBN   978-0-521-83166-6. 45. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1336–1347. 46. Jochen Blath; Peter Imkeller; Sylvie Rœlly (2011). Surveys in Stochastic Processes. European Mathematical Society. ISBN   978-3-03719-072-2. 47. Michel Talagrand (2014). Upper and Lower Bounds for Stochastic Processes: Modern Methods and Classical Problems. Springer Science & Business Media. pp. 4–. ISBN   978-3-642-54075-2. 48. Paul C. Bressloff (2014). Stochastic Processes in Cell Biology. Springer. pp. vii–ix. ISBN   978-3-319-08488-6. 49. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 27. ISBN   978-0-08-057041-9. 50. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1337. 51. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. pp. 121–124. ISBN   978-1-107-71749-7. 52. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 294, 295. ISBN   978-1-118-59320-2. 53. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 26. ISBN   978-0-08-057041-9. 54. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. pp. 24, 25. ISBN   978-1-4612-3166-0. 55. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. p. 482. ISBN   978-81-265-1771-8. 56. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 527. ISBN   978-1-4471-5201-9. 57. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. p. 120. ISBN   978-3-319-09590-5. 58. Jeffrey S Rosenthal (2006). A First Look at Rigorous Probability Theory. World Scientific Publishing Co Inc. pp. 177–178. ISBN   978-981-310-165-4. 59. Peter E. Kloeden; Eckhard Platen (2013). Numerical Solution of Stochastic Differential Equations. Springer Science & Business Media. p. 63. ISBN   978-3-662-12616-5. 60. Davar Khoshnevisan (2006). Multiparameter Processes: An Introduction to Random Fields. Springer Science & Business Media. pp. 153–155. ISBN   978-0-387-21631-7. 61. . Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) 62. O. B. Sheĭnin (2006). Theory of probability and statistics as exemplified in short dictums. NG Verlag. p. 5. ISBN   978-3-938417-40-9. 63. Oscar Sheynin; Heinrich Strecker (2011). Alexandr A. Chuprov: Life, Work, Correspondence. V&R unipress GmbH. p. 136. ISBN   978-3-89971-812-6. 64. Doob, Joseph (1934). "Stochastic Processes and Statistics". Proceedings of the National Academy of Sciences of the United States of America. 20 (6): 376–379. Bibcode:1934PNAS...20..376D. doi:. PMC  . PMID   16587907. 65. Khintchine, A. (1934). "Korrelationstheorie der stationeren stochastischen Prozesse". Mathematische Annalen. 109 (1): 604–615. doi:10.1007/BF01449156. ISSN   0025-5831. S2CID   122842868. 66. Kolmogoroff, A. (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung". Mathematische Annalen. 104 (1): 1. doi:10.1007/BF01457949. ISSN   0025-5831. S2CID   119439925. 67. . Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) 68. Bert E. Fristedt; Lawrence F. Gray (2013). A Modern Approach to Probability Theory. Springer Science & Business Media. p. 580. ISBN   978-1-4899-2837-5. 69. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. pp. 121, 122. ISBN   978-1-107-71749-7. 70. Søren Asmussen (2003). Applied Probability and Queues. Springer Science & Business Media. p. 408. ISBN   978-0-387-00211-8. 71. David Stirzaker (2005). Stochastic Processes and Models. Oxford University Press. p. 45. ISBN   978-0-19-856814-8. 72. Murray Rosenblatt (1962). . Oxford University Press. p.  91. 73. John A. Gubner (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. p. 383. ISBN   978-1-139-45717-0. 74. Kiyosi Itō (2006). Essentials of Stochastic Processes. American Mathematical Soc. p. 13. ISBN   978-0-8218-3898-3. 75. M. Loève (1978). Probability Theory II. Springer Science & Business Media. p. 163. ISBN   978-0-387-90262-3. 76. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. p. 133. ISBN   978-3-319-09590-5. 77. Gusak et al. (2010), p. 1 78. Richard F. Bass (2011). Stochastic Processes. Cambridge University Press. p. 1. ISBN   978-1-139-50147-7. 79. ,John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. p. 3. ISBN   978-3-540-90275-1. 80. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 55. ISBN   978-1-86094-555-7. 81. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 293. ISBN   978-1-118-59320-2. 82. Florescu, Ionut (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 301. ISBN   978-1-118-59320-2. 83. Bertsekas, Dimitri P.; Tsitsiklis, John N. (2002). Introduction to Probability. Athena Scientific. p. 273. ISBN   978-1-886529-40-3. 84. Ibe, Oliver C. (2013). Elements of Random Walk and Diffusion Processes. John Wiley & Sons. p. 11. ISBN   978-1-118-61793-9. 85. Achim Klenke (2013). Probability Theory: A Comprehensive Course. Springer. p. 347. ISBN   978-1-4471-5362-7. 86. Gregory F. Lawler; Vlada Limic (2010). Random Walk: A Modern Introduction. Cambridge University Press. p. 1. ISBN   978-1-139-48876-1. 87. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. p. 136. ISBN   978-0-387-95313-7. 88. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 383. ISBN   978-1-118-59320-2. 89. Rick Durrett (2010). Probability: Theory and Examples. Cambridge University Press. p. 277. ISBN   978-1-139-49113-6. 90. Weiss, George H. (2006). "Random Walks". Encyclopedia of Statistical Sciences. p. 1. doi:10.1002/0471667196.ess2180.pub2. ISBN   978-0471667193. 91. Aris Spanos (1999). Probability Theory and Statistical Inference: Econometric Modeling with Observational Data. Cambridge University Press. p. 454. ISBN   978-0-521-42408-0. 92. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 81. ISBN   978-1-86094-555-7. 93. Allan Gut (2012). Probability: A Graduate Course. Springer Science & Business Media. p. 88. ISBN   978-1-4614-4708-5. 94. Geoffrey Grimmett; David Stirzaker (2001). Probability and Random Processes. OUP Oxford. p. 71. ISBN   978-0-19-857222-0. 95. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 56. ISBN   978-1-86094-555-7. 96. Brush, Stephen G. (1968). "A history of random processes". Archive for History of Exact Sciences. 5 (1): 1–2. doi:10.1007/BF00328110. ISSN   0003-9519. S2CID   117623580. 97. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1338. 98. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 21. ISBN   978-0-486-69387-3. 99. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 471. ISBN   978-1-118-59320-2. 100. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. pp. 21, 22. ISBN   978-0-08-057041-9. 101. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. VIII. ISBN   978-1-4612-0949-2. 102. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. p. IX. ISBN   978-3-662-06400-9. 103. Jeffrey S Rosenthal (2006). A First Look at Rigorous Probability Theory. World Scientific Publishing Co Inc. p. 186. ISBN   978-981-310-165-4. 104. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 33. ISBN   978-1-4612-3166-0. 105. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 118. ISBN   978-1-4684-9305-4. 106. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. pp. 1, 3. ISBN   978-1-139-48657-6. 107. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 78. ISBN   978-1-4612-0949-2. 108. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 61. ISBN   978-1-4612-0949-2. 109. Steven E. Shreve (2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer Science & Business Media. p. 93. ISBN   978-0-387-40101-0. 110. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. pp. 225, 260. ISBN   978-0-387-95313-7. 111. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 70. ISBN   978-1-4612-0949-2. 112. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. p. 131. ISBN   978-1-139-48657-6. 113. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. ISBN   978-1-86094-555-7. 114. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. ISBN   978-1-4612-0949-2. 115. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1341. 116. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 340. ISBN   978-0-08-057041-9. 117. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 124. ISBN   978-1-86094-555-7. 118. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 47. ISBN   978-1-4612-0949-2. 119. Ubbo F. Wiersema (2008). Brownian Motion Calculus. John Wiley & Sons. p. 2. ISBN   978-0-470-02171-2. 120. Henk C. Tijms (2003). A First Course in Stochastic Models. Wiley. pp. 1, 2. ISBN   978-0-471-49881-0. 121. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. pp. 19–36. ISBN   978-0-387-21564-8. 122. Mark A. Pinsky; Samuel Karlin (2011). An Introduction to Stochastic Modeling. Academic Press. p. 241. ISBN   978-0-12-381416-6. 123. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 38. ISBN   978-0-19-159124-2. 124. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. p. 19. ISBN   978-0-387-21564-8. 125. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 22. ISBN   978-0-19-159124-2. 126. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. pp. 118, 119. ISBN   978-0-08-057041-9. 127. Leonard Kleinrock (1976). . Wiley. p.  61. ISBN   978-0-471-49110-1. 128. Murray Rosenblatt (1962). . Oxford University Press. p.  94. 129. Martin Haenggi (2013). Stochastic Geometry for Wireless Networks. Cambridge University Press. pp. 10, 18. ISBN   978-1-107-01469-5. 130. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. pp. 41, 108. ISBN   978-1-118-65825-3. 131. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 11. ISBN   978-0-19-159124-2. 132. Roy L. Streit (2010). Poisson Point Processes: Imaging, Tracking, and Sensing. Springer Science & Business Media. p. 1. ISBN   978-1-4419-6923-1. 133. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. v. ISBN   978-0-19-159124-2. 134. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 528. ISBN   978-1-4471-5201-9. 135. Georg Lindgren; Holger Rootzen; Maria Sandsten (2013). Stationary Stochastic Processes for Scientists and Engineers. CRC Press. p. 11. ISBN   978-1-4665-8618-5. 136. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. pp. 93, 94. ISBN   978-3-540-26312-8. 137. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 25. ISBN   978-1-4612-3166-0. 138. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 104. ISBN   978-3-540-26312-8. 139. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 296. ISBN   978-1-118-59320-2. 140. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. p. 493. ISBN   978-81-265-1771-8. 141. Bernt Øksendal (2003). Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media. p. 10. ISBN   978-3-540-04758-2. 142. Peter K. Friz; Nicolas B. Victoir (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press. p. 571. ISBN   978-1-139-48721-4. 143. Sidney I. Resnick (2013). Adventures in Stochastic Processes. Springer Science & Business Media. pp. 40–41. ISBN   978-1-4612-0387-2. 144. Ward Whitt (2006). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues. Springer Science & Business Media. p. 23. ISBN   978-0-387-21748-2. 145. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 4. ISBN   978-0-521-83263-2. 146. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. p. 10. ISBN   978-3-662-06400-9. 147. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 123. ISBN   978-1-107-71749-7. 148. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 6 and 7. ISBN   978-3-540-90275-1. 149. Iosif I. Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 4. ISBN   978-0-486-69387-3. 150. Robert J. Adler (2010). The Geometry of Random Fields. SIAM. pp. 14, 15. ISBN   978-0-89871-693-1. 151. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. p. 112. ISBN   978-1-118-65825-3. 152. Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 94–96. 153. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 298, 299. ISBN   978-1-118-59320-2. 154. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 8. ISBN   978-0-486-69387-3. 155. David Williams (1991). Probability with Martingales. Cambridge University Press. pp. 93, 94. ISBN   978-0-521-40605-5. 156. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. pp. 22–23. ISBN   978-1-86094-555-7. 157. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. p. 37. ISBN   978-1-139-48657-6. 158. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 130. ISBN   978-1-107-71749-7. 159. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 530. ISBN   978-1-4471-5201-9. 160. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 48. ISBN   978-1-86094-555-7. 161. Bernt Øksendal (2003). Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media. p. 14. ISBN   978-3-540-04758-2. 162. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 472. ISBN   978-1-118-59320-2. 163. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. pp. 18–19. ISBN   978-3-662-06400-9. 164. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 20. ISBN   978-0-521-83263-2. 165. Hiroshi Kunita (1997). Stochastic Flows and Stochastic Differential Equations. Cambridge University Press. p. 31. ISBN   978-0-521-59925-2. 166. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. p. 35. ISBN   978-0-387-95313-7. 167. Monique Jeanblanc; Marc Yor; Marc Chesney (2009). Mathematical Methods for Financial Markets. Springer Science & Business Media. p. 11. ISBN   978-1-85233-376-8. 168. Kiyosi Itō (2006). Essentials of Stochastic Processes. American Mathematical Soc. pp. 32–33. ISBN   978-0-8218-3898-3.
2021-09-27 00:34:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 218, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130096793174744, "perplexity": 780.3465130538772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058222.43/warc/CC-MAIN-20210926235727-20210927025727-00444.warc.gz"}
https://www.gamedev.net/forums/topic/701816-creating-a-font-atlas/
Creating a Font Atlas Recommended Posts Not too long ago I made a thread requesting help with rendering fonts in FreeType. After a while, I got everything to work perfectly and had a pretty good font renderer. The problem that I ignored for a while was that I was making a texture for each glyph, which made the program's memory usage insanely high. Today I decided to just create a single texture instead, and place each glyph in a single section (font atlas). Unfortunately, I'm getting terrible results and haven't figured this out in the past 6 hours I've been working on it. Above is the result when I draw the entire texture.. I'm not sure exactly why this happens, I am using the same code I was before to rasterize the glyphs into the texture. case FT_PIXEL_MODE_GRAY: { for ( int j = 0; j < glyph_h; j++ ) { uchar *psrc = bitmap->buffer + ( j * bitmap->pitch ); for ( int i = 0; i < glyph_w; i++ ) { uchar *pdst = dst + ( 4 * i + j * stride ); *pdst++ = 0xFF; *pdst++ = 0xFF; *pdst++ = 0xFF; *pdst++ = *psrc++; } } break; } The only difference is how it is loaded into the buffer (which just adds the position "(y * texture_size) + x") // dst - a unsigned char buffer // x/y - the current x/y position in the texture ft_RenderGlyphToBuffer( dst + ( y * texsize ) + x ); Is my logic or approach wrong here? Appreciate any help, thank you. Share on other sites I'm not sure this is enough information to really you. There could be bugs elsewhere. It does look like perhaps your color components are flipped, and it also looks like either incorrect UVs, and/or incorrect stride when copying pixels. When I had similar problems I would first verify which pieces are absolutely known to be correct, and try to isolate problems. Is your atlas constructed properly, and can you open it in an image editor? Can you copy out one image from your atlas into an isolated image, and open it properly in an image editor? Can you copy out one image from your atlas, and then render it on screen? Share on other sites Is that how it looks in the graphics debugger? Also you can use a single channel texture as your using single colour fonts and not doing subpixel. Share on other sites Posted (edited) 31 minutes ago, Randy Gaul said: I'm not sure this is enough information to really you. There could be bugs elsewhere. It does look like perhaps your color components are flipped, and it also looks like either incorrect UVs, and/or incorrect stride when copying pixels. When I had similar problems I would first verify which pieces are absolutely known to be correct, and try to isolate problems. Is your atlas constructed properly, and can you open it in an image editor? Can you copy out one image from your atlas into an isolated image, and open it properly in an image editor? Can you copy out one image from your atlas, and then render it on screen? At this point, I believe it has to be the function "ft_RenderGlyphToBuffer".. I was using this function perfectly fine before when I was creating a texture for every glyph rather than creating one for all. 31 minutes ago, Randy Gaul said: Can you copy out one image from your atlas, and then render it on screen? Not sure what you mean. I was rendering the entire atlas onto the screen. All of them are skewed. VDisplayImage dimg( /*font atlas texture*/ m_pTexture, /*area*/ FloatRect( 0, 0, m_lTexSize, m_lTexSize ), /*offset*/ FloatVector2( 0, 0 ) ); dimg.Display( 0, 0, 1280, 720, CRgbaColor::Intensity( 255 ) ); This resulted in the picture above. 19 minutes ago, SyncViews said: Is that how it looks in the graphics debugger? I am drawing them with DirectX 9. --- Edit I should note that this is how I am storing each glyph into the atlas int gw = bitmap->width + GLYPH_PADDING; int gh = bitmap->rows + GLYPH_PADDING; // Check if glyph right margin does not exceed texture size uint x_next = x + gw; if ( x_next > texsize ) { x_next = x + gw; y = yb; } // Check if glyph bottom margin does not exceed texture size uint y_bot = y + gh; if ( y_bot > texsize ) break; ft_RenderGlyphToBuffer( dst + ( y * texsize ) + x, texsize ); FloatRect area ( static_cast< float >( x ), static_cast< float >( y ), static_cast< float >( gw - GLYPH_PADDING ), static_cast< float >( gh - GLYPH_PADDING ) ); FloatVector2 offset ( glyphit->second.offsetX, -glyphit->second.offsetY ); glyphit->second.image = new VDisplayImage( m_pTexture, area, offset ); x = x_next; if ( y_bot > yb ) { yb = y_bot; } Edited by datboi Share on other sites Posted (edited) 33 minutes ago, datboi said: This resulted in the picture above. 51 minutes ago, SyncViews said: Is that how it looks in the graphics debugger? I am drawing them with DirectX 9. Well check with VS/PIX just in case. Its really strange that you have multiple colours in that image when you wrote 0xFF to 3 of 4 bytes. EDIT: You might also step through in the debugger to make sure say a line or two gets rendered properly. Should be able to watch something like "(unsigned*)p,[100]". Edited by SyncViews Share on other sites 1 hour ago, SyncViews said: Well check with VS/PIX just in case. Its really strange that you have multiple colours in that image when you wrote 0xFF to 3 of 4 bytes. EDIT: You might also step through in the debugger to make sure say a line or two gets rendered properly. Should be able to watch something like "(unsigned*)p,[100]". Its 100% due to the fact that this code is wrong (for this scenario atleast) case FT_PIXEL_MODE_GRAY: { for ( int j = 0; j < glyph_height; j++ ) { uchar *psrc = bitmap->buffer + ( j * bitmap->pitch ); for ( int i = 0; i < glyph_width; i++ ) { // stride = texture_size * 4 uchar *pdst = buffer + ( 4 * i + j * stride ); *pdst++ = 0xFF; *pdst++ = 0xFF; *pdst++ = 0xFF; *pdst++ = *psrc++; } } break; } It worked for when I used single textures for each glyph, but now that I use one texture, it screws up. I'm not sure why, I thought that accounting for it by using texture_buffer + ( y * texture_size ) + x as the parameter for 'buffer' would be enough to allow this code to still work. I'm not sure how it doesn't... but I am very tired so I will take a look at this again tomorrow. Thanks. Create an account Register a new account • Game Developer Survey We are looking for qualified game developers to participate in a 10-minute online survey. Qualified participants will be offered a \$15 incentive for your time and insights. Click here to start! • 13 • 18 • 15 • 9 • 9
2019-10-18 09:22:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3108920753002167, "perplexity": 3648.8275514157194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986679439.48/warc/CC-MAIN-20191018081630-20191018105130-00177.warc.gz"}
https://techwhiff.com/learn/question-26-in-its-first-year-of-operations-grace/253662
# QUESTION 26 In its first year of operations. Grace Company reports the following: Earned revenues of... ###### Question: QUESTION 26 In its first year of operations. Grace Company reports the following: Earned revenues of $60.000 (552,000 cash received from customers): Incurred expenses of 535,000 1531.000 tash paid toward them Prepaid$8,000 cash for costs that will not be expensed until next year. Net income under the accrual basis of accounting is: A. 517.000 OB. 521,000 C 518,000. D.525,000 E. None of the answer choices is correct. #### Similar Solved Questions ##### Discuss the “key” spots in the decision process (for both the vacation and gasoline) where marketers... Discuss the “key” spots in the decision process (for both the vacation and gasoline) where marketers could influence buyer behaviour. Which spots are most important? What is the best way to get the consumer’s attention and exert influence? 1. Vacation purchase 2. Gasoline (fue... ##### NUR 360 Tissue – Burn Pre-WorkDescribe the difference between thermal, chemical and electrical burns.Describe the difference... NUR 360 Tissue – Burn Pre-WorkDescribe the difference between thermal, chemical and electrical burns.Describe the difference in burn severity (superficial, partial thickness, full thickness, 1st degree, 2nd degree, 3rd degree).What is the parkland formula?How much fluid should be given to a bu... ##### Solve both parts of question 2 1. Find Ex and Ey, the energies of the signals... Solve both parts of question 2 1. Find Ex and Ey, the energies of the signals x(t) and y(t) for the given figure below. Sketch the signals x(t)+y(t) and x(t)-y(t). Show that the energies of either of these two signals is equal to Ex + Ey. (20 points) Vt) 2. Find the energies of the signals x(t) and ... ##### 41. 10.7 Use Table 10.1 to answer the following questions: (a) There are just four structures... 41. 10.7 Use Table 10.1 to answer the following questions: (a) There are just four structures for amines with the formula C3HoN. Draw the structure of the (b) There are only two structures for an alkyne with the formula C4Ho. Draw the structure of the (c) There are just two structures for a carbonyl... ##### 1.) a.) Using the simplified instruction set shown for part b, write code for the following.... 1.) a.) Using the simplified instruction set shown for part b, write code for the following. Suppose memory locations 1400 to 1449 contain 16-bit words. Each word represents 2 ASCII characters. Write code to read in and write out these 100 characters. Left-side character from location 1400 should be... ##### Fill the table with MMC and LMC values corresponding to features A through F. Fill the table with MMC and LMC values corresponding to features A through F.... ##### A bullet with mass 25g and initial horizontal velocity 320m=s strikes a block of mass 2kg... A bullet with mass 25g and initial horizontal velocity 320m=s strikes a block of mass 2kg that rests on a frictionless surface and is attached to one end of a spring. The bullet becomes embedded in the block. The other end of the spring is attached to the wall. The impact compresss the spring a maxi... ##### Three charges are placed on the x-axis, q1=+q at x = 0, q2 = +2q at... Three charges are placed on the x-axis, q1=+q at x = 0, q2 = +2q at x = d, and q3 = +5q at x =2d. Use a negative to indicate the negative x-direction (to the left) and a positive sign to indicate the positive x-direction (to the right). What is the electric force acting on Q2?... ##### Find the mass of water that vaporizes when 4.22 kg of mercury at 235 °C is... Find the mass of water that vaporizes when 4.22 kg of mercury at 235 °C is added to 0.176 kg of water at 92.4 °C.... ##### Let X ~Par (2) and Y = ln(X). Compute P(Y > 1). Let X ~Par (2) and Y = ln(X). Compute P(Y > 1).... ##### What are two ways to test the equality of population variances? Explain in detail. What are two ways to test the equality of population variances? Explain in detail.... ##### A circuit with a resistance of 6 Omega has a fuse that melts at 8 "A". Can a voltage of 66 "V" be applied to the circuit without blowing the fuse? A circuit with a resistance of 6 Omega has a fuse that melts at 8 "A". Can a voltage of 66 "V" be applied to the circuit without blowing the fuse?...
2023-03-20 16:46:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5590556859970093, "perplexity": 1963.1342323794063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00673.warc.gz"}
https://schoollearningcommons.info/question/te-if-alpha-beta-gamma-are-the-zeroes-of-th-cubic-polynomial-a3-b-2-c-d-24950440-1/
## [tex]if \: \alpha \: \beta \: \gamma \: are \: the \: zeroes \: of \: th \:cubic \: polynomial \: ax3 + bx {}^{2} + cx + d \: [ Question a is not equal to zero then in progress 0 9 months 2021-08-20T16:09:35+00:00 1 Answer 0 views 0
2022-05-29 05:14:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9778954386711121, "perplexity": 11513.788876545772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00090.warc.gz"}
https://electronics.stackexchange.com/questions/125229/whats-the-name-of-this-circuit-and-how-does-it-work
# What's the name of this circuit, and how does it work? I was reading Hack Into a Timer AC Socket and when I got to this paragraph: The design firstly have a RC buck? (not sure how to call correctly in English) circuit to step down the voltage from 220V AC to 5V AC, a 0.33uf safety capacitor (yellow one) used here, accompanying with a 1M discharge resistor. And after these two there is another current limited big resistor to prevent pulse and shock. I was curious - I didn't even realize you could change AC voltage levels without a transformer (or at least not cheaper than with a transformer). I understand that a resistor will lower voltage for a given current, but I would have expected AC mains to be too much power for a cheap resistor. And I don't get what the capacitor does - even if it is helpful here to get smoother DC power, wouldn't it interfere with anything else on the same AC line? WARNING - ALL PARTS OF THIS CIRCUIT SHOULD BE CONSIDERED TO BE AT AC MAINS POTENTIAL AT ALL TIMES. Capacitor C1 MUST be an "X RATED" capacitor specified by its manufacturer for "across mains" use. NB NOT 'Y' rated. Examples of X & Y rated capacitors • Do you have a readable version of the schematic? Clicking the thumbnail in that article gets me a server error. – gwideman Aug 15 '14 at 1:06 • A resistor by itself does not "lower voltage". The concept you need to study is "voltage divider". en.wikipedia.org/wiki/Voltage_divider – gwideman Aug 15 '14 at 1:08 • Actually, I do - change the subdomain to WWW and the links work: electrodragon.com/wp-content/uploads/sites/7/2014/08/… – Nathan Friedly Aug 15 '14 at 1:14 • That looks somewhat like a very poorly laid out bridge rectifier with an input filter. – sherrellbc Aug 15 '14 at 1:17 • @RussellMcMahon Are there cases where a Y capacitor would be less safe than an X capacitor? – Spehro Pefhany Aug 15 '14 at 2:49 This is a capacitive dropper power supply. The bulk of the voltage is dropped across the 'safety' rated capacitor C1. It acts a bit like a resistor but does not dissipate significant heat. The reactance is Xc = $\dfrac{1}{2\pi f C} \Omega$.
2021-06-19 19:49:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3937031030654907, "perplexity": 1898.238000941219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649688.44/warc/CC-MAIN-20210619172612-20210619202612-00519.warc.gz"}
https://www.nature.com/articles/s41598-022-10777-w
## Introduction For a decade, growing interest in clinical immunology and rheumatology regarding targeted therapies to block cytokines and their signaling have led to the development and use of Janus kinase (JAK) inhibitors. Janus kinases are cytokine transmembrane receptors: JAK1, JAK2, JAK3 and TYK2. JAK-STAT (signal transducer and activator of transcription) pathway plays roles in orchestrating of immune system, cell proliferation and haematopoiesis1. JAK-STAT pathway is implicated in the pathogenesis of inflammatory and autoimmune diseases including rheumatoid arthritis, psoriasis, and inflammatory bowel disease as well as malignancies1. Three of the JAK inhibitors have been approved for a few years by the US Food and Drug Administration/European Medicines Agency (FDA/EMA). Tofacitinib, a selective JAK1 and JAK 3 inhibitor, has been approved for treating rheumatoid arthritis, psoriatic arthritis, and ulcerative colitis. Ruxolitinib, a selective JAK1 and JAK2 inhibitor, has been approved for treating myelofibrosis and polycythemia vera. Baricitinib selectively inhibits JAK 1 and JAK 2 and has been approved for treating rheumatoid arthritis and atopic dermatitis. The success of JAK inhibitors in the treatment of inflammatory diseases or malignancies demonstrates that intracellular signaling pathways can be targeted to treat inflammatory and autoimmune diseases. Perspectives for the use of these three JAK inhibitors are now wider, for other inflammatory/autoimmune diseases2,3. Moreover, increasing number of JAK inhibitors have been recently approved or assessed in clinical trials, including research into cancer treatment4,5,6,7. In this context of the intensive development of these JAK inhibitors, safety data are crucial. The first three approved JAK inhibitors (ruxolitinib anti-JAK1,2, tofacitinib anti-JAK1,3, and baricitinib anti-JAK1,2,) can offer sufficient perspectives for safety studies, for patients who participated into clinical trials or those receiving treatment with the approval of these treatments in the United States, Asia and Europe. As for other biologic agents, risk of serious infections and opportunistic infections has been reported, mostly among patients participating in clinical trials8,9,10,11,12,13,14. As compared with patients using biologics (anti-TNF, abatacept, rituximab and tocilizumab), among those receiving tofacitinib, the rate of herpes zoster infection doubled in a real-world American study15. Apart from infection risk, studies to evaluate the risk of serious heart-related events and cancer were planned at the time tofacitinib was approved. Recent concerns about ruxolitinib involved occurrence of non-melanoma skin-cancer and second malignancies16 and concerns about tofacitinib and baricitinib involved embolism and thrombotic events17,18,19,20,21,22, intestinal perforations10,23,24,25,26 and malignancies13,25,27. Thus, the EMA Committee for Medicinal Products for Human Use and the FDA added thrombosis to the bariticinib and tofacitinib warnings and precautions28 as well as intestinal perforations29. Post-marketing reporting constitutes an important source to identify safety signals. In this study, we assessed the safety of the first three approved JAK inhibitors—ruxolitinib, tofacitinib and baricitinib—by using the World Health Organization (WHO) international pharmacovigilance database, VigiBase, which contains more than 24 million individual case safety reports (ICSRs) and classifies adverse events according to the Medical Dictionary for Regulatory Activities (MedDRA). To identify safety concern, we used disproportionality analysis. ## Results Among the 24,416,850 ICSRs in VigiBase, the number involving JAK inhibitors was 126,815. Tofacitinib had the highest number of reports (supplementary Table S1). Physicians reported 12% to 29% of the ICSRs for JAK inhibitors. In 16.3% of the ICSRs for ruxolitinib, 9.6% for tofacitinib and 12.9% for baricitinib, the adverse events caused or prolonged hospitalization. In 14.0% of the ICSRs for ruxolitinib, 1.9% for tofacitinib and 1.4% for baricitinib, the adverse events caused death. The median number of Preferred Terms (PTs) declared by ICSRs was 2.0 (IQR 1.0–3.0). For patients, the median age was 70, 61 and 61 years for ruxolitinib, tofacitinib and baricitinib reports, respectively. More than 75% of the ICSRs for tofacitinib and baricitinib involved women. Rheumatoid arthritis was most frequently reported in tofacitinib and baricitinib ICSRs (55% and 79.7%, respectively), whereas myelofibrosis and polycythemia vera were reported in ruxolitinib ICSRs (43.5% and 19.3%). A total of 376,487 adverse events were reported in the 126,815 ICSRs (including 6179 different PTs). We identified four main System Organ Classes (SOCs) for which adverse event reporting was significantly increased for JAK inhibitors compared with the full database (Fig. 1 and supplementary Table S2): “infections and infestations” (IC025 1.7, i.e. lower limit of the 95% credibility interval of the information component estimated thanks to disproportionality analysis), “musculoskeletal and connective tissue disorders” (IC025 1.1), “investigations” (IC025 0.9), and “neoplasms benign, malignant and unspecified” (IC025 0.8). Six other SOCs (including blood and lymphatic system and respiratory, thoracic and mediastinal disorders) represented also significant increased reporting of adverse event associated with JAK inhibitors (Fig. 1 and supplementary Table S2). We did not find any association for 17 of the 27 different SOCs, including nervous system, psychiatric, vascular, cardiac, skin and subcutaneous tissue disorders (Fig. 1 and supplementary Table S2). We further described the results regarding (1) infections and infestations, (2) musculoskeletal and connective tissue disorders and (3) neoplasms. We did not describe “investigations” SOC which includes blood test abnormalities because we focused on clinical events rather than isolated biological data. We finally focused on PTs of interest for embolism and thrombosis, gastrointestinal perforations and serious heart-related events. ### Infections and infestations (Table 1, Fig. 2 and supplementary Tables S3 and S4) The main significant increased reporting of adverse events for viral infections were herpes infections, including herpes viral infections (IC025 2.9), and influenza viral infections (IC025 2.4) (Table 1). We found a differential reporting between the three JAK inhibitors (Fig. 2). Over-reported herpes viral infections were ranked from the highest for baricitinib, then tofacitinib, then ruxolitinib, whereas over-reported influenza viral infections were ranked from the highest for tofacitinib, then baricitinib, then ruxolitinib. High dose of baricitinib was significantly associated with increased reporting of herpes viral infections and influenza viral infections compared with low dose (supplementary Tables S3 and S4). Regarding fungal infectious disorders, we identified pneumocystis infections and cryptococcal and coccidioides infections as significant higher reporting (IC025 1.9 for all three). We found a significant increased reporting concerning pneumocystis infections for each JAK inhibitor, with an over-reporting for baricitinib versus tofacitinib and ruxolitinib (Fig. 2). Similarly, tuberculous and atypical mycobacterial infections had IC025 values close to 2 (1.9 and 1.7, respectively). Tuberculous infections were over-reported for ruxolitinib versus tofacitinib, with no signal observed for baricitinib. Finally, we observed a significant increased reporting of infections according to organ localization: upper respiratory tract infections (IC025 1.9), urinary tract infections (IC025 1.9), and lower respiratory tract and lung infections (IC025 1.9). Over-reported respiratory and urinary tract infections were ranked from the highest for baricitinib, then tofacitinib, then ruxolitinib. High dose of baricitinib was significantly associated with increased reporting of upper respiratory tract infections compared with low dose whereas high dose of tofacitinib was significantly associated with decreased reporting of lower respiratory tract and lung infections and urinary tract infections compared with low dose. No other differences in an over reporting of infections were associated to the dose of either baricitinib or tofacitinib (supplementary Tables S3 and S4). ### Musculoskeletal and connective tissue disorders (supplementary Table S5) The adverse events “synovial and bursal disorders”, “musculoskeletal and connective tissue deformities” and “joint disorders” were the main significant adverse events reported (IC025 3.4, 2.1 and 1.9, respectively). ### Neoplasms (Table 2 and supplementary Tables S4 and S6) We identified malignant neoplasms for which adverse event reporting was significanty increased: “hematopoietic neoplasms (excluding leukaemias and lymphomas)” (IC025 3.7), “skin neoplasms malignant and unspecified” (IC025 2.4), “leukaemias” (IC025 2.1) and “soft tissue neoplasms benign” (IC025 1.9). “Respiratory and mediastinal neoplasms malignant” also presented a significant increase in reports (IC025 0.8). No differences in an over reporting of neoplams were associated to the dose of either baricitinib or tofacitinib (Supplementary Tables S4 and S6). ### Embolism and thrombosis (Table 3 and Fig. 2 and supplementary Tables S4 and S7) Among the 126,815 ICSRs, 1803 (1.4%) described an embolism and thrombosis adverse event (IC025 0.4). Over-reported embolism and thrombosis adverse events were ranked from the highest for baricitinib, then ruxolitinib, then tofacitinib (Fig. 2). No differences in an over reporting of embolism and thrombosis events were associated to the dose of either baricitinib or tofacitinib (supplementary Tables S4 and S7). ### Gastrointestinal perforation (Table 3 and Fig. 2 and supplementary Table S7) The JAK inhibitors were associated with higher reporting of “gastrointestinal perforation”, “large intestinal perforation”, “diverticular perforation”, “intestinal perforation” and “gastric perforation”. At the drug level, only tofacitinib had a significant increase of adverse event reporting. Of note, no increase for baricitinib and ruxolitinib did not mean no event: from 3 to 16 events were described for ruxolitinib and from 1 to 4 events for baricitinib. ### Major cardiovascular events (Table 3 and supplementary Table S7) No major cardiovascular adverse event were associated with higher reporting for JAK inhibitors. Similarly, no cerebrovascular events were reported with JAK inhibitors. At the drug level, only ruxolitinib had a significant increase of reporting for adverse events “cardiac failure”, “cardiac failure acute”, “cardiac failure congestive” and “cardiac failure chronic” compared with the full database. ## Discussion In this pharmacovigilance study, JAK inhibitors were most commonly associated with infectious adverse events, embolism and thrombosis, neoplasms and gastrointestinal perforation events. We also identified significant increase in adverse event reporting regarding musculoskeletal and connective tissue disorders. Finally, we found no association with major cardiovascular events. In our study, infections were frequently reported for JAK inhibitors, as was expected according to safety data from clinical trials24,30. We found a significant increase in reporting compared with the full database for some microorganisms (viral [herpes and influenza], fungal, and mycobacterial infectious disorders) and two main organ locations (respiratory and urinary tract infections). Herpes zoster has been identified as a complication of JAK inhibitors in clinical trials25,26,31,32 and in a pharmacovigilance study of adverse events reported from the United States22. Of note, in our study, herpes viral infections (MedDRA HLT) also include herpes simplex virus. Herpes zoster induced most of the treatment discontinuation due to infections in some clinical trials25,26, but few data are available for herpes simplex infections. We observed over-reported herpes viral infections, the highest level for baricitinib, then tofacitinib, then ruxolitinib. Associated risk factors that can affect herpes zoster and herpes simplex infections for patients receiving JAK inhibitors include age, glucocorticoid exposure25, other combined therapy, and underlying immunologic dysregulation. For example, herpes zoster/simplex infections were more frequently reported in a pooled safety data analysis of baricitinib in atopic dermatitis than in rheumatoid arthritis33. Atopic dermatitis is known to be associated with herpes simplex infections, with a severe form called eczema herpeticum34. In our study, the increased risk of herpes viral infections with baricitinib versus the two other JAK inhibitors could be explained by the underlying disorder because the main indication (80%) was rheumatoid arthritis. The risk associated with ruxolitinib was difficult to assess because most patients with haematopoietic neoplasms could have received prophylactic valaciclovir. Recent concerns about JAK inhibitors involved embolism and thrombosis17,18,19,20,21,22. Although the initial beneficial effect of ruxolitinib for risk of thrombosis was assessed in patients with polycythemia vera and myelofibrosis35, lack of evidence remains for this beneficial association. Regarding tofacitinib, in the meta-analyses including 12,410 tofacitinib-exposed patients from completed studies, the incidence rate of venous thromboembolism events was 0.25 (95% CI 0.19–0.33). In our study, we found significant disproportionality analysis results for embolism and thrombosis with the first three approved JAK inhibitors. Over-reported “embolism and thrombosis” adverse events were ranked the highest for baricitinib, then ruxolitinib, then tofacitinib. These comparisons must be interpreted with caution. Indeed, we did not consider patient characteristics, risk factors for thromboembolism or dose and duration of treatments. In the meta-analysis of clinical trials of tofacitinib, patients with than without baseline cardiovascular risk factors were more likely to experience thromboembolic events20. Risk factors were age ≥ 50 years and with at least one criterion (current smoker, high-density lipoprotein level < 40 mg/dL, history of hypertension, diabetes, myocardial infarction or coronary heart disease). Incidence rates in patients without risk factors were very low and most patients who experienced thromboembolic events also had multiple cardiovascular risk factors at baseline. Similarly, all patients with thromboembolic events in a pooled analysis of clinical trials of baricitinib had multiple risk factors25. Therefore, the treatment must be adapted to the individual risk. Regarding neoplasms, we found increased frequency of neoplasm reports and identified “skin neoplasms malignant and unspecified” as significant. The three JAK inhibitors were associated with increased frequency of “skin neoplasms malignant and unspecified”. This is an important finding because previous cohort studies of patients with rheumatoid arthritis did not find a difference between tofacitinib and biologic disease-modifying anti-rheumatic drugs in risk of non-melanoma skin cancer (adjusted hazard ratio 1.04 [95% CI 0.68–1.61])36. Rheumatoid arthritis is associated with increased risk of melanoma and non-melanoma skin cancer regardless of the exposure37,38. Thus, the increased frequency of “skin neoplasms malignant and unspecified” for ruxolitinib leads to a discussion of a class effect of the JAK inhibitor. “Respiratory and mediastinal neoplasms malignant” was frequently reported for all three JAK inhibitors. This finding confirmed the recent warning from Pfizer for tofacitinib39. Indeed, in this warning, the incidence rate of malignancies excluding non-melanoma skin cancer was 1.13 (95% CI 0.94–1.35), with lung cancer as the leading cancer. As for skin neoplasms, this signal concerned all three JAK inhibitors. Lastly, we found a significant increase in reporting for “leukaemias”, in particular for ruxolitinib, which is probably related to the underlying disease. Some studies have concluded similar incidence rates of malignancies for patients receiving tofacitinib or baricitinib as for those receiving other drugs40 and for non-melanoma skin cancer41 or malignancies excluding non-melanoma cancer13,14,25. ‘Cancer immunoediting’, the process whereby the human immune system destroys cancer cells within the body, is thought to rely upon a variety of cytokines (for example, IFNγ) and cell types (such as NK cells) that could be affected by JAK inhibition42. Decrease NK cells could predispose to develop malignancies among patients treated by JAK inhibitors but this effect remains unclear30. Exposure time within trials is relatively limited, and even if pharmacovigilance studies bring interesting data, longer follow-up is needed to further assess malignancy risk and to compare JAK inhibitors with each other. In our study, we observed increased frequency of gastrointestinal perforations with the three JAK inhibitors. Few cases of gastrointestinal perforation have been reported for patients participating in clinical trials of baricitinib and tofacitinib or those covered by US Medicare/Marketscan10,23,24,25,26. These few cases were described only among patients with rheumatoid arthritis. To our knowledge, only one clinical trial of ruxolitinib for myelofibrosis reported such an event causing death in a patient in the placebo group11,43. In our study, gastrointestinal perforation was over-reported only with tofacitinib. However, cases were also reported for the other two JAK inhibitors. These adverse events would be more frequent for patients with inflammatory bowel diseases. Treatments other than JAK inhibitors such as non-steroidal anti-inflammatory drugs are associated with increased risk of gastrointestinal perforation, which is important to consider with JAK inhibitors. Finally, the percentage of fatal cases resulted much higher for ruxolitinib than for other JAK inhibitors. We did not perform a detailed analysis and clinical review of the 7000 fatal cases. However, plausible explanation regarding the percentage for ruxolitinib relies on patient characteristics and indications. ## Methods ### Study design and data sources In this retrospective observational study, pharmacovigilance data were extracted from VigiBase, the World Health Organization (WHO) database of adverse drug reactions reporting, which is managed by the Uppsala Monitoring Centre (UMC). It contains more than 24 million individual case safety reports (ICSRs) submitted by national pharmacovigilance centers from countries around the world since 1967. Different people can report adverse drug reactions: healthcare professionals, patients, pharmaceutical companies. For each ICSR, characteristics of the patient, general administrative information, drugs and reactions are available. A completeness score is also provided, to add a measure of ICSR quality44. The likelihood of a causal association is not the same in all reports. The information provided in this study does not represent the opinion of the WHO. ### Procedures This study included all ICSRs reported from inception to February 28, 2021, with a suspected drug among the following: tofacitinib, baricitinib and ruxolitinib. Each ICSR contains at least one adverse event, which corresponds to the most specific level of the Medical Dictionary for Regulatory Activities (MedDRA) hierarchy: Lowest Level Term (LLT). Each LLT is linked to one Preferred Term (PT), which are themselves grouped into High Level Terms (HLTs). The MedDRA hierarchy thus describes adverse events according to five levels, from the very specific (LLT) to the very general (System Organ Class [SOC]; details are available in supplementary Fig. S1). Each ICSR contains the onset date, end date, seriousness and fatal outcome of the event. A severe adverse event could be any event causing death, being life-threatening, requiring initial or prolonged hospital stay, or leading to persistent or clinically significant disability, congenital anomaly, birth defect or any other medically important condition. ### Statistical analysis To identify potential safety concern, we used disproportionality analysis, which compares the proportion of each suspected drug-induced adverse event (at different MedDRA levels) reported for a drug or a group of drugs with that for the same adverse event in the full database or for other drugs. Thus, when a proportion of an adverse event is higher for JAK inhibitors than for other drugs, this adverse event could constitute a safety concern. Two main estimations of the disproportionality analysis can be used: the information component (IC) for comparing to the full database or to other drugs and reporting odds ratios (RORs) for comparing drugs belonging to the same group of drugs. The IC was developed and validated by the UMC; it relies on a Bayesian confidence propagation neural network45 and the formula is as follows: $$IC=log2\frac{{N}_{observed}+0.5}{{N}_{expected}+0.5}$$ in which $${N}_{expected}$$ is estimated by $${N}_{expected}=\frac{{N}_{drug}\times {N}_{effect}}{{N}_{total}}$$, $${N}_{drug}$$ is the total number of reports involving the drug studied, and $${N}_{effect}$$ is the total number of reports for the adverse events, regardless of drug. If the corresponding lower end of the 95% credibility interval (IC025) positive46, the adverse event could be considered a significant signal. This threshold has been used in the UMC and in different signal detection studies. Disproportionality analysis with the IC is illustrated in supplementary Fig. S1. Disproportionality analysis relies on the ROR for drugs belonging to the same group. We detail the formula with the JAK inhibitors in supplementary Table S8, with corresponding 95% confidence intervals (95% CIs). We first estimated the IC025 for adverse events related to JAK inhibitors at the more general level of the MedDRA hierarchy (SOC). Then, for the SOC with a positive IC025, we detailed the IC025 for adverse events at the therapeutic class level (JAK inhibitors) and for each drug at different MedDRA levels: High Level Group Terms (HLGTs) and HLTs. Finally, we focused on warnings by regulatory agencies: infections, embolism and thrombosis, serious heart-related events, gastrointestinal perforations. For selected adverse events with a positive IC025 for the three JAK inhibitors, we calculated RORs and 95% CIs. For selected adverse events with a positive IC025 for tofacitinib and baricitinib, we also estimated the IC025 of infections, embolism and thrombosis, serious heart-related events, gastrointestinal perforations according to their doses: high dose over 2 mg per day and 5 mg per day for baricitinb and tofacitinib, respectively; low-dose either. Lastly, we calculated ROR and 95% CIs for these previous adverse events using low-dose as reference. Quantitative variables are described with median (interquartile range) and categorical variables with number (percentage). Analyses involved using R 3.6.2. ## Conclusion In this international study, we identified significant increase in reporting of adverse event for the first three marketed JAK inhibitors compared with reporting of adverse eventsfor other drugs.. We confirmed some adverse effects such as infectious events and embolism and thrombosis which were already known and mentioned among cautions for use. Our results also lead to increase vigilance regarding malignancies for ruxolitinib, tofacitinib and baricitinib as well as gastrointestinal perforations for tofacitinib. We found no association with major cardiovascular events. Longer follow-up and observational studies will be helpful to improve knowledge about these risks among patients with other risk factors and treatments.
2022-10-06 18:27:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2863060534000397, "perplexity": 14293.253097825873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00544.warc.gz"}
https://www.iacr.org/cryptodb/data/paper.php?pubkey=32467
CryptoDB Paper: The Price of Verifiability: Lower Bounds for Verifiable Random Functions Authors: Nicholas Brandt , ETH Zurich Dennis Hofheinz , ETH Zurich Julia Kastner , ETH Zurich Akin Ünal , ETH Zurich Search ePrint Search Google Slides TCC 2022 Verifiable random functions (VRFs) are a useful extension of pseudorandom functions for which it is possible to generate a proof that a certain image is indeed the correct function value (relative to a public verification key). Due to their strong soundness requirements on such proofs, VRFs are notoriously hard to construct, and existing constructions suffer either from complex proofs (for function images), or rely on complex and non-standard assumptions. In this work, we attempt to explain this phenomenon. We show that for a large class of pairing-based VRFs, it is not possible to obtain short proofs and a reduction to a simple assumption simultaneously. Since the class of "consecutively verifiable" VRFs we consider contains in particular the VRF of Lysyanskaya and that of Dodis-Yampolskiy, our results explain the large proof size, resp. the complex assumption of these VRFs. BibTeX @inproceedings{tcc-2022-32467, title={The Price of Verifiability: Lower Bounds for Verifiable Random Functions}, publisher={Springer-Verlag}, author={Nicholas Brandt and Dennis Hofheinz and Julia Kastner and Akin Ünal}, year=2022 }
2022-12-02 12:16:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.622279942035675, "perplexity": 2877.352957998708}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00276.warc.gz"}
https://www.zerotosingularity.com/blog/adventures-in-deep-learning/
"Artificial intelligence is the new electricity." (Andrew Ng) "Deep learning is eating the world." If those did not catch your breath, they sure did mine. While I have always had an interest in artificial antelligence and neural networks - having taken some classes at university in 2007, it sure has been a while since I have done anything remotely real with it. Even more so, it sure looks like machine learning has come a very long way since then as well. :) It must have been the end of 2016 (yes, that long ago), I first enrolled in the Coursera machine learning course from Andrew Ng. I did not really take it to heart - I was really busy doing consulting those days, switching sessions to later dates time and time again. It was only in the summer of 2017 I decided actually to complete it. After that, I was hooked, and started my quest to find more resources to get up to speed in deep learning. Come the new year (2018), resolutions were made: 1. Spend more time with my love. 2. Loose weight. 3. Excercise more. 4. Blog about what I am doing. This blog is me making good on number four and five of those resolutions. (fyi, I am doing my best for the other ones as well.) Meanwhile, I have been spending quite some time getting up to speed (probably will need some more time) and this is the place where I will document my adventures in deep learning. Review/comment/discuss courses, books, articles, interesting things related to deep learning… Whatever deep learning material I can get my hands on… After some initial research, I was overwhelmed by the vast amount of resources to get you started, paid/free online courses, books, blog posts… Deep learning is definitely already taking over the publishing world. It took me a while to figure out where to start and what to read, and even then I felt I should have done things in a different order. ### What’s in it for you? If you want to learn more about deep learning and get an overview of what is out there. This place is for you. I will provide regular updates and reviews of courses, books, articles, papers… while I am mastering the subject myself. Here is a short overview of content I will provide: • review courses I will take: • what I expected to learn • prerequisites • content review • what I actually learned • why you might be interested • time spent • cost • document my practical experience and experiments • share interesting links, articles, news…
2019-03-18 17:33:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2512127757072449, "perplexity": 1183.234798228502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201521.60/warc/CC-MAIN-20190318172016-20190318194016-00316.warc.gz"}
https://pressbooks.lib.jmu.edu/programmingpatterns/chapter/truncation/
# 5 Truncation Integers commonly include more digits of accuracy than needed. In some situations, the right way to deal with this is using truncation. Truncation problems can be solved by dropping the right-most digits using the techniques from Chapter 3 on digit manipulation and then multiplying. # Motivation Suppose you have to write a payroll program for a manufacturing company. The employees at the company get paid a fixed amount per piece, but only for multiples of ten pieces. For inventory purposes, the company keeps track of the exact number of pieces completed, but for payroll purposes, that number has more digits of accuracy than is needed. For example, if the employee completes 520 pieces, 521 pieces, or 529 pieces, they are going to be payed for 520 pieces. Hence, the payroll system you are writing must truncate the actual number of pieces produced to the second digit (i.e., the 10s place). # Review If the person were being paid per completed batch of ten pieces (rather than per piece), then you would only need to determine the number of completed batches. Since there are 10 pieces per batch, you could accomplish this by dropping the right-most digit. Further, you know from the discussion of digit manipulation in Chapter 3 that this can be accomplished by dividing by $10^1$ (using integer division). For example, letting `number` denote the actual number produced (at full accuracy): ``` batches = number / 10; ``` where `/` denotes integer division. So, an employee that completed 526 pieces completed 52 batches (ignoring the remaining 6 pieces). Unfortunately, what you need is not the number of batches but, instead, the number of pieces truncated to the 10s place. Fortunately, given the number of batches, you can calculate this pretty easily. In particular: ``` truncated = batches * 10; ``` So, continuing with the example, the 52 batches corresponds to 520 units truncated to the 10s place. # The Pattern It turns out that there’s nothing special about the 10s place, so the general pattern is easy to see. Letting `place` denote the integer place to truncate to (i.e., `10` for the 10s place, `100` for the 100s place, etc.), then the value truncated to that place is given by: ``` truncated = (number / place) * place; ``` where `/` again denotes integer division. One important aspect of this pattern is that it illustrates the importance of not over-generalizing. In particular, at first glance, you might think that the expression `(number / place) * place` could be simplified to `number`. However, this is not the case when using integer division. Specifically, when using integer division, `(a / b) * b` only equals `a` when `a` is evenly divisible by `b`. For example, as discussed above, `(526 / 10) * 10` is equal to `52 * 10` or `520`, which does not equal `526`. # Examples Suppose you want to talk about something that will happen 87 years after the year 1996. You might want to use the exact year (i.e., `1996 + 87` or `2083`), but you might want to know the decade or century rather than the year. Truncating to the decade (i.e., a `place` of `10`) using the truncation pattern yields `(2083 / 10) * 10` or `208 * 10` or `2080`. Similarly, truncating to the century (i.e., a `place` of `100`) using the truncation pattern yields `(2083 / 100) * 100` or `20 * 10` or `2000`. # Some Warnings It’s important to note that people use the word “truncation” in a variety of different, but related, ways. Most importantly, people often talk about truncating floating point values to integer values (e.g., truncating `3.14` to `3`), which is commonly accomplished using a type cast (e.g., `(int)3.14` evaluates to `3`). Our concern here is with a different notion of truncation. It’s also important to distinguish between the accuracy used when performing calculations and the accuracy (or format) used when displaying output. In some situations it is necessary to perform calculations using truncated values. In other situations it is necessary to perform calculations using all of the digits of accuracy available and truncate at the end. In still other situations it is necessary to perform calculations using all of the digits of accuracy available and then format the output when it is displayed. It is your responsibility to know what is required of a particular section of code.
2023-03-30 17:53:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5442618727684021, "perplexity": 639.7029076376307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00732.warc.gz"}
http://math.stackexchange.com/questions/672936/prove-the-product-of-any-three-consecutive-integers-is-divisible-by-6
# Prove: The product of any three consecutive integers is divisible by $6$. [duplicate] This question already has an answer here: I'm new to number theory and was wondering if someone could help me with this proof. Prove: The product of any three consecutive integers is divisible by $6$. So far I have $\cfrac{x(x+1)(x+2)}{6}$; How would I go about proving this? Should I replace $x$ with $k$ and then $k$ with $k+1$ and see if the statement is true? - ## marked as duplicate by Martin Sleziak, Daniel Fischer♦Dec 5 '15 at 14:07 Convince yourself that one of those three numbers is divisible by $3$, and at least one is divisible by $2$. – user61527 Feb 11 '14 at 22:28 – Martin Sleziak Dec 4 '15 at 12:52 Of $n$, $n +1$, $n +2$, one must be even, so divisible by 2 (why?). One must be divisible by 3 (why?). So their product must be divisible by $2 \times 3$ (why?) ... - Peter, how would one prove rigorously that in a list of three consecutive integers, one (and only one) of the three is divisible by $3$? Or in general, in a list of $n$ integers, one (and only one) of them is divisible by $n$? – EthanAlvaree Apr 15 '15 at 23:04 Hint $\displaystyle\ \ n(n\!+\!1)(n\!+\!2)\, =\, 6 { n+2 \choose 3}$ - It's the same as asking why ${ n+2 \choose 3}$ is an integer. – whatever Dec 4 '15 at 17:51 Hint: Note that the product of two consecutive integers is divisible by $2$ because one of them is even. Note then that the product of three consecutive integers is divisible by $3$ (this about it). Now $2$ and $3$ are prime, so the prodcut is divisible by $2\cdot 3 = 6$. - I see what you are saying, Is there any way to write it as a formal proof or no? – Lil Feb 11 '14 at 22:30 @Lil: I almost wrote down the formal proof. You just have to provide the justification for the claim that the product of three consecutive integers is divisible by $3$. – Thomas Feb 11 '14 at 22:31 is the any way to justify that the product of two consecutive integers, x(x+1) is divisible by 2 using mathematical induction and replacing x with k+1? – Lil Feb 11 '14 at 22:32 @Lil: Note that if you are given $x$, then either $x$ is even or $x+1$ is even, so ... – Thomas Feb 11 '14 at 22:33 Thomas, how would one prove rigorously that in a list of three consecutive integers, one (and only one) of the three is divisible by $3$? Or in general, in a list of $n$ integers, one (and only one) of them is divisible by $n$? – EthanAlvaree Apr 15 '15 at 23:34
2016-07-01 20:57:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107351660728455, "perplexity": 135.0809586359447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00131-ip-10-164-35-72.ec2.internal.warc.gz"}
https://crypto.stackexchange.com/questions/48462/machine-learning-to-break-imperfect-randomness?noredirect=1
# Machine learning to break imperfect randomness I have a shared randomness between two user. But to make situation worse Eve can listen to exchanges and guess noisy or error prone version of shared randomness, whose correlation varies depending on it's proximity from either party. Are there works to help leak secret based on Eve's observation using machine learning? • I think your question might be too broad for this site. Can you distill it down to something more concrete? – mikeazo Jun 19 '17 at 15:50 • Eve's observation are correlated, less though, with Alice and Bobs about the secret. I can make Eve create situation to leak more information by increasing higher degree of correlation. Are there works using deep learning and feedback networks to assist Eve? – Jay Jun 19 '17 at 15:54 • Machine learning is meme tier jargon words now – daniel Jun 19 '17 at 16:44 • Related this question about machine learning in general. I guess most of it applies to this topic as well. – Maarten Bodewes Jun 19 '17 at 20:47 Both cryptography and machine learning are very broad terms. I gather you are wondering if it's possible to use some sort of inference techniques (like support vector machines or deep learning) to exploit a weak PRNG to extract a secret key. If this is possible, it's only true in a very limited manner, and there are probably more effective ways to infer secret material (see, for example, Bleichenbacher’s method for learning about biased nonces). The only obvious application to me seems to be in the validation of the strength of a PRNG. If you're willing to consider other kinds of applications of machine learning, you may be able to make some progress. For example, in this paper (unfortunately behind a paywall), the author trains a neural network to carry out a known-plaintext attack against DES and 3DES and infer the plaintext from a given ciphertext. Of course, DES and 3DES are not secure, and the key is not leaked by the method. Furthermore, I don't find this approach any better than just brute-force. Why not? Machine learning is most useful when it can teach you something about your data. Good cryptosystems publish their designs, so I already know what they will do with the data, so again, I can really only learn about the security of the generation of secret key material: Kerckhoff's Principle in action. Machine learning may fit into the security landscape in other ways. Enter adversarial machine learning, which concerns the security of the ML itself. How do you know someone isn't feeding you deliberately-biased training samples to skew your algorithms, for example? Also, consider the learning with errors problem: given a system of linear equations mod some $q$, solve it assuming some additive random noise is added in. The "random noise" part makes the solution very difficult. In a way, it flips your question on its head: trying to learn about vectors in this system has created a useful hard problem that cryptosystems can be built from! • The quoted paper claims: "The attack was practically, and successfully, applied on DES and Triple-DES. This attack required an average of $2^{11}$ plaintext-ciphertext pairs to perform cryptanalysis of DES in an average duration of 51 minutes". I am totally skeptical! This has to be a statistical error, much like this which in other form crept in an international journal paper according to the author's webpage; see one of several rebutals. – fgrieu Jun 20 '17 at 5:29 Attacks against established cryptography of their modern time have rarely if ever succeeded without exploiting knowledge about the cryptographic system they attacked (I do not know any exception since WWII; Enigma, Purple.. are not; and I'd be surprised to learn of any since 1970 in the open literature). This applies to machine learning: my advise is that it won't solve interesting cryptographic problems if it does not know the structure of the cryptographic system under attack. I do believe in automated cryptanalysis: • from output of weak (including some poorly studied) cryptosystems and random number generators; • or from a description of the cryptosystem. Machine learning can be useful towards these goals. • My design involves Alice and Bob observing a process. By inherence property of design Alice and Bob observe similar(noisy) information while Eve's observation are different yet somewhat correlated. If Eve comes close to Bob or Eve tries to forcefully induce some physical changes she can leak more correlated observations.If Eve can use machine learning to leak the observations at Bob and Eve. Actually the observation itself is secret which is generated without any deterministic model. So learning model using Machine learning like described in previous suggestions can be still valid here. – Jay Jun 20 '17 at 7:16
2020-09-21 00:36:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4800388514995575, "perplexity": 1081.4618309523635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00328.warc.gz"}
https://zenodo.org/record/4534138/export/dcite4
Conference paper Open Access # How you type is what you type: Keystroke dynamics correlate with affective content López-Carral, Héctor; Santos-Pata, Diogo; Zucca, Riccardo; Verschure, Paul F.M.J. ### DataCite XML Export <?xml version='1.0' encoding='utf-8'?> <identifier identifierType="URL">https://zenodo.org/record/4534138</identifier> <creators> <creator> <creatorName>López-Carral, Héctor</creatorName> <givenName>Héctor</givenName> <familyName>López-Carral</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-4423-7179</nameIdentifier> <affiliation>SPECS-IBEC</affiliation> </creator> <creator> <creatorName>Santos-Pata, Diogo</creatorName> <givenName>Diogo</givenName> <familyName>Santos-Pata</familyName> <affiliation>SPECS-IBEC</affiliation> </creator> <creator> <creatorName>Zucca, Riccardo</creatorName> <givenName>Riccardo</givenName> <familyName>Zucca</familyName> <affiliation>SPECS-IBEC</affiliation> </creator> <creator> <creatorName>Verschure, Paul F.M.J.</creatorName> <givenName>Paul F.M.J.</givenName> <familyName>Verschure</familyName> <affiliation>SPECS-IBEC</affiliation> </creator> </creators> <titles> <title>How you type is what you type: Keystroke dynamics correlate with affective content</title> </titles> <publisher>Zenodo</publisher> <publicationYear>2019</publicationYear> <subjects> <subject>keystroke</subject> <subject>keyboard</subject> <subject>typing</subject> <subject>arousal</subject> <subject>valence</subject> <subject>affect</subject> </subjects> <dates> <date dateType="Issued">2019-12-09</date> </dates> <language>en</language> <resourceType resourceTypeGeneral="ConferencePaper"/> <alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4534138</alternateIdentifier> </alternateIdentifiers> <relatedIdentifiers> <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1109/ACII.2019.8925460</relatedIdentifier> <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/787061</relatedIdentifier> </relatedIdentifiers> <rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract">&lt;p&gt;Estimating the affective state of a user during a computer task traditionally relies on either subjective reports or analysis of physiological signals, facial expressions, and other measures. These methods have known limitations, can be intrusive and may require specialized equipment. An alternative would be employing a ubiquitous device of everyday use such as a standard keyboard. Here we investigate if we can infer the emotional state of a user by analyzing their typing patterns. To test this hypothesis, we asked 400 participants to caption a set of emotionally charged images taken from a standard database with known ratings of arousal and valence. We computed different keystroke pattern dynamics, including keystroke duration (dwell time) and latency (flight time). By computing the mean value of all of these features for each image, we found a statistically significant negative correlation between dwell times and valence, and between flight times and arousal. These results highlight the potential of using keystroke dynamics to estimate the affective state of a user in a non-obtrusive way and without the need for specialized devices.&lt;/p&gt;</description> </descriptions> <fundingReferences> <fundingReference> <funderName>European Commission</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/787061/">787061</awardNumber> <awardTitle>Advanced tools for fighting oNline Illegal TrAfficking</awardTitle> </fundingReference> </fundingReferences> </resource> 38 37 views
2021-10-22 08:06:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3449976146221161, "perplexity": 11080.323793167747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585460.87/warc/CC-MAIN-20211022052742-20211022082742-00141.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-discriminant-of-3x-2-9x-4-and-use-it-to-determine-if-the-equ
# How do you find the discriminant of -3x^2+9x=4 and use it to determine if the equation has one, two real or two imaginary roots? Feb 16, 2017 Therefore, there are two roots which will be real and unequal. #### Explanation: When you are solving a quadratic equation, it is very useful to know what sort of answer you will get. This can often help in determining which method to use - for example whether to look for factors or to use the quadratic formula. A quadratic equation is written in the form $a {x}^{2} + b x + c = 0$ Always change to this form first The discriminant is $\Delta = {b}^{2} - 4 a c$ The solutions to an equation are called the 'roots' and are referred to as $\alpha \mathmr{and} \beta$ The value of $\Delta$ tells us about the nature of the roots. If $\Delta > 0 \Rightarrow$ the roots are real and unequal (2 distinct roots) If $\Delta > 0 \text{ and a prefect square} \Rightarrow$ the roots are real, unequal and$\textcolor{w h i t e}{\ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots . \ldots \ldots \ldots \ldots \ldots . .}$ rational If $\Delta = 0 \Rightarrow$ the roots are real and equal (1 root) If $\Delta < 0 \Rightarrow$ the roots are imaginary and unequal Note that if $a \text{ or } b$ are irrational, the roots will be irrational. $- 3 {x}^{2} + 9 x = 4 \text{ "rArr} 3 {x}^{2} - 9 x + 4 = 0$ $\Delta = {b}^{2} - 4 a c$ $\Delta = {\left(- 9\right)}^{2} - 4 \left(3\right) \left(4\right) = 33$ $33 > 0$ and is not a perfect square Therefore, there are two roots which will be real and unequal.
2021-06-17 06:37:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.689702570438385, "perplexity": 316.2546278403517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00513.warc.gz"}
https://www.semanticscholar.org/paper/Zooming-in-on-%24%24B%5Crightarrow-K%5E*%5Cell-%5Cell-%24%24B%E2%86%92K%E2%88%97%E2%84%93%E2%84%93-Brass-Hiller/9205d20b6db9f976095e7b1ed784a62124114e20
# Zooming in on $$B\rightarrow K^*\ell \ell$$B→K∗ℓℓ decays at low recoil @article{Brass2016ZoomingIO, title={Zooming in on \$\$B\rightarrow K^*\ell \ell \$\$B→K∗ℓℓ decays at low recoil}, author={Simon Brass and Gudrun Hiller and Ivan Nǐsand{\vz}i{\'c}}, journal={The European Physical Journal C}, year={2016}, volume={77}, pages={1-16} } • Published 2 June 2016 • Physics • The European Physical Journal C We analyse $$B\rightarrow K^*\ell \ell$$B→K∗ℓℓ decays in the region of low hadronic recoil, where an operator product expansion (OPE) in $$1/m_b$$1/mb applies. Using a local model for charm contributions based on $$e^+ e^- \rightarrow \mathrm{hadrons}$$e+e-→hadrons against the OPE provides a data-driven method to access the limitations to the OPE’s accuracy related to binnings in the dilepton mass. Model-independent fits to $$B\rightarrow K^*\mu \mu$$B→K∗μμ low recoil angular observables… 7 Citations • Physics Journal of High Energy Physics • 2021 Following our earlier work we establish kinematic endpoint relations for baryon decays using the Wigner-Eckart theorem and apply them to 12→12\documentclass[12pt]{minimal} \usepackage{amsmath} • Physics Journal of High Energy Physics • 2021 We revisit the theoretical predictions and the parametrization of non-local matrix elements in rare B¯s→K¯∗ϕℓ+ℓ−\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} • Materials Science The European Physical Journal C • 2020 Following the updated measurement of the lepton flavour universality (LFU) ratio RK\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} Data from the LHCb experiments are indicative of a substantial distinction between the B → K (or K*) + e+e− and B → K (or K*) + μ+μ− branching ratios (April 2017). The branching ratio for the e+e− • Physics • 2017 A method for determining the long-distance contributions in $\bar{B}^{0} \rightarrow \bar{K}^{*0}\mu^{+}\mu^{-}$ decays is presented. This method uses an empirical model that relies on measurements • Physics • 2016 In the aligned two-Higgs-doublet model, we perform a complete one-loop computation of the short-distance Wilson coefficients $C_{7,9,10}^{(\prime)}$, which are the most relevant ones for $b\to To further precision studies with$B \to K^{(*)} \ell \ell$decays in the high-$q^2\$ window uncertainties related to the operator product expansion (OPE) need to be scrutinized. How well can the OPE ## References SHOWING 1-10 OF 39 REFERENCES • Physics • 2010 Using the heavy quark eective theory framework put forward by Grinstein and Pirjol we work out predictions for B! K l + l , l = e; , decays for a softly recoiling K , i.e., for large dilepton masses • Physics • 2000 Using improved theoretical calculations of the decay form factors in the Light Cone-QCD sum rule approach, we investigate the decay rates, dilepton invariant mass spectra and the forward-backward • Physics • 2004 B ! Ke + e in the low recoil region (large lepton invariant mass q 2 � m 2). In this region the long-distance effects from quark loops can be computed with the help of an operator product expansion
2023-02-01 17:06:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722981810569763, "perplexity": 6062.316794428551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00193.warc.gz"}
https://eng.libretexts.org/Bookshelves/Environmental_Engineering_(Sustainability)/Book%3A_To_Catch_the_Rain_(Grafman)/07%3A_Problem_Sets/7.04%3A_Gutters
32. Use the area rule of thumb method to size gutters for an almost flat 1100 $$ft^2$$ roof in Columbia, Missouri.
2019-12-12 23:57:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2757340669631958, "perplexity": 5885.6725847511125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00373.warc.gz"}
https://www.greencarcongress.com/2018/09/20180922-ngengine.html
## US DOE and California partner to support $11M in advanced natural gas engine research ##### 22 September 2018 The US Department of Energy, National Renewable Energy Laboratory (NREL), California Energy Commission, and South Coast Air Quality Management District (SCAQMD) have teamed up to launch new research focused on medium- and heavy-duty natural gas engines and vehicles. Through a new request for proposals (RHQ-8-82305), NREL will award up to$11 million for projects in three categories: (1) reducing the cost natural gas vehicles; (2) increasing vehicle efficiency; and (3) advancing new innovative medium- and heavy-duty natural gas engine designs. Lowering the Total Cost of Ownership. The RFP is looking for technologies that lower initial vehicle costs as well as TCO over a two-year period. Technologies may provide engine operation that will reduce costs or improve efficiency, as well as technologies that address costs for storage, fuel or emissions control, or other natural gas vehicle systems that contribute to higher costs, compared with conventional vehicles. Current NG vehicles can have an initial cost premium of between 15%-50% over conventional technology. Proposals should demonstrate methodology proposed to meet the objective to bring the cost differential between conventional technologies and NGV’s in the same vehicle class down to between 5%-25%. For proposals that do not demonstrate an initial cost reduction, proposals should compare the two-year total cost of ownership for the CNG technology compared with conventional vehicles of the same vehicle class and duty cycle, showing lower costs for the NG product. Successful projects will demonstrate the use of innovative engine, vehicle, drive-train, on- board storage, fueling systems or other technologies that can lead to the reduction in the TCO to meet the overall objective. Example strategies and methodologies include: • Innovative on-board storage tank designs or monitoring technologies that can reduce costs, weight, or displaced volume on a vehicle. • Technologies that can improve storage utilization by addressing thermodynamic effects such as those associated with achieving full-fills using Compressed Natural Gas (CNG) dispensers, using stranded gas in low pressure tanks, reducing the need for venting of Liquified Natural Gas (LNG) tanks, etc. Improving NG engine and vehicle emissions and efficiency. In this category, the awardee will improve the efficiency and emissions of a natural gas engine as part of a conventional or hybrid powertrain, to the point of being capable of commercially saleable into a medium- or heavy-duty vehicle. The objective is to demonstrate fuel efficiency and emissions improvements through advanced combustion, hybridization or other methods that achieve comparable performance and efficiency as that of current diesel technologies and be capable of being certified to ultra low NNOxOx levels (e.g., 0.02g/bHp-hr, also referred to as near-zero NOx levels). Technologies leading to even lower emissions levels is encouraged. Efficiency and emissions improvements shall be proven through analytics or demonstration to quantify the impact to emissions and efficiency as compared to current conventional technology. The technical proposal should identify or develop appropriate deterioration factors to be used as a basis for emissions and efficiency calculations. Example strategies and methodologies include: • Use of non-diesel pilot fuel for direct-injection applications to increase efficiency and minimize PM and NOx. • Research homogeneous charge compression ignition, or other kinetically-controlled combustion regime, for natural gas engines, including engine controls development and/or spark-assistance to extend the range of compression ignition operation. • Demonstrate the benefits of reactivity controlled compression ignition using natural gas in combination with a higher-cetane pilot fuel. • Develop advanced emission control strategies that can enable the certification of high efficiency natural engines including but not limited to low temperature catalysis of methane. • Integrate an advanced natural gas engine as a high efficiency, low emission, and cost-effective range extender in a hybrid-electric vehicle application. Expanding natural gas engine and vehicle availability. Awardees will develop and make commercially available natural gas engine(s). The project(s) will address the near- to mid- term time (commercial introduction of their development within 3-5 years of the end of the subcontract). Proposals may consist of all phases of engine and vehicle development, from conceptual design and feasibility analysis to prototype engine development and laboratory testing, production engine development, chassis integration, and technical and market demonstration. A successful proposal must have engagement of third parties directly linked to the commercial product and must involve the Engine Original Equipment Manufacturer (OEM) or Vehicle OEM. Chassis integration and vehicle demonstration activities are required to validate real-world performance. The engine objective is for the resulting to be capable of being certified to the medium- and heavy-duty engine levels for the vehicle service class of the intended market, in particular markets where near-zero NOx levels are important. More aggressive emission levels are encouraged. All vehicle certification requirements must be comprehensively addressed either through this work effort or defined within a commercialization and emissions certification plan as part of the final report deliverable. Example strategies and methodologies include: • Develop a 12-liter or larger natural gas engine that can support over-the-road trucking with performance that exceeds existing commercially available on-road natural gas engines. • Integrate and demonstrate a prototype natural gas engine with advanced technology on a vehicle representative of the target market. • Integrate a smaller displacement, near-zero NOx, high efficiency natural gas engine as a range extender in a hybrid-electric Class 8 vehicle application. NREL will award 2-5 performance-based Firm Fixed Price subcontract(s) under this solicitation. Offerors may propose a period of performance between 24 and 36 months in duration with an estimated budget of approximately $1,000,000-$4,000,000 per award amount from NREL.
2022-12-07 10:30:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3358670473098755, "perplexity": 5235.940790685306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00215.warc.gz"}
https://math.libretexts.org/Visualizations_and_Simulations/CalcPlot3D_Interactive_Figures/OpenStax_Calculus_Dynamic_Figures/FIGURE_12.5.5%3A_Non-intersecting_lines_in_space_do_no_have_to_be_parallel
Skip to main content FIGURE 12.5.5: Non-intersecting lines in space do no have to be parallel $$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$
2019-05-26 16:10:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4166836142539978, "perplexity": 1294.8899017957415}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259316.74/warc/CC-MAIN-20190526145334-20190526171334-00294.warc.gz"}
https://www.groundai.com/project/numerical-optimization-of-eigenvalues-of-hermitian-matrix-functions/
Numerical Optimization of Eigenvalues of Hermitian Matrix Functions # Numerical Optimization of Eigenvalues of Hermitian Matrix Functions Emre Mengi Department of Mathematics, Koç University, Rumelifeneri Yolu, 34450 Sarıyer, İstanbul, Turkey (emengi@ku.edu.tr). The work of this author was supported in part by the European Commission grant PIRG-GA-268355 and the TÜBİTAK (The Scientific and Technological Research Council of Turkey) Career Grant 109T660.    E. Alper Yildirim Department of Industrial Engineering, Koç University, Rumelifeneri Yolu, 34450 Sarıyer-İstanbul, Turkey (alperyildirim@ku.edu.tr). This author was supported in part by TÜBİTAK (The Scientific and Technological Research Council of Turkey) Grant 112M870 and by TÜBA-GEBİP (Turkish Academy of Sciences Young Scientists Award Program).    Mustafa Kiliç Department of Mathematics, Koç University, Rumelifeneri Yolu, 34450 Sarıyer, İstanbul, Turkey (mukilic@ku.edu.tr). The work of this author was partly supported by the European Commission Grant PIRG-GA-268355. ###### Abstract This work concerns the global minimization of a prescribed eigenvalue or a weighted sum of prescribed eigenvalues of a Hermitian matrix-valued function depending on its parameters analytically in a box. We describe how the analytical properties of eigenvalue functions can be put into use to derive piece-wise quadratic functions that underestimate the eigenvalue functions. These piece-wise quadratic under-estimators lead us to a global minimization algorithm, originally due to Breiman and Cutler. We prove the global convergence of the algorithm, and show that it can be effectively used for the minimization of extreme eigenvalues, e.g., the largest eigenvalue or the sum of the largest specified number of eigenvalues. This is particularly facilitated by the analytical formulas for the first derivatives of eigenvalues, as well as analytical lower bounds on the second derivatives that can be deduced for extreme eigenvalue functions. The applications that we have in mind also include the -norm of a linear dynamical system, numerical radius, distance to uncontrollability and various other non-convex eigenvalue optimization problems, for which, generically, the eigenvalue function involved is simple at all points. Key words. Hermitian eigenvalues, analytic, global optimization, perturbation of eigenvalues, quadratic programming AMS subject classifications. 65F15, 90C26 ## 1 Introduction The main object of this work is a matrix-valued function that is analytic and Hermitian at all . Here, we consider the numerical global minimization of a prescribed eigenvalue of over , where denotes a box. From an application point of view, a prescribed eigenvalue typically refers to the th largest eigenvalue, i.e., , or a weighted sum of largest eigenvalues, i.e., for given real numbers . However, it may as well refer to a particular eigenvalue with respect to a different criterion as long as the (piece-wise) analyticity properties discussed below and in Section LABEL:sec:Her_eig_pert are satisfied. The literature from various engineering fields and applied sciences is rich with eigenvalue optimization problems that fits into the setting of the previous paragraph. There are problems arising in structural design and vibroacoustics, for which the minimization of the largest eigenvalue or maximization of the smallest eigenvalue of a matrix-valued function is essential, e.g., the problem of designing the strongest column which originated from Euler in the 18th century [30]. In control theory, various quantities regarding dynamical systems can be posed as eigenvalue optimization problems. For instance, the distance from a linear dynamical system to a nearest unstable system [47], and the -norm of a linear dynamical system have non-convex eigenvalue optimization characterizations [3]. In graph theory, relaxations of some NP-hard graph partitioning problems give rise to optimization problems in which the sum of the largest eigenvalues is to be minimized [10]. In this paper, we offer a generic algorithm based on the analytical properties of eigenvalues of an analytic and Hermitian matrix-valued function, that is applicable for any eigenvalue optimization problem whenever lower bounds on the second derivatives of the eigenvalue function can be calculated analytically or numerically. All of the existing global eigenvalue optimization algorithms in the non-convex setting are designed for specific problems, e.g., [3, 5, 6, 7, 15, 17, 19, 20, 21, 22, 32], while widely adopted techniques such as interior point methods [34] - when it is possible to pose an eigenvalue optimization problem as a semi-definite program - or a bundle method [31] are effective in the convex setting. We foresee non-convex eigenvalue optimization problems that depend on a few parameters as the typical setting for the use of the algorithm here. For the optimization of non-convex eigenvalue functions, it appears essential to benefit from the global properties of eigenvalue functions, such as their global Lipschitzness or global bounds on their derivatives. Such global properties lead us to approximate globally with under-estimating functions, which we call support functions. Furthermore, the derivatives of the eigenvalue functions can be evaluated effectively at no cost once the eigenvalue function is evaluated (due to analytic expressions for the derivatives of eigenvalues in terms of eigenvectors as discussed in Section LABEL:subsec:first_der). Therefore, the incorporation of the derivatives into the support functions yields quadratic support functions on which our algorithm relies. The quadratic support functions for eigenvalue functions are derived exploiting the analytical properties of eigenvalues and presume the availability of a lower bound on the second derivatives of the eigenvalue function that is obtained either analytically or numerically. Example: Consider the minimization of the largest eigenvalue of A:R→Rn×n,A(ω):=A0+ωA1+ω2A2, where are given symmetric matrices. It can be deduced from the expressions in Section LABEL:subsec:sec_der that for all such that is simple. Furthermore, due to expressions in Section LABEL:subsec:first_der, at all such , we have where is a unit eigenvector associated with . Consequently, it turns out that, about any where is simple, there is a support function q(ω):=λ1(ωk)+λ′1(ωk)(ω−ωk)+γ2(ω−ωk)2 satisfying for all ; see Section LABEL:subsec:support_extreme for the details. Support functions have earlier been explored by the global optimization community. The Piyavskii-Shubert algorithm [40, 45] is derivative-free, and constructs conic support functions based on Lipschitz continuity with a known global Lipschitz constant. It converges sub-linearly in practice. Sophisticated variants that make use of several Lipschitz constants simultaneously appeared in the literature [24, 43]. The idea of using derivatives in the context of global optimization yields powerful algorithms. Breimann and Cutler [4] developed an algorithm that utilizes quadratic support functions depending on the derivatives. Some variants of the Breimann-Cutler algorithm are also suggested for functions with Lipschitz-continuous derivatives; for instance [18, 26, 27] benefit from multiple Lipschitz constants for the derivatives, [42] estimates Lipschitz constants for the derivatives locally, while [29] modifies the support functions of the Breimann-Cutler algorithm in the univariate case so that the subproblems become smooth; however, all these variants in the multivariate case end up working on a mesh as a downside. The quadratic support functions that we derive for coincide with the quadratic support functions on which the Breimann-Cutler algorithm is built on. Consequently, our approach is a variant of the algorithm due to Breimann and Cutler [4]. At every iteration of the algorithm, a global minimizer of a piece-wise quadratic model defined as the maximum of a set of quadratic support functions is determined. A new quadratic support function is constructed around this global minimizer, and the piece-wise quadratic model is refined with the addition of this new support function. In practice, we observe a linear rate of convergence to a global minimizer. The algorithm appears applicable especially to extremal eigenvalue functions of the form λ(ω)=j∑k=1dkλk(A(ω)), where are given real numbers such that . This is facilitated by the simple quadratic support functions derived in Section LABEL:subsec:support_extreme, and expressions for the lower bound on the second derivatives derived in Section LABEL:sec:extreme_low_bound. The algorithm is also applicable if the eigenvalue function is simple over all , which holds for various eigenvalue optimization problems of interest. Outline: We start in the next section with a list of eigenvalue optimization problems to which our proposed algorithm fits well. In Section LABEL:sec:Her_eig_pert, the basic results concerning the analyticity and derivatives of the eigenvalues of a Hermitian matrix-valued function that depends analytically on are reviewed. In Section LABEL:sec:quad_support, for a general eigenvalue function, the piece-wise quadratic support functions that are defined as the minimum of quadratic functions are derived. In Section LABEL:sec:squad_support, it is shown that these piece-wise quadratic support functions simplify to smooth quadratic support functions for the extremal eigenvalue functions, as well as for the eigenvalue functions that are simple for all . Global lower bounds on the second derivatives of an extremal eigenvalue function are deduced in Section LABEL:sec:extreme_low_bound. The algorithm based on the quadratic support functions is presented in Section LABEL:sec:algorithm. We establish the global convergence of the proposed algorithm in Section LABEL:sec:1d_anal. Finally, comprehensive numerical experiments are provided in Section LABEL:sec:eig_opt. The examples indicate the superiority of the algorithm over the Lipschitz continuity based algorithms, e.g., [24, 40, 45], as well as the level-set based approaches devised for particular non-convex eigenvalue optimization problems, e.g., [20, 32]. The reader who prefers to avoid technicalities at first could glance at the algorithm in Section LABEL:sec:algorithm, then go through Sections LABEL:sec:Her_eig_pert-LABEL:sec:extreme_low_bound for the theoretical foundation. ## 2 Applications ### 2.1 Quantities Related to Dynamical Systems The numerical radius of is the modulus of the outer-most point in its field of values [23], and is defined by r(A):=max{|z∗Az||z∈Cns.t.∥z∥2=1}. This quantity gives information about the powers of , e.g.,, and is used in the literature to analyze the convergence of iterative methods for the solution of linear systems [1, 11]. An eigenvalue optimization characterization is given by [23]: r(A)=−[minθ∈[0,2π]λn(A(θ))],A(θ):=−(Aeiθ+A∗e−iθ)/2. The -norm is one of the most widely used norms in practice for the descriptor system Ex′(t)=Ax(t)+Bu(t),andy(t)=Cx(t)+Du(t), where and are the input and output functions, respectively, and , with are the system matrices. The -norm of the transfer function for this system is defined as ∥H∥∞:=1infω∈Rσn[H(iω)†],H(s):=[C(sE−A)−1B+D]. Here and elsewhere, represents the th largest singular value, and denotes the pseudoinverse of . Also above, with zero initial conditions for the descriptor system, the transfer function reveals the linear relation between the input and output, as with and denoting the Laplace transformations of and , respectively. Note that the -norm above is ill-posed (i.e., the associated operator is unbounded) if the pencil has an eigenvalue on the imaginary axis or to the right of the imaginary axis. Therefore, when the -norm is well-posed, the matrix-valued function is analytic at all . A relevant quantity is the (continuous) distance to instability from a matrix ; the eigenvalue optimization characterization for the -norm with and reduces to that for the distance to instability [47] from with respect to the -norm. Paige [39] suggested the distance to uncontrollability, for a given and with , defined by as a robust measure of controllability. Here, the controllability of a linear control system of the form means that the function can be driven into any state at a particular time by some input , and could be equivalently characterized as Therefore, the eigenvalue optimization characterization for the distance to uncontrollability takes the form [12]: τ(A,B)=minz∈Cσn(A(z)),A(z):=[A−zIB]. ### 2.2 Minimizing the Largest or Maximizing the Smallest Eigenvalues In the 18th century, Euler considered the design of the strongest column with a given volume with respect to the radii of the cross-sections [30, 37]. The problem can be formulated as finding the parameters, representing the radii of cross-sections, maximizing the smallest eigenvalue of a fourth order differential operator. The analytical solution of the problem has been considered in several studies in 1970s and in 1980s [2, 33, 35], which were motivated by the earlier work of Keller and Tadjbakhsh [46]. Later, the problem is treated numerically [8] by means of the finite-element discretization, giving rise to the problem minω∈Rdλ1(A(ω)). \hb@xt@.01(2.1) The treatment in [8] yields . In this affine setting, the minimization of the largest eigenvalue is a convex optimization problem (immediate from Theorem LABEL:thm:extreme_low_bound below) and received considerable attention [14, 16, 36]. In the general setting, when the dependence of the matrix function on the parameters is not affine, the problem in (LABEL:eq:minimize_largest) is non-convex. Such non-convex problems are significant (though they are not studied much excluding a few studies such as [38] that offer only local analysis) in robust control theory for instance to ensure robust stability. The dual form that concerns the maximization of the smallest eigenvalue is of interest in vibroacoustics. ### 2.3 Minimizing the Sum of the j Largest Eigenvalues In graph theory, relaxations of the NP-hard partitioning problems lead to eigenvalue optimization problems that require the minimization of the sum of the largest eigenvalues. For instance, given a weighted graph with vertices and nonnegative integers summing up to , consider finding a partitioning of the graph such that the th partition contains exactly vertices for and the sum of the weights of the edges within each partition is maximized. The relaxation of this problem suggested in [10] is of the form minω∈Rdj∑k=1dkλk(A(ω)). \hb@xt@.01(2.2) The problem (LABEL:eq:minimize_jlargest) is convex, if is an affine function of , as in the case considered by [10], see also [9]. Once again, in general, the minimization of the sum of the largest eigenvalues is not a convex optimization problem, and there are a few studies in the literature that attempted to analyze the problem locally for instance around the points where the eigenvalues coalesce [44]. ## 3 Background on Perturbation Theory of Eigenvalues In this section, we first briefly summarize the analyticity results, mostly borrowed from [41, Chapter 1], related to the eigenvalues of matrix-valued functions. Then, expressions [28] are provided for the derivatives of Hermitian eigenvalues in terms of eigenvectors and the derivatives of matrix-valued functions. Finally, we elaborate on the analyticity of singular value problems as special Hermitian eigenvalue problems. ### 3.1 Analyticity of Eigenvalues #### 3.1.1 Univariate Matrix Functions For a univariate matrix-valued function that depends on analytically, which may or may not be Hermitian, the characteristic polynomial is of the form g(ω,λ):=det(λI−A(ω))=an(ω)λn+⋯+a1(ω)λ+a0(ω), where are analytic functions of . It follows from the Puiseux’ theorem (see, e.g., [48, Chapter 2]) that each root such that has a Puiseux series of the form ~λj(ω)=∞∑k=0ck,jωk/r, \hb@xt@.01(3.1) for all small , where is the multiplicity of the root . Now suppose is Hermitian for all , and let be the smallest integer such that . Then, we have limω→0+~λj(ω)−~λj(0)ωℓ/r=cℓ,j, which implies that is real, since and are real numbers for each . Furthermore, limω→0−~λj(ω)−~λj(0)(−ω)ℓ/r=(−1)ℓ/rcℓ,j is real, which implies that is real, or equivalently that is integer. This observation reveals that the first nonzero term in the Puiseux series of is an integer power of . The same argument applied to the derivatives of and the associated Puiseux series indicates that only integer powers of can appear in the Puiseux series (LABEL:eq:Puiseux), that is the Puiseux series reduces to a power series. This establishes that is an analytic function of . Indeed, it can also be deduced that, associated with , there is an orthonormal set of eigenvectors, where each of varies analytically with respect to (see [41] for details). ###### Theorem 3.1 (Rellich) Let be a Hermitian matrix-valued function that depends on analytically. • The roots of the characteristic polynomial of can be arranged so that each root for is an analytic function of . • There exists an eigenvector associated with for that satisfies the following: 1. , 2. , 3. for , and 4. is an analytic function of . #### 3.1.2 Multivariate Matrix Functions The eigenvalues of a multivariate matrix-valued function that depends on analytically do not have a power series representation in general even when is Hermitian. As an example, consider A(ω)=⎡⎣ω1ω1+ω22ω1+ω22ω2⎤⎦with~λ1,2(ω)=ω1+ω22±√ω21+ω222. On the other hand, it follows from Theorem LABEL:thm:Rellich that, there are underlying eigenvalue functions , of , each of which is analytic along every line in , when is Hermitian. This analyticity property along lines in implies the existence of the first partial derivatives of everywhere. Expressions for the first partial derivatives will be derived in the next subsection, indicating their continuity. As a consequence of the continuity of the first partial derivatives, each must be differentiable. ###### Theorem 3.2 Let be a Hermitian matrix-valued function that depends on analytically. Then, the roots of the characteristic polynomial of can be arranged so that each root is (i) analytic on every line in , and (ii) differentiable on . ### 3.2 Derivatives of Eigenvalues #### 3.2.1 First Derivatives of Eigenvalues Consider a univariate Hermitian matrix-valued function that depends on analytically. An analytic eigenvalue and the associated eigenvector as described in Theorem LABEL:thm:Rellich satisfy A(ω)vj(ω)=~λj(ω)vj(ω). Taking the derivatives of both sides, we obtain dA(ω)dωvj(ω)+A(ω)dvj(ω)dω=d~λj(ω)dωvj(ω)+~λj(ω)dvj(ω)dω. \hb@xt@.01(3.2) Multiplying both sides by and using the identities as well as , we get d~λj(ω)dω=vj(ω)∗dA(ω)dωvj(ω). \hb@xt@.01(3.3) #### 3.2.2 Second Derivatives of Eigenvalues By differentiating both sides of (LABEL:eq:eigval_1der), it is possible to deduce the formula (the details are omitted for brevity) d2~λj(ω)dω2=vj(ω)∗d2A(ω)dω2vj(ω)+2n∑k=1,k≠j1~λj(ω)−~λk(ω)∣∣∣vk(ω)∗dA(ω)dωvj(ω)∣∣∣2 \hb@xt@.01(3.4) for the second derivatives assuming that the (algebraic) multiplicity of is one. If, on the other hand, the eigenvalues repeat at a given , specifically when the (algebraic) multiplicity of is greater than one, the formula (LABEL:eq:eigval_2der) generalizes as d2~λj(^ω)dω2=vj(^ω)∗d2A(^ω)dω2vj(^ω)+2n∑k=1,k≠j,k∉αlim~ω→^ω(1~λj(~ω)−~λk(~ω)∣∣∣vk(~ω)∗dA(~ω)dωvj(~ω)∣∣∣2). \hb@xt@.01(3.5) Here, denotes the set of indices of the analytic eigenvalues (specified in Theorem LABEL:thm:Rellich) that are identical to at all . #### 3.2.3 Derivatives of Eigenvalues for Multivariate Hermitian Matrix Functions Let be Hermitian and analytic. It follows from (LABEL:eq:eigval_1der) that ∂~λj(ω)∂ωk=v∗j(ω)∂A(ω)∂ωkvj(ω). \hb@xt@.01(3.6) Since and are analytic with respect to for , this implies the continuity, also the analyticity with respect to , of each partial derivative , and hence the existence of , everywhere. If the multiplicity of is one, differentiating both sides of (LABEL:eq:eigval_part_der) with respect to would yield the following expressions for the second partial derivatives. ∂2~λj(ω)∂ωk∂ωℓ= v∗j(ω)∂2A(ω)∂ωk∂ωlvj(ω)+ 2⋅R⎛⎝n∑m=1,m≠j1~λj(ω)−~λm(ω)(vj(ω)∗∂A(ω)∂ωkvm(ω))(vm(ω)∗∂A(ω)∂ωℓvj(ω))⎞⎠. Expressions similar to (LABEL:eq:eigval_2der_rep) can be obtained for the second partial derivatives when has multiplicity greater than one. ### 3.3 Analyticity of Singular Values Some of the applications (see Section LABEL:sec:app_dyn_sys) concern the optimization of the th largest singular value of an analytic matrix-valued function. The singular value problems are special Hermitian eigenvalue problems. In particular, denoting the th largest singular value of an analytic matrix-valued function (not necessarily Hermitian) by , the set of eigenvalues of the Hermitian matrix-valued function A(ω):=[0B(ω)B(ω)∗0], is . In the univariate case is the th largest of the analytic eigenvalues, , of . The multivariate -dimensional case is similar, with the exception that each eigenvalue is differentiable and analytic along every line in . Let us focus on the univariate case throughout the rest of this section. Extensions to the multi-variate case are similar to the previous sections. Suppose , with , , is the analytic eigenvector function as specified in Theorem LABEL:thm:Rellich of associated with , that is [0B(ω)B(ω)∗0][uj(ω)wj(ω)]=~λj(ω)[[]cuj(ω)wj(ω)]. The above equation implies B(ω)wj(ω)=~λj(ω)uj(ω)andB(ω)∗uj(ω)=~λj(ω)wj(ω). \hb@xt@.01(3.7) In other words, , are analytic, and consist of a pair of consistent left and right singular vectors associated with . To summarize, in the univariate case, can be considered as a signed analytic singular value of , and there is a consistent pair of analytic left and right singular vector functions, and , respectively. Next, in the univariate case, we derive expressions for the first derivative of , in terms of the corresponding left and right singular vectors. It follows from the singular value equations (LABEL:eq:sing_val_defn) above that (if , this equality follows from analyticity). Now, the application of the expression (LABEL:eq:eigval_1der) yields d~λj(ω)dω= [uj(ω)∗wj(ω)∗][0dB(ω)/dωdB(ω)∗/dω0][uj(ω)wj(ω)], = uj(ω)∗dB(ω)dωwj(ω)+wj(ω)∗dB(ω)∗dωuj(ω), = 2⋅R(uj(ω)∗dB(ω)dωwj(ω)). In terms of the unit left and right singular vectors and , respectively, associated with , we obtain d~λj(ω)dω=R(^uj(ω)∗dB(ω)dω^wj(ω)). \hb@xt@.01(3.8) Notation: Throughout the rest of the text, we denote the eigenvalues of that are analytic in the univariate case (stated in Theorem LABEL:thm:Rellich), and differentiable and analytic along every line in the multivariate case (stated in Theorem LABEL:thm:anal_mul_var) with . On the other hand, or denotes the th largest eigenvalue, and or denotes the th largest singular value of . ## 4 Piece-wise Quadratic Support Functions Let be eigenvalue functions of a Hermitian matrix-valued function that are analytic along every line in and differentiable on , and let be the box defined by B:=B(ω(l)1,ω(u)1,…,ω(l)d,ω(u)d):={ω∈Rd|ωj∈[ω(l)j,ω(u)j]forj=1,…,d}. \hb@xt@.01(4.1) Consider the closed and connected subsets of , with as small as possible, such that , and for each and , and such that in the interior of none of the eigenvalue functions intersect each other. Define as follows: λ(ω):=f(~λsk1(ω),…,~λskj(ω))forallω∈int(Pk) \hb@xt@.01(4.2) where is analytic, and is a vector of indices such that ~λski(ω)=~λsℓi(ω)fori=1,…,j, and for all in order to ensure the continuity of on . The extremal eigenvalue function fits into the framework. We derive a piece-wise quadratic support function about a given point bounding from below for all , and such that . Let us focus on the direction , the univariate function , and the analytic univariate functions for . Also, let us denote the isolated points in the interval , where two distinct functions among intersect each other by . At these points, may not be differentiable. We have λ(ω)=λ(ωk)+m∑ℓ=0∫α(ℓ+1)α(ℓ)ϕ′(t)dt, \hb@xt@.01(4.3) where and . Due to the existence of the second partial derivatives of (since the expression (LABEL:eq:eigval_part_der) implies the analyticity of the first partial derivatives with respect to each parameter disjointly), there exists a constant that satisfies λmin(∇2~λj(ω))≥γ,for all ω∈B,j=1,…,n. \hb@xt@.01(4.4) Furthermore, for all . Thus, applying the mean value theorem to the analytic functions for and since , we obtain ϕ′(t)≥minj=1,…,n~ϕ′j(0)+γt. By substituting the last inequality in (LABEL:eq:quad_der_mult_int), integrating the right-hand side of (LABEL:eq:quad_der_mult_int), and using (since is differentiable), we arrive at the following: ###### Theorem 4.1 Suppose is an analytic and Hermitian matrix-valued function, the eigenvalue function is defined as in (LABEL:eq:eigval_defn) in terms of the eigenvalues of that are differentiable and analytic on every line in , and is a lower bound as in (LABEL:eq:lb_sec_der). Then the following inequality holds for all : λ(ω)≥[qk(ω):=λ(ωk)+(minj=1,…,n∇~λj(ωk)T(ω−ωk))+γ2∥ω−ωk∥2]. \hb@xt@.01(4.5) ## 5 Simplified Piece-wise Quadratic Support Functions ### 5.1 Support Functions under Generic Simplicity In various instances, the eigenvalue functions do not intersect each other at any generically. In such cases, for some , we have for all , therefore is analytic in the univariate case and analytic along every line in the multivariate case. For instance, the singular values of the matrix function involved in the definition of the -norm do not coalesce at any on a dense subset of the set of quadruples . Similar remarks apply to all of the specific eigenvalue optimization problems in Section LABEL:sec:app_dyn_sys. Under the generic simplicity assumption, the piece-wise quadratic support function (LABEL:eq:quad_model_multi) simplifies to qk(ω)=λ(ωk)+∇λ(ωk)T(ω−ωk)+γ2∥ω−ωk∥2. \hb@xt@.01(5.1) Here, is a lower bound on for all . In many cases, it may be possible to obtain a rough lower bound numerically by means of the expressions for the second derivatives in Sections LABEL:subsec:sec_der and LABEL:subsec:multi_der, and exploiting the Lipschitz continuity of the eigenvalue and other eigenvalues. ### 5.2 Support Functions for Extremal Eigenvalues Consider the extremal eigenvalue function λ(ω)=j∑k=1dkλk(ω) \hb@xt@.01(5.2) for given real numbers . A special case when are integers is discussed in Section LABEL:sec:app_min_sum_largest. When , this reduces to the maximal eigenvalue function in Section LABEL:sec:app_min_largest. For simplicity, let us suppose that is differentiable at , about which we derive a support function below. This is generically the case. In the unlikely case of two eigenvalues coalescing at , the non-differentiability is isolated at this point. Therefore, is differentiable at all nearby points. For a fixed , as in the previous section, define for . Denote the points where either one of is not simple by in increasing order. These are the points where is possibly not analytic. At any , by we refer to the analytic function satisfying for all sufficiently close to . Similarly, refers to the analytic function satisfying for all sufficiently close to . Furthermore, and represent the right-hand and left-hand derivatives of , respectively. ###### Lemma 5.1 The following relation holds for all : (i) for all sufficiently close to ; (ii) consequently . Proof. The functions and are of the form ϕ−,α(~α)=j∑k=1dk~λnk(ωk+~αp)andϕ+,α(~α)=j∑k=1dkλk(ωk+~αp) \hb@xt@.01(5.3) for some indices and for all sufficiently close to . In (LABEL:eq:leftright_pieces), the latter equality follows from for all sufficiently close to by definition. The former equality is due to for all sufficiently close to implying is a weighted sum of of the analytic eigenvalues with weights . We rephrase the inequality as where , where . Note that for each since are the largest eigenvalues. In particular, their sum cannot be less than the sum of . Therefore, j∑k=1dk⋅ak = djj∑k=
2019-10-23 17:51:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9316185712814331, "perplexity": 448.4938750614565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00246.warc.gz"}
https://www.r-bloggers.com/2017/08/i-made-a-3d-movie-with-ggplot2-once-heres-how-i-did-it/
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. Some time ago (last year actually ?) I had a blast developing a feature for ggforce which had been on my mind for far to long than its limited utility warranted. The idea was to showcase the new facetting extension powers I’d added to ggplot2 by making a facetting function that created a stereoscopic pair of plots that would simulate 3D. To procrastinate and show off I made a little animated video with the feature and posted it on Twitter, promising I’d write about it someday. Now (again, one year later), I think the world is finally ready to see what went through my R console to make that little animation. While I’ve been very timely with this blog post, the feature is still not available on CRAN, so you’ll need to install the GitHub version of ggforce to follow along. Setup The goal is to create a spinning hollow cube so we’ll need to define the cube somehow. This was way before ggraph was published, and tidygraph was not even a thought in my head, so while it may make sense to handle the cube as a network, I did it the hard way: Now that we have our data, we need to create some transformation functions. Currently, if plotted in ggplot2 it would just look like a square as, unsurprisingly, the third dimension would get lost. What we need is a projection of three dimension down to two. This is a bit hairy, and when I tried to achieve this back in the days by setting up a transformation matrix manually I failed miserably (if you are knowledgable in this and want to explain it to me - please reach out on twitter). In the end I took the shortcut and used the persp() function, which, besides the side effect of plotting stuff in 3D, also returns the transformation matrix invisibly: Having a look at the transformation matrix I can safely say that I have no idea what’s going on: But that doesn’t matter - the only thing required is that I know how to use it. The trans3d() provides the means for taking in three dimensional points, and a transformation matrix, and outputting two dimensional points: With a bit of imagination we can see the 3D cube. Lets draw it with line segments as well - we add the added information of depth, which is simply the value in the dimension that gets dropped in the transformation. We can of course improve the illusion even more. Using geom_link2() from ggforce, we can add a gradient size to the lines, based on the depth of the endpoints (remember, these were added in the to_grid() function). What can we do more? Well, we could decide to rotate it a bit. Let’s, make a function that rotates it around the vertical axis: We now have all the ingredients to make an animation of a spinning cube - we really only need to animate a 90 degree spin as it is selfrepeating. For added fiz we’ll improve the depth perception by simulating a bit of haze by greying parts more distant: That’s all fine and well, but this is just a regular and boring 2D video. Let us update it to the brave new world, where Avatar has taught us that 3D is not tacky: Enter facet_stereo(). That is all it takes…
2021-04-23 01:28:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5186761021614075, "perplexity": 796.3948073655152}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00180.warc.gz"}
https://math.stackexchange.com/questions/1386474/the-range-of-the-function-f-mathbbr-to-mathbbr-given-by-fx-frac32
# The range of the function $f:\mathbb{R}\to \mathbb{R}$ given by $f(x)=\frac{3+2 \sin x}{\sqrt{1+ \cos x}+\sqrt{1- \cos x}}$ The range of the function $f:\mathbb{R}\to \mathbb{R}$ given by $f(x)=\frac{3+2 \sin x}{\sqrt{1+ \cos x}+\sqrt{1- \cos x}}$ contains $N$ integers. Find the value of $10N$. I tried to find the minimum and maximum value of the function.First i simplified the function. $f(x)=\frac{3+2 \sin x}{\sqrt{1+ \cos x}+\sqrt{1- \cos x}}=\frac{1+4\sin^2\left(\frac{x}{2}+\frac{\pi}{4}\right)}{2\sin \left(\frac{x}{2}+\frac{\pi}{4}\right)}$ Then i differentiated the function and equate it to zero to get the critical points. Critical point equations are $\cos\left(\frac{x}{2}+\frac{\pi}{4}\right)=0$ $\sin\left(\frac{x}{2}+\frac{\pi}{4}\right)=\frac{1}{2},\sin\left(\frac{x}{2}+\frac{\pi}{4}\right)=\frac{-1}{2}$ When i checked plotted the function on desmos.com graphing calculator,i found minimum value to be $0.5$ and maximum value to be $2.5$. • An approach : What is the minimum value, what is the maximum value (not local minima and maxima). Is the function continuous ? . Then will it not take all values between the 2 extremes ?. You have already simplified the function – Shailesh Aug 6 '15 at 11:32 • Do you mean "contains precisely $N$ integers"? – Matemáticos Chibchas Sep 10 '15 at 3:10 Put $\sqrt{1+\cos x}$ +$\sqrt{1-\cos x} = A$ $A^2 = 2\pm 2 \sin x ,\quad A^2 - 2 =\pm 2 \sin x$ $-2\leq A^2 - 2\leq 2,\quad -2\leq A\leq2$ So $f(x) = \frac{5 - A^2}{A}$ or $\frac{A^2 + 1}{A}$ Find the minimum and maximum of $f(x)$ in the two conditions with $-2\leq A\leq 2$ • I'm not so sure about your approach for the following reason. When you say $\sqrt{1 + \cos x}$, I think we always mean the positive square root (I am open to correction in this respect). In that case, the denominator is always positive and so is the numerator. That's why I took 0.5 to 2.5 as the range. – Shailesh Sep 10 '15 at 14:09 Hint: The minimum value of the function is $1/2$ and the maximum is $2.5$. The function is clearly continuous. So it takes every value between these numbers, specifically 1 and 2. So $N=2$ which gives $10N$. Can you show that these are indeed the minimum and maximums. I have outlined the general approach • How has minimum value come,Maximum value i got 2.5,but minimum i did not get. – Brahmagupta Aug 6 '15 at 11:56 • The same way you got maximum – Shailesh Aug 6 '15 at 12:40 • The denominator goes to 0 as $x \rightarrow -\pi /2$ how can it be continuous there? – Subhasish Basak Sep 5 '15 at 0:25 Your simplified equation is correct only for $0 \leq x \leq \pi$. During your derivation be sure to consider both positive and negative square roots in the denominator. The result is an alternate version of your simplified equation $f(x)=\frac{3+2 \sin x}{\sqrt{1+ \cos x}+\sqrt{1- \cos x}}=\frac{5-4\sin^2\left(\frac{x}{2}-\frac{\pi}{4}\right)}{2\sin \left(\frac{x}{2}-\frac{\pi}{4}\right)}$ valid for $\pi < x < 2\pi$. When you work this through you will get another critical point at $\cos(x/2 - \pi/4)=0$.
2020-09-29 09:56:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953274488449097, "perplexity": 259.2483735869312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00023.warc.gz"}
http://buy.ads-ez.com/ez-manoj.php?show=easy-latex
Thank you for considering Easy LaTeX Pro Easy WP LaTeX is a premium plugin that provides a very easy way to display math and equations in your posts. Easy WP LaTeX provides a very easy way to display equations or mathematical formulas (typed in as TeX or LaTeX code) in your posts. It translates LaTeX formulas like this [math](a+b)^2 = a^2 + b^2 + 2ab[/math] into this: $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"(a+b)^2$ The Lite version of the plugin is fully functional. The Pro version gives you options to cache the equation images so that your pages load faster. You will be redirected to PayPal to buy Easy LaTeX Pro (for \$2.95) in seconds
2013-05-20 15:22:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9714502692222595, "perplexity": 3775.34909546902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699056351/warc/CC-MAIN-20130516101056-00096-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/954186/what-is-the-symbol-divideontimes-divide-times-for
# What is the symbol ''$\divideontimes$'' (DIVIDE TIMES) for? I looked "$\divideontimes$" up on Google and now I know that it's Unicode U+22c7 but when would it be used? I am guessing that $5 \divideontimes 5 = 25$ and $1$...? • I've never seen that before, but I'm guessing that it's similar to $\pm$. – Akiva Weinberger Oct 1 '14 at 16:20 • Are you sure that it is even meant to be used in a mathematical context? – N. Owad Oct 1 '14 at 17:41 • $a\divideontimes b=a~b^{\pm1}$ – Lucian Oct 1 '14 at 18:17 • @N.Owad No, I'm not even sure but it seems to be composed of mathematical operators. – Mark Cidade Oct 1 '14 at 18:47 • Sorry for bumping such an old post, I am just really curious about what type of functions produce numbers that take the form. $⋇n$ – Albert Renshaw Oct 24 '15 at 6:28 The symbol $\pm$ (produced with \pm in LaTeX or MathJax) can be used when the underlying group structure is additive, such as the group of real numbers with addition, $(\mathbb{R}, +)$. The symbol $\divideontimes$ (\divideontimes) can be used when the group structure is multiplicative, such as the group of positive-real numbers with multiplication, $(\mathbb{R}_{>0}, \cdot)$. ### Use in statistics A prominent application of the symbols is in descriptive statistics, where they are used to express confidence intervals and error bars. For example, if a random variable is normally distributed, the expression $$\mu_\text{ar} \pm \sigma_\text{ar}$$ tells you that about $68.27 \%$ of the data are between $\mu_\text{ar} - \sigma_\text{ar}$ and $\mu_\text{ar} + \sigma_\text{ar}$. Here, $\mu_\text{ar}$ is the arithmetic mean and $\sigma_\text{ar}$ is the arithmetic standard deviation. Similarly, if a random variable is log-normally distributed, the expression $$\mu_\text{geo} \divideontimes \sigma_\text{geo}$$ tells you that about $68.27 \%$ of the data are between $\frac{\mu_\text{geom}}{\sigma_\text{geo}}$ and $\mu_\text{geo} \cdot \sigma_\text{geo}$. Here, $\mu_\text{geo}$ is the geometric mean and $\sigma_\text{geo}$ is the geometric standard deviation (e.g., Limpert et al, 2001). • Note that the paper you mentioned uses $^\times /$ instead of this symbol. – Random832 Jul 16 '16 at 18:14 I think that @Lucian has the right answer (in the comments to OP): $a\divideontimes b=a~b^{\pm1}$ Note that the root of a number can yield a $±$ result Perhaps the inverse tetration (super-root) of a specific type of number can yield a $⋇$ result $±n$ really means     $0+n$     or     $0-n$ $⋇n$ would mean     $x*n$     or     $x/n$    (I would assume $x=1$) That "specific type" of number (for applying a super-root to) is probably between -n and +n IMO (where n is the number being tetrated to (i.e. na), I'd also hypothesize that most solutions take place in the complex plane.) A very trivial (and boring) example of a super-root creating one of these numbers would be the inverse of the tetration, 2x where 2x=1 2x=1 aka      xx=1 therefor      x=$⋇1$      because      (1/n)(1/n)=1      and      (1*n)(1*n)=1      when n is 1 Again, I know that's a super boring example, I'm curious if anyone can find any values of actual intrigue (Edit: Found, scroll to bottom), I tried using Wolfram alpha with x^x^x=(1/x)^(1/x)^(1/x) but it timed out. Just looking at the graph of 3x (or x^x^x) makes it seem like there should be values that create $⋇$ numbers. It seems the inverse tetration of 3x, while $⋇1$ is valid, also has the complex solution $≈⋇(-0.6782039202617192-0.73487375959523527*i)$... (Along with many others). Neat! Glad Wolfram Alpha has improved over the years! (2018 edit) xxx=(1/x)(1/x)(1/x) x ≈ $1*(-0.6782039202617192-0.73487375959523527*i)$ x ≈ $1 / (-0.6782039202617192-0.73487375959523527*i)$ therefore x ≈ $⋇(-0.6782039202617192-0.73487375959523527*i)$
2019-05-24 03:46:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558635115623474, "perplexity": 989.9127791369526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257497.4/warc/CC-MAIN-20190524024253-20190524050253-00349.warc.gz"}
https://admin.clutchprep.com/organic-chemistry/practice-problems/16142/write-a-structural-formula-for-the-most-stable-conformation-of-each-of-the-follo-3
🤓 Based on our data, we think this question is relevant for Professor Axelson's class at UIUC. # Solution: Write a structural formula for the most stable conformation of each of the following compounds:  (d)  trans-1-Isopropyl-3-methylcyclohexane ###### Problem Write a structural formula for the most stable conformation of each of the following compounds: (d)  trans-1-Isopropyl-3-methylcyclohexane
2020-04-10 06:13:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117733597755432, "perplexity": 2207.2084386860224}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371886991.92/warc/CC-MAIN-20200410043735-20200410074235-00231.warc.gz"}
https://www.rdocumentation.org/packages/spatstat/versions/1.56-1/topics/envelope
# envelope 0th Percentile ##### Simulation Envelopes of Summary Function Computes simulation envelopes of a summary function. Keywords hplot, htest, spatial, iteration ##### Usage envelope(Y, fun, …) # S3 method for ppp envelope(Y, fun=Kest, nsim=99, nrank=1, …, funargs=list(), funYargs=funargs, simulate=NULL, fix.n=FALSE, fix.marks=FALSE, verbose=TRUE, clipdata=TRUE, transform=NULL, global=FALSE, ginterval=NULL, use.theory=NULL, alternative=c("two.sided", "less", "greater"), scale=NULL, clamp=FALSE, savefuns=FALSE, savepatterns=FALSE, nsim2=nsim, VARIANCE=FALSE, nSD=2, Yname=NULL, maxnerr=nsim, do.pwrong=FALSE, envir.simul=NULL) # S3 method for ppm envelope(Y, fun=Kest, nsim=99, nrank=1, …, funargs=list(), funYargs=funargs, simulate=NULL, fix.n=FALSE, fix.marks=FALSE, verbose=TRUE, clipdata=TRUE, start=NULL, control=update(default.rmhcontrol(Y), nrep=nrep), nrep=1e5, transform=NULL, global=FALSE, ginterval=NULL, use.theory=NULL, alternative=c("two.sided", "less", "greater"), scale=NULL, clamp=FALSE, savefuns=FALSE, savepatterns=FALSE, nsim2=nsim, VARIANCE=FALSE, nSD=2, Yname=NULL, maxnerr=nsim, do.pwrong=FALSE, envir.simul=NULL) # S3 method for kppm envelope(Y, fun=Kest, nsim=99, nrank=1, …, funargs=list(), funYargs=funargs, simulate=NULL, verbose=TRUE, clipdata=TRUE, transform=NULL, global=FALSE, ginterval=NULL, use.theory=NULL, alternative=c("two.sided", "less", "greater"), scale=NULL, clamp=FALSE, savefuns=FALSE, savepatterns=FALSE, nsim2=nsim, VARIANCE=FALSE, nSD=2, Yname=NULL, maxnerr=nsim, do.pwrong=FALSE, envir.simul=NULL) ##### Arguments Y Object containing point pattern data. A point pattern (object of class "ppp") or a fitted point process model (object of class "ppm" or "kppm"). fun Function that computes the desired summary statistic for a point pattern. nsim Number of simulated point patterns to be generated when computing the envelopes. nrank Integer. Rank of the envelope value amongst the nsim simulated values. A rank of 1 means that the minimum and maximum simulated values will be used. Extra arguments passed to fun. funargs A list, containing extra arguments to be passed to fun. funYargs Optional. A list, containing extra arguments to be passed to fun when applied to the original data Y only. simulate Optional. Specifies how to generate the simulated point patterns. If simulate is an expression in the R language, then this expression will be evaluated nsim times, to obtain nsim point patterns which are taken as the simulated patterns from which the envelopes are computed. If simulate is a list of point patterns, then the entries in this list will be treated as the simulated patterns from which the envelopes are computed. Alternatively simulate may be an object produced by the envelope command: see Details. fix.n Logical. If TRUE, simulated patterns will have the same number of points as the original data pattern. This option is currently not available for envelope.kppm. fix.marks Logical. If TRUE, simulated patterns will have the same number of points and the same marks as the original data pattern. In a multitype point pattern this means that the simulated patterns will have the same number of points of each type as the original data. This option is currently not available for envelope.kppm. verbose Logical flag indicating whether to print progress reports during the simulations. clipdata Logical flag indicating whether the data point pattern should be clipped to the same window as the simulated patterns, before the summary function for the data is computed. This should usually be TRUE to ensure that the data and simulations are properly comparable. start,control Optional. These specify the arguments start and control of rmh, giving complete control over the simulation algorithm. Applicable only when Y is a fitted model of class "ppm". nrep Number of iterations in the Metropolis-Hastings simulation algorithm. Applicable only when Y is a fitted model of class "ppm". transform Optional. A transformation to be applied to the function values, before the envelopes are computed. An expression object (see Details). global Logical flag indicating whether envelopes should be pointwise (global=FALSE) or simultaneous (global=TRUE). ginterval Optional. A vector of length 2 specifying the interval of $$r$$ values for the simultaneous critical envelopes. Only relevant if global=TRUE. use.theory Logical value indicating whether to use the theoretical value, computed by fun, as the reference value for simultaneous envelopes. Applicable only when global=TRUE. Default is use.theory=TRUE if Y is a point pattern, or a point process model equivalent to Complete Spatial Randomness, and use.theory=FALSE otherwise. alternative Character string determining whether the envelope corresponds to a two-sided test (side="two.sided", the default) or a one-sided test with a lower critical boundary (side="less") or a one-sided test with an upper critical boundary (side="greater"). scale Optional. Scaling function for global envelopes. A function in the R language which determines the relative scale of deviations, as a function of distance $$r$$, when computing the global envelopes. Applicable only when global=TRUE. Summary function values for distance r will be divided by scale(r) before the maximum deviation is computed. The resulting global envelopes will have width proportional to scale(r). clamp Logical value indicating how to compute envelopes when alternative="less" or alternative="greater". Deviations of the observed summary function from the theoretical summary function are initially evaluated as signed real numbers, with large positive values indicating consistency with the alternative hypothesis. If clamp=FALSE (the default), these values are not changed. If clamp=TRUE, any negative values are replaced by zero. savefuns Logical flag indicating whether to save all the simulated function values. savepatterns Logical flag indicating whether to save all the simulated point patterns. nsim2 Number of extra simulated point patterns to be generated if it is necessary to use simulation to estimate the theoretical mean of the summary function. Only relevant when global=TRUE and the simulations are not based on CSR. VARIANCE Logical. If TRUE, critical envelopes will be calculated as sample mean plus or minus nSD times sample standard deviation. nSD Number of estimated standard deviations used to determine the critical envelopes, if VARIANCE=TRUE. Yname Character string that should be used as the name of the data point pattern Y when printing or plotting the results. maxnerr Maximum number of rejected patterns. If fun yields an error when applied to a simulated point pattern (for example, because the pattern is empty and fun requires at least one point), the pattern will be rejected and a new random point pattern will be generated. If this happens more than maxnerr times, the algorithm will give up. do.pwrong Logical. If TRUE, the algorithm will also estimate the true significance level of the “wrong” test (the test that declares the summary function for the data to be significant if it lies outside the pointwise critical boundary at any point). This estimate is printed when the result is printed. envir.simul Environment in which to evaluate the expression simulate, if not the current environment. ##### Details The envelope command performs simulations and computes envelopes of a summary statistic based on the simulations. The result is an object that can be plotted to display the envelopes. The envelopes can be used to assess the goodness-of-fit of a point process model to point pattern data. For the most basic use, if you have a point pattern X and you want to test Complete Spatial Randomness (CSR), type plot(envelope(X, Kest,nsim=39)) to see the $$K$$ function for X plotted together with the envelopes of the $$K$$ function for 39 simulations of CSR. The envelope function is generic, with methods for the classes "ppp", "ppm" and "kppm" described here. There are also methods for the classes "pp3", "lpp" and "lppm" which are described separately under envelope.pp3 and envelope.lpp. Envelopes can also be computed from other envelopes, using envelope.envelope. To create simulation envelopes, the command envelope(Y, ...) first generates nsim random point patterns in one of the following ways. • If Y is a point pattern (an object of class "ppp") and simulate=NULL, then we generate nsim simulations of Complete Spatial Randomness (i.e. nsim simulated point patterns each being a realisation of the uniform Poisson point process) with the same intensity as the pattern Y. (If Y is a multitype point pattern, then the simulated patterns are also given independent random marks; the probability distribution of the random marks is determined by the relative frequencies of marks in Y.) • If Y is a fitted point process model (an object of class "ppm" or "kppm") and simulate=NULL, then this routine generates nsim simulated realisations of that model. • If simulate is supplied, then it determines how the simulated point patterns are generated. It may be either • an expression in the R language, typically containing a call to a random generator. This expression will be evaluated nsim times to yield nsim point patterns. For example if simulate=expression(runifpoint(100)) then each simulated pattern consists of exactly 100 independent uniform random points. • a list of point patterns. The entries in this list will be taken as the simulated patterns. • an object of class "envelope". This should have been produced by calling envelope with the argument savepatterns=TRUE. The simulated point patterns that were saved in this object will be extracted and used as the simulated patterns for the new envelope computation. This makes it possible to plot envelopes for two different summary functions based on exactly the same set of simulated point patterns. The summary statistic fun is applied to each of these simulated patterns. Typically fun is one of the functions Kest, Gest, Fest, Jest, pcf, Kcross, Kdot, Gcross, Gdot, Jcross, Jdot, Kmulti, Gmulti, Jmulti or Kinhom. It may also be a character string containing the name of one of these functions. The statistic fun can also be a user-supplied function; if so, then it must have arguments X and r like those in the functions listed above, and it must return an object of class "fv". Upper and lower critical envelopes are computed in one of the following ways: pointwise: by default, envelopes are calculated pointwise (i.e. for each value of the distance argument $$r$$), by sorting the nsim simulated values, and taking the m-th lowest and m-th highest values, where m = nrank. For example if nrank=1, the upper and lower envelopes are the pointwise maximum and minimum of the simulated values. The pointwise envelopes are not “confidence bands” for the true value of the function! Rather, they specify the critical points for a Monte Carlo test (Ripley, 1981). The test is constructed by choosing a fixed value of $$r$$, and rejecting the null hypothesis if the observed function value lies outside the envelope at this value of $$r$$. This test has exact significance level alpha = 2 * nrank/(1 + nsim). simultaneous: if global=TRUE, then the envelopes are determined as follows. First we calculate the theoretical mean value of the summary statistic (if we are testing CSR, the theoretical value is supplied by fun; otherwise we perform a separate set of nsim2 simulations, compute the average of all these simulated values, and take this average as an estimate of the theoretical mean value). Then, for each simulation, we compare the simulated curve to the theoretical curve, and compute the maximum absolute difference between them (over the interval of $$r$$ values specified by ginterval). This gives a deviation value $$d_i$$ for each of the nsim simulations. Finally we take the m-th largest of the deviation values, where m=nrank, and call this dcrit. Then the simultaneous envelopes are of the form lo = expected - dcrit and hi = expected + dcrit where expected is either the theoretical mean value theo (if we are testing CSR) or the estimated theoretical value mmean (if we are testing another model). The simultaneous critical envelopes have constant width 2 * dcrit. The simultaneous critical envelopes allow us to perform a different Monte Carlo test (Ripley, 1981). The test rejects the null hypothesis if the graph of the observed function lies outside the envelope at any value of $$r$$. This test has exact significance level alpha = nrank/(1 + nsim). This test can also be performed using mad.test. based on sample moments: if VARIANCE=TRUE, the algorithm calculates the (pointwise) sample mean and sample variance of the simulated functions. Then the envelopes are computed as mean plus or minus nSD standard deviations. These envelopes do not have an exact significance interpretation. They are a naive approximation to the critical points of the Neyman-Pearson test assuming the summary statistic is approximately Normally distributed. The return value is an object of class "fv" containing the summary function for the data point pattern, the upper and lower simulation envelopes, and the theoretical expected value (exact or estimated) of the summary function for the model being tested. It can be plotted using plot.envelope. If VARIANCE=TRUE then the return value also includes the sample mean, sample variance and other quantities. Arguments can be passed to the function fun through .... This means that you simply specify these arguments in the call to envelope, and they will be passed to fun. In particular, the argument correction determines the edge correction to be used to calculate the summary statistic. See the section on Edge Corrections, and the Examples. Arguments can also be passed to the function fun through the list funargs. This mechanism is typically used if an argument of fun has the same name as an argument of envelope. The list funargs should contain entries of the form name=value, where each name is the name of an argument of fun. There is also an option, rarely used, in which different function arguments are used when computing the summary function for the data Y and for the simulated patterns. If funYargs is given, it will be used when the summary function for the data Y is computed, while funargs will be used when computing the summary function for the simulated patterns. This option is only needed in rare cases: usually the basic principle requires that the data and simulated patterns must be treated equally, so that funargs and funYargs should be identical. If Y is a fitted cluster point process model (object of class "kppm"), and simulate=NULL, then the model is simulated directly using simulate.kppm. If Y is a fitted Gibbs point process model (object of class "ppm"), and simulate=NULL, then the model is simulated by running the Metropolis-Hastings algorithm rmh. Complete control over this algorithm is provided by the arguments start and control which are passed to rmh. For simultaneous critical envelopes (global=TRUE) the following options are also useful: ginterval determines the interval of $$r$$ values over which the deviation between curves is calculated. It should be a numeric vector of length 2. There is a sensible default (namely, the recommended plotting interval for fun(X), or the range of r values if r is explicitly specified). transform specifies a transformation of the summary function fun that will be carried out before the deviations are computed. Such transforms are useful if global=TRUE or VARIANCE=TRUE. The transform must be an expression object using the symbol . to represent the function value (and possibly other symbols recognised by with.fv). For example, the conventional way to normalise the $$K$$ function (Ripley, 1981) is to transform it to the $$L$$ function $$L(r) = \sqrt{K(r)/\pi}$$ and this is implemented by setting transform=expression(sqrt(./pi)). It is also possible to extract the summary functions for each of the individual simulated point patterns, by setting savefuns=TRUE. Then the return value also has an attribute "simfuns" containing all the summary functions for the individual simulated patterns. It is an "fv" object containing functions named sim1, sim2, ... representing the nsim summary functions. It is also possible to save the simulated point patterns themselves, by setting savepatterns=TRUE. Then the return value also has an attribute "simpatterns" which is a list of length nsim containing all the simulated point patterns. See plot.envelope and plot.fv for information about how to plot the envelopes. Different envelopes can be recomputed from the same data using envelope.envelope. Envelopes can be combined using pool.envelope. ##### Value An object of class "envelope" and "fv", see fv.object, which can be printed and plotted directly. Essentially a data frame containing columns r the vector of values of the argument $$r$$ at which the summary function fun has been estimated obs values of the summary function for the data point pattern lo lower envelope of simulations hi upper envelope of simulations and either theo theoretical value of the summary function under CSR (Complete Spatial Randomness, a uniform Poisson point process) if the simulations were generated according to CSR mmean estimated theoretical value of the summary function, computed by averaging simulated values, if the simulations were not generated according to CSR. Additionally, if savepatterns=TRUE, the return value has an attribute "simpatterns" which is a list containing the nsim simulated patterns. If savefuns=TRUE, the return value has an attribute "simfuns" which is an object of class "fv" containing the summary functions computed for each of the nsim simulated patterns. ##### Errors and warnings An error may be generated if one of the simulations produces a point pattern that is empty, or is otherwise unacceptable to the function fun. The upper envelope may be NA (plotted as plus or minus infinity) if some of the function values computed for the simulated point patterns are NA. Whether this occurs will depend on the function fun, but it usually happens when the simulated point pattern does not contain enough points to compute a meaningful value. ##### Confidence intervals Simulation envelopes do not compute confidence intervals; they generate significance bands. If you really need a confidence interval for the true summary function of the point process, use lohboot. See also varblock. ##### Edge corrections It is common to apply a correction for edge effects when calculating a summary function such as the $$K$$ function. Typically the user has a choice between several possible edge corrections. In a call to envelope, the user can specify the edge correction to be applied in fun, using the argument correction. See the Examples below. Summary functions in spatstat Summary functions that are available in spatstat, such as Kest, Gest and pcf, have a standard argument called correction which specifies the name of one or more edge corrections. The list of available edge corrections is different for each summary function, and may also depend on the kind of window in which the point pattern is recorded. In the case of Kest (the default and most frequently used value of fun) the best edge correction is Ripley's isotropic correction if the window is rectangular or polygonal, and the translation correction if the window is a binary mask. See the help files for the individual functions for more information. All the summary functions in spatstat recognise the option correction="best" which gives the “best” (most accurate) available edge correction for that function. In a call to envelope, if fun is one of the summary functions provided in spatstat, then the default is correction="best". This means that by default, the envelope will be computed using the “best” available edge correction. The user can override this default by specifying the argument correction. For example the computation can be accelerated by choosing another edge correction which is less accurate than the “best” one, but faster to compute. User-written summary functions If fun is a function written by the user, then envelope has to guess what to do. If fun has an argument called correction, or has … arguments, then envelope assumes that the function can handle a correction argument. To compute the envelope, fun will be called with a correction argument. The default is correction="best", unless overridden in the call to envelope. Otherwise, if fun does not have an argument called correction and does not have … arguments, then envelope assumes that the function cannot handle a correction argument. To compute the envelope, fun is called without a correction argument. ##### References Baddeley, A., Diggle, P.J., Hardegen, A., Lawrence, T., Milne, R.K. and Nair, G. (2014) On tests of spatial pattern based on simulation envelopes. Ecological Monographs 84 (3) 477--489. Cressie, N.A.C. Statistics for spatial data. John Wiley and Sons, 1991. Diggle, P.J. Statistical analysis of spatial point patterns. Arnold, 2003. Ripley, B.D. (1981) Spatial statistics. John Wiley and Sons. Ripley, B.D. Statistical inference for spatial processes. Cambridge University Press, 1988. Stoyan, D. and Stoyan, H. (1994) Fractals, random shapes and point fields: methods of geometrical statistics. John Wiley and Sons. dclf.test, mad.test for envelope-based tests. fv.object, plot.envelope, plot.fv, envelope.envelope, pool.envelope for handling envelopes. There are also methods for print and summary. Kest, Gest, Fest, Jest, pcf, ppp, ppm, default.expand ##### Aliases • envelope • envelope.ppp • envelope.ppm • envelope.kppm ##### Examples # NOT RUN { X <- simdat # Envelope of K function under CSR # } # NOT RUN { plot(envelope(X)) # } # NOT RUN { # } # NOT RUN { # Translation edge correction (this is also FASTER): # } # NOT RUN { plot(envelope(X, correction="translate")) # } # NOT RUN { # } # NOT RUN { # Global envelopes # } # NOT RUN { plot(envelope(X, Lest, global=TRUE)) plot(envelope(X, Kest, global=TRUE, scale=function(r) { r })) # } # NOT RUN { # } # NOT RUN { # Envelope of K function for simulations from Gibbs model # } # NOT RUN { fit <- ppm(cells ~1, Strauss(0.05)) plot(envelope(fit)) plot(envelope(fit), global=TRUE) # } # NOT RUN { # } # NOT RUN { # Envelope of K function for simulations from cluster model fit <- kppm(redwood ~1, "Thomas") # } # NOT RUN { plot(envelope(fit, Gest)) plot(envelope(fit, Gest, global=TRUE)) # } # NOT RUN { # } # NOT RUN { # Envelope of G function under CSR # } # NOT RUN { plot(envelope(X, Gest)) # } # NOT RUN { # } # NOT RUN { # Envelope of L function under CSR # L(r) = sqrt(K(r)/pi) # } # NOT RUN { E <- envelope(X, Kest) plot(E, sqrt(./pi) ~ r) # } # NOT RUN { # } # NOT RUN { # Simultaneous critical envelope for L function # (alternatively, use Lest) # } # NOT RUN { plot(envelope(X, Kest, transform=expression(sqrt(./pi)), global=TRUE)) # } # NOT RUN { # } # NOT RUN { ## One-sided envelope # } # NOT RUN { plot(envelope(X, Lest, alternative="less")) # } # NOT RUN { # } # NOT RUN { # How to pass arguments needed to compute the summary functions: # We want envelopes for Jcross(X, "A", "B") # where "A" and "B" are types of points in the dataset 'demopat' data(demopat) # } # NOT RUN { plot(envelope(demopat, Jcross, i="A", j="B")) # } # NOT RUN { # } # NOT RUN { # Use of simulate' # } # NOT RUN { plot(envelope(cells, Gest, simulate=expression(runifpoint(42)))) plot(envelope(cells, Gest, simulate=expression(rMaternI(100,0.02)))) # } # NOT RUN { # } # NOT RUN { # Envelope under random toroidal shifts data(amacrine) # } # NOT RUN { plot(envelope(amacrine, Kcross, i="on", j="off", # } # NOT RUN { # Envelope under random shifts with erosion # } # NOT RUN { plot(envelope(amacrine, Kcross, i="on", j="off", # } # NOT RUN { # Envelope of INHOMOGENEOUS K-function with fitted trend # The following is valid. # Setting lambda=fit means that the fitted model is re-fitted to # each simulated pattern to obtain the intensity estimates for Kinhom. # (lambda=NULL would also be valid) fit <- kppm(redwood ~1, clusters="MatClust") # } # NOT RUN { plot(envelope(fit, Kinhom, lambda=fit, nsim=19)) # } # NOT RUN { # } # NOT RUN { # Note that the principle of symmetry, essential to the validity of # simulation envelopes, requires that both the observed and # simulated patterns be subjected to the same method of intensity # estimation. In the following example it would be incorrect to set the # argument 'lambda=red.dens' in the envelope command, because this # would mean that the inhomogeneous K functions of the simulated # patterns would be computed using the intensity function estimated # from the original redwood data, violating the symmetry. There is # still a concern about the fact that the simulations are generated # from a model that was fitted to the data; this is only a problem in # small datasets. # } # NOT RUN { red.dens <- density(redwood, sigma=bw.diggle) plot(envelope(redwood, Kinhom, sigma=bw.diggle, simulate=expression(rpoispp(red.dens)))) # } # NOT RUN { # Precomputed list of point patterns # } # NOT RUN { nX <- npoints(X) PatList <- list() for(i in 1:19) PatList[[i]] <- runifpoint(nX) E <- envelope(X, Kest, nsim=19, simulate=PatList) # } # NOT RUN { # re-using the same point patterns # } # NOT RUN { EK <- envelope(X, Kest, savepatterns=TRUE) EG <- envelope(X, Gest, simulate=EK) # } ` Documentation reproduced from package spatstat, version 1.56-1, License: GPL (>= 2) ### Community examples Looks like there are no examples yet.
2020-12-05 10:14:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5664466619491577, "perplexity": 2848.466219019782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00495.warc.gz"}
https://puzzling.stackexchange.com/questions/3090/4-doors-and-a-half-value-prize/3094
4 Doors and a Half Value prize You're on a game show, playing for 1 million dollars! You've made it to the final round, there are 4 doors, you pick a random door (obviously, there's a 1/4 chance for all of them, let's say you picked #3.) The host reveals door #1, it's empty. You now have the choice to stick with your door, switch to another door, or take 5 hundred thousand dollars! Which should you take? • Seriously, if I had the choice between just taking half a million or I had a 1 out of 3 chance of getting nothing... it should be clear to anyone except the greedy. Oct 21 '14 at 12:56 • You're not saying what the strategy of the host is, i.e. why he chooses door #1. Therefore the answer is as undefined as with all those mis-told versions of the Monty Hall puzzle. Please elaborate. Oct 21 '14 at 14:31 • I agree with general crispy: choose a door: significant chance of being sad: take 1/2 million: zero chance of being sad. Easy choice. Oct 23 '14 at 15:16 Note: I assume the host knew door #1 would be empty. This is analogous to the original Monty Hall problem. If the host randomly opens a door, the door-opening is moot and can be left out completely. Let's first look at the situation without the 500K consolation prize: Let's say I do NOT switch doors, then I have a 25% chance to win 1 million. Let's say that I do switch: 1. I pick the right door immediately (25% chance). Switching wins me nothing. 2. I pick the wring door first (75% chance). After the elimination of a wrong door, I switch with a 50% chance to win 1 million. I have in total $$\frac{3}{4} \cdot \frac{1}{2}=\frac{3}{8}$$ chance of winning the 1 million. So switching is, like in the original Monty Hall problem, certainly beneficial, raising my chance of winning from $$\frac{1}{4}$$ to $$\frac{3}{8}$$. However, my expected win is $$\frac{3}{8} \times \text{1 million}$$. That is less then the 500K. Basically, I have a choice to gamble a sum of money with the chance of doubling it, but I have only $$\tfrac{3}{8}$$ chance of doubling it. Going by the math, I should stick with the money I have, which means I should choose to keep the 500k. This problem introduces something that is not in the original problem, and that is utility. That is the phenomenon that what I can win changes my life much more than the money I lose. It is the reason people play in lotteries with a negative expected return (winning some millions changes your life. Paying some dollars every month makes no change. So even if I never win, the possible win is much more important than the almost certain loss). Now, in the case of "investing" 500k at bad odds to double it, this doesn't seem to play (both 500k and 1 million are "a lot of money!), but let's say I have a loan shark on my case who will kill me if I do not pay him back 1 million tomorrow. Now, the 500k will do me no good at all. In this situation, I will certainly select another door, giving me a 3/8 chance of surviving tomorrow. (Versus taking the 500k which will see me dead, or not switching and sticking with my original door, which gives me 25% chance of survival). • Awesome job! Just the answer I was looking for! Oct 21 '14 at 13:18 • +many for mentioning the utility of money - which so many of these problems ignore! Oct 21 '14 at 13:26 • @JuliaHayward honestly, how important is that? I mean, it's an interesting thought, but do we really need to add "but what if there is a crazed gunman demanding one million dollars" to every question? It's safe to assume each question wants a strategy maximizes expected dollar outcome, unless a utility breakdown is explicitly stated. Oct 21 '14 at 18:59 • @EnvisionAndDevelop: The guntoting madman is only relevant when there is a small difference in order of size between the prizes. Utility only comes into play when it influences the strategy. It is an essential difference between the original MH problem (which is always, in any strategy, an all-or-nothing game) and this variation (where I may have the opportunity to select my strategy based on utility). A more common occurrence of utility is indeed in standard lotteries, where the expected result is negative, and yet people play (small loss = no change, but big win = big change). Oct 21 '14 at 20:10 • I'm moreso commenting on the arbitrary inclusion of an extra factor. You could have just as easily included a change in strategy for when the madman will steal money from you if you swap, or if you choose 3, his unlucky number. Why include one but not the other? I suppose you can include what's interesting to you, but I was responding to the sentiment that it's something people should be rewarded for doing. Oct 21 '14 at 20:26 Take the $500,000. I assume that opening a door for an empty room means you get nothing. You picked on, and one was eliminated. So, there is now a$1/3$probability of picking the correct door from the remaining 3. \$500,000 $> 1/3*1000000$. • This is not true. The chance is 1/4 * 1000000. See the Monty Hall problem. The answer is still right, but the calculation isn't Oct 21 '14 at 12:58 • Yes, @Mathias711: see the Monty Hall problem. The chance is 3/8 to win the million, not 1/4 (if you switch, of course) Oct 21 '14 at 13:11 I would take the money , because its higher than the probability of winning the chance which is 1/3rd (Edit: its 3/8 as explained in the other answer), but that is assuming the host knows which door it is in. • The chance of winning is 1/4th. But yes, keep the money! Oct 21 '14 at 12:53 • @Mathias711 It's 1/3 because an empty room has been eliminated Oct 21 '14 at 12:55 • puzzling.stackexchange.com/questions/2167/… Oct 21 '14 at 12:56 • @Mathias711 You are making an assumption that the host knew door #1 was empty. That has not been stated here. Oct 21 '14 at 12:58 • The probability of winning with switching is not 1/4. It is 3/8. Oct 21 '14 at 13:08
2021-12-07 18:37:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6397614479064941, "perplexity": 797.9320146646455}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00547.warc.gz"}
http://archytas.birs.ca/events/2018/5-day-workshops/18w5130/schedule
# Schedule for: 18w5130 - Hessenberg Varieties in Combinatorics, Geometry and Representation Theory Arriving in Banff, Alberta on Sunday, October 21 and departing Friday October 26, 2018 Sunday, October 21 16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) 17:30 - 19:30 Dinner A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110)) Monday, October 22 07:00 - 08:45 Breakfast Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 08:45 - 09:00 Introduction and Welcome by BIRS Staff A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions. (TCPL 201) 09:00 - 10:00 Hiraku Abe: An introduction to Hessenberg varieties In this talk, I will give a brief survey of recent developments on Hessenberg varieties. The goal of this talk is to share the idea that Hessenberg varieties can be studied from various perspectives. (TCPL 201) 10:00 - 10:30 Coffee Break (TCPL Foyer) 10:30 - 11:30 Julianna Tymoczko: K-theory/cohomology classes of Hessenberg varieties (TCPL 201) 11:30 - 13:00 Lunch Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 13:00 - 14:00 Guided Tour of The Banff Centre Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110)) 14:00 - 14:15 Group Photo Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL 201) 14:15 - 15:15 Timothy Chow: A proof of the Shareshian-Wachs conjecture Motivated by a 1993 conjecture of Stanley and Stembridge, Shareshian and Wachs conjectured that the characteristic map takes the dot action of the symmetric group on the cohomology of a regular semisimple Hessenberg variety to $\omega XG(t)$, where XG(t) is the chromatic quasisymmetric function of the incomparability graph G of the corresponding natural unit interval order, and $\omega$ is the usual involution on symmetric functions. Our main result is a proof of the Shareshian-Wachs conjecture. The proof makes essential use of both geometric arguments and combinatorial arguments; the talk will focus on the combinatorics, but we also briefly describe the geometric ingredients. This is joint work with Patrick Brosnan. (TCPL 201) 14:30 - 15:00 Coffee Break (TCPL Foyer) 15:45 - 16:45 Mathieu Guay-Paquet: Linear relations between q-chromatic symmetric functions The Brosnan--Chow--Shareshian--Wachs theorem gives a deep connection between the cohomology of regular semisimple Hessenberg varieties on one hand, and the $q$-chromatic symmetric functions of unit interval orders ($q$-CSFs) from combinatorics. Through this connection, a very interesting interplay of combinatorial and geometric results and techniques becomes possible, leading to progress on the long-standing $e$-positivity conjecture of Stanley--Stembridge and its generalizations. This talk is about recent combinatorial results on $q$-chromatic symmetric functions: (1) a complete description of the linear relations between $q$-CSFs; (2) a description of a large class of such linear relations which express a $q$-CSF as a positive combinations of other $q$-CSFs; and (3) a slight generalization of the $e$-positivity conjecture suggested by these results. (TCPL 201) 17:30 - 19:30 Dinner A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) Tuesday, October 23 07:00 - 09:00 Breakfast (Vistas Dining Room) 09:00 - 10:00 Martha Precup: The Betti numbers of Hessenberg varieties This talk will begin with a survey of results describing the Betti numbers of Hessenberg varieties. There are many geometric and combinatorial applications of these formulas. In the second half of the talk, I will report on recent joint work with M. Harada in which we prove an inductive formula for the Betti numbers of certain regular Hessenberg varieties called abelian Hessenberg varieties. Using a theorem of Brosnan and Chow, this formula yields a proof of the Stanley-Stembridge conjecture for this special case. (TCPL 202) 10:00 - 10:30 Coffee Break (TCPL Foyer) 10:30 - 11:30 Erik Insko: Singularities of Hessenberg varieties Hessenberg varieties are subvarieties of the flag variety with important connections to representation theory, algebraic geometry, and combinatorics. The local geometric structure of Hessenberg varieties can often be studied using cell decompositions, group actions, patch ideals, and the combinatorics of the symmetric group. In this talk, we will give a survey of results regarding the singularities of Hessenberg varieties. This is based on joint works with Alex Yong and Martha Precup. (TCPL 202) 11:30 - 13:30 Lunch (Vistas Dining Room) 13:15 - 14:15 Mikiya Masuda: The cohomology rings of regular semisimple Hessenberg varieties for $h=(h(1),n,\ldots,n)$ I will talk about an explicit ring presentation of the cohomology ring of the regular semisimple Hessenberg variety in the flag variety $Fl(\mathbb{C}^n)$ with Hessenberg function $h=(h(1),n,\dots,n)$ where $h(1)$ is an arbitrary integer between $2$ and $n$. This is joint work with Hiraku Abe and Tatsuya Horiguchi. (TCPL 201) 14:30 - 15:30 Tatsuya Horiguchi: The cohomology rings of regular nilpotent Hessenberg varieties and Schubert polynomials A regular nilpotent Hessenberg variety Hess(N,h) is a subvariety of a flag variety determined by a Hessenberg function h. I will explain a relation between the cohomology ring of a regular nilpotent Hessenberg variety and Schubert polynomials. To describe an explicit presentation of the cohomology ring of a regular nilpotent Hessenberg variety, polynomials $f_{i,j}$ were introduced by Abe-Harada-Horiguchi-Masuda. In this talk, I will show that every polynomial $f_{i,j}$ is an alternating sum of certain Schubert polynomials $\mathfrak{S}_{w_k^{(i,j)}}$ (k=1,2,...,i-j). Moreover, we can interpret the permutations $w_k^{(i,j)}$ from a geometric viewpoint under the circumstances of having a codimension one regular nilpotent Hessenberg variety Hess(N,h′) in the original regular nilpotent Hessenberg variety Hess(N,h) where the Hessenberg function h' is obtained from the original Hessenberg function h and (i,j). (TCPL 201) 15:15 - 15:45 Coffee Break (TCPL Foyer) 16:00 - 17:00 James Carrell: Cohomology algebras of varieties with a 'regular' $(\mathbb{G}_a,\mathbb{G}_m)$-action This is an expository talk on "regular $(\mathbb{G}_a,\mathbb{G}_m)$-actions" on projective varieties (over the complexes). A $(\mathbb{G}_a,\mathbb{G}_m)$-action is the same thing as an action of a Borel subgroup of $\mathrm{SL}(2)$. We call a $(\mathbb{G}_a,\mathbb{G}_m)$-action regular when the maximal unipotent subgroup $\mathbb{G}_a$ has a unique fixed point. There are lots of examples of varieties with a regular $(\mathbb{G}_a,\mathbb{G}_m)$-action such as Schubert varieties. Of particular interest are regular nilpotent Hessenberg varieties. The main point is that the cohomology algebra of a smooth projective variety X with $(\mathbb{G}_a,\mathbb{G}_m)$-action is isomorphic with the fixed point scheme of the $\mathbb{G}_a$ action, and this is also true of many singular $(\mathbb{G}_a,\mathbb{G}_m)$-stable subvarieties such as the two classes mentioned above. The $\mathbb{G}_m$-equivariant cohomology admits a nice description which I will also explain. (TCPL 201) 17:30 - 19:30 Dinner (Vistas Dining Room) Wednesday, October 24 07:00 - 09:00 Breakfast (Vistas Dining Room) 09:00 - 10:00 Andy Wilson: Macdonald polynomials and chromatic functions I will discuss how chromatic quasisymmetric functions can be viewed as modifications of LLT polynomials, certain building blocks from the combinatorial theory of Macdonald polynomials. I will also explain how Macdonald polynomials enter the picture, and what I hope can be gained from these relationships. Based on joint work with Jim Haglund. (TCPL 202) 10:00 - 10:15 Coffee Break (TCPL Foyer) 10:15 - 10:45 Jim Haglund: The $q \to q+1$ phenomenon LLT polynomials depend on a tuple of skew shapes and a parameter q. They occur in formulas for Macdonald polynomials, where the skew shapes are ribbons, and in character formulas for Sn modules such as the diagonal coinvariant ring. LLT polynomials are known to be Schur positive, but combinaotrial formulas for the Schur coefficients are known only for special cases. Several researchers including Alexandersson, Bergeron, and Garsia have noticed independently that when q is replaced by q+1, LLT polynomials often become e-positive, and it has been conjectured that for tuples of vertical strips, this is always the case. In this talk we discuss some partial results on the e-expansion of LLT polynomials corresponding to Dyck paths which rise in the character of the diagonal coinvariant ring; in particular we discuss a conjecture for the coefficient of e_n in their e-expansion. (TCPL 202) 11:00 - 12:00 Mark Skandera: Trace functions, the Kazhdan-Lusztig basis, and chromatic quasisymmetric functions When G is the indifference graph of a natural unit interval order, the Shareshian-Wachs chromatic quasisymmetric function is symmetric, and serves as a generating function for Hecke algebra trace evaluations. In particular, every coefficient appearing in every common symmetric function expansion is the evaluation of a common Hecke algebra trace at a modified Kazhdan-Lusztig basis element indexed by a 3412-, 4231- avoiding permutation. We will discuss various combinatorial interpretations of these, and related ideas in generating functions. (TCPL 202) 12:00 - 13:30 Lunch (Vistas Dining Room) 13:30 - 17:30 Free Afternoon (Banff National Park) 17:30 - 19:30 Dinner (Vistas Dining Room) Thursday, October 25 07:00 - 09:00 Breakfast (Vistas Dining Room) 09:00 - 10:00 Satoshi Murai: Hessenbergs and hyperplane arrangements part I In this talk, I will show that there is an interesting connection between cohomology rings of regular nilpotent Hessenberg varieties and logarithmic derivation modules of hyperplane arrangements. In particular, I will explain that this connection gives an affirmative answer to a conjecture of Sommers and Tymoczko on Poincare polynomials of regular nilpotent Hessenberg varieties. If time allows, I also explain how this connection can be applied to compute relations of cohomology rings of regular nilpotent Hessenberg varieties explicitly. (TCPL 202) 10:00 - 10:30 Coffee Break (TCPL Foyer) 10:30 - 11:30 Takuro Abe: Hessenbergs and hyperplane arrangements part II A hyperplane arrangement is a finite set of linear hyperplanes in the complex vector space. From each hyperplane arrangement and arbitrary degree two homogeneous polynomial, we can define a finite dimensional algebra, called the Solomon-Terao algebra. If we choose an arrangement the ideal arrangement corresponding to the reflecting hyperplanes belonging to the lower ideal in the root poset (which corresponds to a Hessenberg space), and the lowest degree basic invariant polynomial which is always degree two, then their Solomon-Terao algebra coincides with the cohomology ring of the regular nilpotent Hessenberg variety, as shown in the talk by Murai. We discuss the origin and properties of the Solomon-Terao algebra coming from so called the mysterious Solomon-Terao formula and polynomials by Luis Solomon and Hiroaki Terao in 1986. We discuss also the possibility whether it could be a cohomology ring of some other varieties, e.g., Schubert varieties. This talk is based on the joint work with T. Maeno, S. Murai and Y. Numata. (TCPL 202) 11:30 - 13:30 Lunch (Vistas Dining Room) 14:00 - 15:00 Peter Crooks: Integrable systems on families of Hessenberg varieties Kostant's invariant-theoretic version of the Toda lattice has given rise to a powerful synergy of ideas from symplectic geometry, algebraic geometry, and representation theory. An example is the appearance of this Kostant-Toda lattice in calculations related to the quantum cohomology of the flag variety. It is in this setting that one compactifies the leaves of the Kostant-Toda lattice, thereby obtaining a certain class of Hessenberg varieties. One might then expect that the Kostant-Toda lattice can be defined on (the total space of) a family of Hessenberg varieties. I will show this to be the case, emphasizing the roles played by Slodowy slices and Mishchenko-Fomenko theory. This represents joint work with Hiraku Abe. (TCPL 201) 15:00 - 15:30 Coffee Break (TCPL Foyer) 15:30 - 16:30 Ting Xue: Springer correspondence and Hessenberg varieties The Springer theory for reductive algebraic groups plays an important role in representation theory. It relates nilpotent orbits in the Lie algebra to irreducible representations of the Weyl group. We develop a Springer theory in the case of symmetric spaces using Fourier transform, which relates nilpotent orbits in this setting to irreducible representations of Hecke algebras with parameter -1. As an application we explain a general strategy for computing the cohomology of Hessenberg varieties. Examples of such varieties include classical objects in algebraic geometry: Jacobians, moduli spaces of vector bundles on curves with extra structure, and Fano varieties of linear subspaces in the intersection of two quadrics, etc. The talk is based on joint work with Tsao-Hsien Chen and Kari Vilonen. (TCPL 201) 17:30 - 19:30 Dinner (Vistas Dining Room) Friday, October 26 07:00 - 09:00 Breakfast (Vistas Dining Room) 10:00 - 10:30 Coffee Break (TCPL Foyer) 11:30 - 12:00 Checkout by Noon 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon. (Front Desk - Professional Development Centre) 12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
2019-04-22 22:58:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47404199838638306, "perplexity": 1348.375476543683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582736.31/warc/CC-MAIN-20190422215211-20190423001211-00418.warc.gz"}
https://itprospt.com/num/5596630/implicit-differentiation-to-find-y-x27-cos-xy-x-quot-yy
5 # Implicit differentiation to find y'.cos (xy )=x"_yy... ## Question ###### Implicit differentiation to find y'.cos (xy )=x"_yy implicit differentiation to find y'. cos (xy )=x"_y y #### Similar Solved Questions ##### (14 pts:) Let 9 be the curve in the Iy-plane given by V = Vz 0 <I <1. The curve 9t is rotated about the I-axis . Compute the resulting surface area (14 pts:) Let 9 be the curve in the Iy-plane given by V = Vz 0 <I <1. The curve 9t is rotated about the I-axis . Compute the resulting surface area... ##### Consider the matrix(a) Perform row operations to bring into row echelon form:(b) Answer the following questions. Find basis for the row space of and basis for its column space_ Find basis for the null space of A_ Verify the rank theorem for this matrix, carefully explaining your reasoning: Consider the matrix (a) Perform row operations to bring into row echelon form: (b) Answer the following questions. Find basis for the row space of and basis for its column space_ Find basis for the null space of A_ Verify the rank theorem for this matrix, carefully explaining your reasoning:... ##### 3 - Corrosion kinetics in acidic solutions shows the rate of corrosion of Fe is more than Zn, although the potential reduction of jron is more than Zinc: Why? 3 - Corrosion kinetics in acidic solutions shows the rate of corrosion of Fe is more than Zn, although the potential reduction of jron is more than Zinc: Why?... ##### Ji 1 [ 1 1 {1rmo Oin?]1 2 opCron 1 1 1 Suluazua 1 Idwy? (on / otfl 1 1 acuvolon (tryptophan , 1 3 Close tryptophanl idisrandam poaitive) nabolic inducible culabolicnoucibie rynaboncicanbolic] Ji 1 [ 1 1 { 1 rmo Oin? ] 1 2 opCron 1 1 1 Suluazua 1 Idwy? (on / otfl 1 1 acuvolon (tryptophan , 1 3 Close tryptophanl idisrandam poaitive) nabolic inducible culabolic noucibie rynaboncicanbolic]... ##### 012vuIq delandetaaWeclnTit andla Uf Kicidetve muruMnara ! enuuah intatin Blae unenttet Ditne enpad bi W Iedarc a Iatr HuralaInatheE anele TANkb anala pt uicidenct muil aquj [O 0 cnacoiualc 012vuIq delan detaa Wecln Tit andla Uf Kicidetve muru Mnara ! enuuah intatin Blae unenttet Ditne enpad bi W Iedarc a Iatr HuralaInatheE anele TANkb anala pt uicidenct muil aquj [O 0 cnacoiualc... ##### Put any units j | In your answer; J -just Bas the U cooibors alm How1 put any units j | In your answer; J -just Bas the U cooibors alm How 1... ##### A Sft tall S0kg gymnast is walking along 7m long solid wooden beam supported but two legs on either side set Im from the ends. If she is carrying a hollow metal cylinder of uniform density 2.7 g/cm? and 1m length under her arm such that 20% of its length is in front of her center of mass how close to the edge can she walk before equilibrium is lost? The inner radius of the cylinder is 3cm and the outer radius is Zcm. A Sft tall S0kg gymnast is walking along 7m long solid wooden beam supported but two legs on either side set Im from the ends. If she is carrying a hollow metal cylinder of uniform density 2.7 g/cm? and 1m length under her arm such that 20% of its length is in front of her center of mass how close t...
2022-10-05 18:12:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8263920545578003, "perplexity": 9285.036781629453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00294.warc.gz"}
https://robertovitillo.com/2016/07/29/differential-privacy-for-dummies/
# Differential Privacy for Dummies Technology allows companies to collect more data and with more detail about their users than ever before. Sometimes that data is sold to third parties, other times it’s used to improve products and services. In order to protect users’ privacy, anonymization techniques can be used to strip away any piece of personally identifiable data and let analysts access only what’s strictly necessary. As the Netflix competition in 2007 has shown though, that can go awry. The richness of data allows to identify users through a sometimes surprising combination of variables like the dates on which an individual watched certain movies. A simple join between an anonymized datasets and one of many publicly available, non-anonymized ones, can re-identify anonymized data. Aggregated data is not much safer either under some circumstances! For example, say we have two summary statistics: one is the number of users, including Frank, that watch one movie per day and the other is the number of users, without Frank, that watch one movie per day. Then, by comparing the counts, we could tell if Frank watches one movie per day. #### Differential Privacy to the rescue Differential privacy formalizes the idea that a query should not reveal whether any one person is present in a dataset, much less what their data are. Imagine two otherwise identical datasets, one with your information in it, and one without it. Differential Privacy ensures that the probability that a query will produce a given result is nearly the same whether it’s conducted on the first or second dataset. The idea is that if an individual’s data doesn’t significantly affect the outcome of a query, then he might be OK in giving his information up as it is unlikely that the information would be tied back to him. The result of the query can damage an individual regardless of his presence in a dataset though. For example, if an analysis on a medical dataset finds a correlation between lung cancer and smoking, then the health insurance cost for a particular smoker might increase regardless of his presence in the study. More formally, differential privacy requires that the probability of a query producing any given output changes by at most a multiplicative factor when a record (e.g. an individual) is added or removed from the input. The largest multiplicative factor quantifies the amount of privacy difference. This sounds harder than it actually is and the next sections will iterate on the concept with various examples, but first we need to define a few terms. Dataset We will think of a dataset $d$ as being a collection of records from an universe $U$. One way to represent a dataset $d$ is with a histogram in which each entry $d_i$ represents the number of elements in the dataset equal to $u_i \in U$. For example, say we collected data about coin flips of three individuals, then given the universe $U = \{head, tail\}$, our dataset $d$ would have two entries: $d_{head} = i$ and $d_{tail} = j$, where $i + j = 3$. Note that in reality a dataset is likely to be an ordered lists of rows (i.e. a table) but the former representation makes the math a tad easier. Distance Given the previous definition of dataset, we can define the distance between two datasets $x, y$ with the $l_1$ norm as: $\displaystyle ||x - y||_1 = \sum\limits_{i=1}^{|U|} |x_i - y_i|$ Mechanism A mechanism is an algorithm that takes as input a dataset and returns an output, so it can really be anything, like a number, a statistical model or some aggregate. Using the previous coin-flipping example, if mechanism $C$ counts the number of individuals in the dataset, then $C(d) = 3$. In practice though we will specifically considering randomized mechanisms, where the randomization is used to add privacy protection. Differential Privacy A mechanism $M$ satisfies $\epsilon$ differential privacy if for every pair of datasets $x, y$ such that $||x - y||_1 \leq 1$, and for every subset $S \subseteq \text{Range}(M)$: $\displaystyle \frac{Pr[M(x) \in S]}{Pr[M(y) \in S]} \leq e^{\epsilon}$ What’s important to understand is that the previous statement is just a definition. The  definition  is  not  an  algorithm,  but  merely  a  condition that must be satisfied by a mechanism to claim that it satisfies $\epsilon$ differential privacy. Differential privacy allows researchers to use a common framework to study algorithms and compare their privacy guarantees. Let’s check if our mechanism $C$ satisfies $1$ differential privacy. Can we find a counter-example for which: $\displaystyle \frac{Pr[C(x) \in S]}{Pr[C(y) \in S]} \leq e$ is false? Given $x, y$ such that $||x - y||_1 = 1$ and $||x||_1 = k$, then: $\displaystyle \frac{Pr[C(x) = k]}{Pr[C(y) = k]} \leq e$ i.e. $\frac{1}{0} \leq e$, which is clearly false, hence this proves that mechanism $C$ doesn’t satisfy $1$ differential privacy. Composition theorems A powerful property of differential privacy is that mechanisms can easily be composed. These require the key assumption that the mechanisms operate independently given the data. Let $d$ be a dataset and $g$ an arbitrary function. Then, the sequential composition theorem asserts that if $M_I(d)$ is $\epsilon_i$ differentially private, then $M(d) = g(M_1(d), M_2(d), ..., M_N(d))$ is $\sum_{i=1}^{N}\epsilon_i$ differentially private. Intuitively this means that given an overall fixed privacy budget, the more mechanisms are applied to the same dataset, the more the available privacy budget for each individual mechanism will decrease. The parallel composition theorem asserts that given $N$ partitions of a dataset $d$, if for an arbitrary partition $d_i$, $M_i(d_i)$ is $\epsilon$ differentially private, then $M(d) = g(M_1(d_1), M_2(d_2), ..., M_N(d_N))$ is $\epsilon$ differentially private. In other words, if a set of $\epsilon$ differentially private mechanisms is applied to a set of disjoint subsets of a dataset, then the combined mechanism is still $\epsilon$ differentially private. #### The randomized response mechanism The first mechanism we will look into is “randomized response”, a technique developed in the sixties by social scientists to collect data about embarrassing or illegal behavior. The study participants have to answer a yes-no question in secret using the following mechanism $M_R(d, \alpha, \beta)$: 1. Flip a biased coin with probability of heads $\alpha$; 2. If heads, then answer truthfully with $d$; 3. If tails, flip a coin with probability of heads $\beta$ and answer “yes” for heads and “no” for tails. In code: def randomized_response_mechanism(d, alpha, beta): if random() < alpha: return d elif random() < beta: return 1 else: return 0 Privacy is guaranteed by the noise added to the answers. For example, when the question refers to some illegal activity, answering “yes” is not incriminating as the answer occurs with a non-negligible probability whether or not it reflects reality, assuming $\alpha$ and $\beta$ are tuned properly. Let’s try to estimate the proportion $p$ of participants that have answered “yes”. Each participant can be modeled with a Bernoulli variable $X_i$ which takes a value of 0 for “no” and a value of 1 for “yes”. We know that: $\displaystyle P(X_i = 1) = \alpha p + (1 - \alpha) \beta$ Solving for $p$ yields: $\displaystyle p = \frac{P(X_i = 1) - (1 - \alpha) \beta}{\alpha}$ Given a sample of size $n$, we can estimate $P(X_i = 1)$ with $\frac{\sum_{i=1}^{i=n} X_i}{n}$.  Then, the estimate $\hat{p}$ of $p$ is: $\displaystyle \hat{p} = \frac{\frac{\sum\limits_{i=1}^{i=n} X_i}{n} - (1 - \alpha) \beta}{\alpha}$ To determine how accurate our estimate is we will need to compute its standard deviation. Assuming the individual responses $X_i$ are independent, and using basic properties of the variance, $\displaystyle \mathrm{Var}(\hat{p}) = \mathrm{Var}\biggl({\frac{\sum_{i=1}^{i=n} X_i}{n \alpha}}\biggr) = {\frac{\mathrm{Var}(X_i)}{n \alpha^2}}$ By taking the square root of the variance we can determine the standard deviation of $\hat{p}$. It follows that the standard deviation $s$ is proportional to $\frac{1}{\sqrt{n}}$, since the other factors are not dependent on the number of participants. Multiplying both $\hat{p}$ and $s$ by $n$ yields the estimate of the number of participants that answered “yes” and its relative accuracy expressed in number of participants, which is proportional to $\sqrt{n}$. The next step is to determine the level of privacy that the randomized response method guarantees. Let’s pick an arbitrary participant. The dataset $d$ is represented with either 0 or 1 depending on whether the participant answered truthfully with a “no” or “yes”. Let’s call the two possible configurations of the dataset respectively $d_{no}$ and $d_{yes}$. We also know that $||d_i - d_j||_1 \leq 1$ for any $i, j$. All that’s left to do is to apply the definition of differential privacy to our randomized response mechanism $M_R(d, \alpha, \beta)$: $\displaystyle \frac{Pr[M_R(d_{i}, \alpha, \beta) = k]}{Pr[M_R(d_{j}, \alpha, \beta) = k]} \leq e^{\epsilon}$ The definition of differential privacy applies to all possible configurations of $i, j \text{ and } k$, e.g.: $\displaystyle \begin{gathered}\frac{Pr[M_R(d_{yes}, \alpha, \beta) = 1]}{Pr[M_R(d_{no}, \alpha, \beta) = 1]} \leq e^{\epsilon} \\\ln \left( {\frac{\alpha + (1 - \alpha)\beta}{1 - (\alpha + (1 - \alpha)\beta)}} \right) \leq \epsilon \\\ln \left( \frac{\alpha + \beta - \alpha \beta}{1 - (\alpha + \beta - \alpha \beta)} \right) \leq \epsilon\end{gathered}$ The privacy parameter $\epsilon$ can be tuned by varying $\alpha \text{ and } \beta$. For example, it can be shown that the randomized response mechanism with $\alpha = \frac{1}{2}$ and $\beta = \frac{1}{2}$ satisfies $\ln{3}$ differential privacy. The proof applies to a dataset that contains only the data of a single participant, so how does this mechanism scale with multiple participants? It follows from the parallel composition theorem that the combination of $\epsilon$ differentially private mechanisms applied to the datasets of the individual participants is $\epsilon$ differentially private as well. #### The Laplace mechanism The Laplace mechanism is used to privatize a numeric query. For simplicity we are going to assume that we are only interested in counting queries $f$, i.e. queries that count individuals, hence we can make the assumption that adding or removing an individual will affect the result of the query by at most 1. The way the Laplace mechanism works is by perturbing a counting query $f$ with noise distributed according to a Laplace distribution centered at 0 with scale $b = \frac{1}{\epsilon}$, $\displaystyle Lap(x | b) = \frac{1}{2b}e^{- \frac{|x|}{b}}$ Then, the Laplace mechanism is defined as: $\displaystyle M_L(x, f, \epsilon) = f(x) + Z$ where $Z$ is a random variable drawn from $Lap(\frac{1}{\epsilon})$. In code: def laplace_mechanism(data, f, eps): return f(data) + laplace(0, 1.0/eps) It can be shown that the mechanism preserves $\epsilon$ differential privacy. Given two datasets $x, y$ such that $||x - y||_1 \leq 1$ and a function $f$ which returns a real number from a dataset, let $p_x$ denote the probability density function of $M_L(x, f, \epsilon)$ and $p_y$ the probability density function of $M_L(y, f, \epsilon)$. Given an arbitrary real point $k$, $\displaystyle \frac{p_x(k)}{p_y(k)} = \frac{e^{- \epsilon |f(x) - k|}}{e^{- \epsilon |f(y) - k|}} =$ $\displaystyle e^{\epsilon (|f(x) - k| - |f(y) - k|)} \leq$ $\displaystyle e^{\epsilon |f(x) - f(y)|}$ by the triangle inequality. Then, $\displaystyle e^{\epsilon |f(x) - f(y)|} \leq e^\epsilon$ What about the accuracy of the Laplace mechanism? From the cumulative distribution function of the Laplace distribution it follows that if $Z \sim Lap(b)$, then $Pr[|Z| \ge t \times b] = e^{-t}$. Hence, let $k = M_L(x, f, \epsilon)$ and $\forall \delta \in (0, 1]$: $\displaystyle Pr \left[|f(x) - k| \ge \ln{\left (\frac{1}{\delta} \right)} \times \frac{1}{\epsilon} \right] =$ $\displaystyle Pr \left[|Z| \ge \ln{\left (\frac{1}{\delta} \right)} \times \frac{1}{\epsilon} \right] = \delta$ where $Z \sim Lap(\frac{1}{\epsilon})$. The previous equation sets a probalistic bound to the accuracy of the Laplace mechanism that, unlike the randomized response, does not depend on the number of participants $n$. #### Counting queries The same query can be answered by different mechanisms with the same level of differential privacy. Not all mechanisms are born equally though; performance and accuracy have to be taken into account when deciding which mechanism to pick. As a concrete example, let’s say there are $n$ individuals and we want to implement a query that counts how many possess a certain property $p$. Each individual can be represented with a Bernoulli random variable: participants = binomial(1, p, n) We will implement the query using both the randomized response mechanism $M_R(d, \frac{1}{2}, \frac{1}{2})$, which we know by now to satisfy $\ln{3}$ differential privacy, and the Laplace mechanism $M_L(d, f, \ln{3})$ which satisfies $\ln{3}$ differential privacy as well. def randomized_response_count(data, alpha, beta): randomized_data = randomized_response_mechanism(data, alpha, beta) return len(data) * (randomized_data.mean() - (1 - alpha)*beta)/alpha def laplace_count(data, eps): return laplace_mechanism(data, np.sum, eps) r = randomized_response_count(participants, 0.5, 0.5) l = laplace_count(participants, log(3)) Note that while that while $M_R$ is applied to each individual response and later combined in a single result, i.e. the estimated count, $M_L$ is applied directly to the count, which is intuitively why $M_R$ is noisier than $M_L$. How much noisier?  We can easily simulate the distribution of the accuracy for both mechanisms with: def randomized_response_accuracy_simulation(data, alpha, beta, n_samples=1000): return [randomized_response_count(data, alpha, beta) - data.sum() for _ in range(n_samples)] def laplace_accuracy_simulation(data, eps, n_samples=1000): return [laplace_count(data, eps) - data.sum() for _ in range(n_samples)] r_d = randomized_response_accuracy_simulation(participants, 0.5, 0.5) l_d = laplace_accuracy_simulation(participants, log(3)) As mentioned earlier, the accuracy of $M_R$ grows with the square root of the number of participants: while the accuracy of $M_L$ is a constant: You might wonder why one would use the randomized response mechanism if it’s worse in terms of accuracy compared to the Laplace one. The thing about the Laplace mechanism is that the private data about the users has to be collected and stored, as the noise is applied to the aggregated data. So even with the best of intentions there is the remote possibility that an attacker might get access to it.  The randomized response mechanism though applies the noise directly to the individual responses of the users and so only the perturbed responses are collected! With the latter mechanism any individual’s information cannot be learned with certainty, but an aggregator can still infer population statistics. That said, the choice of mechanism is ultimately a question of which entities to trust. In the medical world, one may trust the data collectors (e.g. researchers), but not the general community who will be accessing the data. Thus one collects the private data in the clear, but then derivatives of it are released on request with protections. However, in the online world, the user is generally looking to protect their data from the data collector itself, and so there is a need to prevent the data collector from ever accumulating the full dataset in the clear. #### Real world use-cases The algorithms presented in this post can be used to answer simple counting queries. There are many more mechanisms out there used to implement complex statistical procedures like machine learning models. The concept behind them is the same though: there is a certain function that needs to be computed over a dataset in a privacy preserving manner and noise is used to mask an individual’s original data values. One such mechanism is RAPPOR, an approach pioneered by Google to collect frequencies of an arbitrary set of strings. The idea behind it is to collect vectors of bits from users where each bit is perturbed with the randomized response mechanism.  The bit-vector might represent a set of binary answers to a group of questions, a value from a known dictionary or, more interestingly, a generic string encoded through a Bloom filter. The bit-vectors are aggregated and the expected count for each bit is computed in a similar way as shown previously in this post. Then, a statistical model is fit to estimate the frequency of a candidate set of known strings. The main drawback with this approach is that it requires a known dictionary. Later on the approach has been improved to infer the collected strings without the need of a known dictionary at the cost of accuracy and performance. To give you an idea, to estimate a distribution  over  an  unknown  dictionary  of  6-letter  strings without knowing  the  dictionary,  in  the  worst  case,  a sample size in  the  order  of  300  million  is required; the sample size grows quickly as the length of the strings increases. That said, the mechanism consistently finds the most frequent strings which enable to learn the dominant trends of a population. Even though the theoretical frontier of differential privacy is expanding quickly there are only a handful implementations out there that, by ensuring  privacy without the need for a trusted third party like RAPPOR, suit well the kind of data collection schemes commonly used in the software industry. References
2017-06-22 16:27:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 129, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4010665714740753, "perplexity": 539.7550608924126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319636.73/warc/CC-MAIN-20170622161445-20170622181445-00602.warc.gz"}
http://openstudy.com/updates/4f1768abe4b0aeb795f570b4
## anonymous 4 years ago Find the domain of the following function. State your answer in interval notation. Please show all of your work f(x)= -10x-3 ------------- Find the domain of the following function. State your answer in interval notation. Please show all of your work f(x)= -10x-3 ----------- √-6x-9+x 1. anonymous does posting a link to openstudy.com count as showing all your work? 2. anonymous the domain does not include places where the square root is negative, and it also skips points where the denominator evaluates to 0. otherwise all the x-values are fair game. 3. anonymous $\sqrt{(-6x-9)}$ means -6x-9 cannot be negative $\ge 0$ add 9 to both sides, and divide by -6 x $\le$ -3/2 The whole denominator cannot = 0 $\sqrt{(-6x-9)} +x$ = 0 subtract x on both sides $\sqrt{(-6x-9)} =-x square both sides -6x-9 = x^2 Get equation = 0 x^2 + 6x + 9 = 0 Factor and solve (x + 3)(x + 3) = 0 x+3 = 0 x = -3 Therefore x is any real number except -3. (-\[\infty$, -3) $\cup$ (-3, $\infty$ )
2016-10-26 04:27:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5346988439559937, "perplexity": 881.3913104663874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720615.90/warc/CC-MAIN-20161020183840-00496-ip-10-171-6-4.ec2.internal.warc.gz"}
https://kr.mathworks.com/help/lte/ref/ltepdcchindices.html
ltePDCCHIndices PDCCH resource element indices Description example ind = ltePDCCHIndices(enb) returns a NRE-by-CellRefP matrix of one-based linear indexing RE indices given the structure enb. It returns the subframe resource element (RE) indices for the physical downlink control channels (PDCCH). The NRE indices returned cover all PDCCH resources in the control region not already assigned to PCFICH or PHICH (see ltePDCCHInfo). They are ordered as the complete block of padded, interleaved, and shifted PDCCH modulation symbols that ready to be mapped, as described in TS 36.211 [1], Section 6.8.5. ind = ltePDCCHIndices(enb,opts) formats the returned indices using options specified by opts. example ind = ltePDCCHIndices(enb,exreg,opts) returns a matrix of indices, where the vector exreg explicitly defines resources not to be assigned to PDCCH. The exreg must contain valid resource element group (REG) indices but can be either zero-based or one-based throughout, and indices which do not fall within the control region are ignored. Examples collapse all Retrieve PDCCH resource element (RE) indices. Create an RMC R.0 configuration structure and find its PDCCH RE indices. Display the size of the indices. enb = lteRMCDL('R.0'); ind = ltePDCCHIndices(enb); size(ind) ans = 1×2 452 1 Explicitly exclude resources when retrieving PDCCH indices. Create a cell-wide configuration structure initialized for RMC R.0. Generate RE indices for the PDCCH providing an empty matrix for the argument exreg so that no resources are excluded. enb = lteRMCDL('R.0'); ind = ltePDCCHIndices(enb,[],'re'); numPDCCHwithNoExclusion = size(ind) numPDCCHwithNoExclusion = 1×2 480 1 All RE indices are returned in the required mapping order. Explicitly exclude the PCFICH and PHICH indices. enb = lteRMCDL('R.0'); exreg = [ltePCFICHIndices(enb,'reg'); ltePHICHIndices(enb,'reg')]; ind = ltePDCCHIndices(enb,exreg,'re'); numPDCCHwithExclusion = size(ind) numPDCCHwithExclusion = 1×2 452 1 This call returns the same result as the default syntax call, ltePDCCHIndices(enb). Input Arguments collapse all enb is a structure having the following fields. Parameter FieldRequired or OptionalValuesDescription NDLRBRequired Integer within the range (6,...,110) Number of downlink resource blocks. (${N}_{\text{RB}}^{\text{DL}}$) NCellIDRequired Integer from 0 to 503 Physical layer cell identity CyclicPrefixOptional 'Normal' (default), 'Extended' Cyclic prefix length CellRefPRequired 1, 2, 4 Number of cell-specific reference signal (CRS) antenna ports CFIRequired 1, 2, or 3 Control format indicator value NgRequired 'Sixth', 'Half', 'One', 'Two' HICH group multiplier PHICHDurationOptional 'Normal' (default), 'Extended' PHICH duration DuplexModeOptional 'FDD' (default), 'TDD' Duplexing mode, specified as: • 'FDD' for Frequency Division Duplex or • 'TDD' for Time Division Duplex The following field is required when DuplexMode is set to 'TDD'. TDDConfigOptional 0, 1 (default), 2, 3, 4, 5, 6 NSubframeRequired Integer greater than 0 Subframe number Data Types: struct Output format, base, and unit of generated indices, specified as one of these forms. • 'format base unit' • "format base unit" • {'format','base','unit'} • ["format","base","unit"] Where format, base, and unit are defined in this table. Option Values Description format 'ind' (default), 'sub' Output format of generated indicesTo return the indices as a column vector, specify this option as 'ind'.To return the indices as an NRE-by-3 matrix, where NRE is the number of REs, specify this option as 'sub'. Each row of the matrix contains the subcarrier, symbol, and antenna port as its first, second, and third element, respectively. base '1based' (default), '0based' Index baseTo generate indices whose first value is 1, specify this option as '1based'. To generate indices whose first value is 0, specify this option as '0based'. unit 're' (default), 'reg' Unit of returned indicesTo indicate that the returned values correspond to individual resource elements (REs), specify this option as 're'. To indicate that the returned values correspond to resource element groups (REGs), specify this option as 'reg'. Example: 'ind 0based reg', "ind 0based reg", {'ind','0based','reg'}, and ["ind","0based","reg"] specify the same output options. Data Types: char | string | cell Resources excluded from PDCCH, specified as a vector. This vector explicitly defines those resources not to be assigned to PDCCH. exreg must contain valid resource element group (REG) indices but can be either zero-based or one-based throughout. Indices which do not fall within the control region are ignored. Data Types: double Output Arguments collapse all PDCCH RE indices, returned as an NRE-by-CellRefP numeric matrix by default. The matrix contains one-based linear indexing RE indices. Each column of ind identifies the same set of NRE subframe resource elements but with indices offset to select them in a different antenna “page” of the 3-D resource array. The default matrix of indices in a one-based linear indexing style which can directly index elements of an M-by-N-by-CellRefP array, where M is the number of symbols, and N is the number of subcarriers representing the subframe grid across CellRefP antenna ports. Data Types: double References [1] 3GPP TS 36.211. “Physical Channels and Modulation.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA). URL: https://www.3gpp.org.
2021-11-28 11:50:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191715598106384, "perplexity": 14618.37357087953}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00172.warc.gz"}
https://www.trustudies.com/ncert-solutions/class-10/maths/circles/
# NCERT Solutions for Class 10 Maths Chapter 10 Circles Written by Team Trustudies Updated at 2021-05-07 ## NCERT solutions for class 10 Maths Chapter 10 Circles Exercise - 10.1 Q1 ) How many tangents can a circle have ? NCERT Solutions for Class 10 Maths Chapter 10 Circles A circle can have an infinite number of tangents. Q2 ) Fill in the blanks : (i) A tangent to a circle intersects it in _________ point(s). (ii) A line intersecting a circle in two points is called a _______. (iii) A circle can have _______ parallel tangents at the most. (iv) The common point of a tangent to a circle and the circle is called ______. NCERT Solutions for Class 10 Maths Chapter 10 Circles (i) exactly one (ii) secant (iii) two (iv) point of contact Q3 ) A tangent PQ at a point P of a circle of radius 5 cm meets a line through the centre O at a point Q so that OQ = 12 cm. Length of PQ is : (A) $12$ cm (B) $13$ cm (C) $8.5$ cm (D) $\sqrt{119}$ cm NCERT Solutions for Class 10 Maths Chapter 10 Circles cm So, (D) option is correct Q4 ) Draw a circle and two lines parallel to a given line such that one is tangent and the other, a secant to the circle. NCERT Solutions for Class 10 Maths Chapter 10 Circles ## NCERT solutions for class 10 Maths Chapter 10 Circles Exercise - 10.2 Q1 ) From a point Q, the length of the tangent to a circle is 24 cm and the distance of Q from the centre is 25 cm. The radius of the circle is (A) $7$ cm (B) $12$ cm (C) $15$ cm (D) $24.5$ cm NCERT Solutions for Class 10 Maths Chapter 10 Circles Since QT is a tangent to the circle at T and OT is radius, It is given that OQ = 25 cm and QT = 24 cm, $?$ By Pythagoras theorem, we have $?$ radius of the circle is 7 cm $?$ Option (A) is correct. Q2 ) In figure, if TP and TQ are the two tangents to a circle with centre O so that , then is equal to (A) $60°$ (B) $70°$ (C) $80°$ (D) $90°$ NCERT Solutions for Class 10 Maths Chapter 10 Circles Since TP and TQ are tangents to a circle with centre O so that , $?$ and and In the quadrilateral TPOQ, we have [ \therefore Sum of all angles in a quadrilateral = 360° ] $?$ Option (B) is correct. Q3 ) If tangents PA and PB from a point P to a circle with centre O are inclined to each other at angle of 80°, then is equal to (A) $50°$ (B) $60°$ (C) $70°$ (D) $80°$ NCERT Solutions for Class 10 Maths Chapter 10 Circles Since PA and PB are tangents to a circle with centre O, , and and In the quadrilateral PAOB, we have In and $OBP$, we have [Common] [Each = 90°] $?$ ( By SAS Criterion) [C.P.C.T.] $?$ ? The (A) is the correct option. Q4 ) Prove that the tangents drawn at the ends of a diameter of a circle are parallel. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let PQ be a diameter of the given circle with centre O. Let AB and CD be the tangents drawn to the circle at the end points of the diameter PQ respectively. $?$ Tangent at a point to a circle is perpendicular to the radius through the point, $?$ and [$?$ and are alternate angles] Hence, the tangents drawn at the ends of a diameter of a circle are parallel. Q5 ) Prove that the perpendicular at the point of contact to the tangent to a circle passes through the centre. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let AB be the tangent drawn at the point P on the circle with O. If possible, let PQ be perpendicular to AB, not passing through O. Join OP. $?$ Tangent at a point to a circle is perpendicular to the radius through the point, $?$ (Construction) , which is not possible. $?$ It contradicts our supposition. Hence, the perpendicular at the point of contact to the tangent to a circle passes through the centre. Q6 ) The Length of a tangent from a point A at distance 5 cm from the centre of the circle is 4 cm. Find the radius of the circle. NCERT Solutions for Class 10 Maths Chapter 10 Circles Since tangent to a circle is perpendicular to the radius through the point of contact, $?$ In , we have $?$ Radius of the circle is 3 cm. Q7. Two concentric circles are of radii 5 cm and 3 cm. Find the length of the chord of the larger circle which touches the smaller circle. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let O be the common centre of two concentric circles, and let AB be a chord of the larger circle touching the smaller circle at P. Join OP. ? OP is the radius of the smaller circle and AB is tangent to this circle at P, ? . We know that, the perpendicular drawn from the centre of a circle to any chord of the circle bisects the chord. So, and . In , we have Now, ???? [? ] ? The length of the chord of the larger circle which touches the smaller circle is 8 cm. Q8 ) A quadrilateral ABCD is drawn to circumscribe a circle (see figure). Prove that AB + CD = AD + BC. NCERT Solutions for Class 10 Maths Chapter 10 Circles ? Lengths of two tangents drawn from an external point of circle are equal, , , , and Adding these all, we get Hence proved Q9 ) In figure, XY and X'Y' are two parallel tangents to a circle with centre O and another tangent AB with point of contact C intersecting XY at A and X'Y' at B. Prove that . NCERT Solutions for Class 10 Maths Chapter 10 Circles Since tangents drawn from an external point to a circle are equal, $?$ . Thus in and , we have [Common] [Radii of the same circle] [ By SSS-criterion of congruence] Similarly, we can prove that Now, [Sum of the interior angles on the same side of transversal is 180°] [$?$ ] Q10 ) Prove that the angle between the two tangents drawn from an external point to a circle is supplementary to the angle subtended by the line-segment joining the points of contact at the center. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let PA and PB be two tangents drawn from an external point P to a circle with centre O. We have to prove that- In right and $OBP$, we have [Tangents drawn from an external point are equal] [Radii of the same circle] [Common] [by SSS - criterion of congruence] and, and, But, [ is right triangle] Hence proved that the angle between the two tangents drawn from an external point to a circle is supplementary to the angle subtended by the line segment joining the points of contact at the centre. Q11 ) Prove that the parallelogram circumscribing a circle is a rhombus. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let ABCD be a parallelogram such that its sides touch a circle with centre O. We know that the tangents to a circle from an exterior point are equal in length. , , , and Adding these all, we get [$?$ ABCD is a parallelogram ] and ] $?$ $?$ ABCD is a rhombus. Q12 ) A triangle ABC is drawn to circumscribe a circle of radius 4 cm such that the segments BD and DC into which BC is divided by the point of contact D are of lengths 8 cm and 6 cm respectively (see figure). Find the sides AB and AC. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let us join AO, OC, and OB. It is given that BD = 8cm, CD = 6 cm. $?$ Lengths of two tangents drawn from an external point of circle are equal. $?$ cm. Then, the sides of the triangle are 14 cm, (x+6) cm and (x+8) cm. , and, $?$ Area of Also, Area of Squaring, we get But x cannot be negative, Thus, AB = x + 8 = 7 + 8 = 15 cm and AC = x + 6 = 6 + 7 = 13 cm. Hence, the sides AB and AC are 15 cm and 13 cm respectively. Q13 ) Prove that opposite sides of a quadrilateral circumscribing a circle subtend supplementary angles at the centre of the circle. NCERT Solutions for Class 10 Maths Chapter 10 Circles Let a circle with centre O touch the sides AB, BC, CD and DA of a quadrilateral ABCD at the points P, Q, R and S respectively. We have to prove that- and, Join OP, OQ, OR and OS. $?$ The two tangents drawn from an external point to a circle subtend equal angles at the centre. $?$ Now, [$?$ Sum of all the ? s around a point is 360°] and, and Now since $?2+?3=?AOB$ , and
2022-01-29 04:26:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5464174151420593, "perplexity": 603.9623916263299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00358.warc.gz"}
http://mathoverflow.net/revisions/12575/list
MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4). If $X$ is a proper curve of genues $g$ over an algebraically closed field $K$ of characteristic $0$, and $U$ an open subset, say obtained by removing $n$ closed points from $X$, then by comparison with the complex topology (more precisely by the Riemann Existence Theorem) one can derive that $\pi_1^{et}(U)$ is isomorphic to the profinite completion of the group $$\lt a_1,\ldots,a_g, b_1,\ldots,b_g,c_1,\ldots,c_n|[a_1,b_1]\cdot\ldots\cdot[a_g,b_g]c_1\ldots c_n\gt$$ As far as I know, there is no algebraic proof for this fact.
2013-06-20 07:50:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8925178647041321, "perplexity": 143.19599254118697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710963930/warc/CC-MAIN-20130516132923-00019-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.precisionagreviews.com/products/SteerCommand
top of page # SteerCommand Average Review average rating is 3 out of 5, based on 150 votes, Product ratings Cost average rating is 3 out of 5 Ease average rating is 3 out of 5 Value average rating is 3 out of 5 Support average rating is 3 out of 5 Description SteerCommand® is an Automated Steering System Manufacturer: Tutorial: ###### Alan R average rating is 3.8 out of 5 Do you like this review? Cost average rating is 4 out of 5 Ease average rating is 4 out of 5 Value average rating is 3 out of 5 Support average rating is 4 out of 5 ###### Wendall Z average rating is 4 out of 5 Do you like this review? Cost average rating is 3 out of 5 Ease average rating is 4 out of 5 Value average rating is 5 out of 5 Support average rating is 4 out of 5 ###### Eric F average rating is 4.8 out of 5 Do you like this review? Cost average rating is 4 out of 5 Ease average rating is 5 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### Ray Z. average rating is 4 out of 5 Do you like this review? Cost average rating is 3 out of 5 Ease average rating is 4 out of 5 Value average rating is 5 out of 5 Support average rating is 4 out of 5 ###### Scott average rating is 4 out of 5 Do you like this review? Powerful and versital Cost average rating is 4 out of 5 Ease average rating is 4 out of 5 Value average rating is 4 out of 5 Support average rating is 4 out of 5 ###### Dwight average rating is 3.8 out of 5 Do you like this review? Good solid tractor with a lot of power. Fairly Cost average rating is 4 out of 5 Ease average rating is 4 out of 5 Value average rating is 4 out of 5 Support average rating is 3 out of 5 ###### Julius G. average rating is 5 out of 5 Do you like this review? Perfect for high-value crops like ours. Precise placement guaranteed. Hats off to support. A little problematic with this but support helped quickly and fully. Didnt need them after. lol Cost average rating is 5 out of 5 Ease average rating is 5 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### Gerald average rating is 4.8 out of 5 Do you like this review? Good Investment Cost average rating is 5 out of 5 Ease average rating is 4 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### Marvin average rating is 4.5 out of 5 Do you like this review? Great tool Cost average rating is 5 out of 5 Ease average rating is 4 out of 5 Value average rating is 4 out of 5 Support average rating is 5 out of 5 ###### Rob B average rating is 3.3 out of 5 Do you like this review? Easy to use. Mapping has not worked with Encirca in the past. Cost average rating is 3 out of 5 Ease average rating is 4 out of 5 Value average rating is 3 out of 5 Support average rating is 3 out of 5 ###### Tim average rating is 4.5 out of 5 Do you like this review? Cost average rating is 4 out of 5 Ease average rating is 4 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### Howard average rating is 4.8 out of 5 Do you like this review? Works good. Dealer is helpful with troubleshooting. Cost average rating is 4 out of 5 Ease average rating is 5 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### Carl average rating is 2.5 out of 5 Do you like this review? Used for GPS only, spraying worked ok. Cost average rating is 3 out of 5 Ease average rating is 2 out of 5 Value average rating is 3 out of 5 Support average rating is 2 out of 5 ###### Todd average rating is 4.5 out of 5 Do you like this review? Great product Cost average rating is 4 out of 5 Ease average rating is 4 out of 5 Value average rating is 5 out of 5 Support average rating is 5 out of 5 ###### David average rating is 4.5 out of 5 Do you like this review? Great Support Cost average rating is 4 out of 5 Ease average rating is 5 out of 5 Value average rating is 4 out of 5 Support average rating is 5 out of 5 ## Related Products Pluribus Dawn AI360 Helena AGRI-Enterprises SCS 660 Raven bottom of page
2023-04-01 19:38:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544823169708252, "perplexity": 5830.644363840228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00743.warc.gz"}