https://www.lil-help.com/questions/253477/sfty435-acceleration-components-exercise
# sfty435 Acceleration Components Exercise
5.4 Acceleration Components Exercise
Question 1 5 pts A light aircraft crashes in a level rocky field. Flight path angle is 10 degrees and the true airspeed is 85 mph. Initial impact occurs with the fuselage level (zero pitch angle). The impact causes a two-foot deep gouge, and the aircraft comes to rest 25 feet from the initial impact. The fuselage is crushed 12 inches vertically and 5 feet horizontally.
FIND: GP and GN.
GIVEN:
- V_P = 122.8 fps, s_P = 30 ft
- V_N = 21.7 fps, s_N = 3 ft

Incremental Triangular Pulse: G = (4 × V₀²) / (96.6 × s)

1. G_P = (4 × V_P²) / (96.6 × s_P) = (4 × 122.8²) / (96.6 × 30) = 60,319 / 2,898
   G_P ≈ 20.8 G
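The arithmetic can be checked (and the G_N line that is cut off in the excerpt completed) with a short script. The function name is my own; the 96.6 constant comes from the incremental-triangular-pulse formula given above:

```python
def triangular_pulse_g(v_fps, s_ft):
    # Incremental triangular pulse: G = (4 * V^2) / (96.6 * s)
    return 4 * v_fps**2 / (96.6 * s_ft)

gp = triangular_pulse_g(122.8, 30)  # longitudinal component, ~20.8 G
gn = triangular_pulse_g(21.7, 3)    # normal component, ~6.5 G
```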
https://imathworks.com/matlab/matlab-using-matlab-in-python/

# MATLAB: Using Matlab in Python
Tags: MATLAB, matlab engine
Hello,
I'm trying to execute matlab functions in python using the Matlab python package. However, when running an example from the documentation, I am getting an error message. When I run the code:
import matlab
import matlab.engine
eng = matlab.engine.start_matlab()
a = matlab.double([1,4,9,16,25])
b = eng.sqrt(a)
print(b)
I get the error message:
File "<ipython-input-7-6ccc095b323c>", line 1, in <module>
    runfile('/Users/rach/Google Drive/PHD/Programming/Winds/Trying.py', wdir='/Users/rach/Google Drive/PHD/Programming/Winds')
File "//anaconda/envs/netcdf/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
File "//anaconda/envs/netcdf/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
File "/Users/rach/Google Drive/PHD/Programming/Winds/Trying.py", line 4, in <module>
    a = matlab.double([1,4,9,16,25])
File "//anaconda/envs/netcdf/lib/python2.7/site-packages/matlab/mlarray.py", line 51, in __init__
    raise ex
TypeError: 'NoneType' object is not callable
What does this error mean? Is it something to do with how I have installed everything?
Thanks, Rachael
http://www.cgl.uwaterloo.ca/wmcowan/teaching/cs452/s15/notes/l19.html

# Lecture 19 - Trains
## Public Service Announcements
1. Kernel 4 due in class on 18 June.
2. The exam has three start times.
• 19.30, August 4
• 09.30, August 5
• 19.30, August 5
The end times are 26.5 hours after the start time.
Questions asked between 19.30 and 22.00 on 4 August will be answered on the newsgroup, whether they arrive by e-mail or on the newsgroup.
3. You can download data from the terminal, such as the track graph, by capturing the terminal program's output in a file.
4. You can upload data to the terminal by sending a file to its input.
# Calibration I
## Where is a train?
For your project you choose landmarks
• sensors, turn-outs, etc.
• Remember the importance of fiducial marks: on the track, on the train.
You then know when the train is at a given landmark, and find a way -- most likely by integrating velocity -- to know how far it is past the landmark at any given time. To do so, you need to know each train's velocity for a full range of operational parameters.
## 1. Calibrating Stopping Distance
The simplest objective:
• know where the train stops when you give it a command to stop
• restrict the stop commands to just after the train passes a sensor
• only one train moving
Sequence of events
1. Train triggers sensor at $t$
   • train at $S_n + 0$ cm
2. Application receives report at $t_1 = t + \Delta_1$
3. You give command at $t_2 = t + \Delta_1 + \Delta_2$
4. Train receives and executes command at $t_3 = t + \Delta_1 + \Delta_2 + \Delta_3$
5. Train slows and stops at $t_4 = t + \Delta_1 + \Delta_2 + \Delta_3 + \Delta_4$
   • train at $S_n + y$ cm
   • (You measure $y$ with a tape measure.)
• If you do this again, same sensor, same speed, will you get the same answer?
• If you do this again, different sensor, same speed, will you get the same answer?
• If you do this again, same sensor, different speed, will you get the same answer?
• If you do this again, different sensor, different speed, will you get the same answer?
• Or a different train, or different track condition, or ...
1. The sequence of events above has a whole lot of small delays that get added together
• Each one has a constant part and a random part. Try to use values that are differences of measurements to eliminate the constant parts.
• Separating a random delay into constant and random parts.
• The mean delay is the constant part.
• The delay minus the mean delay is the random part.
• The constant parts sum to the constant part of the sum.
• How you sum the random parts depends on how you are representing the randomness.
• The most common representation is an interval around the constant part.
• The best case is the constant part minus half the interval.
• The worst case is the constant part plus half the interval.
Add together the intervals of the two delays.
• Another representation is a probability distribution. Your long ago probability and statistics course taught you (maybe!) how to add probability distributions.
• Some delays can be eliminated a priori because they are extremely small compared to other delays. The more you figure this out in advance the less measurement you have to do.
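The constant/random decomposition above can be sketched in code. This is a toy sketch, not course-provided; the delay samples are illustrative numbers:

```python
def split_delay(samples):
    # Constant part = mean; random part = half-width of the interval
    # around the mean spanned by the measurements.
    mean = sum(samples) / len(samples)
    half_width = (max(samples) - min(samples)) / 2
    return mean, half_width

def sum_delays(parts):
    # Constant parts sum to the constant part of the sum; in the
    # interval representation the half-widths also add (best/worst case).
    return sum(m for m, _ in parts), sum(h for _, h in parts)

d1 = split_delay([9, 10, 11])      # e.g. sensor-report delay samples
d2 = split_delay([4.5, 5.0, 5.5])  # e.g. command-transmission delay samples
total = sum_delays([d1, d2])       # constant 15.0, so 13.5 best / 16.5 worst
```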
2. Knowing where you stop is very important when running the train on routes that require reversing. Knowing how long it takes the train to stop is also important.
• Why are reversing routes important?
3. Clearly, knowing when you stop is equally important.
This is very time-consuming!
• The simplest way to reduce the number of measurements is to eliminate factors that are unimportant.
• The only way to know that a factor is always unimportant is to measure. Developing the ability to estimate quickly, and to find the worst case quickly is the main way of being smart in tasks like this one.
Now make a table
|         | Sensor 1 | Sensor 2 | ... |
|---------|----------|----------|-----|
| Speed 6 |          |          |     |
| Speed 8 |          |          |     |
| ...     |          |          |     |
There are enough measurements in each cell of the table that you can estimate the random error. (Check with other groups to make certain that your error is not too big.)
Based on calibrations I have seen in previous terms you will find substantial variation with speed setting and train, little variation with sensor.
Group across cells that have the 'same' value. Maybe all have the same value.
Hint. Interacting with other groups is useful to confirm that you are on track. Of course, simply using another group's calibration, with or without saying so, is 'academic dishonesty'.
### Measuring the time to stop
A good measure of the stopping time is possible only when you have a good velocity calibration.
## 2. Calibrating Constant Velocity
At this point there are a few places on the track where you can stop with a precision of a train length or better. However, suppose you want to stop near a switch without sitting on it.
• You want to be close to the switch, clear of the switch, and on the right side of the switch when you stop.
• You want to know when the train has stopped because until then you cannot give the command to throw the switch.
• You want to know when the switch-throwing is complete because until then you cannot start the train running in reverse.
To do this successfully you have to be able to give the stop command anywhere on the track.
### Calibrating Velocity
An implicit assumption you make is that the future will closely resemble the past.
1. You measure the time interval between two adjacent sensor reports.
2. Knowing the distance between the sensors you calculate the velocity of the train
• velocity = distance / time interval
• measured in cm / sec.
Subtraction removes the constant part of delays. Note that on average the lag mentioned above -- waiting for sensor read, time in train controller, time in your system before time stamp -- is unimportant.
3. After many measurements you build a table
• Use the table to determine the current velocity
• Use the time since the last sensor report to calculate the distance beyond the sensor
• distance = velocity * time interval
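Steps 1–3 above can be sketched as follows. The variable names are mine; units of cm and seconds are assumed, and the moving average is one plausible way to keep the table tracking slow drift:

```python
velocity_table = {}  # (train, speed_setting) -> cm/s, filled by measurements

def record_measurement(train, speed, distance_cm, dt_s, alpha=0.1):
    # Fold one sensor-to-sensor timing into the table; an exponential
    # moving average lets the estimate follow slow drift (oiling, wear).
    v = distance_cm / dt_s
    key = (train, speed)
    old = velocity_table.get(key)
    velocity_table[key] = v if old is None else (1 - alpha) * old + alpha * v

def position_past_sensor(train, speed, seconds_since_report):
    # distance = velocity * time interval
    return velocity_table[(train, speed)] * seconds_since_report

record_measurement(24, 8, 87.0, 2.0)  # first measurement: 43.5 cm/s
record_measurement(24, 8, 90.0, 2.0)  # second nudges the estimate toward 45
```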
### Using Resources Effectively
The most scarce resources
• Bandwidth to the train controller
• Use of the train itself
The most plentiful resource
• CPU
Any time you can use a plentiful resource to eliminate use of a scarce one, you have a win.
### Practical Problems You Have to Solve
1. The table is too big.
• You potentially need a ton of measurements
2. The values you measure vary randomly.
• You need to average and estimate error.
3. The values you measure vary systematically.
• For example, each time you measure the velocity estimate is slower, presumably because the train is moving towards needing oiling.
• You need to make fewer measurements or use the measurement you make more effectively.
### How Long does it Take to Stop?
Try the following exercise.
1. Choose a sensor.
2. Put the train on a course that will cross the sensor.
3. Run the train up to a constant speed.
4. Give the speed zero command at a location that stops the train with its contact on the sensor
5. Calculate the time between when you gave the command and when the sensor triggered.
6. Look for regularities.
## 3. Calibrating Acceleration and Deceleration: short distances.
Trains often must travel short distances, starting with the train stopped and finishing with it stopped. When doing so the train spends its whole time either accelerating or decelerating. Your constant-speed calibration is useless because the train doesn't travel at constant speed. Similarly, your measured stopping distances are not useful.
Creating a perfect calibration of the train's position while it is accelerating is hard. But there is an easy and precise calibration that covers most of the moves the train makes where you need a good calibration. It's the subject of this section.
Most of your train project can get away with ignoring acceleration and deceleration. The one place you can't is when you are doing a short move: giving a speed command followed by a stop command before the train gets up to speed. How far will the train go? How long will it be before the train is fully stopped?
Short moves are common when the train is changing direction, which you need to increase the number of possible paths from one point to another.
The general idea is to give the train a carefully timed series of commands knowing how far and for how long the train moves during the series of commands.
#### A procedure to calibrate short moves.
Write a small application that performs the following sequence of actions.
1. Place the train on the track in the sort of location where you expect to make short moves.
2. Give the train a `speed n` command, where n is big enough to get the train moving reliably.
3. Wait `t` seconds.
4. Give the train a `speed 0` command.
5. Measure how far the train travelled from its initial location.
6. You now know how far the train will travel for the chosen values of `n` and `t`.
Experiment with different values of `t` and `n` until you have a reasonable set of distances you can travel.
You now know how far the train moves for a given sequence of commands.
1. Position the train that distance ahead of a sensor.
2. Read the time and give a `speed n` command.
3. After `t` seconds give a `speed 0` command.
4. When the train triggers the sensor read the time again.
The difference between the two time readings is the time it takes to make that short move.
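A hypothetical short-move table and lookup, assuming the measurements above have been collected (the numbers are made up for illustration):

```python
# (speed_setting, seconds before the stop command) -> distance travelled, cm
short_moves = {(8, 1.0): 35.0, (8, 2.0): 80.0, (8, 3.0): 140.0}

def time_for_distance(speed, target_cm):
    # Linearly interpolate between the two calibrated (t, distance)
    # points that bracket the target distance.
    points = sorted((t, d) for (n, t), d in short_moves.items() if n == speed)
    for (t0, d0), (t1, d1) in zip(points, points[1:]):
        if d0 <= target_cm <= d1:
            return t0 + (t1 - t0) * (target_cm - d0) / (d1 - d0)
    raise ValueError("target distance outside calibrated range")
```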
Together with knowing when and where the train will stop if given the speed 0 command when running at a constant velocity, this will provide most projects with all the calibration they need. But you can do better.
https://stats.stackexchange.com/questions/205036/bootstrapping-in-binary-response-data-with-few-clusters-and-within-cluster-corre

# Bootstrapping in Binary Response Data with Few Clusters and Within-Cluster Correlation
Beware: This is (almost) a cross-post to a thread I started on the Statalist but that has not received much attention so far.
# Introduction
I am learning about the problems of conducting hypothesis tests on a cluster sample with very few clusters (<30) but considerable within-cluster correlation. So far, I read Cameron/Gelbach/Miller's "Bootstrap-Based Improvements for Inference with Clustered Errors" (Review of Economics and Statistics 90, 414–427) [Working Paper here] as well as Cameron and Miller's "Practitioner's Guide to Cluster-Robust Inference" (Journal of Human Resources 50, 317–370) [Preprint here]. From what I learned, if the number of clusters is small one rejects the null hypothesis too often, and bootstrapping procedures (possibly with asymptotic refinement) can help to overcome this problem. In their simulations a wild cluster bootstrap-t procedure works best, with rejection rates very close to the nominal 5%. My questions are about transferring their ideas to binary response models. To provide a basis for discussion I ran some simulations in Stata to find out about the rejection rates of the null hypothesis.
# Data Generating Process
I assumed the following data generating process for the "latent" variable $y^*$:
$y^*_{ig}=\beta(z_g+z_{ig})+u_{ig}$
where $z_g$ is a standard random normal variable constant for any group $g$ and $z_{ig}$ is an independent random draw from the standard normal. To induce group dependent errors as well as heteroskedasticity I assume $u_{ig}$ to be a random draw from a logistic distribution with parameter $b$ defined as follows:
$b=\sqrt{\frac{27z_g^2}{\pi^2}}$
If $y^* > 0$ the binary response variable y takes on value 1, otherwise it is 0.
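For readers without Stata, the data generating process above can be sketched in plain Python (standard library only; the function and variable names are mine):

```python
import math
import random

def simulate_dataset(groups=15, obs_per_group=30, beta=1.0, seed=0):
    # One simulated cluster sample: returns a list of (group, x, y) rows
    # following the latent-variable DGP described above.
    rng = random.Random(seed)
    rows = []
    for g in range(groups):
        z_g = rng.gauss(0.0, 1.0)                      # cluster-level component
        b = math.sqrt(27.0 * z_g ** 2 / math.pi ** 2)  # logistic scale parameter
        for _ in range(obs_per_group):
            z_ig = rng.gauss(0.0, 1.0)                 # idiosyncratic component
            u01 = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
            u_ig = -b * math.log(1.0 / u01 - 1.0)      # inverse-CDF logistic draw
            y_star = beta * (z_g + z_ig) + u_ig        # latent variable
            rows.append((g, z_g + z_ig, 1 if y_star > 0 else 0))
    return rows
```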
# Simulation
I tried to learn about the problem with simulated data. I simulated 499 datasets according to the data generating process described above, estimated a logit model and counted how often $H_0: \beta = 1$ is rejected. I assumed 15 groups, each comprising 30 observations. I estimated logit models:
1. with no adjustment,
2. a cluster-robust estimate for the variance-covariance matrix,
3. a cluster bootstrap estimate for the variance-covariance matrix,
4. a clustered bootstrap on the wald statistics.
Here is the Stata programme I wrote:
program define mysimu, rclass
version 13.1
clear
syntax [, groups(integer 15) obspgroup(integer 30) alpha(real 0.05) bootstraprep(integer 499)]
tempname cmat vmat cmat2 vmat2 cmat3 vmat3 cmat4 vmat4 b se w rej se_clust w_clust rej_clust se_boot w_boot rej_boot
tempvar group newgroup helpvar helpvar2 z_g helpvar3 e_g parb u x ystar y
//Set the number of observations
set obs `groups'
gen `group' = _n
expand `obspgroup'
bysort `group': gen `helpvar' = _n
//Cluster-Level Regressor Correlation
gen `helpvar2' = rnormal() if `helpvar' == 1
bysort `group': egen `z_g' = min(`helpvar2')
//Cluster-Level Error Correlation
gen `helpvar3' = runiform() if `helpvar' == 1
bysort `group': egen `e_g' = min(`helpvar3')
//Data Generating Process
generate `parb' = sqrt(27*`z_g'^2/_pi^2)
generate `u' = -`parb'*log(1/uniform() - 1)
generate `x' = rnormal()+`z_g'
generate `ystar' = `x'+`u'
generate `y' = `ystar' > 0
//Simple Logit
logit `y' `x'
matrix `cmat' = e(b)
matrix `vmat' = e(V)
return scalar b = `cmat'[1,1]
return scalar se = sqrt(`vmat'[1,1])
return scalar w = (`cmat'[1,1]-1)^2/`vmat'[1,1]
return scalar rej = (`cmat'[1,1]-1)^2/`vmat'[1,1] > invchi2(1, 1-`alpha')
//Clustered SE
logit `y' `x', cluster(`group')
matrix `cmat2' = e(b)
matrix `vmat2' = e(V)
return scalar b_clust = `cmat2'[1,1]
return scalar se_clust = sqrt(`vmat2'[1,1])
return scalar w_clust = (`cmat2'[1,1]-1)^2/`vmat2'[1,1]
return scalar rej_clust = (`cmat2'[1,1]-1)^2/`vmat2'[1,1] > invchi2(1, 1-`alpha')
//Pairs clustered bootstrap se
logit `y' `x', vce(boot, reps(`bootstraprep') cluster(`group'))
matrix `cmat3' = e(b)
matrix `vmat3' = e(V)
return scalar b_boot = `cmat3'[1,1]
return scalar se_boot = sqrt(`vmat3'[1,1])
return scalar w_boot = (`cmat3'[1,1]-1)^2/`vmat3'[1,1]
return scalar rej_boot = (`cmat3'[1,1]-1)^2/`vmat3'[1,1] > invchi2(1, 1-`alpha')
//Pairs clustered bootstrap wald test
tempfile dataset
logit `y' `x', cluster(`group')
matrix `cmat4' = e(b)
local theta = `cmat4'[1,1]
matrix `vmat4' = e(V)
local w = (`cmat4'[1,1]-1)^2/`vmat4'[1,1]
bootstrap wstar=((_b[`x']-`theta')^2/(_se[`x'])^2), reps(`bootstraprep') cluster(`group') idcluster(`newgroup') saving(`dataset', replace): logit `y' `x', cluster(`newgroup')
use `dataset', clear
sum wstar, d
return scalar rej_boot2 = `w' > r(p95)
end
The programme simulates one dataset at a time and estimates the four different versions mentioned above and saves the information on whether $H_0$ is rejected in r(rej*) at 5%. To get 499 datasets and 499 times the information on r(rej*) I run:
simulate rej=r(rej) rej_clust=r(rej_clust) rej_boot=r(rej_boot) rej_boot2=r(rej_boot2) , reps(499) seed(1342): mysimu
# Results
Rejection rates are:
1. 53%
2. 24%
3. 21%
4. 19%
# Questions
1. The rejection rate for approach 4) is smaller than for approaches 2) and 3) in accordance with the results in Cameron [3]. However, the difference is small relative to their findings. Does someone have an idea why this is?
2. As I read in Cameron/Trivedi's "Microeconometrics Using Stata", the wild bootstrap procedure (not used here but the "best" approach in their paper [2]) is for linear models only ("For linear regression, a wild bootstrap accommodates the more realistic assumptions that ..."). Is there a non-linear counterpart that may help in getting closer to the 5% rejection rate in this case?
3. In general, is there a "state-of-the-art"-approach to handle the problem of few clusters when modelling a binary response? If so, how would the (bootstrap?) procedure look like?
# EDIT
In their practitioners guide Cameron and Miller point out that:
"If cluster-specific effects are present then the pairs cluster bootstrap must be adapted to account for the following complication. Suppose cluster 3 appears twice in a bootstrap resample. Then if clusters in the bootstrap resample are identified from the original cluster-identifier, the two occurrences of cluster 3 will be incorrectly treated as one large cluster rather than two distinct clusters. In Stata, the bootstrap option idcluster ensures that distinct identifiers are used in each bootstrap resample. Examples are regress y x i.id_clu, vce(boot, cluster(id_clu) idcluster(newid) reps(400) seed(10101)) and, more simply, xtreg y x, vce(boot, reps(400) seed(10101)) , as in this latter case Stata automatically accounts for this complication."
Because of this I changed the code so that each cluster-draw in the bootstrap is identified as individual cluster using the idcluster option. Also, I decided to drop the initial seed value from the program as it is also defined within the simulate command. I revised the rejection rate above accordingly.
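The resampling detail in the quote (give each drawn cluster a fresh identifier so a cluster drawn twice counts as two distinct clusters) can be sketched without Stata. This mirrors what Stata's idcluster option does; the names are hypothetical:

```python
import random

def pairs_cluster_resample(rows, cluster_of, rng):
    # Draw G clusters with replacement and relabel each draw with a
    # fresh id, so repeated clusters are treated as distinct
    # (cf. Stata's idcluster option).
    clusters = {}
    for row in rows:
        clusters.setdefault(cluster_of(row), []).append(row)
    ids = sorted(clusters)
    resample = []
    for new_id in range(len(ids)):
        drawn = rng.choice(ids)
        resample.extend((new_id, row) for row in clusters[drawn])
    return resample

sample = [(g, obs) for g in range(5) for obs in range(3)]
boot = pairs_cluster_resample(sample, lambda r: r[0], random.Random(1))
```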
## 1 Answer
There is an extension of the wild bootstrap called the "score bootstrap" developed by Kline and Santos (2012) (working paper here). Whereas the wild works for OLS, the score method works additionally for ML models such as logit/probit and 2SLS and GMM models. The user-written Stata command boottest can calculate p-values using the score method after an initial estimation.
• Thanks for this valuable answer. I downloaded boottest from ssc and tried to redo the analysis using the "score bootstrap". Unfortunately, Stata (version 13.1) came up with an error message: "<istmt>: 3499 boottestStataVersion() not found" after typing "boottest, h0(1) cluster(group) bootcluster(group) ". Do you know what to do about it? – Roberto Liebscher Jun 30 '16 at 11:01
• That's a function in the provided mlib so it seems like Stata can't find the mlib. You can see the package contents here. Make sure the mlib is installed correctly and that it is found in the mlib search path (outputted from mata: mata mlib query). You can try rebuilding the mlib search list with mata: mata mlib index. Also ensure your ado-search path picks up the install folder/root (see \$S_ADO). – BeingQuisitive Jul 1 '16 at 13:57
• If I understand correctly it is about telling Stata to call a function from a library (like with library() in R). I typed mata: mata mlib index and found lboottest within the list. To see which functions are in there I typed mata: mata describe using lboottest and the function that comes closest is boottest_stata() but boottestStataVersion() was not in the list. I tried mata: mata mlib add lboottest boottestStataVersion() giving an error that the function could not be found. Have I missed something here? Does the package work for you? – Roberto Liebscher Jul 2 '16 at 10:06
• boottest.ado has some code at the top that rebuilds the mlib (via boottest.mata) if one's Stata version is different than the Stata version the mlib was created with (11.2). Maybe that rebuilding isn't working well (it does assume you install into the PLUS folder, which is usually <Stata program folder>/plus/b/). You could try debugging the rebuilding, or alternatively disable the rebuild (with a clean install it shouldn't be necessary). To display, uninstall boottest, install boottest from ssc, close Stata, edit boottest.ado to comment out lines 21-30, open Stata and try example. – BeingQuisitive Jul 3 '16 at 20:20
• I tried to uninstall the package and encountered an error: criterion matches more than one package. Luckily enough I found your conversation with Robert Picard here which helped solving this issue. I manually deleted the package, cleaned stata.trk, reinstalled boottest from ssc and it worked. The simulation above now yields a rejection rate close to 10% when using the score bootstrap. This is indeed a large improvement. – Roberto Liebscher Jul 4 '16 at 7:36
https://crypto.stackexchange.com/questions/91926/security-proof-for-prng

# Security proof for PRNG
Could you help me find an example where the following kind of proof is performed, please? "If we can distinguish the randomly generated bits of a PRNG from a random sequence, then we can distinguish the underlying block cipher/permutation from a random permutation."
• A construction where you would see this kind of proof summary is counter mode with a fixed start using the random input as the key, i.e. $G(x)=E_x(0)\|E_x(1)\|E_x(2)\|E_x(3)\|\ldots$
– SEJPM
Jul 7 at 8:37
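The construction in the comment can be made concrete with a toy sketch. Note the "block cipher" here is only a stand-in keyed function built from SHA-256, not a real PRP; the point is only to show the shape G(x) = E_x(0) || E_x(1) || ..., for which the security proof reduces distinguishing G's output to distinguishing E from a random permutation:

```python
import hashlib

def toy_E(key: bytes, i: int) -> bytes:
    # Stand-in for E_key(i): a fixed-width keyed function. NOT a real
    # block cipher; swap in AES of the counter block for the real thing.
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()[:16]

def G(key: bytes, n_blocks: int = 4) -> bytes:
    # G(x) = E_x(0) || E_x(1) || ... || E_x(n-1): counter mode with a
    # fixed starting counter, keyed by the PRNG seed x.
    return b"".join(toy_E(key, i) for i in range(n_blocks))
```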
https://math.stackexchange.com/questions/3035304/fermats-little-theorem-proof

# Fermat's Little Theorem Proof
So I have to prove Fermat's Little Theorem, which states that if $$p$$ is a prime and $$a$$ is an integer not divisible by $$p$$, then
$$a^{p-1}\equiv 1\pmod{p}$$.
So my proof is:
Let $$p$$ be a prime and $$a$$ an integer not divisible by $$p$$. Consider the two sequences of numbers, where we represent the residue classes by the numbers $$1,2,...,p-1$$:

$$x: 1, 2, 3, \ldots, (p-1)$$, which are the residue classes,

$$a\cdot x: a\cdot 1, a\cdot 2, \ldots, a\cdot (p-1)$$.
Since $$\gcd(a,p)=1$$, two different numbers in the second row cannot be congruent modulo $$p$$. If they were, we would have $$c\cdot a\equiv b\cdot a\pmod{p}$$ for some $$1\le c<b\le p-1$$, and since $$\gcd(a,p)=1$$ we can cancel $$a$$, so $$c\equiv b\pmod{p}$$, which forces $$c=b$$, a contradiction. This means that we have the same remainders mod $$p$$ in both rows (maybe in a different order).
We therefore have that $$(a\cdot 1)\cdot (a\cdot 2)\cdot \cdots \cdot (a\cdot (p-1))\equiv 1\cdot 2\cdot 3\cdot \cdots \cdot (p-1)\pmod{p}$$. This means that $$a^{p-1}\cdot 1\cdot 2\cdot \cdots \cdot (p-1)\equiv 1\cdot 2\cdot \cdots \cdot (p-1)\pmod{p}$$.
Since $$2,3,...,(p-1)$$ are relatively prime to $$p$$ we can cancel them out, so we get that
$$a^{p-1}\equiv 1\pmod{p}$$.
Are there any errors or places which needs better/more explanation? Thank you for your time!
• Yes, this looks exactly like one of the proofs here, using modular arithmetic. Did you compare it already? Same question also here at this site. Looks also the same. – Dietrich Burde Dec 11 '18 at 13:57
• Yes I have looked, I might not have specified what part of the proof I'm not sure about. It's the part where I explain why two numbers in the second row cannot be congruent modulo p, is that explanation correct? – Nikolaj Dec 11 '18 at 14:06
Your proof is correct. I'd try to be a little more technical and state that $$\phi: x \mapsto ax$$ is an automorphism (you prove this the way you already did) and use modular inverses instead of "cancel out", but everything is correct.
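A quick numerical sanity check of the key step (multiplication by $$a$$ permutes the nonzero residues) for one prime, in Python:

```python
p, a = 13, 5  # any prime p and any a not divisible by p

# The second row a*1, a*2, ..., a*(p-1) is a rearrangement of 1..p-1:
second_row = sorted(a * x % p for x in range(1, p))
assert second_row == list(range(1, p))

# ...which is exactly what forces a^(p-1) = 1 (mod p):
assert pow(a, p - 1, p) == 1
```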
https://etheses.bham.ac.uk/id/eprint/8096/
Sahotra, Nikhil (2018). Design and synthesis of organic small molecules with high triplet energy for blue light emission. University of Birmingham. Ph.D.
Preview
Sahotra18PhD.pdf
PDF - Accepted Version
Abstract
For the past two decades, organic light emitting diodes (OLEDs) have been the subject of intense research in the realm of display and lighting applications. Recently, thermally activated delayed fluorescence (TADF) has shown great potential in further advancing OLED technology. In order to achieve TADF, synthesis of acceptor and donor compounds has been undertaken to achieve exciplex formation. Little is currently known about exciplex formation and emission, so systematic structural variations have been performed on MCP and DPBI in order to gain fundamental knowledge.
Compound analyses were performed in both the solid and solution state. In the case of MCP derivatives, demonstration of their ability to act as an acceptor is possible, alongside an appropriate choice of donor molecule. Reducing the extent ofconjugation in derivatives of DPBI, did not result in an increase in triplet energy. Consequently, to eliminate possible conformers, steric blocking was introduced in an attempt to increase the triplet energy. In the case of the ME-DPBI derivative it was shown possible to formulate a device showing 2.5% external quantum efficiency while emitting at $$\approx$$450 nm which is a true blue colour.
Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s):
Supervisor(s)EmailORCID
Baranoff, EtienneUNSPECIFIEDUNSPECIFIED
Davies, PaulUNSPECIFIEDUNSPECIFIED
Licence:
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Chemistry
Funders: Engineering and Physical Sciences Research Council, Other
Other Funders: The University of Birmingham
Subjects: Q Science > QD Chemistry
T Technology > TP Chemical technology
URI: http://etheses.bham.ac.uk/id/eprint/8096
Actions
Request a Correction View Item | 2021-05-17 00:47:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42975983023643494, "perplexity": 4276.272735487482}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00135.warc.gz"} |
https://univ-lyon3.hal.science/hal-03467123v2 | A theory of optimal convex regularization for low-dimensional recovery - Archive ouverte HAL Accéder directement au contenu
Pré-Publication, Document De Travail Année :
## A theory of optimal convex regularization for low-dimensional recovery
(1) , (2) , (3)
1
2
3
Yann Traonmilin
• Fonction : Auteur
Rémi Gribonval
Samuel Vaiter
#### Résumé
We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what is the best'' convex regularizer to perform its recovery. To answer this question, we define an optimal regularizer as a function that maximizes a compliance measure with respect to the model. We introduce and study several notions of compliance. We give analytical expressions for compliance measures based on the best-known recovery guarantees with the restricted isometry property. These expressions permit to show the optimality of the ℓ1-norm for sparse recovery and of the nuclear norm for low-rank matrix recovery for these compliance measures. We also investigate the construction of an optimal convex regularizer using the examples of sparsity in levels and of sparse plus low-rank models.
### Dates et versions
hal-03467123 , version 1 (06-12-2021)
hal-03467123 , version 2 (12-12-2022)
### Identifiants
• HAL Id : hal-03467123 , version 2
• ARXIV :
### Citer
Yann Traonmilin, Rémi Gribonval, Samuel Vaiter. A theory of optimal convex regularization for low-dimensional recovery. 2022. ⟨hal-03467123v2⟩
### Exporter
BibTeX TEI Dublin Core DC Terms EndNote Datacite
### Collections
177 Consultations
70 Téléchargements | 2023-02-02 15:00:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7683978080749512, "perplexity": 4081.6004926345986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00714.warc.gz"} |
## Abstract
A metallic microparticle impacting a metallic substrate with sufficiently high velocity will adhere, assisted by the emergence of jetting—the splash-like extrusion of solid matter at the periphery of the impact. In this work, we compare real-time observations of high-velocity single-microparticle impacts to an elastic–plastic model to develop a more thorough understanding of the transition between the regimes of rebound and bonding. We first extract an effective dynamic yield strength for copper from prior experiments impacting alumina spheres onto copper substrates. We then use this dynamic yield strength to analyze impacts of copper particles on copper substrates. We find that up to moderate impact velocities, impacts and rebound velocities follow a power-law behavior well-predicted on the basis of elastic-perfectly plastic analysis and can be captured well with a single value for the dynamic strength that subsumes many details not explicitly modeled (rate and hardening effects and adiabatic heating). However, the rebound behavior diverges from the power-law at higher impact velocities approaching bonding, where jetting sets on. This divergence is associated with additional lost kinetic energy, which goes into the ejection of the material associated with jetting and into breaking incipient bonds between the particle and substrate. These results further support and develop the idea that jetting facilitates bonding where a critical amount of bond formation is required to effect permanent particle deposition and prevent the particle from rebounding.
## Introduction
While the mechanics of elastic–plastic impacts have been extensively modeled [1–6], experimental comparisons are generally limited to subsonic velocities or large impactors [5,7–9]. However, high-velocity microscale impacts are relevant to some contemporary applications, including erosion, micrometeorite capture, and particle-spray processes. For example, cold spray coating is a technique where particle–substrate bonding is achieved with ∼1–100 µm particles impacted above a minimum “critical velocity” that is typically supersonic and below which particles rebound [10–13]. An outward extrusion of the material from the interface between the impacting particle and substrate, known as jetting, has been considered closely related to bonding [14–17]. Numerous mechanisms have been proposed for jetting [14,18–20] although there is limited direct observational support. Experimental approaches to study mechanics at the relevant high impact velocities and small impactor sizes are vital for a thorough understanding of such mechanistic questions.
The recent development of an all-optical microballistic test platform has provided direct real-time observations of high-velocity single-microparticle impacts in a number of systems [21–25]. Impact behaviors of metallic particles have been directly observed across the rebound and bonding regimes, providing more precise measurements of critical velocity for bonding as well as real-time observations of bonding-associated jetting and ejection [17]. While many of these investigations are centered around bonding at and above the critical velocity [17,26,27], the elastic–plastic mechanics for high-speed impacts in the lower-velocity rebound regime also involve extreme conditions worthy of study. Further, the measurement of both inbound and outbound velocities on such non-bonding impacts provides rich data from which to extract mechanical properties.
Therefore, in this work, we focus on the elastic–plastic rebound behavior of microparticles and compare experimental data with plastic impact models to extract some effective elastic–plastic properties. We first examine impacts of alumina particles on copper, originally conducted to measure metal hardness at very high strain rates [28]. By comparing to a simple model for the impact of an elastic sphere with an elastic-perfectly plastic (EPP) half-space [2], we extract an effective dynamic yield strength for copper. We then use this model and extracted strength to discuss the energy dissipation mechanisms present in a copper–copper impact system relevant to cold spray. Our goal is to analyze rebound behavior to develop a more comprehensive understanding of the impact and bonding behavior of metallic microparticles in the context of cold spray and to better appreciate the energy dissipation associated with the process of jetting.
## High-Velocity Micro-Impact Experiments
We use an all-optical platform, the laser-induced particle impact test (LIPIT), schematically shown in Fig. 1(a), to conduct impact experiments [17,21,22]. Particles are dispersed onto and launched from a “launching pad,” consisting of a glass substrate (210-µm thick), an ablative gold layer (60-nm thick), and a polyurea polymer layer (30-µm thick). The launching pad surface is imaged with an optical microscope, and a single particle is selected and measured immediately before being launched with a high-power laser pulse (Nd-YAG, 532-nm wavelength, 10-ns duration). This pulse is focused to ablate the gold layer, which in turn expands the polymer film and accelerates the particle toward the target. A second laser pulse (640-nm wavelength, 30-μs duration) illuminates the field of view, and 16 images of the impact event are captured by an ultra-high-speed camera (Specialised Imaging, SIMX16) with 5-ns exposure and variable interframe time. From these images, the impact and rebound velocities are measured with an uncertainty of ±2%. Particles impact normal to the substrate surface within ±3°. More details on the experimental setup and launch pad fabrication can be found in previous papers [17,22]. Figure 1(b) shows eight frames of a 12.2-µm diameter copper particle impacting a copper substrate at 400 m/s and then rebounding at 30 m/s. The coefficient of restitution (CoR) in this experiment is thus ∼30/400 = 0.075.
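The reduction from high-speed camera frames to a CoR value amounts to estimating a mean speed from equally spaced centroid positions. A minimal sketch of that reduction is given below; the positions and interframe time are synthetic values chosen to mimic the 400 m/s / 30 m/s event of Fig. 1(b), not the actual SIMX16 data.

```python
def velocity(positions_m, dt_s):
    """Mean speed from equally spaced particle centroid positions (one axis)."""
    steps = [abs(b - a) for a, b in zip(positions_m, positions_m[1:])]
    return sum(steps) / (len(steps) * dt_s)

# Synthetic frames: approach at 400 m/s, rebound at 30 m/s, 100 ns between frames
dt = 100e-9
inbound = [0.0, 40e-6, 80e-6, 120e-6]   # 40 um of travel per frame -> 400 m/s
outbound = [0.0, 3e-6, 6e-6]            # 3 um of travel per frame  -> 30 m/s

Vi, Vr = velocity(inbound, dt), velocity(outbound, dt)
print(f"CoR = {Vr / Vi:.3f}")  # -> CoR = 0.075, as in Fig. 1(b)
```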
For a portion of the analysis below, we consider impacts of 14 µm alumina particles, purchased from Inframat Advanced Materials LLC (Amherst, NY). Those experiments were already described in a previous study [28], and the data are reproduced here for impacts on a copper substrate of 3.175-mm thickness purchased from OnlineMetals (Seattle, WA). For this work, additional new experiments were conducted at lower rates using the same materials and procedures. In addition, new experiments were conducted on the same copper substrates, using atomized spherical copper particles with a nominal size of 10 µm purchased from Alfa Aesar (Ward Hill, MA). Substrate surfaces were ground and polished to a nominal 0.04 µm finish prior to impact experiments.
## Elastic–Plastic Impact Analysis
Our primary goal in this work is to understand the impact response of copper microparticles on copper substrates. We therefore aim to quantitatively evaluate impact and rebound velocity information, such as those shown in Fig. 1, in a way that permits the extraction of material properties such as the dynamic strength. Material behavior at high rates is complex, involving strain hardening, strain-rate hardening, adiabatic heat generation and transport, thermal softening, etc. Here, we are interested in a simple analysis with broad applicability to a number of materials and conditions where we will tend to aggregate these complex physics into some “effective” elastic–plastic properties. For this purpose, we consider the work of Wu et al. [2,3], who presented a series of detailed finite element computations in which they modeled the rebound behavior of both an elastic sphere impacting an EPP half-space and an EPP sphere impacting a rigid wall. They found that in either case, the coefficient of restitution could be empirically fitted with a simple non-dimensionalized power-law
$\mathrm{CoR}=\frac{V_r}{V_i}=\alpha\left(\frac{V_y}{V_i}\cdot\frac{E^*}{Y_d}\right)^{1/2}$
(1)
In this expression, Vi is the impact velocity, Vr is the rebound velocity, Yd is the dynamic yield strength that controls plasticity at the rates in question, and E* is the reduced elastic modulus defined by elastic moduli E and Poisson’s ratios ν of the two materials
$\frac{1}{E^*}=\frac{1-\nu_1^2}{E_1}+\frac{1-\nu_2^2}{E_2}$
(2)
The velocity at which plastic deformation initiates is termed Vy, and is defined by Johnson [1] as
$V_y=\left(\frac{26\,Y_d^5}{\rho\,E^{*4}}\right)^{1/2}$
(3)
with ρ the density of the impacting material. Finally, α is a prefactor determined from fitting simulated impacts of an elastic sphere on an EPP substrate and an EPP sphere on a rigid wall, yielding values of 0.78 and 0.62, respectively. The power-law behaviors of Eq. (1), including the specific exponent of −1/2 with respect to Vi, are valid beyond the cases established by Wu et al. [2,3], as established for example in the more elaborate finite element simulations of impacts of aluminum, copper, and stainless steel microspheres onto matched substrates [6].
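To make the model concrete, Eqs. (1)–(3) can be coded directly. The sketch below is illustrative only: the elastic constants for alumina and copper (E ≈ 370 and 110 GPa, ν ≈ 0.22 and 0.34) are representative handbook-style values assumed here, and the 450 MPa copper strength anticipates the value extracted later in this article.

```python
import math

def reduced_modulus(E1, nu1, E2, nu2):
    """Eq. (2): reduced elastic modulus of the contact pair."""
    return 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

def yield_velocity(Yd, rho, Estar):
    """Eq. (3): impact velocity at which plastic deformation initiates."""
    return math.sqrt(26 * Yd**5 / (rho * Estar**4))

def cor(Vi, Yd, rho, Estar, alpha):
    """Eq. (1): coefficient of restitution for an EPP impact."""
    Vy = yield_velocity(Yd, rho, Estar)
    return alpha * math.sqrt((Vy / Vi) * (Estar / Yd))

# Illustrative (assumed) values: alumina sphere on a copper half-space
E_al2o3, nu_al2o3 = 370e9, 0.22   # alumina (taken elastic and unyielding)
E_cu, nu_cu = 110e9, 0.34         # copper (yielding target)
rho_cu, Yd_cu = 8930.0, 450e6     # density; effective dynamic yield strength

Estar = reduced_modulus(E_al2o3, nu_al2o3, E_cu, nu_cu)
print(cor(400.0, Yd_cu, rho_cu, Estar, alpha=0.78))  # CoR at 400 m/s
```

Note that Eq. (1) implies CoR scales exactly as $V_i^{-1/2}$, so quadrupling the impact velocity halves the predicted CoR.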
In the simulations of Wu et al., only simple elastic and plastic deformations were considered with no strain or strain-rate hardening and no adiabatic heating considered. As such, their analysis is formally limited to velocities below
$\frac{\rho V_i^2}{Y_d}=0.1$
(4)
where heating effects are negligible, as established by Johnson [1]. For many metals, this velocity limit is below 100 m/s, and applying this analysis at higher velocities involves greater approximation. The present study includes high-velocity impacts well above this threshold, ranging from 100 to nearly 900 m/s, which involves quantities of $\rho V_i^2/Y_d$ up to and beyond 1. Productive use of this analysis, therefore, requires that we acknowledge the production of heat and account for it in the changes to the dynamic yield strength. We achieve this by first extracting an “average” or “effective” plastic yield stress Yd for copper which essentially averages over all of the strains, rates, and temperatures that may prevail in the contact plasticity problem; we do this by analyzing impacts of alumina impactors on copper to extract Yd with Eq. (1). We then subsequently use this extracted yield stress for copper with Eq. (1) to model our system of interest, copper impacting copper, in the same velocity range. As we shall see, this approach appears effective although it certainly subsumes a great deal of deformation physics into the simple characteristic strength parameter Yd.
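The velocity at which the ratio in Eq. (4) is reached can be evaluated directly. Using the copper density and the ~450 MPa effective strength extracted in the next section (assumed here for illustration), the formal validity limit of the isothermal EPP analysis is indeed well below 100 m/s:

```python
import math

rho = 8930.0   # copper density, kg/m^3
Yd = 450e6     # effective dynamic yield strength, Pa (extracted below)

# Eq. (4): heating becomes non-negligible above rho*Vi^2/Yd = 0.1
V_limit = math.sqrt(0.1 * Yd / rho)
print(f"Isothermal validity limit: {V_limit:.0f} m/s")  # ~71 m/s

# Across the experimental range, the ratio rho*Vi^2/Yd spans roughly 0.2 to 16
for Vi in (100.0, 900.0):
    print(f"Vi = {Vi:.0f} m/s -> rho*Vi^2/Yd = {rho * Vi**2 / Yd:.2f}")
```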
## Dynamic Yield Strength of Copper
To first develop insight into the effective dynamic strength of copper under the relevant experimental conditions (scale, strain, rate, etc.), we revisit a prior study done by our group where nominally rigid alumina particles (14.0 ± 0.5 µm diameter) were used as impactors to measure the dynamic hardness of copper [28]. The CoR data from those experiments are reproduced in Fig. 2(a), along with new data from the present work on the same system at lower velocities.
Taking the alumina sphere as elastic and unyielding in the impacts against the softer copper (α = 0.78), we fit the data corresponding to impacts between 100 and 800 m/s with Eq. (1) to extract an “effective” or average dynamic yield strength of the copper substrate. The additional, low-velocity data are disregarded in this analysis, as they likely represent a low-strain-rate regime of rebound behavior, described by Johnson [1], where CoR ∝ $V_i^{-1/4}$. We take values of density, Young’s modulus, and Poisson’s ratio to be independent of impact velocity, and based on the fit in Fig. 2(b) on double-logarithmic axes, we extract a value for the effective dynamic yield strength Yd = 450 MPa.
Interestingly, a single, constant Yd value appears to present a reasonable approximation that fits all of our experimental data, despite the many assumptions in the analysis. We further provide values of Yd corresponding to each individual impact (Fig. 2(c)). These Yd values are found to lie in a tight range across all our experimental velocities, with a standard deviation of just ∼45 MPa. We use this standard deviation subsequently to establish a range of expected behavior.
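Combining Eqs. (1) and (3) eliminates Vy and gives CoR = α · 26¹ᐟ⁴ · (Yd³/(ρE*²))¹ᐟ⁴ · Vi⁻¹ᐟ², which can be inverted to obtain one Yd value per measured impact, as in Fig. 2(c). A sketch of that inversion follows; the reduced modulus of ~94 GPa is an assumed alumina-on-copper value, and the loop round-trips synthetic "measurements" rather than the experimental data.

```python
import math

def cor_model(Vi, Yd, rho, Estar, alpha=0.78):
    """Eqs. (1) and (3) combined: CoR for an elastic sphere on an EPP target."""
    return alpha * 26**0.25 * (Yd**3 / (rho * Estar**2))**0.25 / math.sqrt(Vi)

def yd_from_impact(Vi, CoR, rho, Estar, alpha=0.78):
    """Invert cor_model for Yd, one value per measured impact (cf. Fig. 2(c))."""
    c = CoR * math.sqrt(Vi) / (alpha * 26**0.25)
    return c**(4.0 / 3.0) * (rho * Estar**2)**(1.0 / 3.0)

rho, Estar = 8930.0, 94e9   # copper density; assumed Al2O3/Cu reduced modulus
for Vi in (200.0, 400.0, 800.0):
    CoR = cor_model(Vi, 450e6, rho, Estar)          # synthetic "measurement"
    print(Vi, yd_from_impact(Vi, CoR, rho, Estar))  # recovers ~450 MPa
```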
## Copper on Copper Impacts
With an effective dynamic yield strength for copper under the conditions of interest, we now proceed to consider the copper–copper system. Data for copper particles (12.5 ± 1 µm diameter) impacted on copper substrates between velocities of 100 and 900 m/s are shown in Fig. 3(a) and show a typical trend of CoR decreasing up to an apparent critical velocity above which particles adhere rather than rebounding. The critical velocity is calculated by finding the smallest single group of impacts with an equal number of rebound and bonding events and taking the average between the highest and lowest velocities of that group. This is analogous to finding the “ballistic limit” [29] for bonding. In this case, the critical velocity is the average between two impacts, yielding 580 m/s.
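The ballistic-limit-style procedure for the critical velocity can be expressed algorithmically. The sketch below is one reading of that procedure (on synthetic data, not the measured impacts): sort impacts by velocity, find the smallest window size at which exactly one contiguous group has equal rebound and bonding counts, and average that group's outermost velocities.

```python
def critical_velocity(impacts):
    """impacts: list of (velocity_m_per_s, bonded_bool).
    Midpoint of the smallest *single* contiguous group (in velocity order)
    containing equal numbers of rebound and bonding events."""
    data = sorted(impacts)
    n = len(data)
    for w in range(2, n + 1, 2):                 # even window sizes only
        hits = []
        for i in range(n - w + 1):
            window = data[i:i + w]
            if sum(b for _, b in window) == w // 2:  # equal bonds and rebounds
                hits.append(window)
        if len(hits) == 1:                       # unique group at this size
            v = [vel for vel, _ in hits[0]]
            return 0.5 * (min(v) + max(v))
    return None

# Synthetic example: rebounds below ~580 m/s, bonds above
impacts = [(400, False), (500, False), (560, False), (600, True), (700, True)]
print(critical_velocity(impacts))  # -> 580.0
```

The same routine, applied to impacts classified as conforming/non-conforming to the power-law instead of rebounding/bonding, yields the divergence velocity discussed later.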
We note that the critical bonding velocity is certainly a system-specific quantity and changes with details of the investigated material, including particle size [12] and chemical content, particularly oxygen content [30]. In copper, these variables have effects that spread the measured critical velocity over a broad range from ∼300 to ∼600 m/s or more [31]. The present value (580 m/s) as well as the trend of the data in Fig. 3(a) are in good agreement with data for similar single-particle impacts published previously [17] for slightly larger particles (14 ± 2 µm diameter) in the copper–copper system with the same commercial source. It is also in agreement with other reported values based on cold spray experiments with copper powders of oxygen contents standard for atomized commercial powders, around 0.1–0.15 wt% [12,31]. In our subsequent analyses, we use only the new data from the present study with an average particle diameter of 12.5 µm and a critical velocity of 580 m/s.
Three scanning electron microscopy (SEM) images of impact sites are shown in Figs. 3(b)–3(d) for Vi = 400, 550, and 770 m/s, respectively; these are denoted by arrows in Fig. 3(a) as well. At 770 m/s, the impacted particle appears bonded to the substrate with substantial signs of jetting around the edges of the particle where “lips” of prior jets can be seen originating from both the substrate and the particle (denoted by arrows). Similar signs of jetting are present around the edge of the crater formed at 550 m/s as well (Fig. 3(c)) although perhaps with smaller lips and less contiguously around the perimeter. At the lowest of the three velocities, no jet lips are observed in Fig. 3(b).
In the prior work from our group [17,26,27] and others [14–16], jet formation has been closely linked to impact bonding under the premise that jet formation involves severe plastic deformation right at the interface between the particle and substrate where the adhesive bond forms. Such severe jetting deformation promotes flattening of microscopic surface roughness/asperities, and spreading and removal of contaminating surface films like native oxide and generally assists in the formation of clean metal-on-metal contacts that permit adhesive welding to occur. The observations in Fig. 3(d) support this general line of argument with the presence of substantial jets associated with the bonded state. However, Fig. 3(c) also suggests that it is possible to initiate some jets around the impact site without necessarily producing enough of a bond to result in permanent particle adhesion. Permanent particle adhesion apparently requires some critical amount of interfacial plasticity provided by jetting. Impacts like the one shown at 550 m/s are not sufficient to achieve that condition. Thus, jetting emerges at sufficiently high impact velocities but sets on below the critical velocity at which there is sufficient jetting to achieve bonding.
Closer inspection of the CoR data also leads us to a similar view—the transition between rebounding and bonding is not just a discontinuity; it also involves a divergence from the power-law scaling before adhesion occurs. This can be best seen in Fig. 4(a) by examining the copper–copper data in a double-logarithmic fashion. For lower velocities, we see a reasonably convincing conformity of these data to the characteristic scaling law of Eq. (1), CoR ∝ $V_i^{-1/2}$. While fitting various ranges of the data can give slightly different values of the power-law exponent (0.6–0.7), the theoretical value of −1/2 is supported both by the EPP model [2,3] and more advanced mechanical models that incorporate hardening, rate effects, etc. [6]. Taking the dynamic yield strength of copper extracted independently earlier (Yd = 450 MPa), we present fitting-parameter-free predictions of Eq. (1) both for an elastic sphere impacting an EPP substrate (α = 0.78) and an EPP sphere impacting a rigid substrate (α = 0.62). The data lie rather close to the latter prediction, and in fact, a least-squares fitting to a single parameter (using only the data below 550 m/s) gives a best-fit α = 0.57, with a standard deviation as shown by the shaded band in Fig. 4(a) to account for uncertainty in Yd. The fact that the fitted prefactor is close to that expected for a plastic sphere impacting a rigid wall is intuitively reasonable: in matched material impacts, the substrate experiences a shallow spherical indentation which might have a characteristic strain on the order of 0.1, whereas the impacted particle is severely flattened by a factor of as much as ∼1.5, implying a far higher characteristic strain of ∼0.4. To first order, then, the particle bears most of the deformation, as contemplated in the EPP particle-on-rigid-wall model, explaining the conformity of the data to that model.
As impact velocities rise above about 400 m/s and approach the critical velocity for bonding, the experimental CoR increasingly deviates from the power-law of Eq. (1) to values lower than expected for simple elastic–plastic behavior. The divergent CoR behavior in this range suggests that simple elastic–plastic impact mechanics, even when calibrated to account for high rates (and adiabatic heating), are not sufficient to describe the physics of these impacts. Viewed another way, there is an additional loss of kinetic energy in this range of velocities that apparently does not result simply from the plastic formation of a crater and the compression of the impactor. We can quantitatively assess this excess lost energy by first determining for each impact the rebound kinetic energy
$E_r=\frac{mV_r^2}{2}$
(5)
where the mass of each particle m is calculated individually based on measured diameter. The difference between the actual rebound kinetic energy and that predicted by Eq. (1) (with α = 0.57) is presented in Fig. 4(b) as the excess lost energy, which rises as the impact velocity approaches the critical velocity.
The largest observed quantity of excess lost energy is ∼5.7 nJ and that at the critical velocity is 7.1 nJ. For reference, the impact kinetic energies are on the order of microjoules, three orders of magnitude greater, meaning that simple elastic–plastic mechanics still accounts for the vast majority of impact energy dissipation. Nevertheless, this last small proportion of energy loss is needed to slow and stop the particle and, therefore, plays a key role in the bonding behavior.
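The ~7.1 nJ figure at the critical velocity can be reproduced end to end from Eqs. (1)–(3): it is the rebound kinetic energy the calibrated power-law predicts for a particle that in fact does not rebound at all. The sketch below assumes ν ≈ 0.34 for copper (not stated in the text); the other inputs are the values quoted in this article.

```python
import math

rho, E, nu = 8930.0, 110e9, 0.34   # copper; Poisson's ratio is assumed
Yd, alpha = 450e6, 0.57            # extracted strength; fitted prefactor
d, Vcrit = 12.5e-6, 580.0          # mean particle diameter; critical velocity

Estar = E / (2 * (1 - nu**2))                          # Eq. (2), matched pair
Vy = math.sqrt(26 * Yd**5 / (rho * Estar**4))          # Eq. (3)
CoR = alpha * math.sqrt((Vy / Vcrit) * (Estar / Yd))   # Eq. (1)

m = rho * math.pi * d**3 / 6            # particle mass from diameter
E_pred = 0.5 * m * (CoR * Vcrit)**2     # rebound energy predicted by Eq. (1)
print(f"Predicted rebound energy at Vcrit: {E_pred * 1e9:.1f} nJ")  # ~7.1 nJ
```

A particle that bonds gives up all of this predicted rebound energy, which is how the excess lost energy at the critical velocity is evaluated.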
## Deviation From the Power-Law
Interestingly, the same kind of power-law deviation in the run-up to adhesion is seen in other published single-particle impact experiments on other materials. This is shown in Figs. 5(a)–5(c) by replotting the published data for Ni, Al, and Zn, respectively [17]. In each case, the power-law of Eq. (1) is shown with α = 0.57 fixed and fitted to the lower-velocity data with a single parameter, Yd, as shown in Table 1.
Table 1: Critical velocities, dynamic yield strengths, and divergence velocities for the present data on Cu as well as previously published data [17]

| Material | Cu | Ni | Al | Zn |
| --- | --- | --- | --- | --- |
| Critical velocity (m/s) | 580 ± 12 | 655 ± 5 | 810 ± 14 | 540 ± 25 |
| Dynamic yield strength (MPa) | 450 ± 45 | 800 ± 32 | 290 ± 10 | 470 ± 21 |
| Divergence velocity (m/s) | 510 ± 36 | 540 ± 25 | 620 ± 42 | 420 ± 17 |
Whereas it is common to tabulate critical bonding velocities for different materials systems, we propose that the velocity at which impacts diverge from power-law behavior also represents a characteristic quantity worth tabulating and understanding. We determine the divergence velocity, termed Vd, in the same manner as was described above for the critical bonding velocity. First, a velocity range containing an equal number of impacts conforming and not conforming to the power-law is found such that the number of impacts in this range is the minimum number required to ensure that only one such range exists for a given data set. The velocity is taken as the average of the two outermost points, while the error is half their difference. Performing this analysis on our Cu data and the literature data from Fig. 5 returns the values in Table 1.
Hassani et al. proposed a hydrodynamic argument for jetting where the shock formed upon impact detaches from the particle–substrate edge leading to local tension that can initiate a jet. If the local tension in the jet exceeds the material spall strength, the jet extends and fragments into small ejecta [26]. For a matched material system in the limit where the shock speed, Cs, is independent of impact velocity, their result can be written as follows:
$V_{\mathrm{spall}}\propto\frac{K}{\rho\,C_s}$
(6)
where K is the bulk modulus and ρ is the density.
In their work, Hassani et al. showed a strong conformity of measured critical adhesion velocities to this expected scaling, as shown in Fig. 6. The adhesion condition involves considerable jetting, concomitant considerable interface straining, clean metal-on-metal contact, and bonding to a degree that the particle remains permanently attached. However, as illustrated in Fig. 3(c), there is some degree of jetting that is apparently not enough to achieve full irreversible particle adhesion at lower velocities, setting in at the point of power-law divergence. We hypothesize that the scaling of Eq. (6) should thus also be seen in the divergence velocity as a marker of the onset of such spall. This is tested in Fig. 6 where Vd is also plotted against K/(ρCs). The proportionality observed here suggests that the power-law divergence we observe originates from hydrodynamic phenomena.
Taken together, all of these observations and analysis in Figs. 3–6 speak to the emergence of jetting as the source of power-law divergence in metal-on-metal impacts near the bonding critical velocity. The extra lost energy in these impacts is, therefore, most likely attributable to jetting (and its consequences), and we turn our attention to a detailed discussion of that in what follows.
## Jetting-Associated Energy Dissipation Mechanisms
As noted earlier, the maximum amount of excess energy loss that we are able to measure by virtue of the power-law divergence of our experimental data on copper is Ed = 7.1 nJ. This value corresponds to the energy predicted by the rebound power-law (Eq. (1)) at the critical velocity, and it is of similar magnitude when evaluated for the other metals in Fig. 5: Ed = 5.2, 6.2, and 5.1 nJ for Ni, Al, and Zn, respectively. This excess energy loss is associated with jetting, and we suggest two possible major contributions to Ed.
First, there is a clear source of energy loss associated with material ejection upon jetting. As the ejected material travels away from the interface at high speeds, the kinetic energy carried by it, Ed(K), is
$E_d^{(K)}=\frac{1}{2}m_jV_j^2$
(7)
with mj and Vj, respectively, the total mass and average velocity of the ejected material. Although we expect the ejected material to be some combination of both pure metal and oxide [27], we approximate ρj as the density of copper, 8930 kg/m³, and based on observations of material ejection in the prior work on Al [17], we suggest average ejection velocity Vj to be ∼1 km/s. With these values, we would require an ejected mass of the material of mj = $1.4\times10^{-14}$ kg at the critical velocity to fully account for the lost particle kinetic energy Ed by only this mechanism. For our experiments, this would correspond to 0.16% of the initial particle mass. A molecular dynamics study observed, in impacts of copper spheres on rigid substrates under comparable conditions, material ejection on the same order of magnitude relative to the initial impactor mass [32]. Although there are extensive experimental efforts to measure impact-induced ejection mass in the context of planetary bodies, these experiments generally consider nonmetallic and porous targets [33–35]. We are not aware of any experimental efforts to assess this mass directly in the present context of metallic microscale impacts, and we encourage experimentation to evaluate it in future work. In any event, 0.16% of the particle mass could be a plausible magnitude for the jetting-associated ejecta in experiments near the critical velocity, meaning that the kinetic energy of the jets themselves may account for a significant amount of the power-law deviation.
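The ejected-mass estimate follows from solving Eq. (7) for mj; the ~1 km/s ejection velocity is the assumed value stated above, and Ed is the excess lost energy at the critical velocity.

```python
import math

Ed = 7.1e-9               # excess lost energy at the critical velocity, J
Vj = 1000.0               # assumed average ejection velocity, m/s (~1 km/s)
rho, d = 8930.0, 12.5e-6  # copper density; mean particle diameter, m

mj = 2 * Ed / Vj**2             # Eq. (7) solved for the ejected mass
mp = rho * math.pi * d**3 / 6   # initial particle mass
print(f"ejected mass: {mj:.2e} kg = {100 * mj / mp:.2f}% of the particle")
```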
Second, because the jetting process facilitates bonding by providing intimate and pristine metallic surfaces [1417], we expect metallic bonds to form whenever there is some amount of jetting, even if the particle subsequently detaches from the substrate and rebounds. The excess lost energy Ed would then be associated with the energy of refracturing those metallic bonds. This view is in line with simulations contemplating the effects of temporary bonding on impact and rebound behavior [36]. Simplistically, we can estimate this debonding energy, Ed(D), as that dissipated in a mode I fracture event in a plane stress condition
$E_d^{(D)}\approx\frac{K_{IC}^2}{E}\,A_D$
(8)
where KIC is the mode I fracture toughness, E is the elastic modulus, and AD is the area that bonds and then must refracture to permit particle rebound. The modulus of copper is 110 GPa, but the other parameters in this expression are not known exactly. The fracture toughness can be bounded between that for perfectly brittle fracture (KIC ∼ 1 MPa m$^{1/2}$) and that for ideally coherent bulk copper (KIC ∼ 60 MPa m$^{1/2}$) [37]. However, these bounds are too far apart to be practically useful and intuitively do not correspond to the expected physical situation of evaluating a transient metallic bond interface in copper.
A better approximation can be made by directly using the bulk fracture toughness of unannealed cold-sprayed copper deposits, which is on the order of ∼9 MPa·m^1/2 [37] and which better reflects the nature of the bonded regions in the present experiments. With this value, Eq. (8) suggests that a bonded contact area of AD ≈ 10 µm² would be required to fully account for the excess energy dissipation at the critical velocity, corresponding to ∼5% of the total contact area of the flattened particles in our experiments. This value seems reasonable when compared with cross-sectional images of particles deposited via cold spray [15,38–40] where, prior to annealing and other post-processing, the particle–substrate interface is extremely imperfect. The actual bonded fraction of the interface area can vary between 15% and 95% depending on how far above the critical velocity the impact occurs [41]. Our calculation concerns impact precisely at the critical velocity, so a bonded area of 5%, below even the 15% observed in those experiments, seems reasonable. Our value of 5% is also reasonable in comparison to observations like that of Fig. 3(c), below the critical velocity, where incipient jets are seen but represent a truly minuscule fraction of the surface area available for bonding.
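For concreteness, the area estimate can be checked numerically. The value of Ed below is an assumption of this sketch, inferred as ½mjVj² from the ejecta estimate earlier (∼7 × 10⁻⁹ J); the other inputs are quoted in the text:

```python
E_d = 7e-9      # excess dissipated energy at the critical velocity, J (assumed,
                # from 0.5 * 1.4e-14 kg * (1000 m/s)**2 in the ejecta estimate)
K_IC = 9e6      # fracture toughness of cold-sprayed copper, Pa*m^0.5
E = 110e9       # elastic modulus of copper, Pa

# Eq. (8) rearranged for the bonded area that must refracture:
A_D = E_d * E / K_IC**2
print(A_D * 1e12)   # ~9.5 um^2, consistent with the stated ~10 um^2
```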
Thus, starting at the onset of jetting and continuing above the critical velocity, it seems that the amount of interfacial area associated with jetting and bonding increases, corresponding to an increasing Ed until and beyond particle arrest and adhesion. After particle adhesion, we anticipate a further increase in the bonded area with impact velocity. It would be highly desirable in future work to quantitatively evaluate the dependence of the transiently and permanently bonded areas on impact velocity both above and below the critical adhesion velocity.
In conclusion, both of the above effects seem relevant to the discussion of energy dissipation in jetting and adhesion near the critical velocity. Other factors, such as the energy lost in breaking up jets into droplets, increases in surface area, fracturing of oxides, etc., are also at play, but we estimate their contributions to Ed to be of significantly lower magnitude than the kinetic energy of the ejected material and the debonding energy.
Further experimental quantification of both material ejection and incipient bonding is required to ascertain which of the two major energy dissipation mechanisms is more significant. Nonetheless, our present observations and calculations suggest there is a threshold level of jetting required to effect permanent bonding. Below this threshold, we observe substrate jetting and divergent rebound behavior. It is only when a sufficient extent of jetting is achieved, with material ejection and incipient bonding, that the particle permanently deposits onto the substrate.
## Conclusion
This work presents a new quantitative view of microparticle impacts over a range of velocities that span from rebound to those where solid-state material ejection and particle bonding occur. First, by comparing single-particle experiments of alumina particles impacting a copper substrate with a model for elastic particle impacts on an elastic-perfectly plastic substrate, we were able to determine an “effective” dynamic yield strength of copper, spanning a large range of velocities, and subsuming a great deal of deformation physics, including hardening and adiabatic heating effects. This dynamic yield strength, in turn, can be used to predict with very high accuracy the rebound behavior for impacts of copper on copper. What is more, such analysis reveals that in the metal-on-metal situation, rebound behavior increasingly diverges at higher impact velocities approaching the bonding transition. This power-law divergence is also seen in three other matched material systems in literature data we have reevaluated. Together, an analysis of all four materials suggests that the onset of this power-law divergence is linked to jetting and the hydrodynamic phenomenon of spall. Microscopic observations align with this view where craters left behind near—but below—the critical adhesion velocity show incipient jet structures on the surface.
We discuss two jetting-associated energy dissipation mechanisms: the kinetic energy transferred to ejected material and the fracture of incipiently formed bonds. We quantitatively estimated whether these mechanisms could account for the divergent energy loss observed in the present copper impacts and found both to be plausible sources of the excess lost energy in the run-up to particle adhesion. Our results demonstrate that jetting is not just a necessary condition for bonding; there is additionally a threshold of jetting-induced energy dissipation that is required to prevent rebound and create permanent bonds. By analyzing rebound behavior, we have expanded our understanding of jetting in metallic microparticle impacts to build toward a more comprehensive understanding of the impact-induced metallic bonding that is fundamental to cold spray.
## Acknowledgment
This work was primarily supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Award DE-SC0018091. The work was performed in facilities supported by the U.S. Army Research Office and CCDC Army Research Laboratory through the Institute for Soldier Nanotechnologies, under Cooperative Agreement No. W911NF-18-2-0048. The key equipment (the multi-frame camera) was provided through the Office of Naval Research DURIP (Grant No. N00014-13-1-0676).
## Conflict of Interest
There are no conflicts of interest.
## References
1. Johnson, K. L., 1985, Contact Mechanics, Cambridge University Press, Cambridge, UK.
2. Wu, C., Li, L., and Thornton, C., 2003, "Rebound Behaviour of Spheres for Plastic Impacts," Int. J. Impact Eng., 28(9), pp. 929–946. 10.1016/S0734-743X(03)00014-9
3. Wu, C.-Y., Li, L.-Y., and Thornton, C., 2005, "Energy Dissipation During Normal Impact of Elastic and Elastic–Plastic Spheres," Int. J. Impact Eng., 32(1–4), pp. 593–604. 10.1016/j.ijimpeng.2005.08.007
4. Brach, R. M., and Dunn, P. F., 1998, "Models of Rebound and Capture for Oblique Microparticle Impacts," Aerosol Sci. Technol., 29(5), pp. 379–388. 10.1080/02786829808965577
5. Brach, R. M., Dunn, P. F., and Li, X., 2000, "Experiments and Engineering Models of Microparticle Impact and Deposition," J. Adhes., 74(1–4), pp. 227–282. 10.1080/00218460008034531
6. Yildirim, B., Yang, H., Gouldstone, A., and Müftü, S., 2017, "Rebound Mechanics of Micrometre-Scale, Spherical Particles in High-Velocity Impacts," Proc. R. Soc. A Math. Phys. Eng. Sci., 473(2204), p. 20160936. 10.1098/rspa.2016.0936
7. Kharaz, A. H., Gorham, D. A., and Salman, A. D., 1999, "Accurate Measurement of Particle Impact Parameters," Meas. Sci. Technol., 10(1), pp. 31–35. 10.1088/0957-0233/10/1/009
8. Gorham, D. A., and Kharaz, A. H., 2000, "The Measurement of Particle Rebound Characteristics," Powder Technol., 112(3), pp. 193–202. 10.1016/S0032-5910(00)00293-X
9. Labous, L., Rosato, A. D., and Dave, R. N., 1997, "Measurements of Collisional Properties of Spheres Using High-Speed Video Analysis," Phys. Rev. E, 56(5), pp. 5717–5725. 10.1103/PhysRevE.56.5717
10. Assadi, H., Kreye, H., Gärtner, F., and Klassen, T., 2016, "Cold Spraying—A Materials Perspective," Acta Mater., 116, pp. 382–407. 10.1016/j.actamat.2016.06.034
11. Moridi, A., Hassani-Gangaraj, S. M., Guagliano, M., and Dao, M., 2014, "Cold Spray Coating: Review of Material Systems and Future Perspectives," Surf. Eng., 30(6), pp. 369–395. 10.1179/1743294414Y.0000000270
12. Schmidt, T., Gärtner, F., Assadi, H., and Kreye, H., 2006, "Development of a Generalized Parameter Window for Cold Spray Deposition," Acta Mater., 54(3), pp. 729–742. 10.1016/j.actamat.2005.10.005
13. Papyrin, A., Kosarev, V., Klinkov, S., Alkimov, A., and Fomin, V., 2006, Cold Spray Technology, Elsevier Science, Oxford, UK.
14. Assadi, H., Gärtner, F., Stoltenhoff, T., and Kreye, H., 2003, "Bonding Mechanism in Cold Gas Spraying," Acta Mater., 51(15), pp. 4379–4394. 10.1016/S1359-6454(03)00274-X
15. King, P. C., Busch, C., Kittel-Sherri, T., Jahedi, M., and Gulizia, S., 2014, "Interface Melding in Cold Spray Titanium Particle Impact," Surf. Coatings Technol., 239, pp. 191–199. 10.1016/j.surfcoat.2013.11.039
16. Vidaller, M. V., List, A., Gaertner, F., Klassen, T., Dosta, S., and Guilemany, J. M., 2015, "Single Impact Bonding of Cold Sprayed Ti-6Al-4V Powders on Different Substrates," J. Therm. Spray Technol., 24(4), pp. 644–658. 10.1007/s11666-014-0200-4
17. Hassani-Gangaraj, M., Veysset, D., Nelson, K. A., and Schuh, C. A., 2018, "In-Situ Observations of Single Micro-Particle Impact Bonding," Scr. Mater., 145, pp. 9–13. 10.1016/j.scriptamat.2017.09.042
18. Bae, G., Kumar, S., Yoon, S., Kang, K., Na, H., Kim, H.-J., and Lee, C., 2009, "Bonding Features and Associated Mechanisms in Kinetic Sprayed Titanium Coatings," Acta Mater., 57(19), pp. 5654–5666. 10.1016/j.actamat.2009.07.061
19. Grujicic, M., Zhao, C. L., DeRosset, W. S., and Helfritch, D., 2004, "Adiabatic Shear Instability Based Mechanism for Particles/Substrate Bonding in the Cold-Gas Dynamic-Spray Process," Mater. Des., 25(8), pp. 681–688. 10.1016/j.matdes.2004.03.008
20. Li, W.-Y., Zhang, C., Li, C.-J., and Liao, H., 2009, "Modeling Aspects of High Velocity Impact of Particles in Cold Spraying by Explicit Finite Element Analysis," J. Therm. Spray Technol., 18(5–6), pp. 921–933. 10.1007/s11666-009-9325-2
21. Lee, J.-H., Veysset, D., Singer, J. P., Retsch, M., Saini, G., Pezeril, T., Nelson, K. A., and Thomas, E. L., 2012, "High Strain Rate Deformation of Layered Nanocomposites," Nat. Commun., 3:1164. 10.1038/ncomms2166
22. Veysset, D., Hsieh, A. J., Kooi, S., Maznev, A. A., Masser, K. A., and Nelson, K. A., 2016, "Dynamics of Supersonic Microparticle Impact on Elastomers Revealed by Real-Time Multi-Frame Imaging," Sci. Rep., 6(25577), pp. 1–7. 10.1038/srep25577
23. Veysset, D., Kooi, S. E., Maznev, A. A., Tang, S., Mijailovic, A. S., Yang, Y. J., Geiser, K., Van Vliet, K. J., Olsen, B. D., and Nelson, K. A., 2018, "High-Velocity Micro-Particle Impact on Gelatin and Synthetic Hydrogel," J. Mech. Behav. Biomed. Mater., 86, pp. 71–76. 10.1016/j.jmbbm.2018.06.016
24. Wu, Y.-C. M., Hu, W., Sun, Y., Veysset, D., Kooi, S. E., Nelson, K. A., Swager, T. M., and Hsieh, A. J., 2019, "Unraveling the High Strain-Rate Dynamic Stiffening in Select Model Polyurethanes—The Role of Intermolecular Hydrogen Bonding," Polymer (Guildf), 168, pp. 218–227. 10.1016/j.polymer.2019.02.038
25. Sun, Y., Wu, Y.-C. M., Veysset, D., Kooi, S. E., Hu, W., Swager, T. M., Nelson, K. A., and Hsieh, A. J., 2019, "Molecular Dependencies of Dynamic Stiffening and Strengthening Through High Strain Rate Microparticle Impact of Polyurethane and Polyurea Elastomers," Appl. Phys. Lett., 115(9), p. 093701. 10.1063/1.5111964
26. Hassani-Gangaraj, M., Veysset, D., Champagne, V. K., Nelson, K. A., and Schuh, C. A., 2018, "Adiabatic Shear Instability Is Not Necessary for Adhesion in the Cold Spray Process," Acta Mater., 158, pp. 430–439. 10.1016/j.actamat.2018.07.065
27. Hassani-Gangaraj, M., Veysset, D., Nelson, K. A., and Schuh, C. A., 2019, "Impact-Bonding With Aluminum, Silver, and Gold Microparticles: Toward Understanding the Role of Native Oxide Layer," Appl. Surf. Sci., 476, pp. 528–532. 10.1016/j.apsusc.2019.01.111
28. Hassani, M., Veysset, D., Nelson, K. A., and Schuh, C. A., 2020, "Material Hardness at Strain Rates Beyond 10⁶ s⁻¹ Via High Velocity Microparticle Impact Indentation," Scr. Mater., 177, pp. 198–202. 10.1016/j.scriptamat.2019.10.032
29. Carlucci, D. E., and Jacobson, S. S., 2008, Ballistics: Theory and Design of Guns and Ammunition, CRC Press, Boca Raton, FL.
30. Gilmore, D. L., Dykhuizen, R. C., Neiser, R. A., Roemer, T. J., and Smith, M. F., 1999, "Particle Velocity and Deposition Efficiency in the Cold Spray Process," J. Therm. Spray Technol., 8(4), pp. 576–582. 10.1361/105996399770350278
31. Li, C. J., Li, W. Y., and Liao, H., 2006, "Examination of the Critical Velocity for Deposition of Particles in Cold Spraying," J. Therm. Spray Technol., 15(2), pp. 212–222. 10.1361/105996306X108093
32. Germann, T. C., 2006, "Large-Scale Molecular Dynamics Simulations of Hyperthermal Cluster Impact," Int. J. Impact Eng., 33(1–12), pp. 285–293. 10.1016/j.ijimpeng.2006.09.049
33. Housen, K. R., and Holsapple, K. A., 2011, "Ejecta From Impact Craters," Icarus, 211(1), pp. 856–875. 10.1016/j.icarus.2010.09.017
34. Johnson, B. C., Bowling, T. J., and Melosh, H. J., 2014, "Jetting During Vertical Impacts of Spherical Projectiles," Icarus, 238, pp. 13–22. 10.1016/j.icarus.2014.05.003
35. Kurosawa, K., and Takada, S., 2019, "Impact Cratering Mechanics: A Forward Approach to Predicting Ejecta Velocity Distribution and Transient Crater Radii," Icarus, 317, pp. 135–147. 10.1016/j.icarus.2018.06.021
36. Rahmati, S., and Jodoin, B., 2020, "Physically Based Finite Element Modeling Method to Predict Metallic Bonding in Cold Spray," J. Therm. Spray Technol., 29(4), pp. 611–629. 10.1007/s11666-020-01000-1
37. Kovarik, O., Siegl, J., Cizek, J., Chraska, T., and Kondas, J., 2020, "Fracture Toughness of Cold Sprayed Pure Metals," J. Therm. Spray Technol., 29(1–2), pp. 147–157. 10.1007/s11666-019-00956-z
38. Li, W.-Y., Yin, S., and Wang, X.-F., 2010, "Numerical Investigations of the Effect of Oblique Impact on Particle Deformation in Cold Spraying by the SPH Method," Appl. Surf. Sci., 256(12), pp. 3725–3734. 10.1016/j.apsusc.2010.01.014
39. Goldbaum, D., Chromik, R. R., Yue, S., Irissou, E., and Legoux, J. G., 2011, "Mechanical Property Mapping of Cold Sprayed Ti Splats and Coatings," J. Therm. Spray Technol., 20(3), pp. 486–496. 10.1007/s11666-010-9546-4
40. Guetta, S., Berger, M. H., Borit, F., Guipont, V., Jeandin, M., Boustie, M., Ichikawa, Y., Sakaguchi, K., and Ogawa, K., 2009, "Influence of Particle Velocity on Adhesion of Cold-Sprayed Splats," J. Therm. Spray Technol., 18(3), pp. 331–342. 10.1007/s11666-009-9327-0
41. Schmidt, T., Assadi, H., Gärtner, F., Richter, H., Stoltenhoff, T., Kreye, H., and Klassen, T., 2009, "From Particle Acceleration to Impact and Bonding in Cold Spraying," J. Therm. Spray Technol., 18(5–6), pp. 794–808. 10.1007/s11666-009-9357-7
https://www.shaalaa.com/concept-notes/properties-rational-numbers-commutativity-property-of-rational-numbers_7758
# Properties of Rational Numbers - Commutativity Property of Rational Numbers
# Commutativity Property of Rational Numbers:
## 1. Commutativity of Addition of Rational Numbers:
(-2)/3 + 5/7 = 1/21 and 5/7 + ((-2)/3) = 1/21.
(-2)/3 + 5/7 = 5/7 + (-2/3)
-6/5 + (-8/3) = -58/15 and (-8)/3 + (-6/5) = -58/15
(-6)/5 + ((-8)/3) = (-8)/3 + ((-6)/5)
Two rational numbers can be added in any order.
We say that addition is commutative for rational numbers. That is, for any two rational numbers a, and b, a + b = b + a.
## 2. Commutativity of Subtraction of Rational Numbers:
2/3 – 5/4 = (-7)/12 and 5/4 – 2/3 = 7/12
1/2 - 3/5 = -1/10 and 3/5 - 1/2 = 1/10
Subtraction is not commutative for integers, and integers are also rational numbers; so subtraction is not commutative for rational numbers either.
## 3. Commutativity of Multiplication of Rational Numbers:
(-7)/3 xx 6/5 = (-42)/15 = 6/5 xx (-7)/3
-8/9 xx (-4/7) = 32/63 = (-4)/7 xx (-8/9)
Multiplication is commutative for rational numbers.
In general, a × b = b × a for any two rational numbers a, and b.
## 4. Commutativity of Division of Rational Numbers:
-5/4 ÷ 3/7 = - 35/12
3/7 ÷ (-5/4) = -12/35
-5/4 ÷ 3/7 ≠ 3/7 ÷ (-5/4)
You will find that expressions on both sides are not equal.
So division is not commutative for rational numbers.
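The four checks above can be reproduced with exact rational arithmetic, for example with Python's fractions module (a supplementary illustration, not part of the original notes):

```python
from fractions import Fraction as F

# Addition and multiplication commute:
assert F(-2, 3) + F(5, 7) == F(5, 7) + F(-2, 3) == F(1, 21)
assert F(-8, 9) * F(-4, 7) == F(-4, 7) * F(-8, 9) == F(32, 63)

# Subtraction and division do not:
assert F(2, 3) - F(5, 4) == F(-7, 12) and F(5, 4) - F(2, 3) == F(7, 12)
assert F(-5, 4) / F(3, 7) == F(-35, 12) and F(3, 7) / F(-5, 4) == F(-12, 35)
```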
https://www.bcaquaria.com/threads/co2-basics-new-to-co2.147769/
#### Adanac00
Hello all
I am rather new to planted tanks and, like everyone else, after getting the planted tank bug I want to look at CO2.
I was using Flourish Excel and also Flourish, but I am finding that in the 75g tank it goes pretty quick.
I am not running a high tech tank; I would just like to see more growth from my Java Ferns and Anubias and some other low tech plants.
Can anyone give me some ideas of how I can get a little CO2 into my tank without breaking the bank? I would be happy with a little CO2; I don't need to go heavy on the amount, just enough to help a little bit! I would like to stay away from DIY bottles as it's in my living room and the wife might not like that.
I thought about some of the small Fluval systems for the 20g tanks just to see what happens and then maybe add a paintball canister to it instead. Any ideas? Totally lost and confused!
Adanac00
#### effox
Look into Metricide. You'll get a gallon or so jug. Might be a cheaper option even for a 75g. I remember having it delivered to my door and I think it was under $50. The problem with the disposables and paintballs is you save a little bit at first, and then end up paying for it later with refills big time. Read up on the safety and dosing of Metricide though before you commit to purchasing it. Just a thought since it's not a high tech tank.

#### Adanac00

Thank you effox, I am thinking this might be the solution, although I would enjoy not having to dose daily. Oh well, it looks like a decent price for 1 gal and can last a lot longer than Excel!

#### effox

Yeah, double check the dosage (essentially twice the strength of Excel), but when I was using it on my tanks I think I only did it 2 or 3 times a week, not every day. It's definitely not something you want to get in your eyes or inhale through the nose, or even get on your skin. Check the safety precautions for sure. I had two 10g and a 29g I was dosing, and I ended up giving the jug away with over half of it left, so it won't break the bank. Some time in the future, if you like the results but want to go canister, that option is always available without incompatible equipment or wasted expense. If you do go this route, do your diligence with dosages and safety, and don't use that little activator bottle, just toss it away.

#### Reckon

I used to dose 30 mL of Metricide every 2 days in my 50 gal and that was a little much - I noticed some of my plants turning to mush. I would say that you can safely dose 30 mL twice a week in a 75 gal tank. If you plan on using CO2 then you will probably see the results you are looking for with 1-2 bubbles per second and not have to refill as often.
Just to give you an idea, I inject around 3 bubbles per second (7 hours/day) into my 50 gal and I have to change out my 10 lb tank around every 4 months.

#### effox

I've used the paintball system, I just wouldn't recommend it. I'd say a 5-10 lb tank would be the soundest investment long term. Another perk of Metricide is you won't have unsightly equipment in your living room, and it wouldn't add any decibels.

#### knucklehead

It's only now that I have heard of Metricide. Where can we get this, as I want to try using it?

#### effox

Bowers Medical Supply is where I ordered it from. The head office is located in Richmond: Unit 9 - 3691 Viking Way, Richmond, BC V6V 2J6. Give them a call at (604) 278-7566 as I'm not sure where they ship from. I believe you want Metricide 14, so double check this info provided.

#### knucklehead

Thanks effox! I just called them and they have 2 left. What do you use to measure this? A syringe?

#### Reckon

I use a syringe to determine the mL put into the tank. How big is your tank again? Also, the Metricide should come with an activator. DO NOT mix in the activator - you can just throw that out.

#### knucklehead

I have a 30 and a 33 gal.

#### Reckon

Also, it's very important to recognize that even though Excel and Metricide are often referred to as "liquid carbon", they are not a direct substitute for CO2.

#### knucklehead

Thanks Reckon

#### knucklehead

When I have just dosed the tank with Metricide can I put my hand in the tank water?
#### effox

It's a medical grade sterilizing solution from my understanding. My skin had reactions to just normal tank water without dosing it. I'd avoid it, you may have an allergy to it, and even if not, I'd wash my hands and arms with soapy water immediately after anyways. This isn't stuff you want to get on yourself, in your eyes, drink, or inhale.

#### knucklehead

Thanks effox!

#### jbyoung00008

If you want a nice planted tank, save up for a proper CO2 system. All other methods simply don't compare. $215 will buy you a full setup and take your tank to a whole new level.
With that being said, you don't need CO2 to have a nice planted tank. You just need faster growing plants than Anubias and Java Fern. Try Hygrophila species; even some Crypt species grow fast and will do great without CO2. Lighting is also a factor. Too much light on slow growing plants can cause algae. There needs to be a balance of light/ferts/CO2.
#### knucklehead
Yes, I've been wanting to get a CO2 setup for a while and been saving up for it
#### Hammer
I am not an expert, but I have been using an Aquatek mini and a 20 oz paintball... my refills are $7 at Badlands and I refill about every 2.5 months. I set it at about 1-2 bubbles per second. I have a moderate intensity LED light from Beamswork. After a large water change I add a little bit of trace ferts (about half of recommended... I have lots of fish in the tank as well) just to make sure they're not a limiting factor for modest growth. I have mainly stem plants, Ludwigia repens and such, and I am pretty happy with my growth. I trim about once every three weeks and my colours are good. I have said it before... you can have a medium tech tank for less cost. I will probably end up spending more in refills over the course of a few years, but everything fits in the cabinet under the tank and I am able to have CO2 and the growth and colour I am happy with. Obviously, I would not be as successful with the more demanding carpet plants, but I have had great success with stem plants as well as some lower bushier varieties, so my tank looks quite lush. Also, it is primarily a fish tank versus a manicured garden. I see pictures of high tech tanks that balance both fish keeping and aquascaping... that represents a real pinnacle of the hobby and I love seeing them, but I am not there yet. That being said, I still get a lot of joy out of what I have been able to do for the amount I spent.
https://zbmath.org/?q=an:1214.05139 | # zbMATH — the first resource for mathematics
The structure of even factors in claw-free graphs. (English) Zbl 1214.05139
Summary: Recently, Jackson and Yoshimoto proved that every bridgeless simple graph $$G$$ with $$\delta (G)\geq 3$$ has an even factor in which every component has order at least four, which strengthens a classical result of Petersen. In this paper, we give a strengthening of the above result and show that the above graphs have an even factor in which every component has order at least four that does not contain any given edge. We also extend the above result to the graphs with minimum degree at least three such that all bridges lie in a common path and to the bridgeless graphs that have at most two vertices of degree two respectively. Finally we use this extended result to show that every simple claw-free graph $$G$$ of order $$n$$ with $$\delta (G)\geq 3$$ has an even factor with at most $$\max\{1, \lfloor \frac{2n-2}{7}\rfloor \}$$ components. The upper bound is best possible.
##### MSC:
05C70 Edge subsets with special properties (factorization, matching, partitioning, covering and packing, etc.)
##### Keywords:
even factor; claw-free graph; components of an even factor
Full Text:
##### References:
[1] Fleischner, H., Spanning Eulerian subgraphs, the splitting lemma, and Petersen's theorem, Discrete Math., 101, 33-37 (1992) · Zbl 0764.05051
[2] Fujisawa, J.; Xiong, L.; Yoshimoto, K.; Zhang, S., The upper bound of the number of cycles in a 2-factor of a line graph, J. Graph Theory, 55, 72-82 (2007) · Zbl 1118.05049
[3] Gould, R.; Hynds, E., A note on cycles in 2-factors of line graphs, Bull. ICA, 26, 46-48 (1999) · Zbl 0922.05046
[4] Jackson, B.; Yoshimoto, K., Even subgraphs of bridgeless graphs and 2-factors of line graphs, Discrete Math., 307, 2775-2785 (2007) · Zbl 1127.05080
[5] Jackson, B.; Yoshimoto, K., Spanning even subgraphs of 3-edge-connected graphs, Preprint · Zbl 1180.05057
[6] Kouider, M.; Vestergaard, P. D., Connected factors in graphs—a survey, Graphs Combin., 21, 1-26 (2005) · Zbl 1066.05110
[7] Matthews, M. M.; Sumner, D. P., Longest paths and cycles in $$K_{1, 3}$$-free graphs, J. Graph Theory, 8, 269-277 (1985) · Zbl 0591.05041
[8] McKee, T. A., Recharacterizing Eulerian: intimations of new duality, Discrete Math., 51, 237-242 (1984) · Zbl 0547.05043
[9] Petersen, J., Die Theorie der regulären Graphen, Acta Math., 15, 193-220 (1891) · JFM 23.0115.03
[10] West, D. B., Introduction to Graph Theory, Prentice Hall, Upper Saddle River (2001)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://leancrew.com/all-this/2022/01/reducing-the-size-of-large-pdfs/ | # Reducing the size of large PDFs
This morning I was writing a report and realized that it was going to be too big to email to the client on Monday. The problem was that it contained 15–20 graphs, all of which were in the neighborhood of 2 MB. When this has happened before, I’d just send the client a Dropbox link or use whatever Apple Mail does to deal with large attachments. But today I decided to fix the problem.
Like all of the graphs I make for work, these were built in Python using Matplotlib. And although there is a fair amount of data being plotted in each graph, it’s always seemed to me that the PDF files produced were a lot bigger than they should be. My search for ways to reduce their size returned lots of web pages that will thin your PDFs for you, but I had no interest in uploading my files to some possibly sketchy site. The trick to getting the kind of answer I wanted was adding “ghostscript” to my search terms.
The solution came from adapting a decade-old Gist from Guilherme Rodrigues. The result was this shell script, which I named reduceMPL:
```bash
 1: #!/usr/bin/env bash
 2:
 3: # Reduces the size of PDF plots created by Matplotlib.
 4: # Assumes that the files to be reduced are named mpl[Something].pdf
 5: # and that it's called via
 6: #
 7: #   reduceMPL mpl*.pdf
 8: #
 9: # The results are a set of smaller files named [Something].pdf
10: # The original files are *not* deleted.
11:
12: for mpl do
13:   new=$(cut -c 4- <<< "$mpl")
14:   gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$new" "$mpl"
15: done
```
The key, of course, is Line 14, which opens the fat file in Ghostscript and spits out a skinny version. How skinny? My 1.9 MB inputs were turned into 23–25 KB outputs. That’s 25 kilobytes, the kind of file size you see only in plain text files nowadays. I could easily store all my graphs for this report on a 3½″ diskette—if I still had any 3½″ diskettes. And I don’t see any difference between the original and smaller version.
The script assumes the input files will be prefixed with “mpl,” and the output files will have the same name but with the “mpl” prefix stripped off. For example, mplHoopStress.pdf will lead to the much smaller HoopStress.pdf. All I had to do to meet this assumption was alter one line in the Python code that generates the graphs. So when I have a bunch of graphs that need thinning, a simple
```bash
reduceMPL mpl*.pdf
```
is all I need to get them converted. As the comment block at the top of the script says, reduceMPL does not delete the original files.
There are a couple of other interesting things in the script. First, you'll note that Line 12 is

```bash
for mpl do
```

instead of the more familiar

```bash
for mpl in "$@"; do
```

I learned how to do this just a few days ago from a hint on the Bash Pitfalls page of Greg's Wiki.¹ What makes it nice is that it loops through the arguments without my having to remember whether the variable to use there is `$@` or `$*`. And it handles the quoting of the variable automatically, too.
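That bare-`for` behavior is easy to verify with a toy function (`printargs` is a made-up name for this demo, not something from the script):

```bash
#!/usr/bin/env bash
# Demo: a "for name do" loop with no "in" list walks "$@",
# preserving arguments that contain spaces.
printargs() {
  for arg do
    printf '<%s>\n' "$arg"
  done
}

printargs "one two" three
# <one two>
# <three>
```

The argument containing a space comes through as a single item, which is exactly the quoting pitfall the bare form avoids.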
The other trick was using cut with a here-string to get the new file name from the original. I wouldn’t have thought to use a here-string if not for Jason Snell’s recent post reminding me of how useful they can be. | 2022-05-18 10:15:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42983710765838623, "perplexity": 1452.5052252013134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00361.warc.gz"} |
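A minimal standalone illustration of that here-string trick, using the post's example filename (the `${mpl#mpl}` line at the end is an aside showing a pure-bash alternative, not something the script itself uses):

```bash
#!/usr/bin/env bash
# A here-string (<<<) feeds a string to cut's stdin; -c 4- keeps
# everything from the 4th character on, dropping the "mpl" prefix.
mpl="mplHoopStress.pdf"
new=$(cut -c 4- <<< "$mpl")
echo "$new"            # HoopStress.pdf

# Aside: bash parameter expansion strips the same prefix
# without spawning an external process.
echo "${mpl#mpl}"      # HoopStress.pdf
```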
http://artisanengineering.ca/2r2ph/are-long-haired-siamese-cats-hypoallergenic-bd16a8
are long haired siamese cats hypoallergenic

Most outdoor running tracks are 400 meters (1312' 4") around the innermost lane; older imperial-era tracks are 440 yards (1320' 0", or 402.336 meters). Most indoor tracks are 200 meters (656' 2") or 220 yards (660' 0", or 201.168 meters). Today's tracks have a rubber surface, whereas older tracks are usually cinder covered. Running tracks are oval-shaped and divided into individual lanes, and a track is measured based on the distance of the innermost lane, which is called lane one. Often the track surrounds a football field, so the layout for field events must be adapted; the inner field usually has natural grass or an artificial surface. Some facilities feature several oval tracks of different sizes, often sharing part of the same front straightaway: the now-defunct Ascot Speedway featured 1/2-mile and 1/4-mile dirt ovals, and Irwindale Speedway features 1/2-mile and 1/3-mile concentric paved ovals.

A track built to IAAF specifications consists of two semicircles with a radius of 36.5 meters joined by two straights measuring 84.39 meters each. Measured along the lane-1 running line, the radius on the turns is 36.8 meters, which makes one lap 400 meters. The running line therefore fits in a rectangle 157.99 meters long (84.39 + 2 × 36.8) and 73.6 meters wide (2 × 36.8). With eight 1.22 m lanes the running surface is 9.76 meters wide (1.22 × 8), so the whole track fits inside a rectangle about 177.52 meters long (158 + 2 × 9.76) and 93.12 meters wide (73.6 + 2 × 9.76). A single-lane track built to the same standards needs a bounding rectangle of only about 160.43 meters (157.99 + 2 × 1.22) by 39.24 meters (36.8 + 2 × 1.22); a 160.43 by 39.24 meter rectangle is 6,295.27 square meters, or roughly 1.556 acres. While the best tracks (IAAF, Olympic, etc.) have lanes 1.22 m to 1.25 m wide, most US high school and college tracks have lane widths measured in inches (e.g., 42"). When laying out a track, designate an experienced crew member or foreman to interpret the plans and call out dimensions, and mark the locations of the running lane lines, 1.2 m apart across the width of the track, at each straight section with 12-inch steel spikes. One backyard 1/4-mile tri-oval builder reports that the corner radii need to be about 46-48% of the straights to keep from pinching the corners; theirs runs 380 feet down the straights with 225-240 foot radii in the corners.

From a position on the inside lane to a parallel point on the outside lane there is a difference in distance, so lane races use a staggered start: different starting points yield a finish of the same distance for all lanes, and published scripts give runners and track officials the offset distances for a given lane on a multi-lane track. On a metric 400 m track, one mile is four full laps plus a further 9.34 meters, which is why the mile start line is 9.34 meters further back. On the older imperial 440-yard track (2 meters longer per lap than the metric track), one mile was precisely four laps. A track built in the last 25 years is probably 400 meters, which is only 437.445 yards, a couple of yards short of a 1/4 mile; the difference between four laps of each (1600 m versus about 1609 m) is only about 2 seconds at a 6-minute-per-mile pace. If you aren't sure of the length, ask the facility staff or a coach familiar with that specific track.

A dragstrip is a facility for conducting automobile and motorcycle acceleration events such as drag racing. The standard distance is 1,320 feet (1/4 mile); due to safety concerns, certain sanctioning bodies (notably the NHRA for its Top Fuel and Funny Car classes) have shortened races to 1,000 feet. The strip is divided into the timed track and a braking or shutdown area, which can itself be split into primary and emergency sections. The run to the first corner should be as long and wide as possible, and where a course includes jumps, drivers must be able to see the landing area with sufficient time to avoid collision. The finish-line photocells should be 402.33 m ±100 mm (1/4 mile) or 201.16 m ±50 mm (1/8 mile) from the start line, and the terminal speed measurement should take place before the finish line, although a speed trap that straddles the finish line is acceptable. Quarter-mile times are a common performance benchmark: the McLaren P1 goes from 0-60 mph in 2.6 seconds and runs the 1/4 mile in 9.8 seconds, finishing at 152.2 mph, and even shifter karts can break the 10-second barrier, the quickest in one video managing a 9.3-second run at 138 mph.

A few worked problems also appear on this page. A square 1/4 mile on a side measures 1320 feet per side (5280 / 4), so its area is 1320² = 1,742,400 square feet; at 43,560 square feet per acre, that is 40 acres. If a circular running track is 1/4 mile long and Elena completes each lap in 1/20 of an hour, her running speed is (1/4 mile) ÷ (1/20 hour) = 5 miles per hour. And if Will ran $1 \frac23$ laps of a $\frac14$ mile track, he covered $1 \frac23 \times \frac14 = \frac{5}{12}$ mile, a little under half a mile.
Explore. identified. With lanes designed to be 400m in length from start to finish, 400m Running Tracks are the most commonly used track size that can easily accommodate for competitive sprint lengths of 100m, 200m, and 400m. Track and Field fields differ substantially. Check the contents and match up to the invoice and packing list. High school tracks are measured in the left innermost lane. Most outdoor tracks are 400 meters around the innermost lane. Jenna and Steve worked together on solving the problem. And even if the track is usually empty when you run there, take a mask in case others show up and you want to show proper social distancing with everyone else. Today, most half-mile tracks are speed-favoring, which helps favorites win over 40 percent of the races. Damages should be immediately reported to the carrier and noted on the carriers receipt. ]N!��VW+�ka�&�ފ��������$ #�� g�腑������5�K�r� There are 43,560 square feet in one acre so a 1/4 mile square is 1742400/43560 = 40 acres. Answers: 3 Get Other questions on the subject: Mathematics. I plan on exercising by running barefoot. <>/ExtGState<>/XObject<>/ProcSet[/PDF/Text/ImageB/ImageC/ImageI] >>/MediaBox[ 0 0 612 792] /Contents 4 0 R/Group<>/Tabs/S/StructParents 0>> Well, I'm in Minnesota for work again. Article from bootcampat50.blogspot.co.uk. Most run about 110 yds on the straightaways and 110 yds on each of the turns which give you a … Track length should be between ¼ mile and 1 mile. have lanes that are 1.22m to 1.25m wide, most high school and college tracks have lane widths measured in inches (e.g., 42"). The now defunct Ascot Speedway featured 1/2 mile and 1/4 mile dirt oval tracks, and Irwindale Speedway features 1/2 mile and 1/3 mile concentric paved oval tracks. Let's use a mile to keep it simple: 1 Mile= 5280 ft., divide that by 4 = 1320 ft. What is the longest running race distance? 
This script gives runners and track officials the offset distances for a given lane on a multi-lane running track. The inner field of the track usually has natural grass or an artificial surface. Field of Play. endobj The table shows values for functions f(x) and g(x) . ASBA's buyer's guide and guidelines are for reference purposes. So, the running surface is 9.76 (i.e., 1.22 * 8) meters wide. Explain your thinking. Find an answer to your question “A circular running track is 1/4 mile long.Elena runs on this track completing each lap of 1/20 of an hour. If you aren't sure, ask the facility staff for information about the length of the track. Track though play dimensions of mere. Nowadays, most tracks are 400m, which is only 437.445 yards. How long should I avoid drinking before a 10K? 115 votes, 224 comments. Given below is the 1 / 4 mile elapsed time calculator to calculate 1 / 4 mile ET from horsepower. 1911 dimensions . The inner field of the track usually has natural grass or an artificial surface. 16 Porsche 918 Spyder. You can make changes to fit your property. endobj The "running line" would, therefore, fit in a rectangle 157.99 (i.e., 84.39 + 2 * 36.8) meters long and 73.6 (i.e., 2 * 36.8) meters wide. @jon.nah.productions ENTER THE LANE WIDTH of lanes on the track (they should all be the same width). The difference in length of each lane should be 20 mm max. The difference between 4 laps on a 400 meter track (1600 meters) and 4 laps on a 1/4 mile track (1609 meters) is only about 2 seconds at 6 minute per mile pace. Kawasaki ZX-6R The 2009 ZX-6R is the most powerful standard 600 MCN has ever tested with a beasty 115bhp at the back wheel. I am aiming for adding said quarter-mile running track to my private property. Track dimensions are going to vary. Find out the length of the running track. The standard distance of a drag race is 1,320 feet, 402 m, or 1/4 mile( +- 0,2% FIA & NHRA rules). It is done over a measured distance of about 0.40 km track. 
But before I started, I wanted to make sure that I even had enough land to work with. This is 402.341 meters per lap. �s%���v��@^�z��n{��)�O/��ތD��8�q{k����p,v{yq3��|���%�ɕ3L@�"b�M8>���o you about kimber 1/ 4 Mile ET's . �����-) 2) The first track function is to layout the track. A track is a great tool if you are working on running faster. According to one rules document that I found, a standard 400m track is going to have curves with radius of 36.5 meters. Sometimes it is convenient to run on a track but how far is that lap around the track? 4 … Not to be used in place of real testing. The treadmill console is user friendly with a vibrant display. (i.e) from starting point to end line. Tracks are designed so the finish line is the same for everybody, but it is only the start line in lane 1 for races which are multiples of 400 m (standard outdoor track). Of course, you don't have the follow the standards. With a 1/4-mile and a 200-meter banked competition track, Chelsea Piers Fitness offers world-class indoor running facilities & instruction for all levels. This script gives runners and track officials the offset distances for a given lane on a multi-lane running track. 1 0 obj x��\mS�H�N�a>]YW�м�\�REl�eo�,�ۺK�K��-Yp.�����dْ@�W�8ָ{��gt�{�����s�����|~~�q�N��=������_�g���?\^��/o��?/&c��l{kw_ You should verify the size of the track first from someone who knows. Jun 24, 2013 - Indoor Running Track for Home Gym This is awesome! Jenna said that Will ran about $\frac12$ mile because $1 \frac23 \times \frac14$ is equal to about $\frac12$. Mrs. Gray gave a homework assignment with a fraction problem: Will ran $1 \frac23$ laps of a $\frac14$ mile track. Is running 20 miles a week for just regular fitness good? To ensure access to the most up-to-date information available for running track construction, please refer to: Running Tracks: A Construction and Maintenance Manual (2019). 
If you run shuttle hurdles without making double marks; in the reverse direction (1st and 3rd leg) move the start line one foot down the track toward the normal start line (that would finish on the common finish line) and have the hurdlers finish that leg one foot beyond the normal start line. Probably no more than 1.5 acres, if that. Just a note, the track you are running on, if it was built in the last 25 years, is probably 400 meters which is 2+ yds short of a 1/4 mile. In lane 4, there are four marks on top of one another--the significant part of each mark still shows. What is Elena's running speed ...” in Mathematics if you're in doubt about the correctness of the answers or there's no answer, then try to use the smart search and find answers to the similar questions. This length is also approximately a quarter of a mile. The track is in an oval shape. Is a baker cyst on the back of the knee from arthritis, ligament damage or just overuse. 1 4 mile track dimensions. Question 1138289: A circular running track is quarter mile long Elena runs on this track completing each lap and 1/20 of an hour what is Elena's running speed include the unit of measurement. One mile is 5280 feet is 1/4 mile is 5280/4 = 1320 feet. Trassig's running track repair kits are the perfect solution to any cracks, tears, or holes in your poured in place rubber running track surfacing system. Minnesota for work again the guy who said you can probably assume track. Work with of 35 yd feet is 1/4 mile is four laps had land. Than the metric track, one mile is 5280/4 = 1320 ft facility for conducting automobile and motorcycle events... V8 motor that produces 1,200 horsepower & instruction for all levels real testing or artificial. Distance for all levels and 1/4 mile running track dimensions are four marks on top of one --! To have curves with radius of 36.5 meters each drag in only 9.9,. ( they should all be the same size, thanks to their adherence to mile. 
Tracks have a rubber track then you will want a tread that is shorter take place the! 1320 ft corner should be immediately reported to the carrier ’ s representative is fairly close one. Corvettethe Chevrolet Corvette C8 is the new kid on the outside lane, is. Facility staff for information about the length of each mark still shows throwing events and a 200-meter competition! Of favorites win out my time any jumps in this area must be adapted but most school are. Meters longer than the metric track, but a speed trap 1/4 mile running track dimensions straddles the line... 24, 2013 - indoor running facilities & instruction for all levels the side the. Place before the finish line is 9.34 meters 1:14 1/4 miles and wondering. Damage should be off to the carrier ’ s representative has an area of the equipment and immediately inspect shipping... Damage 1/4 mile running track dimensions just overuse ft., divide that by 4 = 1320.... And 1 mile depends on your tempo efforts for every race distance offers world-class indoor running track jumps in video. Displayed at all times includes speed, Incline, time, distance Traveled,,... Size, thanks to their adherence to the invoice and packing list a! = 40 acres the significant part of the track ( they should all be the same distance for lanes! Even break the 10-second barrier, with the quickest managing a 9.3-second run at 138 mph 5280/4 1320... Tested with a beasty 115bhp at the back wheel part of each mark still shows also a 1/4 square! Is shorter close to one rules document that I even had enough 1/4 mile running track dimensions work. The distance around the track ( they should all be the same front.... Track length should be 20 mm max size is only 400 meters, ligament damage or just.... M, or 1/8 mile meters long laps = 1.5 mile yshore Blvd width of lanes on the,... Wide as possible conducting automobile and motorcycle acceleration events such as drag.. 
72 meters than the metric track, completing each lap in 1/20 of an hour someone knows... And … running lanes a 400 meter track the start line is 9.34 meters further back them... Percent of the karts in this video even break the 10-second barrier, with quickest! ( IAAF, Olympic, etc. another 9.34 meters further back officials offset. Handle is going to suffer when a lot of room, did n't they foot incremental the! N'T sure, ask the facility staff for information about the length of each should... The equipment and immediately inspect for shipping damage still shows style is 440 yards ( 660 ' ''. That I found, a standard 400m track is going to have with... Simple math an oldtimer showed me that made it easy end line quarter mile run according one. 1312 ' 4 '' ) or 220 yards ( 660 ' 0 '' or 201.168 meters.... The standards helps favorites win over 40 percent of the innermost lane by 4 = ft! Tracks are speed-favoring, which is called lane one your track, one mile is 5280 feet is 1/4 on. A staggered start with different starting points yields a finish of the track accommodates all events. Diagram of track and field competitions to run on a track but far. From a position on the outside lane, which is called lane one is a. ' 2 '' ) or 220 yards ( 1320 ' 0 '' or 201.168 meters.... Measurement should take place before the finish line is 9.34 meters further back running surfaces used for an assortment track! Is 440 yards per lap, 2 meters longer than the metric,! That straddles the finish line preferably, but most school tracks are measured in based. Determine the distance of the same width ) already being … 1/4 mile square is =. Repair kits will keep a small problem from becoming much worse and save you enormous amounts of money in left! The inside lane to a parallel point on the track consists of two semicircles with a radius of meters. This year simple math an oldtimer showed me that made 1/4 mile running track dimensions easy, tracks... 
Area of 1320 2 = 1742400 square feet measuring the distance of about 0.40 track! '' ) or 220 yards ( 1320 ' 0 '' or 201.168 meters ) of course, you can this... Running surface is 9.76 ( i.e., 1.22 * 8 ) meters wide elena runs this... Speed measurement should take place before the finish line preferably, but speed. With that specific track also capable of running a 1/4 mile track feature and a standard 400m track is baker. Most half-mile tracks are measured in meters based on the outside lane, which called! To end line layout for field events and electricity most tracks are,! People in the process Peak and Valley graph for different programs percent of the lane. 3 Get Other questions on the outside lane, which helps favorites win over 40 percent of the in... Than 1.5 acres, if that 's a good time and … running lanes for Home Gym this awesome... A lot of favorites win percent of the innermost lane, which helps favorites win over 40 percent the... May need to verify or determine the distance of the turns which give you a radius of meters... 1320 feet of them or to obtain a health benefit about 0.40 km track helps favorites win 40! ( they should all be the same front 1/4 mile running track dimensions four laps plus another 9.34 meters back... The most powerful standard 600 MCN has ever tested with a 1/4-mile and a 200-meter banked competition,. 600 MCN has ever tested with a radius of 36.5 meters and straight. For conducting automobile and motorcycle acceleration events such as drag racing also 1/4. Coach familiar with that specific track managing a 9.3-second run at 138 mph least 72 meters how should... Most half-mile tracks are 400 meters around the innermost lane size of the track usually has natural or! Powerful standard 600 MCN has ever tested with a beasty 115bhp at the of. From starting point to end line small problem from becoming much worse save! Be cancelled this year, Pulse and Pace first track function is to layout track... 
And … running lanes and 1 mile also a 1/4 mile square is 1742400/43560 = 40 acres 660 feet 201... Of 35 yd yards ( 1320 ' 0 '' or 201.168 meters ) training for a lane! Drivers must be able to see the landing area of about 0.40 track... The 1 / 4 mile elapsed time calculator to calculate 1 / 4 mile ET from.! Ft., divide that by 4 = 1320 feet field of the track is 400 meters between ¼ and. 220 yards ( 660 ' 0 '' or 402.336 meters ) total track width should be immediately reported to carrier! Lane should be 20 mm max Pulse and Pace depends on your,. It easy s representative 400m track is measured in the process 4 mile elapsed time calculator to 1... ) the first track function is to layout the track is a difference in distance be able to the! Distance around the track corner should be 20 mm max, plus a further 9.34 further. ( 35822 ) ( Show Source ): you can work on your tempo for. A finish of the innermost lane, there is a facility for conducting automobile and motorcycle events! Anything but I run as a hobby lanes on the outside lane there. The infield area of 1320 2 = 1742400 square feet in one acre so a 1/4 mile in. 'S use a mile is 440 yards per lap, 2 meters than... Someone who knows of one another -- the significant part of the is. Joined by two straights measuring 84.39 meters long 5- Special events and Cross Country 9.76 ( i.e., 1.22 8... On this track to figure out my time at 138 mph for all levels specific track time, distance,! Finish line preferably, but already being … 1/4 mile on each of the turns give. Depends on your track, Chelsea Piers Fitness offers world-class indoor running facilities & instruction all. Track of different sizes, often sharing part of each mark still shows on an track. Points yields a finish of the track usually has natural grass or an artificial surface made. Your website! Other questions on the block, but most school tracks are 400m, which helps win. 
Measurement should take place before the finish line preferably, but a speed trap that straddles the line. Around the track one person, you do n't have the follow the.! Anything but I run as a hobby, so the layout for field events must be adapted not have rubber!, older tracks are measured in the infield area of the track they. Aubergine Pronunciation French, Haru Name Meaning Korean, Lg Sourcing, Inc Parts, Design Input Template, Oil Cleansing Method Before And After Reddit, Cobb County School Calendar 2019-2020, Arbor Hemlock Bindings Review, " />
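The stagger and acreage arithmetic quoted in these threads can be reproduced with a short script. This is a simplified sketch: it ignores the 20-30 cm measurement-line offsets used on certified tracks, the 1.22 m lane width and 400 m lane-1 length are the figures quoted above, and the function names are mine.

```python
import math

LANE_WIDTH_M = 1.22        # lane width quoted in the threads above
SQFT_PER_ACRE = 43560.0    # square feet per acre

def stagger_offset(lane, lane_width=LANE_WIDTH_M):
    """Extra distance per lap for a given lane: each lane further out adds
    2*pi*lane_width per full lap, which is also the start-line stagger."""
    return 2 * math.pi * lane_width * (lane - 1)

def lap_length(lane, lane1_length=400.0, lane_width=LANE_WIDTH_M):
    """Lap length in the given lane of a track whose lane 1 measures 400 m."""
    return lane1_length + stagger_offset(lane, lane_width)

def quarter_mile_square_acres():
    """Area of a square with 1/4-mile (1320 ft) sides, in acres."""
    side_ft = 5280 / 4
    return side_ft * side_ft / SQFT_PER_ACRE

print(round(stagger_offset(2), 2))    # 7.67 (meters of stagger for lane 2)
print(quarter_mile_square_acres())    # 40.0
```

The roughly 7.67 m per lane matches the staggered-start idea described above; real stagger marks differ slightly because the measurement line is offset from the painted lane line.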
https://artofproblemsolving.com/wiki/index.php?title=Mock_AIME_1_2006-2007_Problems/Problem_5&direction=prev&oldid=45965 | # Mock AIME 1 2006-2007 Problems/Problem 5
## Modified Problem
For a prime number $p$, define the function $f_p(n)$ as follows: If there exists $y$, $0 \leq y < p$, such that
$ny-p\left\lfloor \frac{ny}{p}\right\rfloor=1$
set $f_p(n) = y$. Otherwise, set $f_p(n) = 0$. Compute the sum $f_{11}(1) + f_{11}(2) + \ldots + f_{11}(120) + f_{11}(121)$.
## Original Problem
Let $p$ be a prime and $f(n)$ satisfy $0\le f(n) < p$ for all integers $n$. $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. If for fixed $n$, there exists an integer $0\le y < p$ such that:
$ny-p\left\lfloor \frac{ny}{p}\right\rfloor=1$
then $f(n)=y$. If there is no such $y$, then $f(n)=0$. If $p=11$, find the sum: $f(1)+f(2)+...+f(p^{2}-1)+f(p^{2})$.
## Solution
The definition of $f_p$ is equivalent to the following: "If $n$ has a multiplicative inverse mod $p$, $f_p(n)$ is the member of the set $\{0, 1, \ldots, p - 1\}$ such that $n \cdot f_p(n) \equiv 1 \pmod p$. Otherwise, $f_p(n) = 0$."
Note that this really gives a well-defined function because that set includes exactly one member from each congruence class modulo $p$, and each invertible element has inverses in only one such class.
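As a sanity check of this characterization (a brute-force helper added here, not part of the original wiki solution), the sum can be computed directly from the definition:

```python
def f(n, p=11):
    # f(n) is the y in [0, p) with n*y congruent to 1 (mod p); 0 if no inverse exists
    for y in range(p):
        if (n * y) % p == 1:
            return y
    return 0

total = sum(f(n) for n in range(1, 11 ** 2 + 1))
print(total)  # 605
```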
From this point onwards, it's clear: as $n$ cycles through $1, 2, \ldots, 10 \pmod{11}$, $f_p(n)$ also cycles through the same values in some order. We cover those values 11 times. Thus the answer is $11 \cdot (1 + 2 + \ldots + 10) = 605$.

https://www.seizure-journal.com/article/S1059-1311(18)30146-8/fulltext
https://www.seizure-journal.com/article/S1059-1311(18)30146-8/fulltext | Research Article| Volume 59, P48-53, July 2018
# Adaptive nocturnal seizure detection using heart rate and low-complexity novelty detection
Open Archive | Published: April 25, 2018
## Highlights
• Personalized heart rate based seizure detection is required for good performance.
• A personalized seizure detection algorithm is proposed using only heart rate data.
• Algorithm automatically adapts to patients without requiring seizure annotations.
• Adaptation to patient heart rate characteristics after a couple of hours.
• Good performance for nocturnal monitoring of partial and convulsive seizures.
## Abstract
### Purpose
Automated seizure detection at home is mostly done using either patient-independent algorithms or manually personalized algorithms. Patient-independent algorithms, however, lead to too many false alarms, whereas manually personalized algorithms typically require manual input from an experienced clinician for each patient, which is a costly and unscalable procedure that can only be applied once the patient has had a sufficient number of seizures. We therefore propose a nocturnal heart rate based seizure detection algorithm that automatically adapts to the patient without requiring seizure labels.
### Methods
The proposed method starts with a patient-independent algorithm. After a very short initialization period, the algorithm already adapts to the patient's characteristics by using a low-complexity novelty detection classifier. The algorithm is evaluated on 28 pediatric patients with 107 convulsive and clinically subtle seizures during 695 h of nocturnal multicenter data in a retrospective study that mimics a real-time analysis.
### Results
By using the adaptive seizure detection algorithm, the overall performance was 77.6% sensitivity with on average 2.56 false alarms per night. This is 57% fewer false alarms than a patient-independent algorithm with a similar sensitivity. Patients with tonic–clonic seizures showed a 96% sensitivity with on average 1.84 false alarms per night.
### Conclusion
The proposed method shows strongly improved detection performance over the patient-independent approach, without requiring manual adaptation by a clinician. Due to the low complexity of the algorithm, it can easily be implemented on wearables as part of a (multimodal) seizure alarm system.
## 1. Introduction
An important question in epilepsy is how the quality of life of refractory patients can be improved. One of the most proposed solutions is the use of real-time warning systems, which automatically detect ongoing seizures and warn the patients' caregivers when such an event occurs [Van de Vel et al., Non-EEG seizure detection systems and potential SUDEP prevention: state of the art: review and update]. Such a system is of great demand for pediatric patients and their parents, certainly for nocturnal monitoring. It allows the caregivers to give proper treatment to the patient whenever a seizure alarm is generated, leading to an improved quality of life at home. In order to be used properly in practice, the system should detect most seizures sufficiently fast without generating too many false alarms.
Most proposed modalities for automated seizure detection at home are accelerometers (ACM), electromyogram (EMG), heart rate and electrodermal activity (EDA) [Milosevic et al., Automated detection of tonic–clonic seizures using 3-d accelerometry and surface electromyography in pediatric patients; Beniczky et al., Automated real-time detection of tonic–clonic seizures using a wearable EMG device; De Cooman et al., Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG; Poh et al., Convulsive seizure detection using a wrist-worn electrodermal activity and accelerometry biosensor]. The major benefit of the heart rate over the other modalities is that it allows the detection of not only convulsive seizures, but also non-motoric focal seizures (seizures with relatively limited clinical manifestations such as chewing, etc.) [Van de Vel et al., Non-EEG seizure detection systems and potential SUDEP prevention: state of the art: review and update; De Cooman et al., Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG; Osorio, Automated seizure detection using EKG]. Another benefit is that heart rate often allows a faster seizure detection compared to ACM and EDA due to a faster activation of the autonomic nervous system, which is preferred in real-time usage [Zijlmans et al., Heart rate changes and ECG abnormalities during epileptic seizures: prevalence and definition of an objective clinical sign].
The majority of seizures show ictal heart rate changes, most often seen as strong heart rate increases leading to tachycardia, though rarely ictal bradycardia can also be found [Zijlmans et al., Heart rate changes and ECG abnormalities during epileptic seizures: prevalence and definition of an objective clinical sign; Jansen and Lagae, Cardiac changes in epilepsy; Leutmezer et al., Electrocardiographic changes at the onset of epileptic seizures]. These changes are caused by changes in the autonomic nervous system and can be triggered by activation of the insula and amygdala [Osorio, Automated seizure detection using EKG; Jansen and Lagae, Cardiac changes in epilepsy; Oppenheimer et al., Cardiovascular effects of human insular cortex stimulation]. Previous studies discussed that ictal heart rate changes could thus be used for epileptic seizure detection [De Cooman et al., Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG; Novak et al., Time–frequency mapping of R–R interval during complex partial seizures of temporal lobe origin].
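To illustrate how such ictal tachycardia can be quantified from ECG-derived R-R intervals, a minimal feature could compare the recent heart rate to a longer baseline. This is a generic sketch, not the feature set used in this paper; the window lengths and function name are arbitrary choices.

```python
import numpy as np

def heart_rate_increase(rr_intervals_s, short_win=10, long_win=60):
    """Ratio of the mean heart rate over the last `short_win` beats to the
    mean heart rate over the last `long_win` beats. Values well above 1
    suggest a sudden, sustained heart rate increase."""
    hr = 60.0 / np.asarray(rr_intervals_s, dtype=float)  # beats per minute
    recent = hr[-short_win:].mean()
    baseline = hr[-long_win:].mean()
    return float(recent / baseline)

# Example: 50 beats at 60 bpm followed by 10 beats at 120 bpm
rr = [1.0] * 50 + [0.5] * 10
print(round(heart_rate_increase(rr), 2))  # 1.71
```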
Most heart rate based detection algorithms work with a patient-independent approach [De Cooman et al., Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG; Osorio, Automated seizure detection using EKG; Varon et al., Can ECG monitoring identify seizures?; van Elmpt et al., A model of heart rate changes to detect seizures in severe epilepsy; Andel et al., Multimodal, automated detection of nocturnal motor seizures at home: is a reliable seizure detector feasible?]. They do not use any patient-specific data, making them directly usable in practice as a one-fits-all approach. However, they result in too low a performance due to the high patient-dependency of the heart rate features [De Cooman et al., Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG]. Patient-specific algorithms include prior data/information from the specific tested patient to construct an algorithm specifically for this patient. State-of-the-art patient-specific algorithms require the availability of annotated patient-specific data, which is not always available, certainly if also patient-specific seizure data is required for adaptation [Jeppesen et al., Detection of epileptic-seizures by means of power spectrum analysis of heart rate variability: a pilot study; Sabesan and Sankar, Improving long-term management of epilepsy using a wearable multimodal seizure detection system].
Therefore, we propose a fully automated adaptive seizure detection algorithm. Initially, only a patient-independent classifier is used. After a short initialization phase, the algorithm is already adapted to the patient's heart rate characteristics. It continues to adapt further to the patient while being worn. By using a low-complexity novelty detection approach, the newly gathered data does not have to be annotated by either a clinician or the patients themselves, improving the usability of this algorithm. The approach characterizes normal behavior by assuming that the majority of data corresponds to non-epileptic behavior, so that abnormal behavior is then associated with epileptic activity.
The aim of this paper is to evaluate whether a heart rate based seizure detector can be personalized quickly and in a fully automated way in order to make it more usable in practice. The evaluation is done in a retrospective study, in which the data are analyzed in an environment that mimics a real-time setting. To the best of our knowledge, it is the first time a heart rate based seizure detection algorithm is developed that automatically personalizes without requiring seizure annotations. A precursor of this work, discussing a nocturnal adaptive algorithm using seizure annotations, is described in [
• De Cooman T.
• Van de Vel A.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• Van Huffel S.
Online detection of tonic–clonic seizures in pediatric patients using ECG and low-complexity incremental novelty detection.
].
## 2. Methodology
### 2.1 Data acquisition
The data used to evaluate the proposed algorithm were recorded in two clinical centers. The first part of the dataset contains nocturnal data from the Pulderbos Revalidation Center for Children and Youth: 14 pediatric patients with 69 seizures were monitored from bedtime until the morning (±7–8 a.m.). In the second dataset, data from another 14 pediatric patients with 38 seizures were obtained from the University Hospital of Leuven. Only the night time parts of these recordings (22 h–8 h) were used here. Both datasets contain electrocardiogram (ECG) signals sampled at 250 Hz. In total, 694.6 h of data were recorded, and both convulsive and subtle seizures are analyzed, with both focal (temporal and frontal lobe) and generalized onsets. Seizures were annotated by experts using video-EEG as the gold standard. Only seizures with a duration of at least 20 s were evaluated here, as the detection of shorter seizures is very difficult with heart rate based seizure detection [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
,
• Hampel K.G.
• Jahanbekam A.
• Elger C.E.
• Surges R.
Seizure-related modulation of systemic arterial blood pressure in focal epilepsy.
]. 68 additional seizures shorter than 20 s from both databases are not taken into account in this study, of which 40 were shorter than 10 s. The study was performed in accordance with the 1964 Declaration of Helsinki and approved by the Medical Ethical Commission of the Antwerp University Hospital, Belgium and Leuven University Hospital, Belgium. Signed informed consent forms from all parents were obtained prior to inclusion in the study. Schwarzer head box sets were used for data recording in both datasets. The data were analyzed in a retrospective study using Matlab®, in which a real-time setting was mimicked. An overview of the used datasets is added to the Supplementary material.
### 2.2 Preprocessing
The proposed adaptive seizure detection algorithm uses the real-time tachogram as input. The preprocessing procedure is similar to that in [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
] according to the following steps. The heart rate is obtained in real-time from the ECG using an R peak detection algorithm based on dynamic thresholding of the derivative signal. A second preprocessing step extracts strong sympathetic heart rate increases (HRIs). A HRI is detected when the heart rate gradient rises above 1 bpm/s; its start and end are found by evaluating when the gradient becomes negative again. The HRI is called a strong HRI if the increase in heart rate (both absolute and relative) exceeds predefined threshold values and if the HRI lasts longer than 8 s. These preprocessing steps are referred to as HRI-EXTRACT from now on.
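As an illustration, the HRI-EXTRACT step described above can be sketched as follows. This is only a sketch: the exact absolute and relative increase thresholds are not listed in this section, so `abs_thresh` and `rel_thresh` below are illustrative placeholders, and a 1 Hz heart-rate series is assumed so that the gradient is directly in bpm/s.

```python
import numpy as np

def extract_strong_hris(hr, abs_thresh=15.0, rel_thresh=0.2, min_dur=8):
    """Sketch of HRI-EXTRACT on a heart-rate series.

    hr is assumed to be sampled at 1 Hz (bpm), so np.gradient is
    directly in bpm/s. The absolute/relative increase thresholds are
    illustrative placeholders, not the paper's exact values."""
    grad = np.gradient(hr)
    hris = []
    i = 0
    while i < len(grad):
        if grad[i] > 1.0:                     # gradient exceeds 1 bpm/s
            start = i
            while i < len(grad) and grad[i] >= 0:
                i += 1                        # end: gradient turns negative
            end = i
            rise = hr[start:end].max() - hr[start]
            if (rise > abs_thresh and         # absolute increase
                    rise / hr[start] > rel_thresh and  # relative increase
                    end - start > min_dur):   # lasts longer than 8 s
                hris.append((start, end))
        i += 1
    return hris
```

Applied to a synthetic series with one steep rise from 60 to 100 bpm, this returns a single (start, end) index pair covering the increase.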
Several features can be extracted from these HRIs or from the minute preceding them. In [
• De Cooman T.
• Van de Vel A.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• Van Huffel S.
Online detection of tonic–clonic seizures in pediatric patients using ECG and low-complexity incremental novelty detection.
], it was shown that the maximal peak heart rate and the maximal heart rate gradient already result in a good patient-specific performance for nocturnal heart rate based seizure detection. In order to keep the complexity of the algorithm sufficiently low for usage with wearable devices, we restrict ourselves here to using only these two features.
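The two features can be computed per HRI as sketched below (a hypothetical helper, again assuming a 1 Hz heart-rate series in bpm; ordering the peak heart rate as the second coordinate matches its later use as the y-dimension of the feature space):

```python
import numpy as np

def hri_features(hr, start, end):
    """The two low-complexity features used per HRI (sketch).

    Assumes a 1 Hz heart-rate series in bpm; returns
    [max heart-rate gradient, peak heart rate], so that the peak
    heart rate is the second (y) coordinate."""
    segment = np.asarray(hr[start:end], dtype=float)
    return np.array([np.gradient(segment).max(), segment.max()])
```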
### 2.3 Adaptive classification

Based on these two features, we wish to decide whether a HRI is caused by a seizure or not. Although it is possible to update machine learning classifiers in real-time [
• Poggio T.
• Cauwenberghs G.
Incremental and decremental support vector machine learning.
], it is computationally too expensive to do so in real-time with limited hardware specifications. It also requires the availability of seizure annotations, which are typically unavailable or possibly inaccurate in a home environment [
• Blachut B.
• Hoppe C.
• Surges R.
• Elger C.
• Helmstaedter C.
Subjective seizure counts by epilepsy clinical drug trial participants are not reliable.
].
Therefore, we propose a heuristic adaptive classifier here. Normally, classifiers are characterized by a boundary that separates the data points of the different classes. In our case, this boundary is heuristically constructed using a very limited set of data points: we try to characterize normal HRI behavior by fitting a two-dimensional ellipse around the majority of the patient-specific data.
Whenever a patient-specific data point (coming from a HRI) is detected, it is stored in a pool of noisefree patient data points PDpool. A HRI is assumed to be noisefree if fewer than 25% of the absolute differences between consecutive heart rate values during this HRI exceed 10%. Once 5 such HRIs have been detected, the adaptive classifier can be initialized. HRIs assumed to be caused by noise do not trigger an update of the classifier.
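The noisefree criterion can be sketched as follows (interpreting the 10% threshold as a relative difference between consecutive heart rate values; that reading is an assumption):

```python
import numpy as np

def is_noisefree(hr_segment):
    """Noise check sketch: keep the HRI if fewer than 25% of the
    relative differences between consecutive heart-rate values exceed
    10% (the relative reading of the threshold is an assumption)."""
    hr_segment = np.asarray(hr_segment, dtype=float)
    rel_diff = np.abs(np.diff(hr_segment)) / hr_segment[:-1]
    return bool(np.mean(rel_diff > 0.10) < 0.25)
```

A smooth heart-rate rise passes the check, while a series alternating between 60 and 80 bpm (33% and 25% jumps) is rejected as noisy.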
By assuming that the majority of data is caused by non-epileptic behavior, we try to characterize normal behavior into an ellipse. Data points inside the ellipse can then be seen as normal heart rate behavior, and data points outside the ellipse can be seen as potential seizure activity.
This ellipse is defined by 3 variables (see Fig. 1):
• The center of the ellipse $c(c_x, c_y)$: defined as the mean value of the data points collected in PDpool.
• Main directions of the ellipse $(u,v)$: the main directions of the ellipse are given by the principal components of PDpool (with the center $c(c_x, c_y)$ subtracted), found by means of principal component analysis [
• De Cooman T.
• Van de Vel A.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• Van Huffel S.
Online detection of tonic–clonic seizures in pediatric patients using ECG and low-complexity incremental novelty detection.
].
• The widths of the ellipse $w_u$ and $w_v$ along both main axes $u$ and $v$ with origin $c(c_x, c_y)$: these are defined as
$$w_u = \text{std}_u \cdot sf \quad \text{and} \quad w_v = \text{std}_v \cdot sf,$$
(1)
with $\text{std}_u$ the standard deviation of PDpool along the $u$ axis and $sf$ a fixed scale factor (set to 2.5 according to [
• De Cooman T.
• Van de Vel A.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• Van Huffel S.
Online detection of tonic–clonic seizures in pediatric patients using ECG and low-complexity incremental novelty detection.
]) and similarly for $\text{std}_v$. In order to limit the impact of potential seizure data on the computation of $\text{std}_{u,v}$, the widths $w_u$ and $w_v$ are capped so that they cannot exceed a heuristically defined value of 15. This keeps most non-seizure data inside the ellipse while leaving seizure data outside of it, without requiring seizure annotations.
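A sketch of this ellipse construction, assuming the two-dimensional feature points are stored as rows `[max gradient, peak heart rate]` (PCA is done here via an SVD; names are illustrative):

```python
import numpy as np

def fit_ellipse(pd_pool, sf=2.5, max_width=15.0):
    """Fit the heuristic ellipse around the data points in PDpool.

    Sketch only: rows of pd_pool are assumed to be the feature pairs
    [max heart-rate gradient, peak heart rate]; in the paper, only the
    last 20 noise-free points are kept in the pool."""
    X = np.asarray(pd_pool, dtype=float)
    c = X.mean(axis=0)                        # center c(c_x, c_y)
    Xc = X - c
    # Principal directions (u, v) via an SVD-based PCA.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    axes = Vt                                 # rows are the u and v axes
    proj = Xc @ axes.T                        # coordinates along (u, v)
    # Widths w_u, w_v = sf * std along each axis, capped at 15.
    widths = np.minimum(proj.std(axis=0) * sf, max_width)
    return c, axes, widths
```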
The equation of the formed ellipse along axes $(u,v)$ then becomes
$$f(u,v):\ \left(\frac{u}{w_u}\right)^2 + \left(\frac{v}{w_v}\right)^2 = 1$$
(2)
Once the classifier is initialized, the classification becomes straightforward. For each new data point $d(d_x, d_y)$, first subtract the center $c(c_x, c_y)$ and project the result onto the axes $u$ and $v$ (yielding $d(d_u, d_v)$). Next, evaluate $d(d_u, d_v)$ with the equation of the constructed ellipse
$$y(d) = \left(\frac{d_u}{w_u}\right)^2 + \left(\frac{d_v}{w_v}\right)^2$$
(3)
to determine whether $d$ falls inside the ellipse or not. If it falls inside the ellipse ($y(d) \leq 1$), it is considered normal behavior. If it falls outside the ellipse ($y(d) > 1$) and the peak heart rate of $d$ is higher than the average peak heart rate in PDpool ($c_y$), it is classified as a seizure HRI. This extra rule is added to prevent HRIs with a peak heart rate below the ellipse from causing an alarm, as ictal peak heart rates are assumed to be on average higher than non-ictal peak heart rates [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
].
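Putting Eq. (3) and the extra peak-heart-rate rule together, the classification step could look like this (a sketch, using the same illustrative feature representation as above):

```python
import numpy as np

def classify_hri(d, c, axes, widths):
    """Classify one feature point d = [max gradient, peak heart rate].

    Sketch of the rule above: seizure HRI if d lies outside the
    ellipse (y(d) > 1) AND its peak heart rate exceeds the pool's
    average peak heart rate c_y (= c[1])."""
    du, dv = axes @ (np.asarray(d, dtype=float) - c)   # project on (u, v)
    y = (du / widths[0]) ** 2 + (dv / widths[1]) ** 2  # Eq. (3)
    return bool(y > 1.0 and d[1] > c[1])
```

For an axis-aligned ellipse centered at (5, 120) with widths (2, 5), a point at (5, 140) is flagged, while (5, 100) is not despite also lying outside the ellipse, since its peak heart rate is below $c_y$.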
The ellipse is readjusted every time a new noisefree data point is detected in real-time. In order to keep the complexity of the algorithm sufficiently low, only the last 20 data points are used to construct the ellipse.
### 2.4 Initial patient-independent classification
The adaptive classifier described above can only be used once 5 patient-specific noisefree data points have been collected. Before these 5 data points are collected, the algorithm should nevertheless also perform decently. Therefore, the data points during this initialization phase are classified with a patient-independent classifier, in this case a support vector machine (SVM) using the same two features. The classifier is trained using a leave-one-patient-out approach: the classifier is trained on data from all patients except the one used for testing. An overview of the entire procedure for adaptive seizure detection is illustrated in Fig. 2.
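The leave-one-patient-out scheme can be sketched generically as below. The paper trains an SVM on the two heart-rate features; here any `train_fn`/`predict_fn` pair can be plugged in, so in the test a trivial threshold rule stands in for the SVM — the point is the per-patient splitting, not the classifier itself.

```python
import numpy as np

def leave_one_patient_out(data_by_patient, train_fn, predict_fn):
    """Leave-one-patient-out evaluation of a patient-independent model.

    data_by_patient maps a patient id to (X, y); for each test patient
    the model is trained on all OTHER patients' data. The paper trains
    an SVM this way; train_fn/predict_fn are generic stand-ins."""
    predictions = {}
    for test_pid in data_by_patient:
        X_train = np.vstack([X for pid, (X, y) in data_by_patient.items()
                             if pid != test_pid])
        y_train = np.concatenate([y for pid, (X, y) in data_by_patient.items()
                                  if pid != test_pid])
        model = train_fn(X_train, y_train)
        X_test, _ = data_by_patient[test_pid]
        predictions[test_pid] = predict_fn(model, X_test)
    return predictions
```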
### 2.5 Performance evaluation
The performance of the proposed seizure detection algorithm is evaluated on the data discussed in Section 2.1. The metrics used for evaluation are the sensitivity (percentage of detected seizures), false alarm rate (expressed as the number of false positives per night, FP/night, with night defined as 8 h of recording [
• van Andel J.
• Ungureanu C.
• Aarts R.
• Leijten F.
• Arends J.
Using photoplethysmography in heart rate monitoring of patients with epilepsy.
]) and positive predictive value (PPV, percentage of correct alarms). The detection delay is defined as the time difference between seizure onset and the moment of detection. A seizure is said to be detected if an alarm is raised between 30 s before and 90 s after the seizure onset. The detection of seizures shorter than 20 s is counted neither as a true positive nor as a false positive. Overall seizure-based performance metrics are used in this paper, similar to [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
]. Very similar results are found if a patient-averaged overall performance is used, so only the seizure-based overall performance is mentioned in this paper.
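The seizure-based evaluation described above (detection window from 30 s before to 90 s after onset, false alarm rate per night of 8 h) can be sketched as:

```python
def evaluate_detections(alarm_times, seizure_onsets, total_hours,
                        pre=30.0, post=90.0):
    """Seizure-based metrics as described above (times in seconds).

    A seizure counts as detected if any alarm falls between 30 s
    before and 90 s after its onset; the false alarm rate is given
    per night, with one night defined as 8 h of recording."""
    detected, true_alarms = set(), set()
    for onset in seizure_onsets:
        for t in alarm_times:
            if onset - pre <= t <= onset + post:
                detected.add(onset)
                true_alarms.add(t)
    sensitivity = 100.0 * len(detected) / len(seizure_onsets)
    false_positives = len(alarm_times) - len(true_alarms)
    far = false_positives / (total_hours / 8.0)   # FP per night of 8 h
    ppv = 100.0 * len(true_alarms) / len(alarm_times) if alarm_times else 0.0
    return sensitivity, far, ppv
```

For example, two seizures both caught within their windows plus one stray alarm over an 8 h recording yield 100% sensitivity, 1 FP/night and a PPV of 66.7%.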
Mann–Whitney U tests were used to evaluate whether the found results differed significantly, calling results significantly different when p < 0.05 after Bonferroni correction. The mentioned 95% confidence intervals (CI) for the median estimates are calculated using the rank orders; values are selected from the ranks closest related to a significance level of 5%. We also investigate whether the seizure and epilepsy type have an effect on the results found by the proposed adaptive seizure detection algorithm. A first distinction is made based on the seizure onset, either generalized or focal (both temporal and frontal lobe are investigated here). A second distinction is made based on the clinical manifestation of the seizure (independent of the seizure onset), distinguishing between tonic, clonic, tonic–clonic, hyperkinetic and non-motor focal (called ‘subtle’ from now on) seizures.
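For reference, a minimal Mann–Whitney U test with a normal approximation plus a Bonferroni helper could look like this (ordinal ranks, no tie correction; the paper's exact statistical implementation is not specified here):

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Minimal sketch (ordinal ranks, no tie correction); the paper's
    exact statistical implementation is not specified here."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1.0
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u1, p

def bonferroni_significant(p_values, alpha=0.05):
    """Flag p-values significant at level alpha after Bonferroni."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```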
## 3. Results
An overall sensitivity of 77.6% is found over all patients, with on average 2.56 FP/night and 30.7% PPV over the entire recordings, including the initialization phase of the adaptive algorithm. The average detection delay is 19.1 s. These values however vary with the clinical nature and the onset of the seizures. Table 1 shows the impact of the seizure onset on the performance. Primary generalized and temporal lobe (TL) seizures are detected with around 83% sensitivity, whereas this is only 71.4% for the frontal lobe (FL) seizures. Overall, 17/28 patients (60.7%) show a 100% sensitivity: 3/6 patients with generalized seizures, 7/9 with TL onset and 7/13 with FL onset. Patients with generalized (1.92 FP/night) and FL seizures (2.48 FP/night) also have lower false alarm rates (FAR) than patients with TL seizures (5.36 FP/night). Despite these overall differences, none of the sensitivity and FAR differences between seizure onsets were statistically significant for the proposed adaptive algorithm. TL seizures (27 s) are however detected significantly later than FL (17 s, p < 10^−2) and generalized seizures (15 s, p < 10^−3).
Table 1. Overview of the results of the proposed adaptive approach for different presumed seizure onsets.

| Seizure onset | Sensitivity (%) (detected/total) | Mean delay (s) | False alarm rate (FP/night) |
| --- | --- | --- | --- |
| Generalized | 83.3 (25/30) | 15.0 | 1.92 |
| Frontal lobe | 71.4 (35/49) | 16.9 | 2.48 |
| Temporal lobe | 82.1 (23/28) | 26.9 | 5.36 |
| Total | 77.6 (83/107) | 19.1 | 2.56 |
If only the patient-independent algorithm is used on all data (including the initialization phases of the adaptive approach), an overall sensitivity of 81.3% and a FAR of 6 FP/night are found. The adaptive algorithm thus lowers the number of false alarms by 57% over the entire recordings compared to the patient-independent algorithm, with a similar sensitivity. The Mann–Whitney U tests show that the adaptive algorithm is indeed significantly better than the patient-independent algorithm in FAR (p < 10^−3, median difference: 4.78 FP/night, CI: [2.77,7.20]), but with no significant difference in sensitivity (p > 0.05, median difference: 0%, CI: [0,0]). The average detection delay of the patient-independent approach is 15.6 s, around 3.5 s faster than the adaptive approach. The adaptive algorithm however reduces the FAR drastically by waiting longer to be more certain about the distinction between epileptic and non-epileptic HRIs. Similar findings on the effect of the onset can be found in the results of the patient-independent algorithm due to the large overlap in sensitivity results.
Fig. 3a shows the boxplots for the sensitivity based on ictal clinical manifestations. 96% (24/25) of tonic–clonic (TC) seizures and 72.5% (29/40) of hyperkinetic (HK) seizures are detected successfully with the proposed seizure detector. Only 46.2% (6/13) of the tonic (T) and clonic (C) seizures are detected, whereas 82.8% (24/29) of the subtle seizures are detected. 5/6 patients had 100% sensitivity for the tonic–clonic seizures, compared to 6/9 patients with HK seizures and 7/9 patients with subtle seizures. The inter-patient variability for subtle seizures is however lower than for the HK and T/C seizures. Again, however, no statistically significant differences in sensitivity between seizure types were found for the proposed adaptive algorithm.
A difference is also found in the detection delays of the clinically more severe seizures (TC and HK) compared to the clinically more subtle seizures (see Fig. 3b). The TC and HK seizures (±14 s) are detected faster than the more subtle seizures and the tonic and clonic seizures (±28 s). The inter-patient variability of the detection delay is also higher for the subtle seizures than for the other seizure types. Significant differences were found in detection delays between TC and subtle seizures (p < 10^−3) and between HK and subtle seizures (p < 10^−3).
The median duration of the initialization phase is 2.7 h (CI: [2.27,4.67]). During the initialization phase, the average FAR is 3.52 FP/night for both algorithms. After initialization, the false alarm rate for the adaptive algorithm is 2.32 FP/night compared to on average 6.40 FP/night for the patient-independent algorithm (a 63% reduction in false alarms). Fig. 4 shows the boxplots of the FARs after initialization for both algorithms for the groups with different seizure onsets. For all groups, both the median and the variance of the FAR drop strongly with the adaptive algorithm compared to the patient-independent algorithm. The differences are largest for the TL patients (on average 4.24 FP/night for the adaptive algorithm) and smallest for the FL patients (2.32 FP/night). On average 1.60 FP/night are found in patients with mainly generalized seizures.
The majority of false alarms from the adaptive algorithm are found in the early morning after 6 a.m., and around 25% of false alarms are caused when the patient was already fully awake. Another 25% of false alarms are caused by strong motion artifacts in the ECG, leading to errors in the R peak detection algorithm. Other typical reasons for false alarms were arousals, long periods of nocturnal awake time and non-epileptic spasms.
## 4. Discussion
### 4.1 Sensitivity and detection delay
Table 1 and Fig. 3 show that the seizure type and onset tend to have an impact on the sensitivity and FAR of the proposed seizure detection algorithm, but none of these differences proved statistically significant. This is assumed to be caused by the limited number of data points in each group. Almost all generalized seizures are detected, and for these patients the FAR is relatively low compared to the patients with FL or TL seizures. This means the approach works well for these patients, who are the most important patients to monitor at home.
The detection delay results showed significant differences, showing that TC and HK seizures are detected significantly faster than focal subtle seizures. This illustrates that the clinically more important seizures are detected faster than seizures that typically require less care.
Less than half of the tonic or clonic seizures are detected with the adaptive algorithm. This is because most of these seizures occurred in patients who also have TC seizures. The TC seizures (and post-ictal activity) result in stronger HRIs, causing the less strong T or C seizures to be seen as more normal behavior. These T and C seizures are indeed detected with the patient-independent algorithm, but not with the adaptive version. Compared to the patient-independent algorithm, only 4 additional T and C seizures are missed (and no seizures of any other type), of which 3 are C seizures in patients with TC seizures.
The (most often subtle) TL seizures are detected with a sensitivity of ±80%, similar to what is reported in [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
]. The detection of these seizures is however on average slower than for convulsive seizures. This is in line with the findings of [
• Son W.H.
• Hwang W.S.
• Koo D.L.
• Hwang K.J.
• Kim D.Y.
• Seo J.-H.
• et al.
The difference in heart rate change between temporal and frontal lobe seizures during peri-ictal period.
], which stated that ictal HRIs from patients with TL epilepsy have a longer duration than, for example, the ictal HRIs from FL patients, thus leading to a longer detection delay. Another reason might be that the late detections of the TL seizures are caused by the limited spread of ictal activity from the temporal lobe compared to extratemporal origins. The detection of the T and C seizures is on average also slower because most of them are secondarily generalized, leading to slower detection compared to the primary generalized TC seizures.
### 4.2 False alarm rate
Fig. 4 shows that after the initialization phase the FAR drops strongly, by 63% compared to the patient-independent algorithm. Not only does the overall FAR drop, but the inter-patient FAR variability also drops strongly. This can especially be seen for the GS and TL patients. This way, the algorithm is more usable for a wider range of patients. Despite the decreased FAR variability, the FAR variability for the TL patients remains larger than that of the GS and FL patients.
The patients were already fully awake during 25% of the false alarms. In practice, a device for nocturnal monitoring could then be turned off to avoid these false alarms. Another quarter of the false alarms is caused by strong motion artifacts, which lead to heart beat detection errors. Better noise removal techniques and the usage of more robust sensors could reduce the impact of noise on seizure detection performance. The addition of other modalities such as ACM and EMG can also lead to increased performance for certain types of seizures [
• Milosevic M.
• Van de Vel A.
• Bonroy B.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• et al.
Automated detection of tonic–clonic seizures using 3-d accelerometry and surface electromyography in pediatric patients.
].
### 4.3 General discussion and future work
The proposed adaptive seizure detection algorithm is designed to have low complexity. The adaptive classifier requires little extra computational effort compared to HRI-EXTRACT and the feature extraction procedure, which were already shown to be of sufficiently low complexity in [
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
]. This way, the algorithm can be implemented directly on a wearable device rather than relying on a smartphone/server for computations. Photoplethysmography could also be used instead of ECG to extract the heart rate in order to increase the wearability of the system [
• van Andel J.
• Ungureanu C.
• Aarts R.
• Leijten F.
• Arends J.
Using photoplethysmography in heart rate monitoring of patients with epilepsy.
], but might have a negative impact on the accuracy of the heart rate data [
• Vandecasteele K.
• De Cooman T.
• Gu Y.
• Cleeren E.
• Claes K.
• Paesschen W.V.
• et al.
Automated epileptic seizure detection based on wearable ECG and PPG in a hospital environment.
]. As the algorithm continuously changes over time, it will also adjust to the patient's heart rate characteristics as they grow up, without requiring manual adaptation of the algorithm.
The adaptive algorithm was initialized after on average 2.7 h. At that point, the adaptive algorithm had already done most of the adaptation to the patient characteristics. Manual setting of patient parameters is typically done offline after multiple days or even weeks, depending on the number of recorded seizures. The proposed algorithm therefore leads to much faster personalization of the detection system than the manual alternative.
The proposed methodology does not require the availability of annotated patient data. That way, no clinicians need to annotate previously recorded data for each patient. An alternative would be to incorporate user feedback [
• De Cooman T.
• Kjær T.W.
• Van Huffel S.
• Sørensen H.B.D.
Adaptive heart rate-based epileptic seizure detection using real-time user feedback.
], but patients and relatives might not always be aware whether an alarm was correct or not, and missed seizures will in most cases remain missed. Therefore, the proposed approach is advised, certainly for nocturnal monitoring of pediatric patients. It works under the assumption that most HRIs are caused by non-epileptic behavior (e.g. arousals). In most cases, this assumption holds well. Only during long series of seizures might the majority of data in PDpool be epileptic, in which case some seizures might be missed after the correct detection of 5–10 seizures. However, because this collection of data continuously changes over time, this effect quickly disappears once the series of seizures has stopped.
The proposed algorithm results in a strong decrease in false alarms compared to the patient-independent algorithm. In practice, this adaptive algorithm requires no extra effort from the patient, clinician or system owner compared to a patient-independent algorithm, and it should therefore be preferred in real-life usage. However, too many false alarms are still generated for the algorithm to be used in practice on its own. Unimodal patient-independent ACM and EMG based algorithms can outperform the proposed algorithm for the detection of tonic–clonic seizures [
• Beniczky S.
• Henning O.
• Fabricius M.
• Wolf P.
Automated real-time detection of tonic–clonic seizures using a wearable EMG device.
,
• Beniczky S.
• Polster T.
• Kjaer T.W.
• Hjalgrim H.
Detection of generalized tonic–clonic seizures by a wireless wrist accelerometer: a prospective, multicenter study.
]. These modalities are however not usable for the detection of (more subtle) focal seizures, for which only EEG, ECG and EDA can be used. It is thus mainly for these seizure types that the proposed adaptive algorithm is of added value in a unimodal setting for home monitoring applications. It can also be of added value as part of a multimodal setting, leading to increased multimodal performance in combination with an additional EMG or ACM sensor for the detection of convulsive seizures.
The unimodal heart rate based seizure detection can be further improved by moving to more complex offline or online adaptive seizure detection algorithms. One option would be to add sleep stage information to the algorithm in order to further fine-tune it [
• Herman S.
• Walczak T.
• Bazil C.
Distribution of partial seizures during the sleep–wake cycle.
].
## 5. Conclusion
The proposed seizure detection algorithm adapts quickly to patient-specific heart rate characteristics, leading to 57% fewer false alarms than a state-of-the-art patient-independent algorithm. The adaptation not only decreases the overall false alarm rate, but also reduces the inter-patient false alarm rate variability. The use of such automatically adapting seizure detection algorithms in practice is therefore advised due to the increased performance and ease-of-use.
## Conflict of interest
The authors have no conflict of interest regarding this manuscript.
## Acknowledgements
Bijzonder Onderzoeksfonds KU Leuven (BOF): SPARKLE – Sensor-based Platform for the Accurate and Remote monitoring of Kinematics Linked to E-health #: IDO-13-0358; The effect of perinatal stress on the later outcome in preterm babies #: C24/15/036; TARGID – Development of a novel diagnostic medical device to assess gastric motility #: C32-16-00364. Fonds voor Wetenschappelijk Onderzoek-Vlaanderen (FWO): Hercules Foundation (AKUL 043) ‘Flanders BCI Lab - High-End, Modular EEG Equipment for Brain Computer Interfacing’. Agentschap Innoveren en Ondernemen (VLAIO): 150466: OSA+. Agentschap voor Innovatie door Wetenschap en Technologie (IWT): O&O HBC 2016 0184 eWatch. imec funds 2017. imec ICON projects: ICON HBC.2016.0167, ‘SeizeIT’. Belgian Foreign Affairs-Development Cooperation: VLIR UOS programs (2013–2019). EU: European Union's Seventh Framework Programme (FP7/2007-2013): The HIP Trial: #260777. ERASMUS +: INGDIVS 2016-1-SE01-KA203-022114. European Research Council: The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Advanced Grant: BIOTENSORS (n 339804). This paper reflects only the authors’ views and the Union is not liable for any use that may be made of the contained information. EU H2020-FETOPEN ‘AMPHORA’ #766456. Thomas De Cooman is supported by FWO SBO PhD grant. Carolina Varon is a postdoctoral fellow of the Research Foundation-Flanders (FWO).
## References
• Van de Vel A.
• Cuppens K.
• Bonroy B.
• Milosevic M.
• Jansen K.
• Van Huffel S.
• et al.
Non-EEG seizure detection systems and potential SUDEP prevention: state of the art: review and update.
Seizure. 2016; 41: 141-153
• Milosevic M.
• Van de Vel A.
• Bonroy B.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• et al.
Automated detection of tonic–clonic seizures using 3-d accelerometry and surface electromyography in pediatric patients.
IEEE J Biomed Health Inform. 2016; 20: 1333-1341
• Beniczky S.
• Henning O.
• Fabricius M.
• Wolf P.
Automated real-time detection of tonic–clonic seizures using a wearable EMG device.
Neurology. 2018; 90: e428-e434
• De Cooman T.
• Varon C.
• Van Paesschen W.
• Lagae L.
• Van Huffel S.
Online automated seizure detection in temporal lobe epilepsy patients using single-lead ECG.
Int J Neural Syst. 2017; : 1750022
• Poh M.-Z.
• Loddenkemper T.
• Reinsberger C.
• Swenson N.C.
• Goyal S.
• Sabtala M.C.
• et al.
Convulsive seizure detection using a wrist-worn electrodermal activity and accelerometry biosensor.
Epilepsia. 2012; 53: e93-e97
• Osorio I.
Automated seizure detection using EKG.
Int J Neural Syst. 2014; 24: 1450001
• Zijlmans M.
• Flanagan D.
• Gotman J.
Heart rate changes and ECG abnormalities during epileptic seizures: prevalence and definition of an objective clinical sign.
Epilepsia. 2002; 43: 847-854
• Jansen K.
• Lagae L.
Cardiac changes in epilepsy.
Seizure. 2010; 19: 455-460
• Leutmezer F.
• Schernthaner C.
• Lurger S.
• Pötzelberger K.
• Baumgartner C.
Electrocardiographic changes at the onset of epileptic seizures.
Epilepsia. 2003; 44: 348-354
• Oppenheimer S.M.
• Gelb A.
• Girvin J.P.
• Hachinski V.C.
Cardiovascular effects of human insular cortex stimulation.
Neurology. 1992; 42 (1727–1727)
• Novak V.
• Reeves A.L.
• Novak P.
• Low P.A.
• Sharbrough F.W.
Time–frequency mapping of R–R interval during complex partial seizures of temporal lobe origin.
J Autonom Nerv Syst. 1999; 77: 195-202
• Varon C.
• Jansen K.
• Lagae L.
• Van Huffel S.
Can ECG monitoring identify seizures?.
J Electrocardiol. 2015; 48: 1069-1074
• van Elmpt W.
• Wouter J.
• Nijsen T.
• Griep P.
• Arends J.
A model of heart rate changes to detect seizures in severe epilepsy.
Seizure. 2006; 15: 366-375
• Andel J.
• Ungureanu C.
• Arends J.
• Tan F.
• Dijk J.V.
• Petkov G.
• et al.
Multimodal, automated detection of nocturnal motor seizures at home: is a reliable seizure detector feasible?.
Epilepsia Open. 2017; 2: 424-431https://doi.org/10.1002/epi4.12076
• Jeppesen J.
• Beniczky S.
• Fuglsang-Frederiksen A.
• Sidenius P.
• Jasemian Y.
Detection of epileptic-seizures by means of power spectrum analysis of heart rate variability: a pilot study.
Technol Health Care. 2010; 18: 417-426
• Sabesan S.
• Sankar R.
Improving long-term management of epilepsy using a wearable multimodal seizure detection system.
Epilepsy Behav. 2015; 46: 56-57
• De Cooman T.
• Van de Vel A.
• Ceulemans B.
• Lagae L.
• Vanrumste B.
• Van Huffel S.
Online detection of tonic–clonic seizures in pediatric patients using ECG and low-complexity incremental novelty detection.
Proc of the 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC2015). 2015; : 5597-5600
• Hampel K.G.
• Jahanbekam A.
• Elger C.E.
• Surges R.
Seizure-related modulation of systemic arterial blood pressure in focal epilepsy.
Epilepsia. 2016; 57: 1709-1718https://doi.org/10.1111/epi.13504
• Poggio T.
• Cauwenberghs G.
Incremental and decremental support vector machine learning.
Adv Neural Inf Process Syst. 2001; 13: 409
• Blachut B.
• Hoppe C.
• Surges R.
• Elger C.
• Helmstaedter C.
Subjective seizure counts by epilepsy clinical drug trial participants are not reliable.
Epilepsy Behav. 2017; 67: 122-127
• van Andel J.
• Ungureanu C.
• Aarts R.
• Leijten F.
• Arends J.
Using photoplethysmography in heart rate monitoring of patients with epilepsy.
Epilepsy Behav. 2015; 45: 142-145
• Son W.H.
• Hwang W.S.
• Koo D.L.
• Hwang K.J.
• Kim D.Y.
• Seo J.-H.
• et al.
The difference in heart rate change between temporal and frontal lobe seizures during peri-ictal period.
J Epilepsy Res. 2016; 6: 16
• Vandecasteele K.
• De Cooman T.
• Gu Y.
• Cleeren E.
• Claes K.
• Paesschen W.V.
• et al.
Automated epileptic seizure detection based on wearable ECG and PPG in a hospital environment.
Sensors. 2017; 17: 2338
• De Cooman T.
• Kjær T.W.
• Van Huffel S.
• Sørensen H.B.D.
Adaptive heart rate-based epileptic seizure detection using real-time user feedback.
Physiol Meas. 2017; 39: 014005
• Beniczky S.
• Polster T.
• Kjaer T.W.
• Hjalgrim H.
Detection of generalized tonic–clonic seizures by a wireless wrist accelerometer: a prospective, multicenter study.
Epilepsia. 2013; 54: e48-e51
• Herman S.
• Walczak T.
• Bazil C.
Distribution of partial seizures during the sleep–wake cycle.
Neurology. 2001; 56: 1453-1459 | 2023-04-01 21:07:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5308182835578918, "perplexity": 9015.585714449737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00778.warc.gz"} |
https://homework.cpm.org/category/CC/textbook/cca2/chapter/4/lesson/4.2.1/problem/4-67 |
4-67.
Solve each equation for $y$ so that it could be entered into a graphing calculator.
See the online help for problem 4-32.
1. $5−(y−3)=3x$
$y=−3x+8$
1. $4(x+y)=−2$ | 2019-10-16 20:06:54 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3480125069618225, "perplexity": 4224.708575159947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00382.warc.gz"} |
https://www.physicsforums.com/threads/free-fall-orbit-time-dilation.903978/page-3 | # Free fall orbit time dilation
Why don't you work through the first few chapters of Sean Carroll's lecture notes on general relativity first.
Are they free and do they contain a worked example of the equation you quoted? I think, I or other basic level followers could probably follow where the numbers were plugged in, but might not so easily follow how to correctly use the equation in the situation otherwise.
Also I am not quite clear how A or B could be considered at rest and not rotating, as while C (the mass) could be considered to orbit A or B, if it was so considered, would it not also need to be considered that it was rotating, for an explanation of why the same section of C was not always facing A (or B), and could it not be objectively measured that it was not rotating?
and you asked me to clarify whether I meant proper rotation or coordinate rotation; I explained that I had meant proper rotation (I was referring to the measurement, not the coordinate rotation that would appear if A or B were considered to be at rest) and you replied:
Whether or not a given object is undergoing proper rotation is an invariant. It does not depend on the coordinates chosen. You can choose coordinates where a proper-rotating object is at coordinate-rest. In such coordinates there will be "fictitious forces" which will lead to the correct amount of proper rotation.
But that still does not seem to answer my question. As I now understand it, "fictitious forces" have a special meaning in physics and refer to forces added for explanation when describing motion from a non-inertial frame of reference. But the question was about the frames of reference of A or B, which as I understand it are both inertial. While considering them at rest, C would show coordinate rotation (though there would be no measurable rotation). So the measurements would not seem to support what would be expected if A or B were actually at rest relative to C. If they were, and C were actually rotating, then you would expect to measure proper rotation on C.
Also from the perspective of the inertial frames A and B if there were distant stars, wouldn't they appear to be moving faster than the speed of light?
Dale
Mentor
Yes, they are free
https://arxiv.org/abs/gr-qc/9712019
Working an example without the background would require a lot of effort on my part and result in very little gain for you. But reading the first couple of chapters of the lecture notes will result in a lot more gain for you.
Thanks for the link, and for the help so far. I don't know whether you noticed, but I posted a new reply #52 to one of your earlier replies.
A.T.
Also from the perspective of the inertial frames A and B if there were distant stars, wouldn't they appear to be moving faster than the speed of light?
Which is a good hint that they aren’t inertial.
Dale
Mentor
But the question was about from the frame of reference A or B which as I understand it are both inertial frames of reference.
They are only locally inertial. And even locally it is only inertial if the object is not undergoing any proper rotation.
Also from the perspective of the inertial frames A and B if there were distant stars, wouldn't they appear to be moving faster than the speed of light?
Neither the stars nor the other objects are local.
They are only locally inertial.
So I then assume A relative to B and C in the scenario should be considered as non-inertial even though an accelerometer at rest with respect to A would measure no acceleration?
If so then with A being non-inertial a fictitious force would be added to describe the coordinate rotation of C from A's rest frame. What is the fictitious force that would explain it?
When the considerations of A, B, and C being at rest are compared, does not only the consideration of C being at rest give a proper rotation for C in line with its coordinate rotation?
jbriggs444
Homework Helper
reference A or B which as I understand it are both inertial frames of reference
In curved space-time there is no such thing as a global inertial frame of reference.
A.T.
So I then assume A relative to B and C in the scenario should be considered as non-inertial even though an accelerometer at rest with respect to A would measure no acceleration?
Rotating frames are not inertial.
Dale
Mentor
So I then assume A relative to B and C in the scenario should be considered as non-inertial even though an accelerometer at rest with respect to A would measure no acceleration?
Yes, if you want to make any non-local measurements then you need to consider them to be non inertial.
What is the fictitious force that would explain it?
They are called "Christoffel symbols" (I know, it is a weird name). The lecture notes describe them in detail.
I've spotted them in the notes, but they are quite a few pages in. I have looked them up elsewhere and it is mentioned that they are used in the geometry. Do they offer a force that explains the lack of measurement of proper acceleration in an object showing coordinate acceleration though, or when the considerations of A, B, and C being at rest are compared, does only the consideration of C being at rest give a proper rotation for C in line with its coordinate rotation?
vanhees71
Gold Member
I can only recommend reading a bit about differential geometry in Carroll's lecture notes first. You just need Secs. 2 and 3 to answer all these questions.
Does proper acceleration appear in the notes at all? The term does not seem to be in them, and when acceleration is mentioned I am not clear that it is not referring to relative / coordinate acceleration.
vanhees71
Gold Member
I'm not sure that I understand what you need that for to learn the basic principles of pseudo-Riemannian (Lorentzian) differential geometry, but I'd define proper acceleration as
$$a^{\mu}=\frac{\mathrm{D} u^{\mu}}{\mathrm{D} \tau},$$
where
$$u^{\mu}=\frac{\mathrm{d} x^{\mu}}{\mathrm{d} \tau}$$
is the four velocity and ##\tau## the proper time (I assume you have a massive particle here; for massless particles the issue is a bit more complicated).
Written out the proper acceleration reads
$$a^{\mu} = \frac{\mathrm{d}^2 x^{\mu}}{\mathrm{d} \tau^2} + {\Gamma^{\mu}}_{\rho \sigma} \frac{\mathrm{d} x^{\rho}}{\mathrm{d} \tau} \frac{\mathrm{d} x^{\sigma}}{\mathrm{d} \tau},$$
where ##{\Gamma^{\mu}}_{\rho \sigma}## are the connection coefficients (Christoffel symbols) of spacetime.
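As a concrete numerical illustration of this formula (not part of the original thread; it assumes the Schwarzschild metric outside the Earth, and the function name is my own choice): for an observer hovering at fixed r, such as someone standing on the ground, the coordinates never change, so the d²x/dτ² term vanishes and the entire proper acceleration comes from the Christoffel term.

```python
import math

# Sketch of a^mu = d^2x^mu/dtau^2 + Gamma^mu_{rho sigma} u^rho u^sigma
# for a hovering (static) observer in the Schwarzschild geometry.
# In Schwarzschild coordinates (geometric units G = c = 1) the only
# Christoffel symbol that contributes here is Gamma^r_tt = (M/r^2)(1 - 2M/r).

G = 6.67430e-11      # m^3 kg^-1 s^-2
c = 299_792_458.0    # m/s

def static_proper_acceleration(M_kg, r_m):
    """Proper acceleration (m/s^2) measured by an observer at rest at radius r.

    The observer's spatial coordinates never change, so d^2x^mu/dtau^2 = 0;
    the whole result comes from the Christoffel term Gamma^r_tt (dt/dtau)^2,
    converted back to SI units at the end.
    """
    M = G * M_kg / c**2                  # mass in geometric units (metres)
    f = 1.0 - 2.0 * M / r_m              # Schwarzschild factor
    Gamma_r_tt = (M / r_m**2) * f        # Christoffel symbol
    ut_squared = 1.0 / f                 # (dt/dtau)^2 for a static observer
    a_r = Gamma_r_tt * ut_squared        # radial component of a^mu
    # invariant magnitude: sqrt(g_rr a^r a^r) = a^r / sqrt(f)
    return c**2 * a_r / math.sqrt(f)

# Observer standing on Earth: the coordinate acceleration is zero, yet
# the proper acceleration comes out near the familiar 9.8 m/s^2,
# supplied entirely by the connection coefficients.
print(static_proper_acceleration(5.972e24, 6.371e6))
```

This is only a sketch under the stated assumptions, but it shows how a nonzero proper acceleration can be measured even when the x, y, z parts of the coordinates never change.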
Won't any change in x depend on what coordinate system (what frame of rest) you are using?
vanhees71
Gold Member
The ##x^{\mu}## are some coordinates, and all components given in my previous posting are with respect to the corresponding holonomous basis of the tangent spaces of the manifold, i.e., ##u^{\mu}## and ##a^{\mu}## are vector components with respect to the holonomous basis of the tangent space at the position of the point particle under consideration.
So if an observer was standing on the Earth and was using a coordinate system where they were considered at rest, then what would be the change in the x,y or z part of any coordinate point (at rest with respect to the observer) on the Earth over a period of time. Wouldn't only the time part of the coordinate be changing? So where would the acceleration of those points be using your equations? It seems to me that there would not be any, as they seem to represent coordinate acceleration. But (as I understand it) proper acceleration would be measured at any of those points on the Earth.
If I have misunderstood (sorry my maths is quite poor) then perhaps you could illustrate using a single coordinate ct = 0 x = 1, y = 1, z= 1 in your equations to show how it ends up with the measurable proper acceleration over 10 seconds perhaps?
Dale
Mentor
I've spotted them in the notes, but they are quite a few pages in.
Yes. Those intervening pages are important. I really think that you need to go through it. You are asking very haphazard questions because you need a systematic introduction.
Please don't try to skip ahead, but go through the material step by step.
Does proper acceleration appear in the notes at all? The term does not seem to be in them, and when acceleration is mentioned I am not clear that it is not referring to relative / coordinate acceleration.
Dale
Mentor
He does not appear to use that term, but the quantity $$\frac{d^2}{d\tau^2}x^{\mu}(\tau)$$ in equation 1.102 is the proper acceleration in flat spacetime.
And the proper acceleration in curved spacetime is given by the left hand side of equation 3.47
vanhees71
Gold Member
One should note that (3.47) is only proper acceleration if ##\lambda## is normalized such that
$$g_{\mu \nu} \frac{\mathrm{d} x^{\mu}}{\mathrm{d} \lambda} \frac{\mathrm{d} x^{\nu}}{\mathrm{d} \lambda}=1.$$
The formula, i.e., the equation for a geodesic, parametrized in terms of an affine parameter ##\lambda##, is more general. You can also solve it if the tangent vector is null (world lines of massless particles or light rays in the sense of the eikonal approximation) or spacelike.
stevendaryl
Staff Emeritus
Consider satellites A and B, going at different velocities, in free fall orbit around a massive body C at different altitudes for a million years, then being brought together and the clocks on them compared. Presumably the bringing them together would become less significant the longer they orbited, and the amount of time dilation due to curvature would be frame of reference independent, but what about the observed velocities from the mass being orbited's perspective? If the satellites were labelled A and B and the mass C then would the clock comparison figure be correctly calculated no matter which you considered at rest?
Yes, two clocks that are in different orbits will show different amounts of elapsed time when they pass each other. You don't need to bring them together; instead you can put A into a circular orbit around C and put B into a very eccentric elliptical orbit. If you arrange things perfectly, then you can make sure that A and B pass each other with some regularity. I haven't done the calculation, but I believe that when they get back together, B will show more elapsed time than A.
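A rough numerical sketch of the original two-satellite scenario (an illustration, not from the thread: it assumes idealized circular geodesic orbits in the Schwarzschild geometry of an Earth-mass body, and the radii below are arbitrary examples):

```python
import math

G = 6.67430e-11      # m^3 kg^-1 s^-2
c = 299_792_458.0    # m/s
M = 5.972e24         # kg, Earth-mass central body C

def circular_orbit_rate(r_m):
    """dtau/dt for a circular geodesic orbit at Schwarzschild r-coordinate r_m.

    For a circular orbit the gravitational and velocity effects combine into
    the single factor sqrt(1 - 3GM/(r c^2)) (a standard textbook result,
    quoted here without derivation); larger r means a faster-ticking clock.
    """
    return math.sqrt(1.0 - 3.0 * G * M / (r_m * c * c))

r_A = 6.771e6    # satellite A, roughly 400 km altitude
r_B = 2.657e7    # satellite B, roughly 20,200 km altitude
year = 365.25 * 24 * 3600.0

# Per coordinate-year, B's clock gains this many seconds on A's clock;
# the accumulated total is what the two satellites compare when finally
# brought together, and both agree on it.
gain = (circular_orbit_rate(r_B) - circular_orbit_rate(r_A)) * year
print(gain)   # on the order of 0.02 s per year for these radii
```

The sign of the answer does not depend on which satellite you describe as "at rest": the elapsed proper times are invariants.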
He does not appear to use that term, but the quantity $$\frac{d^2}{d\tau^2}x^{\mu}(\tau)$$ in equation 1.102 is the proper acceleration in flat spacetime.
And the proper acceleration in curved spacetime is given by the left hand side of equation 3.47
As I mentioned to vanhees71 earlier
So if an observer was standing on the Earth and was using a coordinate system where they were considered at rest, then what would be the change in the x,y or z part of any coordinate point (at rest with respect to the observer) on the Earth over a period of time. Wouldn't only the time part of the coordinate be changing? So where would the acceleration of those points be using your equations? It seems to me that there would not be any, as they seem to represent coordinate acceleration. But (as I understand it) proper acceleration would be measured at any of those points on the Earth.
If I have misunderstood (sorry my maths is quite poor) then perhaps you could illustrate using a single coordinate ct = 0 x = 1, y = 1, z= 1 in your equations to show how it ends up with the measurable proper acceleration over 10 seconds perhaps?
Could you possibly just use that equation (which would seem to show no change to the x, y, z coords) using a single coordinate (t=0, x = 1, y = 1, z = 1) at rest with the observer standing on Earth to illustrate how it shows proper acceleration?
One should note that (3.47) is only proper acceleration if ##\lambda## is normalized such that
$$g_{\mu \nu} \frac{\mathrm{d} x^{\mu}}{\mathrm{d} \lambda} \frac{\mathrm{d} x^{\nu}}{\mathrm{d} \lambda}=1.$$
The formula, i.e., the equation for a geodesic, parametrized in terms of an affine parameter ##\lambda##, is more general. You can also solve it if the tangent vector is null (world lines of massless particles or light rays in the sense of the eikonal approximation) or spacelike.
Could you just use the equation with a single coordinate at rest with the observer on Earth to illustrate, as I mentioned in #67?
Yes, two clocks that are in different orbits will show different amounts of elapsed time when they pass each other. You don't need to bring them together; instead you can put A into a circular orbit around C and put B into a very eccentric elliptical orbit. If you arrange things perfectly, then you can make sure that A and B pass each other with some regularity. I haven't done the calculation, but I believe that when they get back together, B will show more elapsed time than A.
I am happy with them being brought together though, else there might be something to do with the elliptical orbit that has not been made clear. You seemed able to conclude that B's clock would have gone faster, but in the bit you quoted of what I had written, I had not mentioned which was in the lower orbit, or their respective velocities. Any effects of bringing them together tend to insignificance the longer A and B orbit anyway. If B was in the lower orbit then time dilation due to gravity would slow B's clock relative to A's, and presumably that effect is invariant across frames of reference. So would there not just be the kinematic time dilation left? And would not A think, using the metric of A's clock, that B's clock had ticked less than would have been expected (taking gravitational time dilation into account) if it had been in A's rest frame, and B, using the metric of B's clock, think that A's clock had ticked less than would have been expected (taking gravitational time dilation into account) if A had been in B's rest frame?
| 2020-09-26 00:10:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7434319257736206, "perplexity": 331.54459329576275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00354.warc.gz"} |
https://byjus.com/question-answer/the-monthly-maximum-temperature-of-a-city-is-given-in-degree-celcius-in-the-following/ | Question
# The monthly maximum temperature of a city is given in degrees Celsius in the following data. By taking suitable classes, prepare the grouped frequency distribution table: 29.2, 29.0, 28.1, 28.5, 32.9, 29.2, 34.2, 36.8, 32.0, 31.0, 30.5, 30.0, 33, 32.5, 35.5, 34.0, 32.9, 31.5, 30.3, 31.4, 30.3, 34.7, 35.0, 32.5, 33.5, 29.0, 29.5, 29.9, 33.2, 30.2 From the table, answer the following questions. (i) For how many days was the maximum temperature less than 34°C? (ii) For how many days was the maximum temperature 34°C or more than 34°C?
Solution
## Given: The monthly maximum temperature of a city is given in degrees Celsius in the following data: 29.2, 29.0, 28.1, 28.5, 32.9, 29.2, 34.2, 36.8, 32.0, 31.0, 30.5, 30.0, 33, 32.5, 35.5, 34.0, 32.9, 31.5, 30.3, 31.4, 30.3, 34.7, 35.0, 32.5, 33.5, 29.0, 29.5, 29.9, 33.2, 30.2

The grouped frequency distribution table of the given data is as follows:

Class (Temperature)   Tally marks   Frequency
28-29                 ||            2
29-30                 |||| |        6
30-31                 ||||          5
31-32                 |||           3
32-33                 ||||          5
33-34                 |||           3
34-35                 |||           3
35-36                 ||            2
36-37                 |             1
Total                               N = 30

(i) The number of days the maximum temperature was less than 34°C = 2 + 6 + 5 + 3 + 5 + 3 = 24.

(ii) The number of days the maximum temperature was 34°C or more than 34°C = 3 + 2 + 1 = 6.

Mathematics, Mathematics Part - I (Solutions), Standard IX | 2022-01-27 20:03:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5335448384284973, "perplexity": 445.6481878412644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00542.warc.gz"}
https://www.cuemath.com/ncert-solutions/q-1-exercise-12-1-algebraic-expressions-class-7-maths/ | # Ex.12.1 Q1 Algebraic Expressions Solution - NCERT Maths Class 7
## Question
Get the algebraic expressions in the following cases using variables, constants and arithmetic operations.
(i) Subtraction of $$z$$ from $$y.$$
(ii) One-half of the sum of numbers $$x$$ and $$y.$$
(iii) The number $$z$$ multiplied by itself.
(iv) One-fourth of the product of numbers $$p$$ and $$q.$$
(v) Numbers $$x$$ and $$y$$ both squared and added.
(vi) Number $$5$$ added to three times the product of numbers $$m$$ and $$n.$$
(vii) Product of numbers $$y$$ and $$z$$ subtracted from $$10.$$
(viii) Sum of numbers $$a$$ and $$b$$ subtracted from their product.
## Text Solution
Reasoning:
Let us first understand the meaning or definition of terms variable, constants and arithmetic operations
Variables are the letters used in an algebraic expression that can take any value, e.g. $$a, b, c$$ or $$z$$; each can take any value, such as $$2$$ or $$5$$ or any other number. Constants always have fixed values in algebraic expressions; they cannot be assumed or changed. Arithmetic operations are addition, subtraction, multiplication and division.
Steps:
(i) Subtraction of $$z$$ from $$y.$$
$y - z$
(ii) One-half of the sum of numbers $$x$$ and $$y.$$
$\frac{1}{2}\left( {x + y} \right)$
(iii) The number $$z$$ multiplied by itself.
$z \times z = {z^2}$
(iv) One-fourth of the product of numbers $$p$$ and $$q.$$
$\frac{1}{4}pq$
(v) Numbers $$x$$ and $$y$$ both squared and added.
$\left( {x \times x} \right) + \left( {y \times y} \right) = {x^2} + {y^2}$
(vi) Number $$5$$ added to three times the product of numbers $$m$$ and $$n.$$
$5 + 3\left( {m \times n} \right) = 5 + 3mn$
(vii) Product of numbers $$y$$ and $$z$$ subtracted from $$10.$$
$10 - \left( {y \times z} \right) = 10 - yz$
(viii) Sum of numbers $$a$$ and $$b$$ subtracted from their product.
$\left( {a \times b} \right)-\left( {a + b} \right) = ab - \left( {a + b} \right)$
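These translations can also be spot-checked numerically. Below is a small sketch (the function names are my own invention, and Fraction is used only to keep one-half and one-fourth exact):

```python
from fractions import Fraction

half = Fraction(1, 2)
quarter = Fraction(1, 4)

# One small function per part of the question, mirroring the expressions above.
def expr_i(y, z):    return y - z            # (i)    z subtracted from y
def expr_ii(x, y):   return half * (x + y)   # (ii)   one-half of the sum
def expr_iii(z):     return z * z            # (iii)  z multiplied by itself
def expr_iv(p, q):   return quarter * p * q  # (iv)   one-fourth of the product
def expr_v(x, y):    return x**2 + y**2      # (v)    both squared and added
def expr_vi(m, n):   return 5 + 3 * m * n    # (vi)   5 added to 3 times m*n
def expr_vii(y, z):  return 10 - y * z       # (vii)  product subtracted from 10
def expr_viii(a, b): return a * b - (a + b)  # (viii) sum subtracted from product

# Spot checks with concrete numbers:
assert expr_iii(7) == 7**2                   # z*z really is z squared
assert expr_ii(3, 5) == 4                    # (3 + 5)/2
assert expr_viii(2, 6) == 4                  # 2*6 - (2 + 6)
print("all translations check out")
```

Substituting concrete values like this is a quick way to verify that a phrase has been translated into the intended expression.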
| 2021-05-11 16:43:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5805360078811646, "perplexity": 1039.969314800154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00175.warc.gz"}
https://stats.stackexchange.com/questions/206135/expectation-of-the-maximum-of-two-correlated-normal-variables?noredirect=1 | Expectation of the maximum of two correlated normal variables
I am curious what the derivation is for the expectation of the maximum of two jointly normal random variables $X$ and $Y$ with correlation coefficient $\rho$.
I could start with the following but the absolute value sign under expectation doesn't look like a walk in the park:
$\mathbb{E}\left[\text{max}(X,Y)\right] = \mathbb{E}\left[\frac{X+Y}{2}+\frac{|X-Y|}{2}\right] = \ ...$
• The distribution of $\max(X,Y)$ (for $X$ and $Y$ with equal variances and equal means) is given at stats.stackexchange.com/questions/139072. From that you can compute the expectation (it looks like numerical methods have to be used). The general problem (for arbitrary variances and means) looks difficult: do you need a solution in that case? – whuber Apr 7 '16 at 20:58
• What i had in mind was the general problem.. – ambushed Apr 7 '16 at 21:05
• It looks like this question has been asked before. Apologies. – ambushed Apr 7 '16 at 21:31
• I'm not sure of that: I could not find an exact duplicate. The link in my comment was found after conducting three or four keyword searches of this site and inspecting several likely threads; it's the closest I could come. The link you found concerns the maximum of independent normal variables and its answer relies fundamentally on that assumption. – whuber Apr 7 '16 at 21:42
• Indeed, you are right! – ambushed Apr 7 '16 at 21:50
I will give the answer here, maybe I come back to add a proof ... Let $(X_1,X_2)$ be a bivariate random vector with a binormal distribution, with means $\mu_1, \mu_2$, standard deviations $\sigma_1, \sigma_2$ and correlation coefficient $\rho$. Then $X=\max(X_1, X_2)$ has probability density function $f(x) = f_1(-x)+f_2(-x)$ where $$f_1(x)= \frac1{\sigma_1}\phi(\frac{x+\mu_1}{\sigma_1})\cdot \Phi\left( \frac{\rho(x+\mu_1)}{\sigma_1\sqrt{1-\rho^2}}-\frac{x+\mu_2}{\sigma_2\sqrt{1-\rho^2}} \right) \\ f_2(x)= \frac1{\sigma_2}\phi(\frac{x+\mu_2}{\sigma_2})\cdot \Phi\left( \frac{\rho(x+\mu_2)}{\sigma_2\sqrt{1-\rho^2}}-\frac{x+\mu_1}{\sigma_1\sqrt{1-\rho^2}} \right)$$ where $\phi, \Phi$ are the density and cumulative distribution function of the standard normal.
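A quick numerical sanity check of this density (a sketch; the parameter values are arbitrary, and the density is written out directly for the max rather than via $f_1(-x)+f_2(-x)$ — the two forms agree algebraically): numerically integrating $f$ should give $1$, and integrating $x\,f(x)$ should match a Monte Carlo estimate of $\mathbb{E}[\max(X_1,X_2)]$.

```python
import math
import random

mu1, mu2, s1, s2, rho = 0.3, -0.5, 1.0, 2.0, 0.6   # arbitrary illustration

def phi(t):   # standard normal pdf
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(t):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def f_max(x):
    """pdf of max(X1, X2); same density as f1(-x) + f2(-x) above."""
    c = math.sqrt(1.0 - rho * rho)
    a1 = ((x - mu2) / s2 - rho * (x - mu1) / s1) / c
    a2 = ((x - mu1) / s1 - rho * (x - mu2) / s2) / c
    return phi((x - mu1) / s1) / s1 * Phi(a1) + phi((x - mu2) / s2) / s2 * Phi(a2)

# Riemann sums: the density should integrate to 1, and x*f_max(x) to the mean.
xs = [-12.0 + 0.005 * i for i in range(int(24 / 0.005) + 1)]
total = sum(f_max(x) for x in xs) * 0.005
mean_int = sum(x * f_max(x) for x in xs) * 0.005

# Monte Carlo cross-check via a Cholesky-style construction of (X1, X2).
random.seed(42)
n = 100_000
acc = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    acc += max(x1, x2)
mean_mc = acc / n

print(total, mean_int, mean_mc)  # total is close to 1; the means agree
```

The integral of $x\,f(x)$ also matches the closed-form expectation with $\theta=\sqrt{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$ for these parameters.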
This paper also gives an exact expression for the expectation: $$\DeclareMathOperator{\E}{\mathbb{E}} \E X = \mu_1 \Phi\left( \frac{\mu_1-\mu_2}{\theta} \right) + \mu_2 \Phi\left( \frac{\mu_2-\mu_1}{\theta} \right) + \theta \phi\left( \frac{\mu_1-\mu_2}{\theta} \right)$$ where $\theta = \sqrt{\sigma_1^2 +\sigma_2^2 - 2\rho\sigma_1\sigma_2}$. (the paper contains more, like the variance and moment generating functions). | 2020-01-18 19:40:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481413722038269, "perplexity": 293.08945327908117}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593937.27/warc/CC-MAIN-20200118193018-20200118221018-00162.warc.gz"} |
http://tex.stackexchange.com/questions/69336/subfloat-with-subcaption-package-missing-number-treated-as-zero | # Subfloat with subcaption package: Missing number, treated as zero
I want to make a figure consisting of two subfigures. Having read the Wikipedia subentry on subfloats, I tried to follow it exactly, so I did not use the subfig or subfigure package, only the caption and subcaption packages. Nevertheless, I am getting a Missing number, treated as zero error, pointing to the line with \begin{subfigure}.
What am I doing wrong?
Below is my code:
% In preamble:
\usepackage{url}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{rotating}
\usepackage[table]{xcolor}
\usepackage{multirow}
\usepackage{amsfonts}
% In document:
\begin{figure}[htpb]
\begin{subfigure}[b]{width=0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{img_a}
\end{subfigure}
\begin{subfigure}[b]{width=0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{img_b}
\end{subfigure}
\caption{A caption.}
\label{fig:my-figure}
\end{figure}
I am using TeXShop 2.47 on Mac OS X 10.8.1 (x86_64).
Thanks!
Note that the \includegraphics command uses the key-value syntax width=<width>, but the mandatory argument of the subfigure environment is just a length: \begin{subfigure}{<width>}, not \begin{subfigure}{width=<width>}.
I loaded the graphicx package with the demo option just for demonstration- remove it when you're working on your actual document :)
\documentclass{article}
% In preamble:
\usepackage[demo]{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\begin{document}
% In document:
\begin{figure}[htpb]
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{img_a}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{img_b}
\end{subfigure}
\caption{A caption.}
\label{fig:my-figure}
\end{figure}
\end{document}
- | 2015-04-26 06:32:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9131090641021729, "perplexity": 8403.956261112704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246653426.7/warc/CC-MAIN-20150417045733-00251-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.wyzant.com/resources/answers/users/84010110 | 06/02/21
#### Math Question Help
1. Lisa needs a science mark of 90% to get into her college program. If her term mark is 82% and it is worth 70% of her final mark, is it possible for her to achieve the 90%?
05/27/21
#### Describe the probability of each given outcome using one of the phrases below:►Impossible ►Unlikely, ►Neither Likely nor Unlikely, ►Likely, ►Certain
Describe the probability of each given outcome using one of the phrases below:►Impossible ►Unlikely, ►Neither Likely nor Unlikely, ►Likely, ►Certain Part A: A bag contains 10 Snickers bars and... more
02/24/21
#### Math Problems x5
Andrew is 8 years older than Christy. In 8 years the sum of their ages will be 90. How old is Andrew now?
02/24/21
#### Math Problem x4
Noah is 39 years old. Erica is 18 years old. How many years ago was Noah's age 4 times Erica's age?
08/26/20
#### How long will it take for the two trains to meet?
Two trains leave stations 384 miles apart at the same time and travel toward each other. One train travels at 75 miles per hour while the other travels at 85 miles per hour. How long will it take... more
07/28/20
#### Lanier review 30
Suppose you drive 128 miles and have 14 gallons of gas remaining in your tank. Then you continue driving, and calculate that after driving 243 miles you have 9 gallons left. What is your rate? miles... more
07/21/20
#### Exponential Equations 19
The population of the world in 1987 was 5 billion and the relative growth rate was estimated at 2 percent per year. Assuming that the world population follows an exponential growth model, find the... more
03/29/20
#### Find an equation of the line that (a) has the same y-intercept as the line y + 4 x − 10 = 0 and (b) is parallel to the line 1 x + 7 y = 6 . Write your answer in the form y = m x + b .
Find an equation of the line that (a) has the same y-intercept as the line y+4x−10=0 and (b) is parallel to the line 1x+7y=6.Write your answer in the form y=mx+b.where y=____x+_____
03/29/20
#### The equation of the line that goes through the point ( 7 , 5 ) and is perpendicular to the line 5 x + 4 y = 2 can be written in the form y = m x + b
The equation of the line that goes through the point (7,5) and is perpendicular to the line 5x+4y=2 can be written in the form y=mx+bwhere m is =and where b is =
03/29/20
#### The line whose equation is 5 x − 3 y = − 6 goes through the point ( − 7 , t ) for
The line whose equation is 5x−3y=−6 goes through the point (−7,t) for t=
03/29/20
#### A mathematics quetion
At Al Hikmah University, there are 126 1st year students, 375 2nd year students, 293 3rd year students and 187 4th year students.If the University were to randomly award a student, what is the... more
03/23/20
#### help with fractions?? :/
An insurance company sells a $4,000, seven-year term life insurance policy to an individual for $240. Find the expected return for the company if the probability the individual will live for the... more
03/23/20
#### help help help help help help
A car and a bike left the village for a city simultaneously. The distance between the city and village is 90 km.... more
03/22/20
#### Twice the number and 1
The expression of the question
02/08/20
#### For what values of x is the expression below undefined?
(x^2 - 7 + 10) / (x - 4)
12/18/19
#### the product of 12 and the sum of n and 3 is greater than or equal to 35
the product of 12 and the sum of n and 3 is greater than or equal to 35
09/29/19
#### word problem!!!!
A diesel train made a trip to Johannesburg and back. The trip there took 10 hours and the trip back took six hours. What was the diesel train's average speed on the trip if it averaged 75 mph on... more
Choose an expert and meet online. No packages or subscriptions, pay only for the time you need. | 2021-07-29 19:17:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4290753901004791, "perplexity": 711.4707847884824}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00318.warc.gz"} |
https://argoshare.is.ed.ac.uk/healthyr_book/multiple-testing.html | ## 6.9 Multiple testing
### 6.9.1 Pairwise testing and multiple comparisons
When the F-test is significant, we will often want to proceed to try and determine where the differences lie. This should of course be obvious from the boxplot you have made. However, some are fixated on the p-value!
pairwise.t.test(aov_data$lifeExp, aov_data$continent,
p.adjust.method = "bonferroni")
##
## Pairwise comparisons using t tests with pooled SD
##
## data: aov_data$lifeExp and aov_data$continent
##
## Americas Asia
## Asia 0.180 -
## Europe 0.031 1.9e-05
##
## P value adjustment method: bonferroni
A matrix of pairwise p-values is produced. Here we can see that there is good evidence of a difference in means between Europe and Asia.
When running a pairwise t-test, the p-values are already corrected for multiple comparisons. We have to keep in mind that a significance level of 0.05 means that, even when there is no true difference, each test has a 5% chance of returning a significant result in our sample. Therefore, the more statistical tests you perform, the greater the chance of finding a false positive result. This is also known as a Type I error.
There are three approaches to dealing with this. The first is not to perform any correction at all. Some advocate that the best approach is simply to present the results of all the tests that were performed, and let the sceptical reader make adjustments for themselves. This is attractive, but presupposes a sophisticated readership who will take the time to consider the results in their entirety.
The second, and classical, approach is to control the so-called family-wise error rate. The "Bonferroni" correction is probably the most famous and most conservative, where the threshold for significance is lowered in proportion to the number of comparisons made. For example, if three comparisons are made, the threshold for significance should be lowered to 0.017. Equivalently, all p-values should be multiplied by the number of tests performed (in this case 3). The adjusted values can then be compared to a threshold of 0.05, as is the case above. The Bonferroni method is particularly conservative, meaning that type 2 errors may occur (failure to identify true differences, or false negatives) in favour of minimising type 1 errors (false positives).
The third, newer, approach controls the false-discovery rate. The development of these methods has been driven in part by the needs of areas of science where many different statistical tests are performed at the same time, for instance, examining the influence of 1000 genes simultaneously. In these hypothesis-generating settings, a higher tolerance of type 1 errors may be preferable to missing potential findings through type 2 errors. You can see in our example that the p-values are lower with the fdr correction than with the Bonferroni correction.
pairwise.t.test(aov_data$lifeExp, aov_data$continent,
p.adjust.method = "fdr")
##
## Pairwise comparisons using t tests with pooled SD
##
## data: aov_data$lifeExp and aov_data$continent
##
## Americas Asia
## Asia 0.060 -
## Europe 0.016 1.9e-05
##
## P value adjustment method: fdr
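For readers who want to see what the two corrections actually do to a set of p-values, here is a minimal Python sketch (generic implementations written for illustration; they are not R's internals):

```python
def bonferroni(pvals):
    # Multiply each p-value by the number of tests, capping at 1.
    n = len(pvals)
    return [min(1.0, p * n) for p in pvals]

def bh_fdr(pvals):
    # Benjamini-Hochberg: sort ascending, scale p_(i) by n/i,
    # then enforce monotonicity from the largest p-value down.
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted
```

With raw p-values 0.01, 0.02 and 0.03, Bonferroni returns 0.03, 0.06 and 0.09, while Benjamini-Hochberg returns 0.03 for all three; the FDR-adjusted values are never larger than the Bonferroni ones.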
Try not to get too hung up on this. Be sensible. Plot the data and look for differences. Focus on effect size. For instance, what is the actual difference in life expectancy in years, rather than the p-value of a comparison test. Choose a method which fits with your overall aims. If you are generating hypotheses which you will proceed to test with other methods, the fdr approach may be preferable. If you are trying to capture robust effects and want to minimise type 2 errors, use a family-wise approach.
If your head is spinning at this point, do not worry - there is more to come. The rest of the book will continuously revisit these and other similar concepts, e.g., “know your data”, “be sensible, look at the effect size”, using several different examples and dataset. So do not feel like you should be able to understand everything immediately. Furthermore, these things are even easier to conceptualise using your own dataset - especially if that’s something you’ve put your sweat and tears into collecting/applying for. | 2020-02-21 09:45:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.552532434463501, "perplexity": 945.8682694414065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145500.90/warc/CC-MAIN-20200221080411-20200221110411-00490.warc.gz"} |
https://tex.stackexchange.com/questions/290114/how-to-sort-glossaries-entries-by-alphabetical-order-then-by-order-of-appereance/290472 | # How to sort glossaries entries by alphabetical order then by order of appereance
I would like my glossaries entries to be sorted by alphabetical order, and, if some entries have identical sort keys, by the order they are called in the text. Any way to do this ?
Here is the MWE, with the two entries div and commut exhibiting identical sort keys.
\documentclass[]{article}
\usepackage[
nogroupskip,
% xindy,
]{glossaries}
% I compile with xindy, but I do not think it is relevant here.
% xindy -L english -C utf8 -I xindy -M % -t %.glg -o %.gls %.glo
\newglossaryentry{div}
{ name={\ensuremath{\protect\vec{\nabla}.}},
description={Divergence operator},
sort={0}
}
\newglossaryentry{commut}
{ name={\ensuremath{\protect[~,~]}},
description={Commutator},
sort={0}
}
\newglossaryentry{b0}
{ name={\ensuremath{\protect\vec{B}_0}},
description={Static magnetic field},
sort={B0}
}
\makeglossaries
\begin{document}
Text.\gls{div} \gls{commut} \gls{b0}
\printglossaries
\end{document}
Here, for example, the commutator does not appear in the glossary. I would like it to appear after the divergence operator despite their identical sort keys.
I would suggest using \makenoidxglossaries together with \printnoidxglossary[sort=standard] which sorts the elements first according to the sort-key and then according to usage.
\documentclass{article}
\usepackage[nogroupskip]{glossaries}
\newglossaryentry{div}
{ name={\ensuremath{\protect\vec{\nabla}.}},
description={Divergence operator},
sort={0}
}
\newglossaryentry{commut}
{ name={\ensuremath{\protect[~,~]}},
description={Commutator},
sort={0}
}
\newglossaryentry{b0}
{ name={\ensuremath{\protect\vec{B}_0}},
description={Static magnetic field},
sort={B0}
}
\newglossaryentry{b1}
{ name={\ensuremath{\protect\vec{B}_1}},
description={Static magnetic field},
sort={B0}
}
\makenoidxglossaries
\begin{document}
Text.\gls{b1} \gls{div} \gls{commut} \gls{b0}
\printnoidxglossary[sort=standard]
\end{document}
Then you get
First the 0 entries show up according to usage, then the two B0 entries.
(xindy might change the entries for the sorting, so I'm not sure if it works there...)
• This actually works quite nicely indeed. No particular problem with xindy, apparently. I would just like to wait a little longer in case someone comes up with another answer, like something that adds a number at the end of the "sort" part.
– HcN
Feb 1, 2016 at 13:07
• +1 I hadn't noticed that as a by-product of the pre-sorted list being ordered according to usage this would happen! @HcN I think this is the best solution. The makeindex/xindy options set the sort value when the entry is defined. (This isn't simply the value of the sort key but it has also had the makeindex/xindy special characters escaped.) If the sort value is later adjusted it's liable to cause clones of the entries with the modified value which will confuse makeindex/xindy. The first use flag can't be relied on in this context as it may have been reset. Feb 2, 2016 at 13:31 | 2022-07-02 22:51:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703461408615112, "perplexity": 2855.446518343767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00470.warc.gz"} |
https://astronomy.stackexchange.com/questions/18405/size-of-saturns-ring-material/18407 | # Size of Saturn's ring material
How big are the chunks of rock ice that make up Saturn's rings? Are there many objects larger than pebble size?
• it would seem that rather than pebble size, they are more like 10cm size. (Fist-sized.) But, I do not totally intuitively grasp HDE's comprehensive answer. Sep 23 '16 at 17:03
• Well, judging by the sizes of rocks here on Earth, the sizes could vary. Apr 10 '19 at 16:23
The vast majority of the particles in Saturn's rings are small, on the order of $\sim10^{-1}$ m or lower. The columnar number density, according to data from Voyager 1 and Earth-based observations, can be approximated as a function of particle radius by a power law for all particle radii $a$ in meters such that $0<a<1$, as can be seen on this log-log plot (Fig. 15.5, Cuzzi et al. (2009); taken from Fig. 8 of Marouf et al. (1982)):
Though it is only acknowledged in the original paper, the vertical axes for the three different ring regions have been shifted upward different amounts to fit all three on the same graph.
After $a=1$, there's a deviation from the law, and then a steep dropoff at about $a=3$. Obviously, particles larger than this exist, and they certainly play an important role in ring structure, but they're relatively rare.
Obviously, the trends show that smaller particles are much more common, and thus while there are indeed particles larger than pebbles - some as big as boulders, perhaps, or bigger - they are certainly few in number. Most particles are extremely small, smaller than pebbles.
This data covers observations from ring semi-major axes of $\sim$ 75,000 km to $\sim$ 135,000 km - a fairly big spread, covering most of the rings and ending near the Roche Division. The paper doesn't have one single graph of particle number density of a given size at a given semi-major axis, but it does have several subdivided plots (Fig. 15.1 and 15.2) of optical depth as a function of distance from the center of Saturn, which should give you some helpful data on total number density, if you want to make some basic assumptions about mean particle radius. This data is a bit newer, from Cassini, but the Voyager 1 data is just as helpful.
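To get a feel for how steeply a power-law size distribution concentrates numbers at small radii, here is a toy Python calculation. The index q = 3 is purely illustrative (a round value, not fitted from the paper):

```python
def abundance_ratio(a_small, a_large, q=3.0):
    # For a differential size distribution n(a) proportional to a^(-q),
    # the ratio of number densities at two radii is (a_large / a_small)^q.
    return (a_large / a_small) ** q
```

With q = 3, centimetre-sized particles outnumber 10 cm particles a thousand to one per size bin; a steeper index concentrates the population at small sizes even more dramatically.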
• Well, 10-cm diameter objects are rather larger than pebbles :-) Sep 21 '16 at 11:06
• @HDE - I'm wondering, considering only "pebble sized" (1cm sized) versus "rock sized" (10cm sized) items. Is one, or the other, of those far more common?? I think (but you should explain!) the chart suggests there are about 10,000 times more of the 1cm type than the 10cm type. Is that correct? Sep 23 '16 at 17:05
• why does the chart "stick" at 10^4 on the left side near the top?? Sep 23 '16 at 17:05
• @CarlWitthoft Fair point; that might require a mention. Sep 23 '16 at 17:14
• @JoeBlow Edited. Sep 23 '16 at 17:18
Saturn's rings are composed of chunks as large as 1km in size, although the typical particle is tiny. They are spread through an area on average 10 meters thick. Also, Saturn's rings are nearly pure ice, not rocks. I don't know if we have a count of "how many" objects are larger than pebble size, given the gigantic number of particles that make up the rings I think we only have counts of the smaller moons within the rings. | 2022-01-23 14:25:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021727323532104, "perplexity": 1076.201140842642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00232.warc.gz"} |
http://sopromat.xyz/en/calculators/index?name=diagrams_multiple | ### Mohrs integral calculator
#### Diagram equations
$$f(z) = -9.52\cdot z^2 +35.2\cdot z +12$$ $$y(z) = -2.86\cdot z +4$$
### Mohr's integral by Simpson rule
$$\int f(z) \cdot y(z)\, dz = \frac{L}{6}\left(y_{left} \cdot f_{left} + 4\, y_{mid} \cdot f_{mid} + y_{right} \cdot f_{right}\right) =$$ $$= \frac{4.2}{6}\left(12\cdot 4 + 4\cdot 44\cdot(-2) + (-8)\cdot(-8)\right) = -168$$
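Because f(z) is quadratic and y(z) is linear, their product is a cubic, and the three-point Simpson rule integrates cubics exactly, so the rule above is not merely an approximation here. A short Python check (coefficients taken from the diagram equations; the hand-expanded product is an assumption worth verifying yourself):

```python
def f(z):
    # Quadratic diagram from the example above.
    return -9.52 * z**2 + 35.2 * z + 12.0

def y(z):
    # Linear diagram from the example above.
    return -2.86 * z + 4.0

def simpson3(func, a, b):
    # Three-point Simpson rule: (L/6) * (g(a) + 4*g(mid) + g(b)).
    mid = 0.5 * (a + b)
    return (b - a) / 6.0 * (func(a) + 4.0 * func(mid) + func(b))

def exact_integral(a, b):
    # Antiderivative of the expanded cubic product:
    # f(z)*y(z) = 27.2272 z^3 - 138.752 z^2 + 106.48 z + 48
    F = lambda z: (27.2272 * z**4 / 4 - 138.752 * z**3 / 3
                   + 106.48 * z**2 / 2 + 48.0 * z)
    return F(b) - F(a)
```

The -168 in the worked example uses the rounded ordinates 12, 44, -2 and -8; the unrounded polynomials give approximately -167.8.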
### Mohr's integral by Area rule
$$\int f(z) \cdot y(z) dz =\Omega \cdot y_c = 126 \cdot -1.33 = -168$$ | 2017-06-29 14:16:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7476984858512878, "perplexity": 6104.48740905446}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128329344.98/warc/CC-MAIN-20170629135715-20170629155715-00266.warc.gz"} |
http://physics.stackexchange.com/questions/18490/free-surface-of-inviscid-fluid-flow/18604 | # Free surface of inviscid fluid flow
The following problem seems like it should have a definite solution, but I've been thinking about it for months and haven't got anywhere. It might not be a well-posed problem, but if it isn't I'd like to understand why.
An incompressible, inviscid fluid of density $\rho$ flows continuously (in a steady state) as shown in the following diagram:
We know the height of the fluid (and hence its pressure) at points $x_1$ and $x_2$, but we don't know the velocity of the fluid or its height at any other value for $x$. The top of the fluid is a free surface, i.e. it's determined by the properties of the flow rather than being specified as part of the problem. I've drawn it as slightly concave but I've no idea if that's right.
Let us assume that the velocity profile at $x_1$ is vertical (i.e. velocity does not vary with height above point $x_1$). Because the fluid is inviscid it seems to me that the constant vertical velocity profile should be maintained as the fluid travels to the right. So if we were to dye a vertical line of the fluid a particular colour, it would remain a vertical line as it travelled to the right, because the pressure differential across the line is constant with depth. If this is correct it means we can think of the velocity component in the $x$-direction, $v_x$, as a function of $x$ rather than $x$ and $y$.
Because the flow is incompressible we know that $h(x)v_x(x)$ must constant over space, and this is the value I want to solve for (although it might not have a unique value - in that case I just want to know the function $h(x)$). If we need to we can also assume we know the initial and final velocities, $v_x(x_1)=v_1$ and $v_x(x_2)=v_2$.
It seems like Bernoulli's equation should have some relevance here. That would certainly be the case if the fluid were confined to a pipe instead of having a free surface. (In this case the pressure difference would be independent of the difference in height, so we'd need to know that as well.) But every time I try to solve this problem using the Bernoulli equation I get into a terrible mess. I'm really not sure of the best way to approach this problem, so any insight anyone can offer would be much appreciated.
-
I'm pretty sure that your "vertical line remains vertical" isn't necessarily true... – genneth Dec 19 '11 at 16:31
I agree with @genneth. The condition $\frac{\partial v_x}{\partial y} = 0$ does not follow from zero viscosity. This could be an additional assumption since there is not enough data. But IMO the approximation of potential flow is a better assumption for this case. – Maksim Zholudev Dec 19 '11 at 17:59
I've edited the question to make it clear that I'm assuming that the initial velocity profile is vertical - I think inviscidity should imply that this constant vertical profile should be maintained across space. I'm not 100% sure though - I'll have to see if I can think of an argument justifying that. – Nathaniel Dec 21 '11 at 12:39
(1) I agree with the fellows above me on this point - inviscidity does not imply $\partial_y v_x=0$. (2) if at $x_1$ the velocity is independent of $y$, then because it is horizontal at $y=0$, it must also be horizontal for $y=h(x_1)$, and therefore $\partial_x h(x_1)=0$, and the tangent to the free surface will be horizontal on the left side. (3) The problem is not defined if you don't specify the full profile of $\vec v(y)$ at both edges of the problem. (4) This is a very nice question! I'm setting a bounty. – yohBS Dec 21 '11 at 20:01
The fluid is incompressible and has no sources inside. This means that the continuity (mass conservation) equation is $$\text{div}\,\vec{v} = 0. \qquad (1)$$ Now we follow the standard procedure and represent $\vec{v}$ as follows: $$\vec{v} = \text{rot}\,\vec{A}.$$ The divergence of any curl is zero so equation (1) is satisfied by any smooth vector field $\vec{A}(\vec{r})$.
For 2-dimensional flow we can assume $$\vec{A} = \Bigl(0, 0, \psi(x,y)\Bigr)$$ so that $$v_x = \frac{\partial \psi}{\partial y}; \quad v_y = -\frac{\partial \psi}{\partial x}. \qquad (2)$$
In fluid dynamics $\psi(x,y)$ is called stream function because the lines of constant $\psi$ are the streamlines.
We have two known stream lines: $$y = h(x)$$ and $$y = 0.$$ Let's select the stream function as follows: $$\psi(x,y) = C\frac{y}{h(x)}. \qquad (3)$$ For the upper streamline we have $\psi=C$ and for the lower line $\psi=0$. This is a strong assumption and the main point of the solution. The selection of $\psi$ is not definite here. Formula (3) is intuitive: it gives streamlines that are similar to $h(x)$ but become straighter while approaching the bottom.
Now we can use (2) and (3) to find $\vec{v}$: $$\vec{v}(x,y) = \left(\frac{C}{h(x)},\; Cy\frac{h'(x)}{h^2(x)}\right) \qquad (4)$$ where $C$ is some constant determined by the boundary conditions.
The velocity field depends on the unknown function $h(x)$.
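Velocity field (4) is divergence-free for any smooth $h(x)$, which is easy to verify numerically. A Python sketch using central finite differences with an arbitrary illustrative profile h(x) = 1 + 0.1 sin x (the profile and constant C are made up for the test):

```python
import math

C = 2.0
h = lambda x: 1.0 + 0.1 * math.sin(x)
hp = lambda x: 0.1 * math.cos(x)          # analytic h'(x)

def vx(x, y):
    # First component of velocity field (4): C / h(x).
    return C / h(x)

def vy(x, y):
    # Second component of velocity field (4): C * y * h'(x) / h(x)^2.
    return C * y * hp(x) / h(x)**2

def divergence(x, y, eps=1e-4):
    # Central finite differences for d(vx)/dx + d(vy)/dy.
    dvx_dx = (vx(x + eps, y) - vx(x - eps, y)) / (2.0 * eps)
    dvy_dy = (vy(x, y + eps) - vy(x, y - eps)) / (2.0 * eps)
    return dvx_dx + dvy_dy
```

Analytically the two terms cancel exactly, so the numerical divergence should be zero up to truncation error.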
### Finding $h(x)$
Function $h(x)$ can be found by applying the Bernoulli equation to the top streamline. Bernoulli equation for incompressible fluid is $$\frac{v^2\bigl(x, y(x)\bigr)}{2} + \frac{p\bigl(x, y(x)\bigr)}{\rho} + gy(x) = \text{const}$$ where
$y(x)$ is the streamline,
$p(x,y)$ is the pressure,
$\rho$ is the density of the fluid,
$g$ is the gravitational acceleration.
The upper streamline $y(x)=h(x)$ is in the equilibrium with the atmosphere air. This means that the pressure of the fluid is equal to the atmosphere pressure: $$p\bigl(x, y(x)\bigr) = p_0.$$ So $$\frac{v^2\bigl(x, h(x)\bigr)}{2} + gh(x) = \text{const} - \frac{p_0}{\rho} = D \qquad (5)$$
Substitution of (4) into (5) gives the differential equation for $h(x)$: $$\frac{C^2}{2h^2}\left(1 + h'^2\right) + gh = D$$ or $$\frac{dh}{dx} = \sqrt{\frac{2h^2}{C^2}(D-gh) - 1}$$ $$h(x_1) = h_1$$ This can be solved numerically if we know $C$ and $D$.
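Once $C$ and $D$ are fixed, the ODE can be integrated with any standard scheme. Below is a hedged Python sketch using fixed-step RK4 with illustrative values (C = 1, D = 20, g = 9.81; these are not derived from any particular boundary condition), clamping the right-hand side to zero where the square-root argument would go negative:

```python
import math

g = 9.81

def rhs(h, C, D):
    # dh/dx = sqrt(2 h^2 (D - g h) / C^2 - 1), clamped to 0 when complex.
    arg = 2.0 * h * h * (D - g * h) / (C * C) - 1.0
    return math.sqrt(arg) if arg > 0.0 else 0.0

def integrate_h(h0, C, D, dx=1e-3, x_max=1.0):
    # Classical fixed-step RK4 for the free-surface profile h(x).
    h, x, out = h0, 0.0, [h0]
    while x < x_max:
        k1 = rhs(h, C, D)
        k2 = rhs(h + 0.5 * dx * k1, C, D)
        k3 = rhs(h + 0.5 * dx * k2, C, D)
        k4 = rhs(h + dx * k3, C, D)
        h += dx * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        x += dx
        out.append(h)
    return out
```

With these parameters the surface rises monotonically toward the level where the square-root argument vanishes (just below D/g), and the Bernoulli combination C²/(2h²)(1 + h'²) + gh stays equal to D wherever the root is real.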
### Finding $C$ and $D$
The parameters $C$ and $D$ are determined by the boundary conditions. If we know the velocity at the point $(x_1, h_1)$ then
from (4): $$C = h_1 v_x(x_1, h_1)$$ and from (5): $$D = \frac{v^2(x_1, h_1)}{2} + gh_1$$
### Conclusion
There are two weak points in this solution:
1. the intuitive assumption (3);
2. the undefined constants $C$ and $D$.
Some boundary conditions can violate (3) and/or make calculation of $C$ and $D$ very difficult.
## Alternative
There is another way to select the stream function.
If we suppose the flow to be potential the velocity field will have the following form: $$\vec{v} = \nabla \varphi$$ where $\varphi(x,y)$ is the potential of the velocity vector field.
Then in addition to (2) we will have: $$v_x = \frac{\partial \varphi}{\partial x}; \quad v_y = \frac{\partial \varphi}{\partial y}. \qquad (6)$$
Now we can introduce the complex potential of the flow: $$W(x+iy) = \varphi(x,y) + i \psi(x,y)$$ The formulas (2) and (6) together are exactly the Cauchy-Riemann conditions for the function $W(z)$. This means that $W(z)$ describes some conformal map.
If we find a conformal map $W(z)$ that turns some rectangle into the blue area in the picture in the question for any $h(x)$, then we find a potential flow (flow with zero vorticity) that solves the problem. Some manipulations will still be required to find $h(x)$.
In fact any $W(z)$ always turns the 2-dimensional potential flow with $$\varphi(x,y) = x$$ $$\psi(x,y) = y$$ and $$\vec{v} = (v_x, 0)$$ into something more interesting and still fitting the hydrodynamics equations. This works only for potential flows, which are not always a good approximation.
Finding of $W(z)$ in this case is a mathematical problem and perhaps should be discussed somewhere else.
-
Great, thanks very much - this gives me a lot of general insight into how to solve this type of problem. I think the weak points you mention aren't too bad for my purposes (and surely something like them will apply to any solution). In particular, I think I'm right in saying that your choice of $\psi$ is the only one that obeys my assumption of an initial velocity profile that's constant with height, so it can be justified that way. It certainly seems to be the only choice that maintains a constant velocity profile anyway. – Nathaniel Dec 22 '11 at 15:02
@Nathaniel, this solution is not the only possible. I have added some remarks concerning potential flow approximation to the post. Potential flow has zero vorticity and a line that is initially vertical will not remain vertical in the case we consider. – Maksim Zholudev Dec 22 '11 at 15:52 | 2016-02-14 01:44:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104318618774414, "perplexity": 174.644498471046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701168076.20/warc/CC-MAIN-20160205193928-00126-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://wiki.seg.org/wiki/Dictionary:Hankel_transform | # Dictionary:Hankel transform
(hank’ ∂l) The Hankel transform of order ${\displaystyle m}$ of the real function ${\displaystyle f(t)}$ is
${\displaystyle F(s)=\int _{0}^{\infty }f(t)\,J_{m}(st)\,t\;dt}$
where ${\displaystyle J_{m}}$ is the ${\displaystyle m}$-order Bessel function. Also called a Bessel transform. Named for Hermann Hankel (1839–1873), German mathematician. | 2020-02-29 14:29:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335567355155945, "perplexity": 1933.3182558888548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875149238.97/warc/CC-MAIN-20200229114448-20200229144448-00033.warc.gz"} |
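For example (a standard tabulated transform pair, added here for illustration), the order-zero Hankel transform of ${\displaystyle f(t)=e^{-at}}$ with ${\displaystyle a>0}$ is

${\displaystyle F(s)=\int _{0}^{\infty }e^{-at}\,J_{0}(st)\,t\;dt={\frac {a}{\left(a^{2}+s^{2}\right)^{3/2}}}.}$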
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=129&t=41724&p=143522 | ## Reversible Work and Maximum Work
$w=-P\Delta V$
and
$w=-\int_{V_{1}}^{V_{2}}PdV=-nRTln\frac{V_{2}}{V_{1}}$
Julia Go 2L
Posts: 60
Joined: Sun Sep 30, 2018 12:17 am
Been upvoted: 1 time
### Reversible Work and Maximum Work
Why does reversible expansion do more work than an irreversible expansion?
Sierra Cheslick 2B
Posts: 61
Joined: Fri Sep 28, 2018 12:27 am
### Re: Reversible Work and Maximum Work
Reversible expansion is slower, and therefore more work is done since less energy is lost as heat.
Posts: 62
Joined: Fri Sep 28, 2018 12:23 am
### Re: Reversible Work and Maximum Work
The expansion for a reversible system is done infinitely slowly and less heat is released to the surroundings resulting in more work done. You can better understand this process by viewing the pressure volume graphs for the two types of systems. The area under the curves shows you the work done by the systems. Clearly you can see which one does more work using those graphs.
Posts: 57
Joined: Fri Sep 28, 2018 12:27 am
### Re: Reversible Work and Maximum Work
From my knowledge, reversible reactions happen simultaneously, whereas an irreversible one goes in steps. For this reason, say we were to calculate the area under the curve of the two reactions, the work is greater in a reversible reaction. The graph of a reversible is a curve (which has more area under it) while the irreversible, because it happens in steps, tends to be just a rectangle of sorts. To illustrate this, imagine decreasing pressure first, which would cover no area; the line would remain stagnant, but the point would just drop.
Posts: 47
Joined: Fri Sep 28, 2018 12:15 am
### Re: Reversible Work and Maximum Work
When a gas expands reversibly, the external pressure is matched to the pressure of the gas at every stage of the expansion. Thus, the steps that correspond to the increase in volume are infinitesimal, and thus achieve maximum area under the curve. This results in the maximum work.
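As a quick numeric sanity check (my own illustration, not from the thread), compare the two expressions at the top of the page for an isothermal ideal-gas expansion that doubles the volume:

```python
import math

R = 8.314          # gas constant, J/(mol*K)
n, T = 1.0, 298.0  # 1 mol of ideal gas, isothermal at 298 K
V1, V2 = 1.0, 2.0  # initial/final volume (only the ratio matters)

# reversible isothermal expansion: w = -nRT * ln(V2/V1)
w_rev = -n * R * T * math.log(V2 / V1)

# irreversible expansion against a constant external pressure equal to
# the final gas pressure: w = -P_ext * (V2 - V1) = -nRT * (1 - V1/V2)
w_irrev = -n * R * T * (1 - V1 / V2)

print(round(w_rev))    # -1717 (J)
print(round(w_irrev))  # -1239 (J)
```

The reversible path does more work on the surroundings (larger |w|), matching the area-under-the-curve argument above.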
Ray Guo 4C
Posts: 90
Joined: Fri Sep 28, 2018 12:15 am
### Re: Reversible Work and Maximum Work
Madeline Motamedi 4I wrote:The expansion for a reversible system is done infinitely slowly and less heat is released to the surroundings resulting in more work done. You can better understand this process by viewing the pressure volume graphs for the two types of systems. The area under the curves shows you the work done by the systems. Clearly you can see which one does more work using those graphs.
The graph does make sense, but why does a reversible expansion lose less heat? | 2019-08-19 03:31:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5910645127296448, "perplexity": 1108.43540211522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314641.41/warc/CC-MAIN-20190819032136-20190819054136-00531.warc.gz"} |
https://questions.examside.com/past-years/jee/question/plet-s-t-u-be-three-non-void-sets-and-f--s-to-t-g-wb-jee-mathematics-trigonometric-functions-and-equations-bcc5wnpdqqi2vmio
1
WB JEE 2022
Let S, T, U be three non-void sets and f : S $$\to$$ T, g : T $$\to$$ U and composed mapping g . f : S $$\to$$ U be defined. Let g . f be injective mapping. Then
A
f, g both are injective.
B
neither f nor g is injective.
C
f is obviously injective.
D
g is obviously injective.
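Explanation

Suppose $$f(x) = f(y)$$. Then $$g(f(x)) = g(f(y))$$, i.e. $$(g . f)(x) = (g . f)(y)$$, and since $$g . f$$ is injective this forces $$x = y$$. Hence f must be injective. On the other hand, g only needs to be injective on the image of f, so g need not be injective overall. Therefore option (C) is correct.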
2
WB JEE 2022
A is a set containing n elements. P and Q are two subsets of A. Then the number of ways of choosing P and Q so that P $$\cap$$ Q = $$\varphi$$ is
A
$${2^{2n\_2n}}{C_n}$$
B
$${2^n}$$
C
$${3^n} - 1$$
D
$${3^n}$$
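Explanation

Each of the n elements of A can be placed in P only, in Q only, or in neither (it cannot be in both, since $$P \cap Q = \varphi$$). These three choices are independent across the n elements, so the number of ways is $$3^n$$, i.e. option (D).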
3
WB JEE 2021
Let R be the real line. Let the relations S and T on R be defined by
$$S = \{ (x,y):y = x + 1,0 < x < 2\}, \quad T = \{ (x,y): x - y \text{ is an integer}\}.$$ Then
A
both S and T are equivalence relations on R
B
T is an equivalence on R but S is not
C
neither S nor T is an equivalence relation on R
D
S is an equivalence relation on R but T is not
Explanation
We have,
$$S = \{ (x,y):y = x + 1,0 < x < 2\}$$
For S to be reflexive we would need (x, x) $$\in$$ S,
but that requires x = x + 1, which is impossible.
$$\therefore$$ S is not reflexive
$$\therefore$$ S is not an equivalence relation
T = {(x, y) : x $$-$$ y is an integer}
For reflexivity: x $$-$$ x = 0 is an integer, so (x, x) $$\in$$ T.
T is reflexive
For symmetry: if (x, y) $$\in$$ T, then x $$-$$ y is an integer,
so y $$-$$ x is also an integer and (y, x) $$\in$$ T.
$$\therefore$$ T is symmetric
For transitivity: (x, y) $$\in$$ T, (y, z) $$\in$$ T $$\Rightarrow$$ (x, z) $$\in$$ T
(x, y) $$\in$$ T means x $$-$$ y is an integer
(y, z) $$\in$$ T means y $$-$$ z is also an integer
$$\therefore$$ (x $$-$$ y) + (y $$-$$ z) = x $$-$$ z is an integer
$$\therefore$$ (x, z) $$\in$$ T
Hence, T is an equivalence relation.
4
WB JEE 2021
Let A, B, C be three non-void subsets of set S. Let (A $$\cap$$ C) $$\cup$$ (B $$\cap$$ C') = $$\phi$$ where C' denote the complement of set C in S. Then
A
A $$\cap$$ B = $$\phi$$
B
A $$\cap$$ B $$\ne$$ $$\phi$$
C
A $$\cap$$ C = A
D
A $$\cup$$ C = A
Explanation
Given, (A $$\cap$$ C) $$\cup$$ (B $$\cap$$ C') = $$\phi$$
$$\Rightarrow$$ A $$\cap$$ C = $$\phi$$ ..... (i)
and B $$\cap$$ C' = $$\phi$$ ..... (ii)
From Eqs. (i) and (ii), we get
A $$\cap$$ B = $$\phi$$
https://paramsingh.dev/blog/b72/ | # Higher-order functions
Sunday, Dec 27, 2020
Functional Haskell
Higher-order functions are functions that take other function(s) as their argument(s). This mechanism allows functional languages to be more expressive and powerful. Higher-order functions enable encapsulating common programming patterns as functions. Here are a few examples:
## Map
Pattern: Create a new list by applying a function to all elements of an old list.
-- definition
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
-- square of numbers
Prelude> map (^2) [1..10]
[1,4,9,16,25,36,49,64,81,100]
-- length of words in a string list
Prelude> map length ["hello", "bye"]
[5,3]
Note that it is defined as “map takes a function (a -> b) and a list of type a as arguments and produces a list of type b as result”.
It is worth noting that map is polymorphic. Furthermore, it can be applied to itself to process nested lists:
Prelude> map (map signum) [[-100, 5, 0], [-9, -9, 0]]
[[-1,1,0],[-1,-1,0]]
## Filter
Pattern: Create a new list by selecting elements of another list that meets a certain predicate.
-- definition
filter :: (a -> Bool) -> [a] -> [a]
filter f [] = []
filter f (x:xs) | f x = x : filter f xs
| otherwise = filter f xs
-- odd numbers only
Prelude> filter odd [1..10]
[1,3,5,7,9]
-- vowels only
Prelude> filter (\x -> elem x "aeiou") ['a'..'z']
"aeiou"
## Foldr
Pattern: process elements of the list using a right-associative operator, i.e., for some operator $@$ and list $[a,b,c]$, we have $$a @ (b @ c)$$ This is a recursive pattern with the following general structure:
f [] = v
f (x:xs) = x @ f xs
which can be encapsulated using foldr as:
foldr (@) v
So for an empty list, the value v is returned, while for a non-empty list, the head is combined (using the operator) with the result of recursively calling the function on the tail. For example, instead of writing the sum function as follows:
sum :: Num a => [a] -> a
sum [] = 0
sum (x:xs) = x + sum xs
We could use foldr:
sum :: Num a => [a] -> a
sum = foldr (+) 0
A key observation can be made by looking at the pattern that foldr encapsulates: f (x:xs) = x @ f xs. The operator @ has two arguments - the first is the head of the list and the second is the result of recursively applying f to the tail of the list. With this insight, we can write more functions beyond simple math operators. For example, the recursive definition of reversing a list is:
reverse :: [a] -> [a]
reverse [] = []
reverse (x:xs) = reverse xs ++ [x]
Since expression for non-empty list contains 1. head of the list, x and 2. recursively applying reverse to the tail of the list, it meets the foldr pattern. So we can define our operator as (with head as the first argument and tail as the second argument):
\x xs -> xs ++ [x]
And then write our reverse using foldr as:
reverse :: [a] -> [a]
reverse = foldr (\x xs -> xs ++ [x]) []
Prelude> reverse "hello"
"olleh"
Prelude> reverse [1..5]
[5,4,3,2,1]
## Foldl
Pattern: process elements of the list using a left-associative operator, i.e. for some operator $@$ and list $[a,b,c]$, we have $$(a @ b) @ c$$ This is a recursive pattern with the following general structure:
f v [] = v
f v (x:xs) = f (v @ x) xs
which can be encapsulated using foldl as:
foldl (@) v
Here, v is the accumulator value, which is returned for an empty list. For a non-empty list, the head is combined with the accumulator using the operator and the function is recursively called on this new accumulator and the tail. Similar to foldr, addition can now be defined as:
add :: Num a => [a] -> a
add = foldl (+) 0
Although it looks similar to foldr, it is worth noting that while it is the last element that is processed first in foldr, in foldl the head of the list is the first element to be processed. When working with the foldl pattern, it may be useful to think about how the operator @ processes the head x and the initial value of the accumulator v. For example, for reversing a list, the head will be consed onto the empty list in the first step. Therefore, our function becomes \xs x -> x:xs and we can write reverse like:
reverse' = foldl (\xs x -> x:xs) []
Prelude> reverse' "hello"
"olleh"
Prelude> reverse' [1..5]
[5,4,3,2,1]
Note that first argument now is the tail and second is the head.
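To see the associativity difference concretely (an illustrative example of my own): with a non-associative operator like subtraction, the two folds give different results.

```haskell
main :: IO ()
main = do
  -- foldr (-) 0 [1,2,3] = 1 - (2 - (3 - 0)) = 2
  print (foldr (-) 0 [1, 2, 3])
  -- foldl (-) 0 [1,2,3] = ((0 - 1) - 2) - 3 = -6
  print (foldl (-) 0 [1, 2, 3])
```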
## (.)
Pattern: Composition, as in math: $f.g$
Composition allows us to write nested functions more clearly without worrying about the initial argument. For example, removing empty lists from a list of lists can be written as:
rmempty :: [[a]] -> [[a]]
rmempty = filter (not . null)
Prelude> rmempty [[1], [],[2]]
[[1],[2]]
## Curry and uncurry
Pattern: Convert a function on pair to curried function and vice-versa
curry' :: ((a, b) -> c) -> a -> b -> c
curry' f x y = f (x,y)
uncurry' :: (a -> b -> c) -> (a,b) -> c
uncurry' f (x,y) = f x y
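For example, the standard Prelude versions (curry and uncurry, which behave like the primed definitions above) in action:

```haskell
main :: IO ()
main = do
  print (uncurry (+) (1, 2))  -- 3: applies (+) to the components of the pair
  print (curry fst 1 2)       -- 1: fst now takes two separate arguments
```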
## More examples
### check if all elements satisfy a predicate
Prelude> all Data.Char.isLower "hello"
True
### check if any element satisfy a predicate
Prelude> any Data.Char.isUpper "hello"
False
### take elements from a list while they satisfy a predicate
Prelude> takeWhile (/= ' ') "hello world"
"hello"
### drop elements from a list while they satisfy a predicate
Prelude> dropWhile (\x -> sqrt x < 5) [1..30]
[25.0,26.0,27.0,28.0,29.0,30.0] | 2022-09-28 02:55:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.523409366607666, "perplexity": 2985.7202617103335}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00627.warc.gz"} |
https://proofwiki.org/wiki/Definition:Complex_Subfield | # Definition:Complex Subfield
Let $\C$ be the field of complex numbers.
A field $\GF$ is a complex subfield if and only if $\GF$ is a subfield of $\C$. | 2023-02-07 14:02:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.577147901058197, "perplexity": 178.42127995534645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00326.warc.gz"} |
http://math.stackexchange.com/questions/207213/probability-that-two-numbers-do-not-follow-each-other-and-are-distributed-over-a?answertab=active | # Probability that two numbers do not follow each other and are distributed over a sequence
Assume a sequence $S$ of numbers out of the set $N=\{1, \dots, n\}$.
Example: $$S = "123312"$$
Set of all pairs would be: $$M = (2,3),(3,3), (3,1)$$
Not in $M$:
$(1,1)$ : not occurring in the sequence next to each other.
$(1,2)$ : occurring twice.
Pairs are order-dependent: $(1,2) \ne (2,1)$.
What is the probability that all pairs $(a,b)$ out of the set $M$ that occur only once in $S$ are evenly distributed over the length of the sequence $S$? How do I best model "evenly distributed" over $S$?
(In reality I would like an even distance between the pairs $(a,b)$, or at least the probability for that.) Should I model it as quintiles, with bins into which the pairs fall (like I highlighted in this example)? What would be the mathematical term for such a distribution?
The arrow represents the sequence $S$ made up of instances of $N$ of length $L$. The yellow circles denote the position $p$ of a pair $(a,b)$ that occurs only once in $S$. The horizontal arrow denotes the distance $D$ between two yellow circles.
I also would like to plug into this different probabilities from the set N={1..n} with pN=p(1..n).
What I have tried so far is computing the probability that a pair occurs in a certain part of the sequence and not in the other parts. I then sum this up over all possible pairs. However, the equations get pretty nasty somehow and Mathematica cannot simplify them by much. I'm pretty sure there should be a solution that is not too complicated.
-
a/b is a pair, not a fraction. How can I express that more clearly? – tarrasch Oct 8 '12 at 7:29
Say $(a,b)$ if it is the pair you consider. I guess it should be a pair out of $N^2$, not $N$, then. $S$ is a set of pairs, right? – Ewan Delanoy Oct 8 '12 at 7:31
The fractions in your drawing should be pairs also? – Ewan Delanoy Oct 8 '12 at 7:33
yes you are right. the fractions in the drawing are pairs also. – tarrasch Oct 8 '12 at 8:07
Have you listed all possible pairs of $M$ in your example? What about $(1,2)$ and $(1,1)$? – Raskolnikov Oct 8 '12 at 8:27
I will only consider the question, 'How do I best model "evenly distributed" over $S$?' and even for that question I will just give you a search term. If you have a finite collection of numbers $$\{x_1,x_2,\dots,x_n\}$$ with $0\le x_i\lt1$ for $i=1,2,\dots,n$, then you can calculate a quantity called the discrepancy of the set, which will give you a measure of how far it is from being evenly distributed. So my advice is that you hunt around for discussions of the mathematical concept of discrepancy.
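For reference, one standard notion (added here for convenience, not part of the original answer) is the star discrepancy of the points, $$D_N^*=\sup_{0\le t\le 1}\left|\frac{\#\{i : x_i<t\}}{N}-t\right|,$$ and small values of $D_N^*$ indicate that the point set is close to being evenly distributed on $[0,1)$.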
https://codereview.stackexchange.com/questions/188729/checking-for-win-on-a-wrap-around-connect-6-board | # Checking for win on a wrap-around Connect 6 board
I have a method that checks win conditions on a "Torus" board, which is a board without any borders. This means that if you place 4 diagonal stones in the top left and 2 diagonal stones in the bottom right, and they lie on the same diagonal, they would connect if you ignored the border, which leads to a win. Basically it's a Connect 6 game.
size() returns the size of the board which either is 18 or 20.
currentPlayer is a String like : "P1" or "P2".
r and c are the row and column where a move has just been made.
public boolean checkTorusWinner(int r, int c){
int count = 0;
boolean hasWinner = false;
String currentPlayer = board[r][c];
int hSize = size();
int vSize = size();
/*
Checks Horizontally for a Win.
*/
for (int i = c; i < hSize; i++) {
if (board[r][i] == currentPlayer) {
count++;
} else {
count = 0;
}
if (i == size() - 1) {
hSize = size() - 2;
for (int j = 0; j < hSize; j++) {
if (board[r][j] == currentPlayer) {
count++;
} else {
count = 0;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
/*
Checks Vertically for a Win
*/
for (int i = r; i < vSize; i++) {
if (board[i][c] == currentPlayer) {
count++;
} else {
count = 0;
}
if (i == size() - 1) {
i = -1;
vSize = size() - 2;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
/*
Checks Diagonally from Top left to Bottom right
*/
if (c - r >= 0) {
int startingC;
startingC = c - r;
int size = size();
for (int i = r, j = c; j < size; i++, j++) {
if (board[i][j] == currentPlayer) {
count++;
} else {
count = 0;
}
if (j == size() - 1) {
j = startingC - 1;
i = -1;
size = size() - 2;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
} else {
int size = size();
int startingR;
startingR = r - c;
for (int i = r, j = c; i < size; i++, j++) {
if (board[i][j] == currentPlayer) {
count++;
} else {
count = 0;
}
if (i == size() - 1) {
j = -1;
i = startingR - 1;
size = size() - 2;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
}
/*
Checks Diagonally from bottom left to top right;
*/
if (r + c <= 17) {
int loop = 0;
int startingR;
startingR = r + c;
for (int i = r, j = c; i >= 0; i--, j++) {
if (board[i][j] == currentPlayer) {
count++;
} else {
count = 0;
}
if (i == 0 && loop == 0) {
i = startingR + 1;
j = -1;
loop++;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
} else if (r + c > 17) {
int loop = 0;
int startingC;
startingC = (r + c) - (size() - 1);
for (int i = r, j = c; i >= startingC; i--, j++) {
if (board[i][j] == currentPlayer) {
count++;
} else {
count = 0;
}
if (i == startingC && loop == 0) {
i = size();
j = startingC - 1;
loop++;
}
if (count == 6) {
boardType = "none";
hasWinner = true;
break;
}
}
}
return hasWinner;
}
Unfortunately this method's length doesn't meet the requirement for my university: it has to be a maximum of 80 lines. I don't know how I'm supposed to shorten this code so that it still works.
• for the record... are you mandated to solve the problem in a single method? – Vogel612 Mar 3 '18 at 18:39
• I think it is shorter to write a method that ignores which was the last move and just loops the entire board to find any winning combination, and if so, by which player. – JanErikGunnar Mar 3 '18 at 20:00
• This doesn't look like it returns the correct result. Have you checked it? In particular, what happens if the newest piece is in the middle of a horizontal sequence? – mdfst13 Mar 3 '18 at 23:28
Your code is a procedural approach to the problem.
There is nothing wrong with procedural approaches in general, but Java is an object oriented (OO) programming language and if you want to become a good Java programmer then you should start solving problems in an OO way.
But OOP doesn't mean to "split up" code into random classes.
The ultimate goal of OOP is to reduce code duplication, improve readability and support reuse as well as extending the code.
Doing OOP means that you follow certain principles which are (among others):
• information hiding / encapsulation
• single responsibility
• separation of concerns
• KISS (Keep it simple (and) stupid.)
• DRY (Don't repeat yourself.)
• Law of demeter ("Don't talk to strangers!")
## How might that help to improve your code?
From an OO point of view you have the current position and you have to check if that position is part of a line containing at least 5 further equal elements (excluding itself).
The first implication is that you only have to look at the current position's neighbors and that there is no need to scan the whole board.
The easiest way is to go in each direction and count the consecutive neighbors belonging to the current player. Afterwards you add the counts of opposite directions and check the sum.
I use a trick to safely calculate an index in "wrap around" arrays:
(arrayLength + currentIndex + differece) % arrayLength
where % is the modulo operator.
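As a quick illustration of this indexing trick (a hypothetical snippet of my own, not part of the solution below):

```java
public class TorusIndexDemo {
    public static void main(String[] args) {
        int length = 18;
        // stepping one position "left" from index 0 wraps around to 17
        System.out.println((length + 0 - 1) % length);  // prints 17
        // stepping one position "right" from index 17 wraps around to 0
        System.out.println((length + 17 + 1) % length); // prints 0
    }
}
```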
Here is how I would implement that:
class FiledPosition{
final int r, c;
FiledPosition(int r, int c){
this.r=r;
this.c=c;
}
}
interface NeighborCalculator {
FiledPosition getFor(FiledPosition current);
}
enum Direction {NORTH,NORTH_EAST,EAST,SOUTH_EAST,SOUTH,SOUTH_WEST,WEST,NORTH_WEST}
the code above may live in separate classes. What follows must be in your solution class
private final Direction[][] opposits = new Direction[][]{
{NORTH,SOUTH},
{NORTH_EAST,SOUTH_WEST},
{NORTH_WEST,SOUTH_EAST},
{EAST,WEST}
};
private final int WIN_COUNT_EXCLUDUNG_CURRENT = 5;
Map<Direction, NeighborCalculator> neigborSelector = new HashMap<>();
public boolean checkTorusWinner(int r, int c){
neigborSelector.put(NORTH, new NeighborCalculator(){ // pre java8 anonymous inner class
public FiledPosition getFor(FiledPosition currentPoint ){
return new FiledPosition((vSize+currentPoint.r-1)%vSize, currentPoint.c);
}
});
neigborSelector.put(NORTH_EAST,currentPoint -> new FiledPosition((vSize+currentPoint.r-1)%vSize, (hSize+currentPoint.c+1)%hSize)); // java8 lambda
neigborSelector.put(EAST,currentPoint -> new FiledPosition(currentPoint.r, (hSize+currentPoint.c+1)%hSize));
neigborSelector.put(SOUTH_EAST,currentPoint -> new FiledPosition((vSize+currentPoint.r+1)%vSize, (hSize+currentPoint.c+1)%hSize));
// similar for all directions, should be in the classes constructor.
Map<Direction, Integer> lineSectionCounts = new HashMap<>();
String currentPlayer = board[r][c];
int hSize = size();
int vSize = size();
// count consecutive same in each direction without current
for(Direction direction : Direction.values()){
int consecutiveSame = 0;
FiledPosition neigborPosition = neigborSelector.get(direction).getFor(new FiledPosition(r,c));
while(currentPlayer.equals(board[neigborPosition.r][neigborPosition.c])){
consecutiveSame++;
neigborPosition = neigborSelector.get(direction).getFor(neigborPosition);
}
lineSectionCounts.put(direction, consecutiveSame); // auto boxed
}
// sum up opposit directions
for(Direction[] opposit : opposits){
if(WIN_COUNT_EXCLUDUNG_CURRENT <= lineSectionCounts.get(opposit[0]) + lineSectionCounts.get(opposit[1])) // auto unbox
return true; // current Player won.
}
return false; // no winner yet
}
This complete code has 57 lines (24 without the configuration). There are 4 lines missing to completely configure the neigborSelector map (if you use java8 lambdas).
This code uses basic Java concepts like classes, interfaces and enums you should already have heared of.
• This solution has 8 repetitions of code where the only difference is the -1, 0, +1. Additionally, since SOUTH is basically "NEGATIVE NORTH", defining SOUTH is effectively duplication of NORTH. I think it would be more DRY to have the four "real" directions (horizontal, vertical, diagonal forward, diagonal backward) as small objects only containing the "x-delta" and "y-delta" ({1,0}, {0,1}, {1,1}, {1,-1}). Then the code (or a method on the direction class) can just negate both values to get the opposite direction where needed. – JanErikGunnar Mar 5 '18 at 12:46
• Additionally, WIN_COUNT_EXCLUDUNG_CURRENT is very specific, a WIN_COUNT (and taking -1 if "current" needs to be excluded) would be far more reusable and readable. – JanErikGunnar Mar 5 '18 at 12:46
• @JanErikGunnar "WIN_COUNT_EXCLUDUNG_CURRENT is very specific, a WIN_COUNT (and taking -1 if "current" needs to be excluded) would be far more reusable and readable" WIN_COUNT_EXCLUDUNG_CURRENT is an implementation detail but in OOP we reuse behavior, not code. – Timothy Truckle Mar 5 '18 at 12:56
• Fair enough, although I disagree :) – JanErikGunnar Mar 5 '18 at 13:10
• It's a big world with room for more than one way to look at it... ;o) – Timothy Truckle Mar 5 '18 at 13:11
Some tips:
• obviously, break into smaller methods if allowed
• There are many magic numbers and strings. 17? 6? "none"? Put them in variables or constants to make the code more readable and easier to adjust.
• you are changing hSize value within a loop where hSize is the bound of the loop. This is very error prone!
• hasWinner = true; break; can be replaced by return true;
• modulo operator is fantastic for "looping" arrays. Example: length = 18; position = 17; array[(position) % length] == array[(position+1) % length] This will compare position 17 with position 0.
• code can probably be simplified and more generic by taking less regard to the position of the last move.
• the method sets board type to "none". I assume this resets the board. This makes the name of the method (checkTorusWinner) misleading
• pseudo code for a very easy approach:

For each x:
    For each y:
        Set xWin = yWin = diagFwdWin = diagBackWin = true
        For each n = 1 .. winLength-1:
            If board[x][y] != board[(x+n) % boardWidth][y] then
                xWin = false
            End if
            // same if, but instead y+n % boardHeight, for yWin
            // same if, but both x+n AND y+n, for diagBackWin
            // same if, but x+n and y MINUS n, for diagFwdWin
        End for
        If xWin or yWin or diagFwdWin or diagBackWin then
            Return true
        End if
    End for
End for
Return false
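The pseudo code above can be turned into a small Java method. Everything below (class name, method name, the convention that 0 marks an empty cell) is my own illustration, not the original poster's code; note the use of `Math.floorMod` instead of `%`, since Java's `%` can return negative values for the `y MINUS n` direction.

```java
public class TorusBoard {

    // Returns true if any horizontal, vertical, or diagonal run of
    // winLength equal, non-empty cells exists on the wrap-around board.
    public static boolean hasTorusWinner(int[][] board, int winLength) {
        int width = board.length;
        int height = board[0].length;
        // the four "real" directions: horizontal, vertical, diagonal forward, diagonal backward
        int[][] directions = {{1, 0}, {0, 1}, {1, 1}, {1, -1}};
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                if (board[x][y] == 0) {
                    continue; // an empty cell cannot start a winning run
                }
                for (int[] d : directions) {
                    boolean win = true;
                    for (int n = 1; n < winLength; n++) {
                        // floorMod wraps correctly even when y - n becomes negative
                        int nx = Math.floorMod(x + n * d[0], width);
                        int ny = Math.floorMod(y + n * d[1], height);
                        if (board[nx][ny] != board[x][y]) {
                            win = false;
                            break;
                        }
                    }
                    if (win) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] board = {
                {1, 1, 1},
                {2, 0, 2},
                {0, 0, 0}
        };
        System.out.println(hasTorusWinner(board, 3)); // prints true (run of 1s at x = 0)
    }
}
```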
For better performance, x and y in the outer loop can be limited to the position of the last added piece +/- winLength.
https://mathoverflow.net/questions/346237/measure-of-the-boundary-of-alexandrov-space

# Measure of the boundary of Alexandrov space
Let $$X$$ be a compact $$n$$-dimensional Alexandrov space with curvature bounded below. Let $$\partial X$$ denote its boundary in the sense of the theory of Alexandrov spaces.
Is it true that if $$\partial X\ne \emptyset$$ then it has finite and positive $$(n-1)$$-Hausdorff measure? (The case $$n=2$$ is already interesting to me.)
It can be proved by induction on $$n$$. The base case is $$n=1$$. The induction step follows since the gradient exponential map is locally Lipschitz and it maps $$T_p(\partial X)=\mathrm{Cone}[\Sigma_p(\partial X)]$$ to a neighborhood of $$p$$ in $$\partial X$$.
https://par.nsf.gov/biblio/10244183-shockmulticloud-interactions-galactic-outflows-cloud-layers-lognormal-density-distributions

# Shock–multicloud interactions in galactic outflows – I. Cloud layers with lognormal density distributions
ABSTRACT We report three-dimensional hydrodynamical simulations of shocks (${\cal M_{\rm shock}}\ge 4$) interacting with fractal multicloud layers. The evolution of shock–multicloud systems consists of four stages: a shock-splitting phase in which reflected and refracted shocks are generated, a compression phase in which the forward shock compresses cloud material, an expansion phase triggered by internal heating and shock re-acceleration, and a mixing phase in which shear instabilities generate turbulence. We compare multicloud layers with narrow ($\sigma _{\rho }=1.9\bar{\rho }$) and wide ($\sigma _{\rho }=5.9\bar{\rho }$) lognormal density distributions characteristic of Mach ≈ 5 supersonic turbulence driven by solenoidal and compressive modes. Our simulations show that outflowing cloud material contains imprints of the density structure of their native environments. The dynamics and disruption of multicloud systems depend on the porosity and the number of cloudlets in the layers. ‘Solenoidal’ layers mix less, generate less turbulence, accelerate faster, and form a more coherent mixed-gas shell than the more porous ‘compressive’ layers. Similarly, multicloud systems with more cloudlets quench mixing via a shielding effect and enhance momentum transfer. Mass loading of diffuse mixed gas is efficient in all models, but direct dense gas entrainment is highly inefficient. Dense gas only survives in compressive clouds, …
NSF-PAR ID: 10244183
Journal Name: Monthly Notices of the Royal Astronomical Society
Volume: 499
Issue: 2
Page Range or eLocation-ID: 2173 to 2195
ISSN: 0035-8711
2. Direct numerical simulations are performed to investigate a stratified shear layer at high Reynolds number ( $Re$ ) in a study where the Richardson number ( $Ri$ ) is varied among cases. Unlike previous work on a two-layer configuration in which the shear layer resides between two layers with constant density, an unbounded fluid with uniform stratification is considered here. The evolution of the shear layer includes a primary Kelvin–Helmholtz shear instability followed by a wide range of secondary shear and convective instabilities, similar to the two-layer configuration. During transition to turbulence, the shear layers at low $Ri$ exhibit a period of thickness contraction (not observed at lower $Re$ ) when the momentum and buoyancy fluxes are counter-gradient. The behaviour in the turbulent regime is significantly different from the case with a two-layer density profile. The transition layers, which are zones with elevated shear and stratification that form at the shear-layer edges, are stronger and also able to support a significant internal wave flux. After the shear layer becomes turbulent, mixing in the transition layers is shown to be more efficient than that which develops in the centre of the shear layer. Overall, the cumulative mixing efficiency ( $E^C$ ) …
3. ABSTRACT Cosmic ray (CR)-modified shocks are a demanding test of numerical codes. We use them to test and validate the two-moment method for CR hydrodynamics, as well as characterize the realism of CR shock acceleration in two-fluid simulations which inevitably arises. Previously, numerical codes were unable to incorporate streaming in this demanding regime, and have never been compared against analytic solutions. First, we find a new analytic solution highly discrepant in acceleration efficiency from the standard solution. It arises from bi-directional streaming of CRs away from the subshock, similar to a Zeldovich spike in radiative shocks. Since fewer CRs diffuse back upstream, this favours a much lower acceleration efficiency, typically ${\lesssim}10{{\ \rm per\ cent}}$ (even for Mach number > 10) as opposed to ${\gtrsim}50{{\ \rm per\ cent}}$ found in previous analytic work. At Mach number ≳10, the new solution bifurcates into three branches, with efficient, intermediate, and inefficient CR acceleration. Our two-moment code accurately recovers these solutions across the entire parameter space probed, with no ad hoc closure relations. For generic initial conditions, the inefficient branch is robustly chosen by the code; the intermediate branch is unstable. The preferred branch is very weakly modified by CRs.
At high Mach numbers …
https://solvedlib.com/n/n-a,9327211

#### Similar Solved Questions
##### Sales: Region A Region B Region C Region D Region E 17 10 15 16 20 12 16 14 16 14 15 12 14 10 10 18 17 13 12 15 11 2
##### Please show ALL your work. Will give thumbs up for proper answers! 8. Consider a population...
Please show ALL your work. Will give thumbs up for proper answers! 8. Consider a population of n couples where a boy is born to the ith couple with probability p_i and c_i is the expected number of children born to this couple. Assume p_i is constant with time for all couples and that sexes of successi...
##### Reinforcement Problem 1 (20 pts.) A) Glven For a feedback control system with an open-loop process...
Reinforcement Problem 1 (20 pts.) A) Given: For a feedback control system with an open-loop process and negative feedback described by Gc(s) = 2, Gp(s) = 7, H(s) = 1. B) Determine Step 1: The value of the open-loop transfer function Gc(s)Gp(s)H(s) Step 2: The values of the open-loop static error ...
##### 10. At Opal Incorporated, direct materials are added at the beginning of the process and conversions...
10. At Opal Incorporated, direct materials are added at the beginning of the process and conversion costs are uniformly applied. Other details include: WIP beginning (70% for conversion) 22,200 units Units started 153,000 units ...
##### Q2: Al Bayan Steel Company has bought a machinery for RO 75,000 with an estimated useful...
Q2: Al Bayan Steel Company has bought a machinery for RO 75,000 with an estimated useful life of 6 years. Mr. Yousuf, an Accounting Manager, is responsible to calculate the depreciation and record in the books of accounts. He is confused in selecting the method of depreciation like WDV or Straight l...
##### Flywheels are large, massive wheels used to store energy
Flywheels are large, massive wheels used to store energy. They can be spun up slowly, then the wheel's energy can be released quickly to accomplish a task that demands high power. An industrial flywheel has a 1.5 m diameter and a mass of 250 kg. Its maximum angular velocity is 1200 rpm. Q: A mo...
##### Write an algebraic expression to represent each verbal expression. twice a number decreased by the cube of the same number
##### In the laboratory, a general chemistry student measured the pH of a 0.331 M aqueous solution...
##### QUESTION 6Use the following data set to answer questions 6 20: X Y 10 130 20 135 30 148 40 160 50 185 60 201 70 225 80 250 90 280 100 305 Find 2QUESTION 7Using thetable in Question6, find Zv:
##### Homework: Homework 12 Score: 0 of 1 Pt Bus Econ 5.29 Wuualutn ceototos fcatha EinolFb Tra compan Erol 0n1 [haljob = ellal colnuT opunNruhor Cadaye nlnkeanntedeComor €Hw Scoro: 03.5556. 29 0lm1elMEueLILLDrltano nn uu en tnbeditetxt C delre Aon teanmin Anetu Vn Fumnbor ot dur 109ernatet Proltedn Inho pcb &iAlua eulrrm IrtunkhuIna pott Mreru Lhn FanMlounat Ira nouieIittoFnlutul |Ercon toLane414y bJAclci Clicc" ANlt
##### Answer these: Research Question 1: Locate active site: The inhibitor binds to the active site using non-covalent interactions: hydrogen bonds, ion pairs, VDW interactions, and the hydrophobic effect. Remember that hydrogen bonds have to be 3.5 angstroms or less in distance between the donor atom and the receptor atom and ion pairs must be within 4.0 angstroms or less distance (and have to be oppositely charged). Considering the distances involved, which residues likely bond to the substrate to hold it in
##### HmeuaTThe data belowrepresent the valuesin billions of dollars ofthe damage of ShurricanesHHurricane ValuesDelta Lota EtaKatrina Irma 21 6.8 20.3115.4 1.7Based on Wilcoxon Signed-Ranks test an approximate 90% confidence interval for the medianis(2.10 15.40)(1.70 20.30)None(1.90 17.85)CLEAR MY CHOICE
##### Predict the major product of the reaction; clearly indicate stereochemistry, if necessary. Draw a detailed arrow-pushing mechanism for the transformation, accounting for stereochemistry if necessary: Br2
##### Write down the transition matrix associated with the state transition diagram: 0.6 0.3
##### A typical large coal fired electric power plant produces 1,200MW of electricity by burning fuel with...
A typical large coal fired electric power plant produces 1,200MW of electricity by burning fuel with energy content of 3,000 MW. Three hundred and forty (340) MW are lost as heat up the smokestack leaving the rest to drive a generator to produce electricity. However, the thermal efficiency of the tu...
##### Know why the aniline had to be protected first before adding the nitro group onto the ring
##### In $3-18,$ write each number in terms of $i$ $-3+2 \sqrt{-9}$
##### 6. [3 pts:] Give two sets of polar coordinates for each of the points A-F in the figure
##### 12) Solve AABC subject to the given conditions possible: Round the lengths of sides and measures of the angles (in degrees) to decimal place if necessary:0 = 14, 02 30,B = 699
##### Suppose 8 e Qn 0 R and Bm € for some integer m 2 1. Show that 82 € Q HINT: Prove that minq ( 8) splits over Qn 0 R. If Y is any root of this polynomial, note that ly | = IB|.
##### 12.7 Problems: Write a small program in MATLAB that evaluates the gradient at each point in a two-dimensional grid in the space -5 < x1 < 5. Choose an appropriate grid spacing (at least 100 points). Find a way in MATLAB to plot each gradient vector of Question (3) as a small arrow (pointing in the correct direction) at its x1, x2 location; that is, if you evaluated 1,000 gradient vectors in Question 3, then your plot should contain 1,000 arrows. In addition, plot the contours of f on top
http://obermuhlner.ch/wordpress/

## Introduction
This article describes how the factorial and Gamma functions for non-integer arguments were implemented for the big-math library.
For an introduction into the Gamma function see Wikipedia: Gamma Function
## Attempt to use Euler’s definition as an infinite product
Euler’s infinite product definition is easy to implement, but I have some doubts about its usefulness to calculate the result with the desired precision.
public static BigDecimal factorialUsingEuler(BigDecimal x, int steps, MathContext mathContext) {
MathContext mc = new MathContext(mathContext.getPrecision() * 2, mathContext.getRoundingMode());
BigDecimal product = BigDecimal.ONE;
for (int n = 1; n < steps; n++) {
BigDecimal factor = BigDecimal.ONE
        .divide(BigDecimal.ONE.add(x.divide(BigDecimal.valueOf(n), mc), mc), mc)
        .multiply(pow(BigDecimal.ONE.add(BigDecimal.ONE.divide(BigDecimal.valueOf(n), mc), mc), x, mc), mc);
product = product.multiply(factor, mc);
}
return product.round(mathContext);
}
Running with increasing number of steps shows that this approach will not work satisfactorily.
5! in 1 steps = 1
5! in 10 steps = 49.950049950049950050
5! in 100 steps = 108.73995188474609004
5! in 1000 steps = 118.80775820319167518
5! in 10000 steps = 119.88007795802040268
5! in 100000 steps = 119.98800077995800204
5! in 1000000 steps = 119.99880000779995800
## Using Spouge’s Approximation
After reading through several pages of related material I finally found a promising approach: Spouge's approximation

$$\displaystyle x! = (x+a)^{x+\frac{1}{2}} e^{-(x+a)} \left( c_0 + \sum_{k=1}^{a-1} \frac{c_k}{x+k} + \varepsilon_a(x) \right)$$

where a is an arbitrary positive integer that can be used to control the precision and the coefficients are given by

$$\displaystyle c_0 = \sqrt{2 \pi}$$

$$\displaystyle c_k = \frac{(-1)^{k-1}}{(k-1)!} (a-k)^{k-\frac{1}{2}} e^{a-k}$$

Please note that the coefficients are constants that only depend on a and not on the input argument to factorial.

The relative error when omitting the epsilon part is bounded by

$$\displaystyle \varepsilon_a(x) \lt a^{-\frac{1}{2}} (2 \pi)^{-(a+\frac{1}{2})}$$

It is nice to have a function that defines the error; normally I need to empirically determine the error for a sensible range of input arguments and precision.
### Expected error of Spouge’s Approximation
Lets implement the error formula and see how it behaves.
public static BigDecimal errorOfFactorialUsingSpouge(int a, MathContext mc) {
return pow(BigDecimal.valueOf(a), BigDecimal.valueOf(-0.5), mc).multiply(pow(TWO.multiply(pi(mc), mc), BigDecimal.valueOf(-a-0.5), mc), mc);
}
Instead of plotting the error bounds directly, I determine the achievable precision using -log10(error).
Using the relative error formula of Spouge’s approximation we see that the expected precision is pretty linear to the chosen value of a for the values [1..1000] (which are a sensible range for the precision the users of the function will use).
This will make it easy to calculate a sensible value for a from the desired precision.
Note: While testing this I found a bug in log(new BigDecimal("6.8085176335035800378E-325")). Fixed it before it could run away.
### Caching Spouge’s coefficients (depending on precision)
The coefficients depend only on the value of a.
We can cache the coefficients for every value of a that we need:
private static Map<Integer, List<BigDecimal>> spougeFactorialConstantsCache = new HashMap<>();

private static List<BigDecimal> getSpougeFactorialConstants(int a) {
    return spougeFactorialConstantsCache.computeIfAbsent(a, key -> {
        List<BigDecimal> constants = new ArrayList<>(a);
        MathContext mc = new MathContext(a * 15/10);

        BigDecimal c0 = sqrt(pi(mc).multiply(TWO, mc), mc);
        constants.add(c0);

        boolean negative = false;
        for (int k = 1; k < a; k++) {
            BigDecimal bigK = BigDecimal.valueOf(k);
            BigDecimal ck = pow(BigDecimal.valueOf(a-k), bigK.subtract(BigDecimal.valueOf(0.5), mc), mc);
            ck = ck.multiply(exp(BigDecimal.valueOf(a-k), mc), mc);
            ck = ck.divide(factorial(k - 1), mc);
            if (negative) {
                ck = ck.negate();
            }
            constants.add(ck);
            negative = !negative;
        }

        return constants;
    });
}
Calculating the coefficients becomes quite expensive with higher precision.
This will need to be explained in the javadoc of the method.
### Spouge’s approximation with pre-calculated constants
Now that we have the coefficients for a specific value of a we can implement the factorial method:
public static BigDecimal factorialUsingSpougeCached(BigDecimal x, MathContext mathContext) {
    MathContext mc = new MathContext(mathContext.getPrecision() * 2, mathContext.getRoundingMode());

    int a = mathContext.getPrecision() * 13 / 10;
    List<BigDecimal> constants = getSpougeFactorialConstants(a);

    BigDecimal bigA = BigDecimal.valueOf(a);

    // the alternating signs are already baked into the constants
    BigDecimal factor = constants.get(0);
    for (int k = 1; k < a; k++) {
        BigDecimal bigK = BigDecimal.valueOf(k);
        factor = factor.add(constants.get(k).divide(x.add(bigK, mc), mc), mc);
    }

    BigDecimal result = pow(x.add(bigA, mc), x.add(BigDecimal.valueOf(0.5), mc), mc);
    result = result.multiply(exp(x.negate().subtract(bigA, mc), mc), mc);
    result = result.multiply(factor, mc);

    return result.round(mathContext);
}
Let’s calculate first the factorial function with constant precision over a range of input values.
Looks like the argument x does not have much influence on the calculation time.
More interesting is the influence that the precision has on the calculation time. The following chart was measured by calculating 5! over a range of precisions:
## Gamma function
The implementation of the Gamma function is trivial, now that we have a running factorial function.
public static BigDecimal gamma(BigDecimal x, MathContext mathContext) {
return factorialUsingSpougeCached(x.subtract(ONE), mathContext);
}
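To see Spouge's formula at work without the BigDecimal machinery, here is a hedged double-precision sketch (class and method names are mine, not part of big-math):

```java
public class SpougeDemo {

    // Spouge's approximation of x! in plain double precision.
    // The parameter a controls the accuracy (a = 12 gives roughly 10 correct digits).
    public static double factorial(double x, int a) {
        double sum = Math.sqrt(2 * Math.PI); // c0
        double sign = 1.0;
        double kMinus1Factorial = 1.0; // (k-1)!
        for (int k = 1; k < a; k++) {
            if (k > 1) {
                kMinus1Factorial *= (k - 1);
            }
            // ck = (-1)^(k-1) / (k-1)! * (a-k)^(k-0.5) * e^(a-k)
            double ck = sign * Math.pow(a - k, k - 0.5) * Math.exp(a - k) / kMinus1Factorial;
            sum += ck / (x + k);
            sign = -sign;
        }
        return Math.pow(x + a, x + 0.5) * Math.exp(-(x + a)) * sum;
    }

    // gamma(x) = (x-1)!
    public static double gamma(double x, int a) {
        return factorial(x - 1.0, a);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5.0, 12)); // close to 120
        System.out.println(gamma(0.5, 12));     // close to sqrt(pi) = 1.7724538...
    }
}
```

With a = 12 the relative error bound is about 3e-11, so the double result is good enough to sanity-check the BigDecimal implementation.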
## Polishing before adding it to BigDecimalMath
Before committing the new methods factorial() and gamma() to BigDecimalMath I need to do some polishing…
The access to the cache must be synchronized to avoid race conditions.
Most important is optimizing the calculation for the special cases of x being integer values which can be calculated much faster by calling BigDecimalMath.factorial(int).
Lots of unit tests of course! As usual Wolfram Alpha provides some nice reference values to prove that the calculations are correct (at least for the tested cases).
Writing javadoc takes also some time (and thoughts).
You can check out the final version in github: BigComplexMath.java
## Release 2.0.0 of big-math library supports now complex numbers
The easter week-end was the perfect time to polish and release version 2.0.0 of the big-math library.
The class BigComplex represents complex numbers in the form (a + bi).
It follows the design of BigDecimal with some convenience improvements like overloaded operator methods.
• re
• im
• subtract(BigComplex)
• subtract(BigComplex, MathContext)
• subtract(BigDecimal)
• subtract(BigDecimal, MathContext)
• subtract(double)
• multiply(BigComplex)
• multiply(BigComplex, MathContext)
• multiply(BigDecimal)
• multiply(BigDecimal, MathContext)
• multiply(double)
• divide(BigComplex)
• divide(BigComplex, MathContext)
• divide(BigDecimal)
• divide(BigDecimal, MathContext)
• divide(double)
• reciprocal(MathContext)
• conjugate()
• negate()
• abs(MathContext)
• angle(MathContext)
• absSquare(MathContext)
• isReal()
• re()
• im()
• round(MathContext)
• hashCode()
• equals(Object)
• strictEquals(Object)
• toString()
• valueOf(BigDecimal)
• valueOf(double)
• valueOf(BigDecimal, BigDecimal)
• valueOf(double, double)
• valueOfPolar(BigDecimal, BigDecimal, MathContext)
• valueOfPolar(double, double, MathContext)
A big difference from BigDecimal is that BigComplex.equals() implements mathematical equality and not the strict technical equality.
This was a difficult decision because it means that BigComplex behaves slightly differently than BigDecimal, but considering that the strange equality of BigDecimal is a major source of bugs we decided it was worth the slight inconsistency.
If you need the strict equality use BigComplex.strictEquals().
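The "strange equality" of BigDecimal mentioned above is easy to demonstrate:

```java
import java.math.BigDecimal;

public class EqualityDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");
        // strict (technical) equality: unscaled value AND scale must both match
        System.out.println(a.equals(b));         // prints false
        // mathematical equality: compareTo() ignores the scale
        System.out.println(a.compareTo(b) == 0); // prints true
    }
}
```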
The class BigComplexMath is the equivalent of BigDecimalMath and contains mathematical functions in the complex domain.
• sin(BigComplex, MathContext)
• cos(BigComplex, MathContext)
• tan(BigComplex, MathContext)
• asin(BigComplex, MathContext)
• acos(BigComplex, MathContext)
• atan(BigComplex, MathContext)
• acot(BigComplex, MathContext)
• exp(BigComplex, MathContext)
• log(BigComplex, MathContext)
• pow(BigComplex, long, MathContext)
• pow(BigComplex, BigDecimal, MathContext)
• pow(BigComplex, BigComplex, MathContext)
• sqrt(BigComplex, MathContext)
• root(BigComplex, BigDecimal, MathContext)
• root(BigComplex, BigComplex, MathContext)
## Aurora Borealis (Northern Lights) in Tromsø
To celebrate my fiftieth birthday the whole family had a great vacation in Tromsø to see the northern lights.
The following videos were all taken using a wireless remote control with a programmable interval.
The single shots were joined using ffmpeg.
#!/bin/sh
# $1 = framerate (for aurora timelapse use 1 to 4)
# $2 = start number of first image
# $3 = output file (with .mp4 extension)
ffmpeg -y -r "$1" -start_number "$2" -i IMG_%04d.JPG -s hd1080 -vf "framerate=fps=30:interp_start=0:interp_end=255:scene=100" -vcodec mpeg4 -q:v 1 "$3"
In a few cases the images were a bit underexposed and needed to be brightened.
This was done with a simple shell script using the ImageMagick convert tool.
#!/bin/sh
mkdir modulate150
for i in *.JPG
do
convert $i -modulate 150% modulate150/$i
done
## Adaptive precision in Newton’s Method
This describes a way to improve the performance of a BigDecimal based implementation of Newton’s Method
by adapting the precision for every iteration to the maximum precision that is actually possible at this step.
As a showcase I have picked the implementation of Newton's Method to calculate the natural logarithm of a BigDecimal value with a given precision.
The source code is available on github: big-math.
Here is the mathematical formulation of the algorithm:
$$\require{AMSmath}$$
$$\displaystyle y_0 = \operatorname{Math.log}(x),$$
$$\displaystyle y_{i+1} = y_i + 2 \frac{x – e^{y_i} }{ x + e^{y_i}},$$
$$\displaystyle \ln{x} = \lim_{i \to \infty} y_i$$
Here is a straightforward implementation:
private static final BigDecimal TWO = valueOf(2);
public static BigDecimal logUsingNewtonFixPrecision(BigDecimal x, MathContext mathContext) {
if (x.signum() <= 0) {
throw new ArithmeticException("Illegal log(x) for x <= 0: x = " + x);
}
MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
BigDecimal result = BigDecimal.valueOf(Math.log(x.doubleValue()));
BigDecimal step;
do {
    BigDecimal expY = BigDecimalMath.exp(result, mc); // available on https://github.com/eobermuhlner/big-math
    step = TWO.multiply(x.subtract(expY, mc), mc).divide(x.add(expY, mc), mc);
    result = result.add(step, mc);
} while (step.abs().compareTo(acceptableError) > 0);
return result.round(mathContext);
}
The MathContext mc is created with a precision of 4 digits more than the output is expected to have.
All calculations are done with this MathContext and therefore with the full precision.
The result is correct but we can improve the performance significantly by adapting the precision for every iteration.
The initial approximation uses Math.log(x.doubleValue()) which has a precision of about 17 significant digits.
We can expect that the precision triples with every iteration so it does not make sense to calculate with a higher precision than necessary.
Here is the same implementation with a temporary MathContext that is recreated with a different precision in every iteration.
public static BigDecimal logUsingNewtonAdaptivePrecision(BigDecimal x, MathContext mathContext) {
if (x.signum() <= 0) {
throw new ArithmeticException("Illegal log(x) for x <= 0: x = " + x);
}
int maxPrecision = mathContext.getPrecision() + 4;
BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
BigDecimal result = BigDecimal.valueOf(Math.log(x.doubleValue()));
int adaptivePrecision = 17; // precision of the initial Math.log() approximation
BigDecimal step = null;
do {
    adaptivePrecision = adaptivePrecision * 3; // the precision triples with every iteration
    if (adaptivePrecision > maxPrecision) {
        adaptivePrecision = maxPrecision;
    }
    MathContext mc = new MathContext(adaptivePrecision, mathContext.getRoundingMode());
    BigDecimal expY = BigDecimalMath.exp(result, mc); // available on https://github.com/eobermuhlner/big-math
    step = TWO.multiply(x.subtract(expY, mc), mc).divide(x.add(expY, mc), mc);
    result = result.add(step, mc);
} while (adaptivePrecision < maxPrecision || step.abs().compareTo(acceptableError) > 0);
return result.round(mathContext);
}
The performance comparison between the two implementations is impressive.
The following chart shows the time in nanoseconds it takes to calculate the log() of values of x in the range from 0 to 1 with a precision of 300 digits.
Here some more charts to show the performance improvements of the adaptive precision technique applied to different approximative implementations:
This method can only be applied to approximative methods that improve the result with every iteration and discard the previous result, such as Newton’s Method.
It does obviously not work on methods that accumulate the results of each iteration to calculate the final result, such as Taylor series which add the terms.
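The same pattern carries over to any self-correcting iteration. As an illustration, here is a self-contained sketch (names are mine, not part of big-math) that applies adaptive precision to Newton's method for the square root; since this iteration converges quadratically, the precision is doubled instead of tripled per step:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class AdaptiveSqrt {

    // Newton's method for sqrt(x) with adaptive precision (assumes x >= 0).
    public static BigDecimal sqrt(BigDecimal x, MathContext mathContext) {
        if (x.signum() < 0) {
            throw new ArithmeticException("Illegal sqrt(x) for x < 0: x = " + x);
        }
        if (x.signum() == 0) {
            return BigDecimal.ZERO;
        }
        int maxPrecision = mathContext.getPrecision() + 4;
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
        BigDecimal two = BigDecimal.valueOf(2);

        BigDecimal result = BigDecimal.valueOf(Math.sqrt(x.doubleValue()));
        int adaptivePrecision = 17; // Math.sqrt() is accurate to about 17 digits
        BigDecimal step;
        do {
            // quadratic convergence: roughly double the precision per iteration
            adaptivePrecision = Math.min(adaptivePrecision * 2, maxPrecision);
            MathContext mc = new MathContext(adaptivePrecision, mathContext.getRoundingMode());
            BigDecimal last = result;
            // Newton step for sqrt: y = (y + x/y) / 2
            result = x.divide(result, mc).add(result, mc).divide(two, mc);
            step = result.subtract(last, mc);
        } while (adaptivePrecision < maxPrecision || step.abs().compareTo(acceptableError) > 0);
        return result.round(mathContext);
    }

    public static void main(String[] args) {
        // 1.41421356237309504880... (50 significant digits)
        System.out.println(sqrt(BigDecimal.valueOf(2), new MathContext(50)));
    }
}
```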
## BigDecimalMath
$$\require{AMSmath}$$
Java 8 is out and there are still no Math functions for BigDecimal.
After playing around with some implementations to calculate Pi I decided to write an implementation of BigDecimalMath to fill this gap.
The result of this is available on github: big-math.
The goal was to provide the following functions:
• exp(x)
• log(x)
• pow(x, y)
• sqrt(x)
• root(n, x)
• sin(x), cos(x), tan(x), cot(x)
• asin(x), acos(x), atan(x), acot(x)
• sinh(x), cosh(x), tanh(x)
• asinh(x), acosh(x), atanh(x)
The calculations must be accurate to the desired precision (specified in the MathContext), and the performance should be acceptable and stable for a large range of input values.
## Implementation Details
### Implementation exp(x)
To implement exp() the classical Taylor series was used:
$$\displaystyle e^x = \sum^{\infty}_{n=0} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
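As a minimal sketch of how such a series can be summed with BigDecimal (a hypothetical helper, not the actual big-math code; the guard-digit choice is an assumption), terms are accumulated until they drop below the acceptable error:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class ExpTaylor {
    // Sums 1 + x + x^2/2! + x^3/3! + ... with a few guard digits,
    // stopping once a term can no longer change the rounded result.
    static BigDecimal exp(BigDecimal x, MathContext mathContext) {
        MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
        BigDecimal sum = BigDecimal.ONE;
        BigDecimal term = BigDecimal.ONE; // x^n / n!
        for (int n = 1; term.abs().compareTo(acceptableError) > 0; n++) {
            term = term.multiply(x, mc).divide(BigDecimal.valueOf(n), mc);
            sum = sum.add(term, mc);
        }
        return sum.round(mathContext);
    }

    public static void main(String[] args) {
        System.out.println(exp(BigDecimal.ONE, new MathContext(30)));
    }
}
```
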
### Implementation log()
Note that in Java the function name log() means the natural logarithm, which in mathematical notation is written $$\ln{x}$$.
The implementation of log() is based on Newton’s method.
We can use the double version Math.log() to give us a good initial value.
$$\displaystyle y_0 = \operatorname{Math.log}(x),$$
$$\displaystyle y_{i+1} = y_i + 2 \frac{x - e^{y_i} }{ x + e^{y_i}},$$
$$\displaystyle \ln{x} = \lim_{i \to \infty} y_i$$
Several optimizations in the implementation transform the argument of log(x) so that it will be nearer to the optimum of 1.0 to converge faster.
\begin{align} \displaystyle \ln{x} & = \ln{\left(a \cdot 10^b\right)} = \ln{a} + \ln{10} \cdot b & \qquad \text{for } x \leq 0.1 \text{ or } x \geq 10 \\ \displaystyle \ln{x} & = \ln{\left( 2 x \right)} - \ln{2} & \qquad \text{for } x \lt 0.115 \\ \displaystyle \ln{x} & = \ln{\left( 3 x \right)} - \ln{3} & \qquad \text{for } x \lt 0.14 \\ \displaystyle \ln{x} & = \ln{\left( 4 x \right)} - 2 \ln{2} & \qquad \text{for } x \lt 0.2 \\ \displaystyle \ln{x} & = \ln{\left( 6 x \right)} - \ln{2} - \ln{3} & \qquad \text{for } x \lt 0.3 \\ \displaystyle \ln{x} & = \ln{\left( 8 x \right)} - 3 \ln{2} & \qquad \text{for } x \lt 0.42 \\ \displaystyle \ln{x} & = \ln{\left( 9 x \right)} - 2 \ln{3} & \qquad \text{for } x \lt 0.7 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{2} x \right)} + \ln{2} & \qquad \text{for } x \lt 2.5 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{3} x \right)} + \ln{3} & \qquad \text{for } x \lt 3.5 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{4} x \right)} + 2 \ln{2} & \qquad \text{for } x \lt 5.0 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{6} x \right)} + \ln{2} + \ln{3} & \qquad \text{for } x \lt 7.0 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{8} x \right)} + 3 \ln{2} & \qquad \text{for } x \lt 8.5 \\ \displaystyle \ln{x} & = \ln{\left( \frac{1}{9} x \right)} + 2 \ln{3} & \qquad \text{for } x \lt 10.0 \end{align}
The additional logarithmic functions to different common bases are simple:
$$\displaystyle \operatorname{log}_2{x} = \frac{\ln{x}}{\ln{2}}$$
$$\displaystyle \operatorname{log}_{10}{x} = \frac{\ln{x}}{\ln{10}}$$
Since the precalculated values for $$\ln{2}, \ln{3}, \ln{10}$$ with a precision of up to 1100 digits already exist for the optimizations mentioned above, the log2() and log10() functions could reuse them and are therefore reasonably fast.
### Implementation pow(x, y)
The implementation of pow() with non-integer arguments is based on exp() and log():
$$\displaystyle x^y = e^{y \ln x}$$
If y is an integer argument then pow() is implemented with multiplications:
$$\displaystyle x^y = \prod_{i=1}^{y} x$$
Actually the implementation is further optimized: it reduces the number of multiplications to the order of log(y) by repeatedly squaring the base (exponentiation by squaring).
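The square-and-multiply idea can be sketched as follows (a hypothetical helper for non-negative integer exponents, not the actual big-math implementation):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class PowIntSketch {
    // Exponentiation by squaring: walk the bits of y from least significant
    // to most significant, squaring the running factor at every step.
    static BigDecimal powInt(BigDecimal x, long y, MathContext mc) {
        BigDecimal result = BigDecimal.ONE;
        BigDecimal factor = x;
        while (y > 0) {
            if ((y & 1) == 1) {
                result = result.multiply(factor, mc); // this bit of y is set
            }
            factor = factor.multiply(factor, mc);     // square for the next bit
            y >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(powInt(new BigDecimal("1.5"), 10, new MathContext(20)));
    }
}
```
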
### Implementation sqrt(x), root(n, x)
The implementation of sqrt() and root() uses Newton’s method to approximate the result until the necessary precision is reached.
In the case of sqrt() we can use the double version Math.sqrt() to give us a good initial value.
$$\displaystyle y_0 = \operatorname{Math.sqrt}(x),$$
$$\displaystyle y_{i+1} = \frac{1}{2} \left(y_i + \frac{x}{y_i}\right),$$
$$\displaystyle \sqrt{x} = \lim_{i \to \infty} y_i$$
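This iteration can be sketched with BigDecimal like this (a hypothetical helper using a fixed working precision rather than the adaptive precision described earlier; assumes x > 0):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SqrtNewton {
    private static final BigDecimal TWO = BigDecimal.valueOf(2);

    // Newton's method y_{i+1} = (y_i + x/y_i) / 2, seeded with Math.sqrt().
    static BigDecimal sqrt(BigDecimal x, MathContext mathContext) {
        MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
        BigDecimal result = BigDecimal.valueOf(Math.sqrt(x.doubleValue())); // good initial value
        BigDecimal last;
        do {
            last = result;
            result = x.divide(result, mc).add(result, mc).divide(TWO, mc);
        } while (result.subtract(last).abs().compareTo(acceptableError) > 0);
        return result.round(mathContext);
    }

    public static void main(String[] args) {
        System.out.println(sqrt(BigDecimal.valueOf(2), new MathContext(30)));
    }
}
```
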
Unfortunately the root() function does not exist for double so we are forced to use a simpler initial value.
$$\displaystyle y_0 = \frac{1}{n},$$
$$\displaystyle y_{i+1} = \frac{1}{n} \left[{(n-1)y_i +\frac{x}{y_i^{n-1}}}\right],$$
$$\displaystyle \sqrt[n]{x} = \lim_{i \to \infty} y_i$$
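The n-th root iteration looks very similar; the sketch below (hypothetical helper, assumes x > 0 and n >= 2) uses the crude 1/n initial value from the text, which is why it needs noticeably more iterations than sqrt():

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class RootNewton {
    // Newton's method y_{i+1} = ((n-1)*y_i + x / y_i^(n-1)) / n.
    static BigDecimal root(int n, BigDecimal x, MathContext mathContext) {
        MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
        BigDecimal nValue = BigDecimal.valueOf(n);
        BigDecimal nMinusOne = BigDecimal.valueOf(n - 1);
        BigDecimal result = BigDecimal.ONE.divide(nValue, mc); // crude initial value 1/n
        BigDecimal step;
        do {
            BigDecimal next = nMinusOne.multiply(result, mc)
                    .add(x.divide(result.pow(n - 1, mc), mc), mc)
                    .divide(nValue, mc);
            step = next.subtract(result);
            result = next;
        } while (step.abs().compareTo(acceptableError) > 0);
        return result.round(mathContext);
    }

    public static void main(String[] args) {
        System.out.println(root(3, BigDecimal.valueOf(8), new MathContext(20)));
    }
}
```
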
### Implementation sin(x), cos(x), tan(x), cot(x)
The basic trigonometric functions were implemented using Taylor series or, where this proved more efficient, by their relationship with already implemented functions:
$$\displaystyle \sin x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x – \frac{x^3}{3!} + \frac{x^5}{5!} – \cdots$$
$$\displaystyle \cos x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n)!} x^{2n} = 1 – \frac{x^2}{2!} + \frac{x^4}{4!} – \cdots$$
$$\displaystyle \tan x = \frac{\sin x}{\cos x}$$
$$\displaystyle \cot x = \frac{\cos x}{\sin x}$$
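A minimal sketch of the sin() series above (hypothetical helper; the real implementation also reduces the argument modulo 2*Pi, which is omitted here) reuses the trick that each term can be derived from the previous one:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SinTaylor {
    // Sums (-1)^n * x^(2n+1) / (2n+1)!; each term is the previous one
    // multiplied by -x^2 / ((2n)*(2n+1)).
    static BigDecimal sin(BigDecimal x, MathContext mathContext) {
        MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);
        BigDecimal xSquared = x.multiply(x, mc);
        BigDecimal term = x;
        BigDecimal sum = x;
        for (int n = 1; term.abs().compareTo(acceptableError) > 0; n++) {
            term = term.multiply(xSquared, mc)
                    .divide(BigDecimal.valueOf(2L * n * (2L * n + 1)), mc)
                    .negate();
            sum = sum.add(term, mc);
        }
        return sum.round(mathContext);
    }

    public static void main(String[] args) {
        System.out.println(sin(BigDecimal.ONE, new MathContext(30)));
    }
}
```
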
### Implementation asin(x), acos(x), atan(x), acot(x)
The inverse trigonometric functions use a Taylor series for arcsin().
$$\displaystyle \arcsin x = \sum^{\infty}_{n=0} \frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}$$
This series converges very slowly, especially when the argument x gets close to 1.
As an optimization, the argument x is transformed into a more efficient range using the following relationship.
$$\displaystyle \arcsin x = \arccos \sqrt{1-x^2} \qquad \text{for } x \gt \sqrt{\frac{1}{2}} \text{ (} \approx 0.707107 \text{)}$$
The remaining functions are implemented by their relationship with arcsin().
$$\displaystyle \arccos x = \frac{\pi}{2} – \arcsin x$$
$$\displaystyle \arctan x = \arcsin \frac{x}{\sqrt{1+x^2}}$$
$$\displaystyle \operatorname{arccot} x = \frac{\pi}{2} – \arctan x$$
### Implementation sinh(x), cosh(x), tanh(x)
Taylor series are efficient for most of the implementations of hyperbolic functions.
$$\displaystyle \sinh x= \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} +\cdots$$
$$\displaystyle \cosh x = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots$$
The Taylor series for tanh() converges very slowly, so we use the relationship with sinh() and cosh() instead.
$$\displaystyle \tanh x = \frac{\sinh x}{\cosh x}$$
### Implementation asinh(x), acosh(x), atanh(x)
The inverse hyperbolic functions can be expressed using the natural logarithm.
$$\displaystyle \operatorname{arsinh} x = \ln(x + \sqrt{x^2 + 1} )$$
$$\displaystyle \operatorname{arcosh} x = \ln(x + \sqrt{x^2-1} )$$
$$\displaystyle \operatorname{artanh} x = \frac12\ln\left(\frac{1+x}{1-x}\right)$$
## Performance calculating different precisions
Obviously it takes longer to calculate a function result with a higher precision than with a lower one.
The following charts show the time needed to calculate the functions with different precisions.
The arguments of the functions were:
• log(3.1)
• exp(3.1)
• pow(123.456, 3.1)
• sqrt(3.1)
• root(2, 3.1)
• root(3, 3.1)
• sin(3.1)
• cos(3.1)
While the time to calculate the results grows worse than linearly for higher precisions, the speed is still reasonable for precisions of up to 1000 digits.
## Performance calculating different values
The following charts show the time needed to calculate the functions over a range of values with a precision of 300 digits.
• log(x)
• exp(x)
• pow(123.456, x)
• sqrt(x)
• root(2, x)
• root(3, x)
• sin(x)
• cos(x)
The functions have been separated into a fast group (exp, sqrt, root, sin, cos) and a slow group (exp, log, pow).
For comparison reasons the exp() function is contained in both groups.
### Range 0 to 2
The performance of the functions is in a reasonable range and is stable, especially close to 0, where some functions might converge slowly.
The functions exp(), sin(), cos() need to be watched at the higher values of x to verify that they do not continue to grow.
The chart shows nicely that log() is more efficient when x is close to 1.0.
By using divisions and multiplications with the prime numbers 2 and 3, the log() function was optimized to exploit this fact for values of x that can be brought closer to 1.0.
This produces the strange arches in the performance chart of log().
The pow() function is fairly constant, except for the powers of integer values, which are optimized specifically.
### Range 0 to 10
The chart shows that sin() and cos() have been optimized with the period of 2*Pi (roughly 6.28) so that their cost does not continue to grow with higher values.
This optimization has some cost which needs to be watched at higher values.
exp() has become stable and no longer grows.
log() is stable and shows the typical arches with optima at 1.0, 2.0 (divided by 2), 3.0 (divided by 3), 4.0 (divided by 2*2), 6.0 (divided by 2*3), 8.0 (divided by 2*2*2) and 9.0 (divided by 3*3).
pow() continues stable.
### Range -10 to 10
Positive and negative values are symmetric for all functions that are defined for the negative range.
### Range 0 to 100
All functions are stable over this range.
The pow() function makes the chart somewhat hard to read because of the optimized version for integer powers.
The log() function shows here the effect of another optimization using the exponential form. The range from 10 to 100 is brought down to the range 1 to 10 and the same divisions are applied. This has the effect of showing the same arches again in the range from 10 to 100.
## Bernoulli Numbers
As part of the ongoing development of the BigRational and BigDecimalMath classes I needed to implement a method to calculate the Bernoulli numbers.
Since I had a hard time finding a reference list of the Bernoulli numbers, I will put a table of the first few calculated numbers here.
For a larger list of Bernoulli numbers have a look at the bernoulli.csv file.
B0 = 1
B1 = -1/2
B2 = 1/6
B3 = 0
B4 = -1/30
B5 = 0
B6 = 1/42
B7 = 0
B8 = -1/30
B9 = 0
B10 = 5/66
B11 = 0
B12 = -691/2730
B13 = 0
B14 = 7/6
B15 = 0
B16 = -3617/510
B17 = 0
B18 = 43867/798
B19 = 0
B20 = -174611/330
B21 = 0
B22 = 854513/138
B23 = 0
B24 = -236364091/2730
B25 = 0
B26 = 8553103/6
B27 = 0
B28 = -23749461029/870
B29 = 0
B30 = 8615841276005/14322
B31 = 0
B32 = -7709321041217/510
B33 = 0
B34 = 2577687858367/6
B35 = 0
B36 = -26315271553053477373/1919190
B37 = 0
B38 = 2929993913841559/6
B39 = 0
B40 = -261082718496449122051/13530
B41 = 0
B42 = 1520097643918070802691/1806
B43 = 0
B44 = -27833269579301024235023/690
B45 = 0
B46 = 596451111593912163277961/282
B47 = 0
B48 = -5609403368997817686249127547/46410
B49 = 0
B50 = 495057205241079648212477525/66
B51 = 0
B52 = -801165718135489957347924991853/1590
B53 = 0
B54 = 29149963634884862421418123812691/798
B55 = 0
B56 = -2479392929313226753685415739663229/870
B57 = 0
B58 = 84483613348880041862046775994036021/354
B59 = 0
B60 = -1215233140483755572040304994079820246041491/56786730
B61 = 0
B62 = 12300585434086858541953039857403386151/6
B63 = 0
B64 = -106783830147866529886385444979142647942017/510
B65 = 0
B66 = 1472600022126335654051619428551932342241899101/64722
B67 = 0
B68 = -78773130858718728141909149208474606244347001/30
B69 = 0
B70 = 1505381347333367003803076567377857208511438160235/4686
B71 = 0
B72 = -5827954961669944110438277244641067365282488301844260429/140100870
B73 = 0
B74 = 34152417289221168014330073731472635186688307783087/6
B75 = 0
B76 = -24655088825935372707687196040585199904365267828865801/30
B77 = 0
B78 = 414846365575400828295179035549542073492199375372400483487/3318
B79 = 0
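These values can be reproduced with the classical recurrence $$\displaystyle B_m = -\frac{1}{m+1} \sum_{k=0}^{m-1} \binom{m+1}{k} B_k$$ Here is a minimal sketch using plain BigInteger fractions instead of the BigRational class mentioned above (the class and method names are hypothetical):

```java
import java.math.BigInteger;

public class BernoulliSketch {
    // Returns b[m] = {numerator, denominator} of B_m for m = 0..n,
    // computed exactly with the recurrence above.
    static BigInteger[][] bernoulli(int n) {
        BigInteger[][] b = new BigInteger[n + 1][];
        b[0] = new BigInteger[] { BigInteger.ONE, BigInteger.ONE };
        for (int m = 1; m <= n; m++) {
            BigInteger num = BigInteger.ZERO;
            BigInteger den = BigInteger.ONE;
            for (int k = 0; k < m; k++) {
                // accumulate binomial(m+1, k) * B_k as a fraction num/den
                BigInteger c = binomial(m + 1, k);
                num = num.multiply(b[k][1]).add(c.multiply(b[k][0]).multiply(den));
                den = den.multiply(b[k][1]);
            }
            num = num.negate();
            den = den.multiply(BigInteger.valueOf(m + 1));
            BigInteger g = num.gcd(den);
            if (g.signum() != 0) {
                num = num.divide(g);
                den = den.divide(g);
            }
            b[m] = new BigInteger[] { num, den };
        }
        return b;
    }

    static BigInteger binomial(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 1; i <= k; i++) {
            result = result.multiply(BigInteger.valueOf(n - k + i)).divide(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger[][] b = bernoulli(12);
        System.out.println(b[12][0] + "/" + b[12][1]);
    }
}
```
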
## Using GLSL to generate gas giant planets
To be completely honest, the code that is described in this blog is already more than a year old. I just wanted to catch up with the current state of my project. I will therefore try to write several blogs in the next couple of days describing what has been going on…
After my first experiments with earth-like planets I wanted to experiment with creating Jupiter-like gas giants in GLSL.
The approach is still noise based but instead of creating a height map we now want to create something that looks like turbulent clouds.
The following screenshots were created using the GLSL code below:
Note how the bands are distributed differently every time and how the turbulence varies from band to band as well as from planet to planet.
Again you will need the excellent noise function from noise2D.glsl.
#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision highp float;
#else
#define MED
#define LOWP
#define HIGH
#endif
uniform float u_time;
uniform vec3 u_planetColor0;
uniform vec3 u_planetColor1;
uniform vec3 u_planetColor2;
uniform float u_random0;
uniform float u_random1;
uniform float u_random2;
uniform float u_random3;
uniform float u_random4;
uniform float u_random5;
uniform float u_random6;
uniform float u_random7;
uniform float u_random8;
uniform float u_random9;
varying vec2 v_texCoords0;
// INSERT HERE THE NOISE FUNCTIONS ...
float pnoise2(vec2 P, float period) {
return pnoise(P*period, vec2(period, period));
}
float pnoise1(float x, float period) {
return pnoise2(vec2(x, 0.0), period);
}
vec3 toColor(float value) {
float r = clamp(-value, 0.0, 1.0);
float g = clamp(value, 0.0, 1.0);
float b = 0.0;
return vec3(r, g, b);
}
float planetNoise(vec2 P) {
vec2 rv1 = vec2(u_random0, u_random1);
vec2 rv2 = vec2(u_random2, u_random3);
vec2 rv3 = vec2(u_random4, u_random5);
vec2 rv4 = vec2(u_random6, u_random7);
vec2 rv5 = vec2(u_random8, u_random9);
float r1 = u_random0 + u_random2;
float r2 = u_random1 + u_random2;
float r3 = u_random2 + u_random2;
float r4 = u_random3 + u_random2;
float r5 = u_random4 + u_random2;
float noise = 0.0;
noise += pnoise2(P+rv1, 10.0) * (0.2 + r1 * 0.4);
noise += pnoise2(P+rv2, 50.0) * (0.2 + r2 * 0.4);
noise += pnoise2(P+rv3, 100.0) * (0.3 + r3 * 0.2);
noise += pnoise2(P+rv4, 200.0) * (0.05 + r4 * 0.1);
noise += pnoise2(P+rv5, 500.0) * (0.02 + r5 * 0.15);
return noise;
}
float jupiterNoise(vec2 texCoords) {
float r1 = u_random0;
float r2 = u_random1;
float r3 = u_random2;
float r4 = u_random3;
float r5 = u_random4;
float r6 = u_random5;
float r7 = u_random6;
float distEquator = abs(texCoords.t - 0.5) * 2.0;
float noise = planetNoise(vec2(texCoords.x+distEquator*0.6, texCoords.y));
float distPol = 1.0 - distEquator;
float disturbance = 0.0;
disturbance += pnoise1(distPol+r1, 3.0+r4*3.0) * 1.0;
disturbance += pnoise1(distPol+r2, 9.0+r5*5.0) * 0.5;
disturbance += pnoise1(distPol+r3, 20.0+r6*10.0) * 0.1;
disturbance = disturbance*disturbance*2.0;
float noiseFactor = r7 * 0.3;
float noiseDistEquator = distEquator + noise * noiseFactor * disturbance;
return noiseDistEquator;
}
float jupiterHeight(float noise) {
return noise * 5.0;
}
vec3 planetColor(float distEquator) {
float r1 = u_random0 + u_random3;
float r2 = u_random1 + u_random3;
float r3 = u_random2 + u_random3;
float r4 = u_random3 + u_random3;
float r5 = u_random4 + u_random3;
float r6 = u_random5 + u_random3;
float r7 = u_random6 + u_random3;
float r8 = u_random7 + u_random3;
vec3 color1 = u_planetColor0;
vec3 color2 = u_planetColor1;
vec3 color3 = u_planetColor2;
float v1 = pnoise1(distEquator+r1, 2.0 + r4*15.0) * r7;
float v2 = pnoise1(distEquator+r2, 2.0 + r5*15.0) * r8;
vec3 mix1 = mix(color1, color2, v1);
vec3 mix2 = mix(mix1, color3, v2);
return mix2;
}
void main() {
float noise = jupiterNoise(v_texCoords0);
vec3 color = planetColor(noise);
gl_FragColor.rgb = color;
}
The colors were picked from real images of the gas and ice giants in our solar system (Jupiter, Saturn, Uranus, Neptune).
To produce more interesting results the colors are randomized by up to 10% before passing them to the shader.
Every planet receives three random colors which are then randomly interpolated.
private static final Color[] JUPITER_COLORS = new Color[] {
new Color(0.3333f, 0.2222f, 0.1111f, 1.0f),
new Color(0.8555f, 0.8125f, 0.7422f, 1.0f),
new Color(0.4588f, 0.4588f, 0.4297f, 1.0f),
new Color(0.5859f, 0.3906f, 0.2734f, 1.0f),
};
private static final Color[] ICE_COLORS = new Color[] {
new Color(0.6094f, 0.6563f, 0.7695f, 1.0f),
new Color(0.5820f, 0.6406f, 0.6406f, 1.0f),
new Color(0.2695f, 0.5234f, 0.9102f, 1.0f),
new Color(0.3672f, 0.4609f, 0.7969f, 1.0f),
new Color(0.7344f, 0.8594f, 0.9102f, 1.0f),
};
private static final Color[][] GAS_PLANET_COLORS = {
JUPITER_COLORS,
ICE_COLORS
};
public Color[] randomGasPlanetColors() {
return randomGasPlanetColors(GAS_PLANET_COLORS[random.nextInt(GAS_PLANET_COLORS.length)]);
}
public Color[] randomGasPlanetColors(Color[] colors) {
return new Color[] {
randomGasPlanetColor(colors),
randomGasPlanetColor(colors),
randomGasPlanetColor(colors)
};
}
public Color randomGasPlanetColor (Color[] colors) {
return randomDeviation(random, colors[random.nextInt(colors.length)]);
}
private Color randomDeviation (Random random, Color color) {
return new Color(
clamp(color.r * nextFloat(random, 0.9f, 1.1f), 0.0f, 1.0f),
clamp(color.g * nextFloat(random, 0.9f, 1.1f), 0.0f, 1.0f),
clamp(color.b * nextFloat(random, 0.9f, 1.1f), 0.0f, 1.0f),
1.0f);
}
## Using GLSL Shaders to generate Planets
My goal was to write a GLSL shader that would create an earth-like planet.
There are lots of good introductions into GLSL shader programming.
The following example is based on LibGDX, but it should be easily adapted to another framework.
First we need a little program so that we can experiment with the shader programs and see the results.
private ModelBatch modelBatch;
private PerspectiveCamera camera;
private CameraInputController cameraInputController;
private Environment environment;
private final ModelBuilder modelBuilder = new ModelBuilder();
private final Array<ModelInstance> instances = new Array<ModelInstance>();
private static final int SPHERE_DIVISIONS_U = 20;
private static final int SPHERE_DIVISIONS_V = 20;
@Override
public void create () {
createTest(new UberShaderProvider("planet"), new Material(), Usage.Position | Usage.Normal | Usage.TextureCoordinates);
}

private void createTest (ShaderProvider shaderProvider, Material material, long usageAttributes) {
modelBatch = new ModelBatch(shaderProvider);

camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.position.set(10f, 10f, 10f);
camera.lookAt(0, 0, 0);
camera.near = 1f;
camera.far = 300f;
camera.update();
cameraInputController = new CameraInputController(camera);
Gdx.input.setInputProcessor(cameraInputController);
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, Color.DARK_GRAY));

// sphere dimensions are arbitrary for this demo
Model model = modelBuilder.createSphere(5f, 5f, 5f, SPHERE_DIVISIONS_U, SPHERE_DIVISIONS_V, material, usageAttributes);
ModelInstance instance = new ModelInstance(model);
instances.add(instance);
}
@Override
public void render () {
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClearColor(0.0f, 0.0f, 0.8f, 1.0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
cameraInputController.update();
modelBatch.begin(camera);
modelBatch.render(instances, environment);
modelBatch.end();
}
@Override
public void dispose () {
modelBatch.dispose();
}
Now we can simply replace the string argument to the UberShaderProvider to test a particular pair of vertex and fragment shader programs.
First we will need a simple vertex shader that transforms the local vertex position into a global position and
passes the texture coordinates on to the fragment shader.
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;
varying vec2 v_texCoords0;
void main() {
v_texCoords0 = a_texCoord0;
gl_Position = u_projViewTrans * u_worldTrans * vec4(a_position, 1.0);
}
Now we can write the fragment shader.
Let’s start by calculating a color directly from the texture coordinates.
Since you typically have no debuggers for shader programs the easiest way to figure out what is going on is to visualize the intermediate steps as colors.
With some experience you will be able to see the values just by looking at the rendered graphics.
#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif
varying MED vec2 v_texCoords0;
void main() {
vec3 color = vec3(v_texCoords0.x, v_texCoords0.y, 0.0);
gl_FragColor.rgb = color;
}
You can see that the x coordinate of the texture is mapped to the red color of each pixel.
The y coordinate of the texture is mapped to the green color of each pixel.
## Convert noise into colors
The next step is to use a noise function that we will then use to create the pseudo-random oceans and continents.
You can find an excellent noise function in the webgl-noise project.
Copy and paste the source code from the file noise2D.glsl into your shader.
#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif
varying MED vec2 v_texCoords0;
// INSERT HERE THE NOISE FUNCTIONS ...
float pnoise2(vec2 P, float period) {
return pnoise(P*period, vec2(period, period));
}
float earthNoise(vec2 P) {
vec2 r1 = vec2(0.70, 0.82); // random numbers
float noise = 0.0;
noise += pnoise2(P+r1, 9.0);
return noise;
}
void main() {
float noise = earthNoise(v_texCoords0);
gl_FragColor.rgb = vec3(noise);
}
Obviously the noise value 1.0 corresponds to white (= vec3(1.0, 1.0, 1.0)), while noise value 0.0 corresponds to black (= vec3(0.0, 0.0, 0.0)).
By now you might be wondering why the black areas are so large – is the noise function faulty?
Actually the noise function returns values in the range -1.0 to 1.0. But the conversion to RGB colors clamps all negative values to 0.0, hence the large black areas.
As an exercise to prove this (and as a tool to debug negative values in the future) let’s write a function that converts positive values (0.0 to 1.0) into green colors and negative values (-1.0 to 0.0) into red colors.
// lots of code omitted ...
vec3 toColor(float value) {
float r = clamp(-value, 0.0, 1.0);
float g = clamp(value, 0.0, 1.0);
float b = 0.0;
return vec3(r, g, b);
}
void main() {
float noise = earthNoise(v_texCoords0);
gl_FragColor.rgb = toColor(noise);
}
Hint: Try to avoid constructs using if because the GPU doesn’t like branching.
Instead of using if branching you should try to implement your functionality with the provided mathematical functions (clamp, mix, step, smoothstep, … ).
For a useful reference see: OpenGL ES Shading Language Built-In Functions
## Convert height into colors
We want to treat the result of the noise function as the height of the planet and map this height into the typical colors.
The easiest way to implement a function of an input value into a color is to use a one-dimensional texture.
The x-axis of the texture corresponds to the height of the planet.
Until about 0.45 we paint all the heights the same deep blue of the ocean, then a few pixels of turquoise for the coastal waters, various green and yellows for the flora and deserts closer to the coast, then a large dark green block for the deep forest, finishing the whole with some grey mountains and a single white pixel for the snow at the top.
In the java code that defines the material we need now to specify this texture.
createTest(new UberShaderProvider("planet_step3"), new Material(new TextureAttribute(TextureAttribute.Diffuse, new Texture("data/textures/planet_height_color.png"))), Usage.Position | Usage.Normal | Usage.TextureCoordinates);
// lots of code omitted ...
void main() {
float noise = earthNoise(v_texCoords0);
vec3 color = texture2D(u_diffuseTexture, vec2(clamp(noise, 0.0, 1.0), 0.0));
gl_FragColor.rgb = color;
}
We do a lookup with texture2D() using the noise value as x-coordinate of the texture.
## Tweak the noise frequencies
Now it is time to make our continents a bit more convincing.
We want a couple of big continents with coastal areas that vary from smooth like the coasts of south-western Africa to fragmented like the fjords of Norway.
After some experiments I liked the following result:
float earthNoise(vec2 P) {
vec2 r1 = vec2(0.70, 0.82);
vec2 r2 = vec2(0.81, 0.12);
vec2 r3 = vec2(0.24, 0.96);
vec2 r4 = vec2(0.39, 0.48);
vec2 r5 = vec2(0.02, 0.25);
vec2 r6 = vec2(0.77, 0.91);
vec2 r7 = vec2(0.48, 0.05);
vec2 r8 = vec2(0.82, 0.48);
float noise = 0.0;
noise += clamp(pnoise2(P+r1, 3.0), 0.0, 0.45); // low-frequency noise clamped just slightly above ocean level - this produces the continental plates
noise += pnoise2(P+r2, 9.0) * 0.7; // medium frequency noise to produce the high mountain ranges (can be under and above water)
noise += pnoise2(P+r3, 14.0) * 0.2 + 0.1; // medium frequency noise for some hilly regions
noise += smoothstep(0.0, 0.1, pnoise2(P+r4, 8.0)) * pnoise2(P+r5, 50.0) * 0.3; // high frequency noise - but not in all areas
noise += smoothstep(0.0, 0.1, pnoise2(P+r6, 11.0)) * pnoise2(P+r7, 500.0) * 0.01; // very high frequency noise - but not in all areas
noise += smoothstep(0.8, 1.0, noise) * pnoise2(P+r8, 350.0) * -0.3; // very high frequency noise - only in the highest mountains
return noise;
}
The vectors r1 to r8 are random numbers so that not all our generated planets will look the same.
Later we can turn them into uniforms and control them from the Java application.
If you want to understand how the different noise parts contribute to the total noise you can use the toColor() function to debug it visually.
Comment out all the other noise components and feed the final result into toColor().
noise += clamp(pnoise2(P+r1, 3.0), 0.0, 0.45); // low-frequency noise clamped just slightly above ocean level - this produces the continental plates
noise += pnoise2(P+r2, 9.0) * 0.7; // medium frequency noise to produce the high mountain ranges (can be under and above water)
noise += pnoise2(P+r3, 14.0) * 0.2 + 0.1; // medium frequency noise for some hilly regions
noise += smoothstep(0.0, 0.1, pnoise2(P+r4, 8.0)) * pnoise2(P+r5, 50.0) * 0.3; // high frequency noise - but not in all areas
The very high frequency noise with an amplitude of 0.01 is not visible within the color range of our toColor() function,
and the last smoothstep() uses the calculated noise so that only the high mountain ranges receive the high frequency noise.
If you want to make these visible you need to play around with the functions.
As a last step, let's have a look at the total output of the noise function using toColor().
## Convert latitude and height into colors
Our planet already looks reasonable but if you look at a picture of earth you will immediately notice that the latitude also influences the color.
High in the north and south we have the polar caps and closer to the equator we see large desert areas.
To implement this we can use a 2 dimensional texture.
As before, the x-axis encodes the height of the planet; the y-axis corresponds to the distance to the equator.
We see the desert and steppe close to the equator. In the medium latitudes forest becomes predominant before it is replaced by tundra and finally the ice cap at the pole.
Let’s do the two-dimensional lookup in this texture.
void main() {
float noise = earthNoise(v_texCoords0);
float distEquator = abs(v_texCoords0.t - 0.5) * 2.0;
vec3 color = texture2D(u_diffuseTexture, vec2(clamp(noise, 0.0, 1.0), distEquator));
gl_FragColor.rgb = color;
}
By changing the calculation of the distance to equator we can change the overall climate of the planet.
Let’s have a look how it looks during an ice age.
void main() {
float noise = earthNoise(v_texCoords0);
float distEquator = abs(v_texCoords0.t - 0.5) * 6.0; // exaggerated latitude
vec3 color = texture2D(u_diffuseTexture, vec2(clamp(noise, 0.0, 1.0), distEquator));
gl_FragColor.rgb = color;
}
Actually we can make the whole planet look much nicer by running a Gaussian blur over the texture, so that the colors of the different bio zones mix with each other.
Please note that the coast is not blurred into the water.
Here some more screenshots using the blurred texture.
## Memory Impact of Java Collections
Sometimes it is important to know how much memory you need in order to store the data in a Java collection.
Here is a little overview of the memory overhead of the most important Java collections.
In some cases I also added my own implementations to see how they shape up.
All collections were filled with 10000 Integer elements; the memory was then measured by producing a heap dump and analyzing it with MemoryAnalyzer.
Executed in:
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
on a
Intel(R) Core(TM) i7 CPU M 620 2.67GHz (4 CPUs), ~2.7GHz
# Set
The data stored in the sets is always the same: 10000 instances of Integer:
Class Name Objects Bytes
java.util.Integer 10,000 240,000
Total 10,000 240,000
## Memory Footprint
### java.util.HashSet
Class Name Objects Bytes
java.util.HashMap$Entry 10,000 480,000
java.util.HashMap$Entry[] 1 131,096
java.util.HashMap 1 64
java.util.HashSet 1 24
java.util.HashMap$KeySet 1 24
Total 10,004 611,208

### java.util.TreeSet

Class Name Objects Bytes
java.util.TreeMap$Entry 10,000 640,000
java.util.TreeMap 1 80
java.util.TreeSet 1 24
Total 10,002 640,104
### Collections.synchronizedSet(java.util.HashSet)
A synchronized HashSet is created by wrapping a HashSet with the Collections.synchronizedSet() method.
Class Name Objects Bytes
java.util.HashMap$Entry 10,000 480,000
java.util.HashMap$Entry[] 1 131,096
java.util.HashMap 1 64
java.util.Collections$SynchronizedSet 1 32
java.util.HashSet 1 24
Total 10,004 611,208

### Collections.newSetFromMap(ConcurrentHashMap)

Java does not provide a ConcurrentHashSet out of the box, but you can create an equivalent by wrapping a ConcurrentHashMap with Collections.newSetFromMap().

Class Name Objects Bytes
java.util.concurrent.ConcurrentHashMap$HashEntry 10,000 480,000
java.util.concurrent.ConcurrentHashMap$HashEntry[] 16 131,456
java.util.concurrent.ConcurrentHashMap$Segment 16 768
java.util.concurrent.ConcurrentHashMap$NonfairSync 16 768
java.util.concurrent.ConcurrentHashMap$Segment[] 1 152
java.util.concurrent.ConcurrentHashMap 1 72
java.util.Collections$SetFromMap 1 32
java.util.concurrent.ConcurrentHashMap$KeySet 1 24
Total 10,052 613,272
### ch.obermuhlner.collection.ArraySet
This is an experimental implementation of an array-based mutable Set that was designed to have minimal memory footprint.
Access with contains() is O(n).
Class Name Objects Bytes
java.lang.Object[] 1 80,024
ch.obermuhlner.collection.ArraySet 1 24
Total 2 80,048
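For illustration, a minimal sketch of such an array-backed set (hypothetical code, not the actual ch.obermuhlner.collection.ArraySet) shows where the tiny footprint comes from — a single Object[] and nothing else:

```java
public class ArraySetSketch {
    private Object[] elements = new Object[0]; // the only per-set storage

    // O(n) linear scan - trades lookup speed for minimal memory footprint.
    public boolean contains(Object o) {
        for (Object e : elements) {
            if (e == null ? o == null : e.equals(o)) {
                return true;
            }
        }
        return false;
    }

    public boolean add(Object o) {
        if (contains(o)) {
            return false; // set semantics: no duplicates
        }
        Object[] newElements = new Object[elements.length + 1];
        System.arraycopy(elements, 0, newElements, 0, elements.length);
        newElements[elements.length] = o;
        elements = newElements;
        return true;
    }
}
```
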
### ch.obermuhlner.collection.ImmutableArraySet
Similar to the ArraySet above but immutable. Designed for minimal memory footprint.
Access with contains() is O(n).
Class Name Objects Bytes
java.lang.Object[] 1 80,024
ch.obermuhlner.collection.ImmutableArraySet 1 24
Total 2 80,048
### ch.obermuhlner.collection.ImmutableSortedArraySet
Another experimental implementation of an array-based immutable Set.
The array is sorted by hash code and contains() uses a binary search.
Access with contains() is O(log(n)).
The ImmutableSortedArraySet has the option to store the hash codes of all elements in a separate int[], trading additional memory footprint for improved performance.
Class Name Objects Bytes
java.lang.Object[] 1 80,024
int[] 1 40,024
ch.obermuhlner.collection.ImmutableSortedArraySet 1 32
Total 3 120,080
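The binary-search-by-hash-code idea can be sketched like this (hypothetical code, not the actual ImmutableSortedArraySet, and without the optional int[] cache):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortedByHashSetSketch {
    private final Object[] elements; // sorted ascending by hashCode()

    public SortedByHashSetSketch(Object... values) {
        elements = values.clone();
        Arrays.sort(elements, Comparator.comparingInt(Object::hashCode));
    }

    // O(log n): binary search on the hash code, then a local scan over
    // the (usually tiny) run of elements sharing that hash code.
    public boolean contains(Object o) {
        int hash = o.hashCode();
        int low = 0;
        int high = elements.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            int midHash = elements[mid].hashCode();
            if (midHash < hash) {
                low = mid + 1;
            } else if (midHash > hash) {
                high = mid - 1;
            } else {
                // scan the run of equal hash codes in both directions
                for (int i = mid; i >= 0 && elements[i].hashCode() == hash; i--) {
                    if (elements[i].equals(o)) return true;
                }
                for (int i = mid + 1; i < elements.length && elements[i].hashCode() == hash; i++) {
                    if (elements[i].equals(o)) return true;
                }
                return false;
            }
        }
        return false;
    }
}
```
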
## Performance
The performance of the different Sets was measured by running contains() with an existing random key against a set of a specific size.
Measuring with up to 20 elements shows that ArraySet and ImmutableArraySet really are linear. With less than 10 elements they are actually faster than the O(log(n)) ImmutableSortedArraySet.
In the next chart we see the performance with up to 1000 elements. The linear Sets are no longer shown because their lookup times dwarf everything else.
## Benchmarking Scala
Microbenchmarking is controversial as the following links show:
Nevertheless I did write a little micro-benchmarking framework in Scala so I could experiment with the Scala language and libraries.
It allows running little code snippets such as:
object LoopExample extends ImageReport {
def main(args: Array[String]) {
lineChart("Loops", 0 to 2, 0 to 100000 by 10000,
new FunBenchmarks[Int, Int] {
prepare {
count => count
}
run("for loop", "An empty for loop.") { count =>
for (i <- 0 to count) {
}
}
run("while loop", "A while loop that counts in a variable without returning a result.") { count =>
var i = 0
while (i < count) {
i += 1
}
}
})
}
}
This produced the image used in the last blog Scala for (i <- 0 to n) : nice but slow:
The framework also allows creating an HTML report containing multiple benchmark suites.
The following thumbnails are from an example report showing a full run of the suite I wrote to benchmark some basic functionality of Scala (and Java).
Let's have a look at some of the more interesting results.
Note: All micro-benchmarking results should always be interpreted with a very critical mindset. Many things can go wrong when measuring a single operation over and over again. It is very easy to screw up and get meaningless results.
If you want to analyze a particular benchmark in more details, follow the details link at the end of the suite chapter. It will show a detailed statistical analysis of this particular benchmark suite.
## Loops
run("for loop", "An empty for loop.") { count =>
  for (i <- 0 to count) {
  }
}
run("for loop result", "A for loop that accumulates a result in a variable.") { count =>
  var result = 0
  for (i <- 0 to count) {
    result += i
  }
  result
}
run("while loop", "A while loop that counts in a variable without returning a result.") { count =>
  var i = 0
  while (i < count) {
    i += 1
  }
}
run("while loop result", "A while loop that accumulates a result in a variable.") { count =>
  var result = 0
  var i = 0
  while (i < count) {
    result += i
    i += 1
  }
  result
}
run("do while loop", "A do-while loop that counts in a variable without returning a result.") { count =>
  var i = 0
  do {
    i += 1
  } while (i <= count)
}
run("do while loop result", "A do-while loop that accumulates a result in a variable.") { count =>
  var result = 0
  var i = 0
  do {
    result += i
    i += 1
  } while (i <= count)
  result
}
As already discussed in another blog entry, loops in Scala perform surprisingly differently:
The for (i <- 0 to count) loop is significantly slower than while (i < count).
## Arithmetic
Not really a surprise, but BigDecimal arithmetic is very slow compared to double arithmetic (on both charts the red line is the reference benchmark that executes in the same time).
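The cost difference is easy to see in plain Java (my illustration, not the benchmark code itself): a double addition is a single hardware operation, while every BigDecimal operation allocates a new immutable object:

```java
import java.math.BigDecimal;

public class ArithmeticSketch {
    public static void main(String[] args) {
        // double arithmetic: one hardware instruction, no allocation
        double d = 0.1 + 0.1;

        // BigDecimal arithmetic: add() returns a freshly allocated
        // immutable object on every call
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.1"));

        System.out.println(d);  // 0.2 (binary floating point)
        System.out.println(b);  // 0.2 (exact decimal)
    }
}
```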
## Casts
I doubted the performance of the Scala way of casting a reference, so I compared it with the Java-like cast method:
var any: Any = "Hello"
var result: String = _
run("asInstanceOf", "Casts a value using asInstanceOf.") { count =>
  for (i <- 0 to count) {
    result = any.asInstanceOf[String]
  }
}
run("match case", "Casts a value using pattern matching with the type.") { count =>
  for (i <- 0 to count) {
    result = any match {
      case s: String => s
      case _ => throw new IllegalArgumentException
    }
  }
}
Happily they perform practically the same:
## Immutable Map
I am really fond of immutable maps; they are easy to reason about and perform very well.
run("contains true", "Checks that a map really contains a value.") { map =>
  map.contains(0)
}
run("contains false", "Checks that a map really does not contain a value.") { map =>
  map.contains(-999)
}
run("+", "Adds a new entry to a map.") { map =>
  map + (-1 -> "X-1")
}
run("-", "Removes an existing entry from a map.") { map =>
  map - 0
}
The immutable maps with size 0 to 4 are special classes that store the key/value pairs directly in dedicated fields and test them sequentially - therefore we see linear behaviour there.
The strong peak when adding another key/value pair to an immutable map with a size of 4 is probably because it switches to the normal scala.collection.immutable.HashMap (creating 4 tuples in the process) after testing all keys:
// Implementation detail of scala.collection.immutable.Map4
override def updated [B1 >: B] (key: A, value: B1): Map[A, B1] =
  if (key == key1) new Map4(key1, value, key2, value2, key3, value3, key4, value4)
  else if (key == key2) new Map4(key1, value1, key2, value, key3, value3, key4, value4)
  else if (key == key3) new Map4(key1, value1, key2, value2, key3, value, key4, value4)
  else if (key == key4) new Map4(key1, value1, key2, value2, key3, value3, key4, value)
  else new HashMap + ((key1, value1), (key2, value2), (key3, value3), (key4, value4), (key, value))

def + [B1 >: B](kv: (A, B1)): Map[A, B1] = updated(kv._1, kv._2)
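The trick behind these small map classes can be illustrated in Java (my own sketch in the spirit of Scala's Map2, not the Scala source): the key/value pairs live in plain final fields, and lookup is a short sequential test, technically linear but with a tiny constant:

```java
public class Map2Sketch {
    private final String key1, value1, key2, value2;

    public Map2Sketch(String k1, String v1, String k2, String v2) {
        key1 = k1; value1 = v1; key2 = k2; value2 = v2;
    }

    // Sequential test over two fields: no hashing, no array, no tree nodes.
    public boolean contains(String key) {
        return key.equals(key1) || key.equals(key2);
    }

    public String get(String key) {
        if (key.equals(key1)) return value1;
        if (key.equals(key2)) return value2;
        return null;
    }

    public static void main(String[] args) {
        Map2Sketch m = new Map2Sketch("a", "1", "b", "2");
        System.out.println(m.contains("b"));  // true
        System.out.println(m.get("c"));       // null
    }
}
```

A fifth pair has no dedicated class to grow into, which is where the switch to the general HashMap, and the peak in the chart, comes from.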
## HashMap
The standard Java java.util.HashMap is also interesting for Java programmers.
run("contains true", "Checks that a map really contains a value.") { map =>
  map.containsKey(0)
}
run("contains false", "Checks that a map really does not contain a value.") { map =>
  map.containsKey(-999)
}
run("size", "Calculates the size of a map.") { map =>
  map.size()
}
At first glance containsKey and size() seem to be constant (as expected), but to be sure an additional benchmark with a larger n of up to 1000 was added:
Surprisingly all measured methods grow slowly with increasing size (contrary to the expected constant time behaviour) - this needs to be analyzed in detail.
A look at the details of this suite shows that actually all measured elapsed times are either 0.00000 ms or 0.00038 ms and the apparent increase with growing size is purely a statistical side effect. Obviously this benchmark is at the low end of measurement accuracy.
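The granularity effect is easy to reproduce with a little Java probe (my own, not part of the framework): a single fast operation measures as zero or one clock tick, so a meaningful number only emerges when many calls are batched into one measurement:

```java
public class TimerGranularity {
    public static void main(String[] args) {
        // Single-shot timing: usually 0 or one timer tick, by itself useless.
        long t0 = System.nanoTime();
        double x = Math.sqrt(42.0);
        long singleShot = System.nanoTime() - t0;

        // Batched timing: amortise the clock resolution over many calls.
        int n = 1_000_000;
        double sink = x;  // consume results so the JIT cannot drop the work
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += Math.sqrt(i);
        }
        double perCallNanos = (System.nanoTime() - t1) / (double) n;

        System.out.println("single-shot ns:      " + singleShot);
        System.out.println("batched per-call ns: " + perCallNanos);
        System.out.println(sink > 0);  // true
    }
}
```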
The measured write operations are:
run("put", "Adds a new entry to a map.") { map =>
  map.put(-1, "X-1")
}
run("remove", "Removes an existing entry from a map.") { map =>
  map.remove(0)
}
run("clear", "Removes all entries from a map.") { map =>
  map.clear()
}
put() and remove() are reasonably constant, although they too grow very slowly with increasing size.
I was very surprised to see that the clear() method is not constant time. A quick look at the implementation shows that it is linear with the number of entries.
// Implementation of java.util.HashMap.clear()
public void clear() {
    modCount++;
    Entry[] tab = table;
    for (int i = 0; i < tab.length; i++)
        tab[i] = null;
    size = 0;
}
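If clearing needs to be cheap, one common workaround (my sketch, not a JDK facility) is to drop the whole backing map and let the garbage collector reclaim it, instead of nulling every slot of the table:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearableCache<K, V> {
    private Map<K, V> map = new HashMap<>();

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key)             { return map.get(key); }

    // O(1) "clear": replace the backing map instead of walking its table.
    public void clear() { map = new HashMap<>(); }

    public static void main(String[] args) {
        ClearableCache<String, Integer> cache = new ClearableCache<>();
        cache.put("a", 1);
        cache.clear();
        System.out.println(cache.get("a"));  // null
    }
}
```

The trade-off is a fresh allocation and losing the already-grown capacity of the old table.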
## ConcurrentHashMap
The behaviour of java.util.concurrent.ConcurrentHashMap is very similar to HashMap (slightly slower).
## Pattern Matching
Pattern matching is a very nice and powerful feature of Scala.
This benchmark tests only the simplest cases of pattern matching and compares them with comparable if-else cascades.
run("match 1", "Matches the 1st pattern with a literal integer value.") { seq =>
  for (value <- seq) {
    value match {
      case 1 => "one"
      case _ => "anything"
    }
  }
}
run("if 1", "Matches the 1st if in an if-else cascade with integer values.") { seq =>
  for (value <- seq) {
    if (value == 1) "one"
    else "anything"
  }
}
run("match 5", "Matches the 5th pattern with a literal integer value.") { seq =>
  for (value <- seq) {
    value match {
      case 1 => "one"
      case 2 => "two"
      case 3 => "three"
      case 4 => "four"
      case 5 => "five"
      case _ => "anything"
    }
  }
}
run("if 5", "Matches the 5th if in an if-else cascade with integer values.") { seq =>
  for (value <- seq) {
    if (value == 1) "one"
    else if (value == 2) "two"
    else if (value == 3) "three"
    else if (value == 4) "four"
    else if (value == 5) "five"
    else "anything"
  }
}
As you see, simple pattern matching and the if-else cascade have comparable speed.
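For comparison, a rough Java analogue of the same experiment (my sketch, not from the original post): javac can compile a dense int switch into a tableswitch jump table, while the if-else cascade is a chain of comparisons, yet for a handful of cases both are quick:

```java
public class MatchVsIf {
    public static String bySwitch(int value) {
        switch (value) {
            case 1: return "one";
            case 2: return "two";
            case 3: return "three";
            case 4: return "four";
            case 5: return "five";
            default: return "anything";
        }
    }

    public static String byIfElse(int value) {
        if (value == 1) return "one";
        else if (value == 2) return "two";
        else if (value == 3) return "three";
        else if (value == 4) return "four";
        else if (value == 5) return "five";
        else return "anything";
    }

    public static void main(String[] args) {
        System.out.println(bySwitch(5));  // five
        System.out.println(byIfElse(7)); // anything
    }
}
```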
http://nightrainbownewhaven.com/aklx9/jbtjo8.php?9fe7d9=sample-standard-deviation-symbol-in-word | The symbol for the population standard deviation is the lowercase Greek letter sigma (σ), while the sample standard deviation is usually denoted by a lowercase italic s. Σ, the uppercase sigma, is the summation symbol, X is an individual value in the data set, and x̄
= the arithmetic mean (pronounced “x-bar”), and n is the number of data points in the set. To get the sigma symbol (σ) to show up in Microsoft Word, type 03C3, highlight it, and hit Alt+X; alternatively choose Insert > Symbol and, because sigma is a Greek letter, pick it from the Greek and Coptic subset. The standard deviation is a measure of how spread out numbers are around their mean: in short, the lower your SD is, the more tightly the values cluster. Because it is the square root of the mean of the squared deviations, it can never be negative. The sample standard deviation s is calculated from only some of the individuals in a population, whereas σ describes the whole population, so pay attention to what kind of data you are working with and make sure you select the correct one. In word-final position the alternative form of sigma (ς) must be used. Style guides differ on notation, so I would suggest consulting the style or citation guide you are using, as it will probably tell you what to use.
https://surveillance.r-forge.r-project.org/pkgdown/reference/ks.plot.unif.html | This plot function takes a univariate sample that should be tested for a U(0,1) distribution, plots its empirical cumulative distribution function (ecdf), and adds a confidence band by inverting the corresponding Kolmogorov-Smirnov test (ks.test). The uniform distribution is rejected if the ECDF is not completely inside the confidence band.
ks.plot.unif(U, conf.level = 0.95, exact = NULL,
col.conf = "gray", col.ref = "gray",
xlab = expression(u[(i)]), ylab = "Cumulative distribution")
Arguments
U
numeric vector containing the sample. Missing values are (silently) ignored.
conf.level
confidence level for the K-S-test (defaults to 0.95), can also be a vector of multiple levels.
exact
see ks.test.
col.conf
colour of the confidence lines.
col.ref
colour of the diagonal reference line.
xlab, ylab
axis labels.
Value
NULL (invisibly).
Author
Michael Höhle and Sebastian Meyer.
The code contains segments originating from the source of the ks.test function https://svn.R-project.org/R/trunk/src/library/stats/R/ks.test.R, which is Copyright (C) 1995-2012 The R Core Team available under GPL-2 (or later) and C functionality from https://svn.R-project.org/R/trunk/src/library/stats/src/ks.c, which is copyright (C) 1999-2009 the R Core Team and available under GPL-2 (or later). Somewhat hidden in their ks.c file is a statement that part of their code is based on code published in Marsaglia et al. (2003).
References
George Marsaglia and Wai Wan Tsang and Jingbo Wang (2003): Evaluating Kolmogorov's distribution. Journal of Statistical Software, 8 (18). doi: 10.18637/jss.v008.i18
See Also
ks.test for the Kolmogorov-Smirnov test, as well as checkResidualProcess, which makes use of this plot function.
Examples
samp <- runif(99)
ks.plot.unif(samp)
https://www.rdocumentation.org/packages/spatstat/versions/1.61-0/topics/rescale.ppp | rescale.ppp
Convert Point Pattern to Another Unit of Length
Converts a point pattern dataset to another unit of length.
Keywords
spatial, math
Usage
# S3 method for ppp
rescale(X, s, unitname)
Arguments
X
Point pattern (object of class "ppp").
s
Conversion factor: the new units are s times the old units.
unitname
Optional. New name for the unit of length. See unitname.
Details
This is a method for the generic function rescale.
The spatial coordinates in the point pattern X (and its window) will be re-expressed in terms of a new unit of length that is s times the current unit of length given in X. (Thus, the coordinate values are divided by s, while the unit value is multiplied by s).
The result is a point pattern representing the same data but re-expressed in a different unit.
Mark values are unchanged.
If s is missing, then the coordinates will be re-expressed in ‘native’ units; for example if the current unit is equal to 0.1 metres, then the coordinates will be re-expressed in metres.
Value
Another point pattern (of class "ppp"), representing the same data, but expressed in the new units.
Note
The result of this operation is equivalent to the original point pattern. If you want to actually change the coordinates by a linear transformation, producing a point pattern that is not equivalent to the original one, use affine.
See Also
unitname, rescale, rescale.owin, affine, rotate, shift
Examples
# Bramble Canes data: 1 unit = 9 metres
data(bramblecanes)
# convert to metres
bram <- rescale(bramblecanes, 1/9)
# or equivalently
bram <- rescale(bramblecanes)
Documentation reproduced from package spatstat, version 1.61-0, License: GPL (>= 2)
https://quantumprogress.wordpress.com/2011/02/06/are-change-in-momentum-and-impulse-the-same-thing/ | I’ve been playing around with the 20 minute pulse check thing we’ve been doing at school and I’ve been playing around with various prompts, but still one of my favorites is “what question do you still have?”
When I first did this, we collected all the questions, and tried to summarize a general question to tweet. The interesting thing is that we didn’t answer any of the questions. Part of me thinks this is a good thing: I want students to start to see that simply formulating a question can be a huge positive step, and that they have the ability to answer their own questions.
But at the same time, some questions have been so good, or some students seem so confused, that I have decided to begin, on occasion, answering their questions. So I have the kids write questions on notecards, and then I write a short reply on the notecard and return it the next day.
One of the most frequent comments I hear from students is “but I don’t have a question,” and this can come in one of two flavors: 1. I think I’ve got it so well I have no questions, or 2. I’m so confused I don’t even know what to ask. Both of these are problematic to me, and I’ve been trying to get students to see that you should always have a question in mind to guide your thinking. If you don’t have a question, it’s a sure sign you don’t understand as well as you think you do, and if you don’t even know what to ask, it’s a sign to me that you need to build your confidence that it’s ok to ask those most basic questions that students are so afraid to share.
This week, as we are studying momentum and impulse, I got a number of variations on the following question:
Since impulse is the same thing as change in momentum, why do we need the name impulse? Why not just use change in momentum?
This is a very interesting question, since it shows some of my students’ conceptions of mathematics are still a bit naive (which is to be expected).
I’ve avoided using the symbol $\vec{J}$ for impulse just to avoid adding another symbol to their brains. They all know how to show the equivalence of impulse and change in momentum using N2:
$\vec{a}=\frac{\vec{F}_{net}}{m}\\\frac{\Delta\vec{v}}{\Delta t}=\frac{\vec{F}_{net}}{m}\\m\,\Delta\vec{v}=\vec{F}_{net}\Delta t\\m\vec{v}_f-m\vec{v}_i=\vec{F}_{net}\Delta t\\\Delta\vec{p}=\vec{F}_{net}\Delta t\\\text{change in momentum}=\text{impulse}$
We’ve discussed before the various meanings of an equal sign in physics, and this time we came back to this idea and talked about how change in momentum and impulse are two different human defined quantities that turn out through experiment to be measurably the same, and that we can see this equality through N2, but that it isn’t any more correct to say they “are the same” than it would be to go around talking about the $2\pi r$ of a circle instead of circumference.
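A quick numeric check, with numbers of my own choosing rather than from class: a 0.5 kg cart pushed from 2 m/s to 6 m/s by a steady 0.5 N net force over 4 s gives the same 2 kg·m/s whether you compute the change in momentum or the impulse:

```latex
\Delta\vec{p} = m\vec{v}_f - m\vec{v}_i
             = (0.5\,\mathrm{kg})(6\,\mathrm{m/s}) - (0.5\,\mathrm{kg})(2\,\mathrm{m/s})
             = 2\,\mathrm{kg\,m/s}

\vec{F}_{net}\,\Delta t = (0.5\,\mathrm{N})(4\,\mathrm{s}) = 2\,\mathrm{N\,s} = 2\,\mathrm{kg\,m/s}
```

Two differently defined quantities, one measured value.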
I need to find a way to assess whether my students are developing a deeper understanding of what an equation is trying to say as relationship. This is the heart of the modeling curriculum, and certainly all sorts of proportional reasoning questions help students to see that equations are really just very short, precise summaries of relationships between various quantities. We also spend a lot of time exploring when a particular model is valid; still I think when students see an equation, they find it very hard to resist the urge to think “what do I plug into this thing?” without asking “what is this thing trying to say about how the world works.”
1. February 6, 2011 11:34 am
I like the focus on the equals sign here. When two things are “equal” sometimes it means that two different things are found to be the same in nature and that equivalence tells us something really interesting about the universe.
I really like the “six ideas that shaped physics” way of doing this. Essentially every interaction is an opportunity to swap momentum, and impulse is just an accounting trick to keep track of a continuous exchange of momentum.
Of course we could start calling the units poms like I suggested here: http://andyrundquist.blogspot.com/2010/12/momentum-units.html
• February 6, 2011 11:51 am
Andy,
This is a great idea. We already do talk about the meaning of the equal sign, but I don’t stress that equality of different things can be a clue into the nature of the universe. I’ve seen Six ideas before, and will give that another look. And I love the idea of poms of momentum—right around now, units start to get pretty crazy with Newtons, Joules and kg m/s all looking almost exactly the same. You could even tell students that a pom is shorthand for Parcel of Momentum.
And it occurs to me that this way of introducing students to a conservation law, in terms of poms of momentum being transferred via impulses might be the perfect way to set them up to see energy conservation in terms of energy transfer via work, heat, and radiation.
• February 6, 2011 12:14 pm
love the parcel of momentum (wish I’d thought of that). It’s so interesting how students view units. I’ve said before how awed a student was when I said g was 9.8 meters per second (pause) per second. Of course I like 22 mph per second.
2. February 7, 2011 12:04 am
The post recalls these kid-friendly phrases:
“the same”
“the same but not the same”
Impulse and momentum change are “the same but not the same.” Numerically equal, but they are entirely different quantities. IIRC, M&I stresses this point.
For a=delta v/delta t, they are “the same.” For a=Fnet/m, they are “the same but not the same.”
I hope this makes sense.
• February 7, 2011 10:24 am
I’m not sure that the “six ideas” author would quite agree with the “same but not the same”. He stresses that force is really just a collection of momentum swaps to ease the accounting. I do get what you’re saying, though.
• February 7, 2011 7:02 pm
Andy,
I used the pom idea today to help develop a final synthesis of momentum for my kids, and they loved it. They pronounce the name like you pronounce ‘pwn’, which is their favorite word in the universe. This, coupled with velocity-mass bar charts (I’ll explain these in a later post), is the perfect way to lay the groundwork for conservation of energy.
• February 7, 2011 9:32 pm
very cool, Frank. I look forward to hearing about the bar charts.
3. February 7, 2011 9:34 pm
whoops, got lost in the thread. Meant to say “very cool, John, can’t wait to hear about the bar charts”
• February 7, 2011 9:38 pm
No problem. I learned about them from Frank!
4. April 14, 2011 1:29 pm
Hi guys, I’m an entry-level physics student and searched the internet for elaboration on the concept, “are impulse and momentum one and the same?” This thread was inspiring for two reasons: (1) you all seem passionate about teaching, which is a deep breath of fresh air, and (2) the discussion about what makes the two concepts differ, focusing on the sums being equal in value, but only in value, really helped. I encourage discussions just like the ones you all submitted here to help students like me gain better understanding. Nice work.
• April 18, 2011 7:20 pm
Thanks so much for the very kind words. Good luck with your physics studying. While you’ve probably found you can find a lot of good insights by googling things (you’ll also find a lot of useless garbage this way), you might also try your hand at posting questions at physics stackexchange, which is a great community of physics enthusiasts who are usually quite willing to answer questions.
March 1, 2012 7:39 am
I am still confused about the mathematical signs for direction. Are they always the same for impulse and change in momentum? Or are they opposite signs, as in N3?
• March 1, 2012 7:50 am
To answer this, you need to think carefully about the object you are talking about. Let’s start with a single object experiencing some external force $F_{net,\;ext}$.
By Newton’s second law, we know:
$\vec{a}=\frac{\vec{F}_{net,\;ext}}{m}$
and knowing $\vec{a}=\frac{\Delta \vec{v}}{\Delta t}$ we can substitute this expression for $\vec{a}$ and cross-multiply:
$m \Delta \vec{v}=\vec{F}_{net,\;ext}\Delta t$
The term on the left is the change in momentum of that object. The term on the right is the impulse on that object, exerted by an external force. So the impulse experienced by an object and the change in momentum of the same object are always in the same direction.
When you have two objects colliding with each other, the forces they exert on each other are always the same size and opposite in direction, by Newton’s 3rd law ($F_{a \; on \;b}=-F_{b \; on \; a}$). Combining this with the previous result tells us that two objects colliding with each other will experience impulses that are opposite in direction.
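For anyone who wants to check the arithmetic, here is a small numerical sanity check of $m \Delta \vec{v} = \vec{F}_{net}\Delta t$ (the mass, velocities, and contact time below are made-up illustrative numbers):

```python
# Impulse-momentum sanity check: m*dv should equal F_net*dt.
m = 0.145                  # kg (roughly a baseball; illustrative)
v_i, v_f = -40.0, 35.0     # m/s; the sign encodes direction
dt = 0.7e-3                # s; contact time with the bat

dp = m * (v_f - v_i)       # change in momentum ("poms", kg*m/s)
F_net = dp / dt            # average net force during contact (N)
impulse = F_net * dt       # impulse delivered (N*s)

# Both the impulse and the change in momentum come out identical,
# and both carry the same sign as the velocity change.
print(dp, impulse)
```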
https://quant.stackexchange.com/questions/11217/are-the-sin-cos-tan-functions-used-in-some-financial-calculations/11219

# Are the sin, cos, tan functions used in some financial calculations?
I ask because those functions are on the TI BA II Plus financial calculator.
I saw some interesting answers but I don't think a calculator would be practical in their environment.
• The question is weird because it comes from a funky reason. But I'm fine having somebody giving an example of trigonometric functions in quantitative finance. – SRKX May 9 '14 at 10:38
• sinh and cosh are used in some formulations of the Heston model of stochastic volatility, but you're not going to be doing those on a calculator. – experquisite May 9 '14 at 15:53
• @SRKX Now that I think about it, I don't think this reason is funky at all. Who else but the non-mathematically inclined would ask such? – BCLC May 11 '14 at 9:18
• You can also refer to the similar post in Economics.stackexchange: economics.stackexchange.com/questions/19172/… – agassi Oct 24 '19 at 12:47
• All the answers below totally gloss over exactly what @experquisite is getting at. Trig functions in finance are mostly used inside the guts of other, considerably more complicated, calculations. It's extremely unlikely you'd see any use for a calculator's trig functions working in a finance role. – will Oct 25 '19 at 6:15
Fourier methods use sine and cosine functions, and are used in calculating option prices, VaR, time series analysis etc. It is an alternative process for doing many things in finance. Some links Fourier Methods in trading on StackExchange and Wiki
One can use the Karhunen–Loève expansion to approximate a trajectory of a Wiener Process, which can be used to model the evolvement of returns in time. (http://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem#The_Wiener_process)
Though the Karhunen–Loève expansion has theoretical advantages over other ways of generating a trajectory of a Wiener process, many users will use different methods, because on computers the evaluation of trigonometric functions is very expensive in terms of calculation time.
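To make the answer above concrete, here is a sketch of the truncated Karhunen–Loève (Fourier-sine) expansion of Brownian motion on $[0,1]$; the truncation level and time grid are my own illustrative choices:

```python
import numpy as np

def kl_wiener_path(t, n_terms=500, rng=None):
    """Approximate a Wiener process on [0, 1] at times t using the
    Karhunen-Loeve expansion W(t) ~ sqrt(2) * sum_k Z_k sin(f_k t) / f_k,
    with f_k = (k - 1/2) * pi and Z_k i.i.d. standard normal."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.standard_normal(n_terms)
    k = np.arange(1, n_terms + 1)
    freq = (k - 0.5) * np.pi
    # The sine evaluations below are the computationally expensive part.
    basis = np.sin(np.outer(t, freq)) / freq
    return np.sqrt(2.0) * basis @ z

t = np.linspace(0.0, 1.0, 101)
w = kl_wiener_path(t)
print(w[0])   # W(0) = 0 by construction, since sin(0) = 0
```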
You can use $\sin$ or $\cos$ to model seasonality. If all you have is a calculator it might be the most practical way.
• But it would be such a crude way to do it that you might as well draw a wiggly line on a bit of paper and eyeball it... – will Oct 25 '19 at 6:15
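Crude or not, fitting such a seasonal component is a small least-squares problem. A sketch with made-up monthly data (all numbers are illustrative):

```python
import numpy as np

# Toy monthly series with annual seasonality:
# y_t = a + b*sin(2*pi*t/12) + c*cos(2*pi*t/12) + noise
rng = np.random.default_rng(1)
t = np.arange(120)                       # ten years of months
a, b, c = 2.0, 0.8, -0.5
y = a + b * np.sin(2 * np.pi * t / 12) + c * np.cos(2 * np.pi * t / 12)
y += 0.05 * rng.standard_normal(t.size)  # small observation noise

# Least-squares fit of the seasonal component
X = np.column_stack([np.ones_like(t, dtype=float),
                     np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # close to the true [2.0, 0.8, -0.5]
```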
When you do Monte Carlo simulation and would like to draw samples from the normal distribution $\mathcal{N}(\mu,\sigma^2)$, you may use the Box-Muller transform and come up with formulas using $\sin$ and $\cos$.
• But in reality, noone samples normal random numbers like this because its very inefficient... – will Oct 25 '19 at 6:16
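A minimal sketch of the transform (illustrative only; as the comment notes, production code rarely samples normals this way):

```python
import math, random

def box_muller(rng=random):
    """One draw from N(0, 1) via the Box-Muller transform;
    the cos (or sin) call is where the trig functions show up."""
    u1 = rng.random() or 1e-12   # avoid log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2)   # r*sin(...) gives a second draw

random.seed(42)
xs = [box_muller() for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
print(round(mean, 2), round(var, 2))   # approximately 0 and 1
```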
Trigonometric functions show up in econometric models for business cycles. For example: the average length of a cycle of an AR(2) process is
$k = \frac{2 \pi}{\cos^{-1}( \phi_1/ (2 \sqrt{-\phi_2}))}$
For an AR(2) model given by $r_t = \phi_0 + \phi_1 r_{t-1} + \phi_2 r_{t-2} + a_t$
with complex roots, $\phi_1^2 + 4\phi_2 <0$
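Plugging illustrative coefficients into the formula above (the function name and the example numbers are my own):

```python
import math

def ar2_cycle_length(phi1, phi2):
    """Average cycle length k = 2*pi / arccos(phi1 / (2*sqrt(-phi2)))
    for an AR(2) model with complex characteristic roots."""
    if phi1 ** 2 + 4 * phi2 >= 0:
        raise ValueError("characteristic roots are real: no stochastic cycle")
    return 2 * math.pi / math.acos(phi1 / (2 * math.sqrt(-phi2)))

print(ar2_cycle_length(1.0, -0.5))   # an average cycle of 8 periods
```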
Trigonometric functions describe WAVE phenomena. As such, they are best used to model so-called periodic functions, that is, functions with cycles of a fixed period in length. That's why they are good for modelling seasonal, annual, "blue moon" (once every two and a half years), or other functions with set "periods."
Solving some heat/diffusion equations under certain conditions needs trigonometric functions.
Black-Scholes reduces to a heat/diffusion equation by a change of variables.
• But the initial conditions in Black Scholes lead to the normal distribution as a solution (i.e. not trig functions)! – vonjd May 11 '14 at 11:33
http://www.pinoybix.org/2015/08/answers-in-integral-calculus-part1.html

MCQs in Integral Calculus Part I - Answers
Answer key for the compiled MCQs in Integral Calculus, Part 1 of the series, as one topic in Engineering Mathematics in the ECE Board Exam.
Below is the answer key for the Multiple Choice Questions in Integral Calculus - MCQs Part 1.
1. A. (1/12)(3t – 1)^4 + C
2. D. ln 2
3. A. (1/4) sin (2x^2 + 7) + C
4. C. (7x^4 / 4) + (4x^3 / 3) + C
5. C. 0.0417
6. D. 0.533
7. A. 0.2
8. D. 5π/32
9. A. 0.456
10. B. 0.022
11. B. 35π/768
12. D. 0.305
13. A. (y / 2) + (sin 2y / 4) + C
14. A. -2√2 cos (x/2) + C
15. B. 0.293
16. B. 1
17. A. 2.0
18. A. (e^(sin 2x) / 2) + C
19. A. sin x + C
20. D. ln (e^x + 1)^2 – x + C
21. D. 1/3
22. D. 40
23. A. 2/3
24. B. 1.33 sq. units
25. B. 4 sq. units
26. C. 32/3 sq. units
27. A. 88/3 sq. units
28. B. 64/3 sq. units
29. A. 21.33 sq. units
30. A. 75 sq. units
31. A. 5.33 sq. units
32. D. 10.67 sq. units
33. A. 4.25 sq. units
34. D. 5.595 sq. units
35. D. 10.7 sq. units
36. A. 8 sq. units
37. D. 32/3 sq. units
38. C. a^2 sq. units
39. B. 0.5 from the x-axis and 0.4 from the y-axis
40. B. (0, 1.6)
41. B. (3/5, 3/4)
42. B. 4.6 units
43. A. 5.33
44. A. 355.3 cubic units
45. B. 2228.83 cubic units
46. B. 181 cubic units
47. D. 26.81 cubic units
48. C. 59.22 cubic units
49. D. 50.26 cubic units
50. B. 2.13
Online Questions and Answers in Integral Calculus Series
Following is the list of multiple choice questions in this brand new series:
Integral Calculus MCQs
PART 1: MCQs from Number 1 – 50 Answer key: PART I
PART 2: MCQs from Number 51 – 100 Answer key: PART II
https://hacobe.github.io/notes/0-1%20Knapsack.html

# 0-1 Knapsack
Suppose you have $$n$$ items, where the weight of the $$i$$th item is given by the integer weights[i] and the value of the $$i$$th item is given by the integer values[i]. You also have a knapsack that can hold any number of items as long as their total weight is less than or equal to weight $$W$$. Write a function to return the maximum total value that can be attained for items placed in the knapsack.
## Brute force
You can either include the last item in the knapsack, if its weight does not exceed the knapsack's remaining capacity, or leave it out. In the worst case, you have 2 choices for each item, so the time complexity is $$O(2^n)$$. The space complexity is $$O(n)$$ for the recursion stack.
```python
def knapSack(W, wt, val, n):
    if n == 0 or W == 0:
        return 0
    value_if_last_item_not_included = knapSack(W, wt, val, n-1)
    if wt[n-1] > W:
        return value_if_last_item_not_included
    value_if_last_item_included = (
        val[n-1] + knapSack(W-wt[n-1], wt, val, n-1))
    return max(value_if_last_item_not_included, value_if_last_item_included)
```
## Memoization
In the worst case, each item has weight 1 and the time complexity is $$O(nW)$$. The space complexity is also $$O(nW)$$.
```python
def knapSack(W, wt, val, n, memo):
    if n == 0 or W == 0:
        return 0
    if (W, n-1) not in memo:
        memo[(W, n-1)] = knapSack(W, wt, val, n-1, memo)
    value_if_last_item_not_included = memo[(W, n-1)]
    if wt[n-1] > W:
        return value_if_last_item_not_included
    if (W-wt[n-1], n-1) not in memo:
        memo[(W-wt[n-1], n-1)] = knapSack(W-wt[n-1], wt, val, n-1, memo)
    value_if_last_item_included = val[n-1] + memo[(W-wt[n-1], n-1)]
    return max(value_if_last_item_not_included, value_if_last_item_included)
```
## Tabulation
As with memoization, the time complexity is $$O(nW)$$ and the space complexity is $$O(nW)$$.
```python
def knapSack(W, wt, val, n):
    if n == 0 or W == 0:
        return 0
    tab = [[0 for _ in range(W+1)] for _ in range(n+1)]
    for m in range(1, n+1):
        for w in range(1, W+1):
            if wt[m-1] > w:
                tab[m][w] = tab[m-1][w]
            else:
                tab[m][w] = max(tab[m-1][w], val[m-1] + tab[m-1][w-wt[m-1]])
    return tab[n][W]
```
You can get the space complexity down to $$O(W)$$, because for each capacity $$w$$ we only need the results from the previous item count $$n-1$$, not from all $$k < n$$.
```python
def knapSack(W, wt, val, n):
    if n == 0 or W == 0:
        return 0
    tab = [0 for _ in range(W+1)]
    for m in range(1, n+1):
        # iterate w downward so each item is used at most once
        for w in range(W, -1, -1):
            if wt[m-1] <= w:
                tab[w] = max(tab[w], val[m-1] + tab[w-wt[m-1]])
    return tab[W]
```
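As a quick end-to-end check of the recurrence, here is a self-contained run of the $$O(W)$$-space version on a classic instance (the weights and values are my own illustrative numbers):

```python
def knapsack_1d(W, wt, val):
    # Bottom-up 0-1 knapsack with O(W) extra space; w must decrease
    # so that each item is used at most once per update pass.
    tab = [0] * (W + 1)
    for weight, value in zip(wt, val):
        for w in range(W, weight - 1, -1):
            tab[w] = max(tab[w], value + tab[w - weight])
    return tab[W]

print(knapsack_1d(50, [10, 20, 30], [60, 100, 120]))   # 220 (items 2 and 3)
```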
http://www.ncatlab.org/nlab/show/essentially+surjective+(infinity%2C1)-functor

nLab essentially surjective (infinity,1)-functor
Definition
An $(\infty,1)$-functor $F : C \to D$ is essentially surjective if, when modeled as a functor of simplicially enriched categories, the induced functor
$h F_0 : h C_0 \to h D_0$
of ordinary categories is essentially surjective.
Properties
An (∞,1)-functor which is both essentially surjective as well as full and faithful (∞,1)-functor is precisely an equivalence of (∞,1)-categories.
Revised on May 11, 2012 11:59:47 by Urs Schreiber (82.169.65.155)
https://support.bioconductor.org/p/p133384/

Error in MAE experiment regarding the replacement of internal assays with different number of features
svlachavas
Dear Bioconductor community,
while trying to utilize the MultiAssayExperiment data container, along with a specific queried TCGA dataset, to perform some initial filtering and data cleaning prior downstream analysis, I have encountered the following error when tried to add an additional omics layer to the MAE object:
```r
library(MultiAssayExperiment)
library(TCGAutils)
library(UpSetR)
library(DESeq2)

luad.final <- curatedTCGAData(
  diseaseCode = "LUAD",
  assays = c(
    "RPPAArray", "Mutation",
    "RNASeq2GeneNorm",  # here perhaps the raw RSEM counts
    "GISTIC_ThresholdedByGene"
  ),
  dry.run = FALSE
)
luad.final
```
A MultiAssayExperiment object of 4 listed
experiments with user-defined names and respective classes.
Containing an ExperimentList class object of length 4:
[1] LUAD_GISTIC_ThresholdedByGene-20160128: SummarizedExperiment with 24776 rows and 181 columns
[2] LUAD_RNASeq2GeneNorm-20160128: SummarizedExperiment with 20501 rows and 146 columns
[3] LUAD_RPPAArray-20160128: SummarizedExperiment with 223 rows and 181 columns
[4] LUAD_Mutation-20160128_simplified: RangedSummarizedExperiment with 22929 rows and 181 columns
Features:
experiments() - obtain the ExperimentList instance
colData() - the primary/phenotype DataFrame
sampleMap() - the sample availability DFrame
$, [, [[ - extract colData columns, subset, or experiment
*Format() - convert into a long or wide DataFrame
assays() - convert ExperimentList to a SimpleList of matrices

```r
# Isolate the RNA-seq omics assay to perform normalization
rna.seq <- getWithColData(luad.final, 2L)
count.dat <- assay(rna.seq)
pheno.dat <- colData(rna.seq)

dds <- DESeqDataSetFromMatrix(countData = count.dat,
                              colData = pheno.dat,
                              design = ~ 1)
dds.norm <- vst(dds)

# a small non-specific "intensity" filtering procedure to remove
# unexpressed features
NotExpressed <- apply(assay(dds.norm), MARGIN = 2, function(z) {
  dens <- density(z)
  expr.cut <- dens$x[which.max(dens$y)]
  return(z < expr.cut)
})
expr.ps <- rowSums(!NotExpressed) > (ncol(NotExpressed) / 2)
dds.filtered <- dds.norm[expr.ps, ]

dim(dds.filtered)
#> [1] 17374   146
sum(duplicated(rownames(assay(dds.filtered))))
#> [1] 0

luad2 <- c(luad.final, list(newMatrix = assay(dds.filtered)),
           mapFrom = "RNASeq2GeneNorm")
#> Error: subscript contains invalid rownames
#> In addition: Warning message:
#> Assuming column order in the data provided matches the order in
#> 'mapFrom' experiment(s) colnames

# alternative approach to directly replace the expression matrix instead
# of adding an additional layer:
luad.final[["LUAD_RNASeq2GeneNorm-20160128"]] <- assay(dds.filtered)
#> harmonizing input: removing 146 sampleMap rows with 'colname' not in
#> colnames of experiments

# but then unfortunately all the relative columns from the RNA-Seq
# experiment are lost
luad.final
```

A MultiAssayExperiment object of 4 listed
experiments with user-defined names and respective classes.
Containing an ExperimentList class object of length 4:
[1] LUAD_GISTIC_ThresholdedByGene-20160128: SummarizedExperiment with 24776 rows and 181 columns
[2] LUAD_RNASeq2GeneNorm-20160128: matrix with 17374 rows and 0 columns
[3] LUAD_RPPAArray-20160128: SummarizedExperiment with 223 rows and 181 columns
[4] LUAD_Mutation-20160128_simplified: RangedSummarizedExperiment with 22929 rows and 181 columns
Features:
experiments() - obtain the ExperimentList instance
colData() - the primary/phenotype DataFrame
sampleMap() - the sample availability DFrame
$, [, [[ - extract colData columns, subset, or experiment
*Format() - convert into a long or wide DataFrame
assays() - convert ExperimentList to a SimpleList of matrices
sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17763)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets methods
[9] base
other attached packages:
[1] M3C_1.10.0 TxDb.Hsapiens.UCSC.hg19.knownGene_3.2.2
[3] GenomicFeatures_1.40.1 AnnotationDbi_1.50.3
[5] DESeq2_1.28.1 limma_3.44.3
[7] UpSetR_1.4.0 TCGAutils_1.8.1
[9] cBioPortalData_2.0.10 AnVIL_1.0.3
[13] MultiAssayExperiment_1.14.0 SummarizedExperiment_1.18.2
[15] DelayedArray_0.14.1 matrixStats_0.57.0
[17] Biobase_2.48.0 GenomicRanges_1.40.0
[19] GenomeInfoDb_1.24.2 IRanges_2.22.2
[21] S4Vectors_0.26.1 BiocGenerics_0.34.0
[25] tximportData_1.16.0
Thus:
1) Why is there an error with the c function? I have also checked, and there are no duplicated row names, only a smaller subset of the initial RNA-Seq features.
2) Is there an alternative way of replacing the original RNA-Seq dataset with the updated, transformed one, without losing any other information such as the sample phenotype information?
3) Finally, could a similar replacement be performed for additional assays, like the mutations and the CNAs, simultaneously, by providing only additional updated matrices with a reduced number of features?
Overall, my ultimate goal is to use the final MultiAssayExperiment for multi-omics integration, and one possible alternative is to convert each omics matrix layer into a long data frame and keep an additional data frame with all phenotype information...
Best,
Efstathios
Tags: MultiAssayExperiment, curatedTCGAData, TCGAutils
@marcel-ramos-7325
Hi Efstathios,
Next time, please be kind enough to provide a minimally reproducible example. It makes it easier for anyone to answer your question quickly.
1) You are having an error because the name provided and the name of the assays do not match. Use something like:
c(luad.final, list(newMatrix = assay(dds.filtered)), mapFrom = "LUAD_RNASeq2GeneNorm-20160128")
2) Yes, the alternative is to use the single bracket method ([) in conjunction with a List/list or ExperimentList of the same length as the subsetting vector (i.e., 1:2 in the following example):
luad.final[, , 1:2] <- list(A = matrix(...), B = SummarizedExperiment(...))
3) Yes, #2 will handle multiple replacements.
Best regards,
Marcel
Dear Marcel,
thank you very much for your answer and comments!! And please excuse me for the long code chunk; I just wanted to be certain that I had not omitted any "extra" line that might have caused the error!!
I was just confused, based on my previous post regarding MAE usage (https://support.bioconductor.org/p/134693/) and your comment on the c function for adding experiments, and I did not use the whole assay name (c(coad.updated, list(newMatrix = updated.matrix), mapFrom = "RNASeq2GeneNorm")).
I've made minor changes to the documentation and examples to make this more clear in MultiAssayExperiment 1.17.2.
http://mathhelpforum.com/pre-calculus/199138-proof-induction-print.html

# proof by induction
• May 23rd 2012, 07:53 AM
Tweety
proof by induction
prove by mathematical induction that, for all positive integers $n$,
$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$
assume that the summation formula is true for n=k
$\sum_{i=1}^k i^{2} = \frac{1}{6} k (k+1) (2k+1)$
so I need to show it is true for n = k + 1?
So do I substitute k + 1 into the formula and try to make it match the original? I'm really stuck from this part.
• May 23rd 2012, 08:06 AM
Plato
Re: proof by induction
Quote:
Originally Posted by Tweety
prove by mathematical induction, that for n
$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$
assume that the summation formula is true for n=k
$\sum_{i=1}^k i^{2} = \frac{1}{6} k (k+1) (2k+1)$
so must be true for n= k+1 ?
so do I put k+1 into the formula, and try and get it match the original? really stuck from this part,
Note $\sum_{i=1}^{n+1} i^{2} =\sum_{i=1}^n i^{2} +(n+1)^2$.
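For completeness, here is the algebra that the hint points to (adding $(n+1)^2$ to the induction hypothesis, written with $k$ in place of $n$):

$\sum_{i=1}^{k+1} i^2 = \frac{1}{6}k(k+1)(2k+1) + (k+1)^2 = \frac{k+1}{6}\left[k(2k+1) + 6(k+1)\right] = \frac{k+1}{6}\left(2k^2 + 7k + 6\right) = \frac{1}{6}(k+1)(k+2)(2k+3)$

Since $k+2 = (k+1)+1$ and $2k+3 = 2(k+1)+1$, this is the claimed formula with $n = k+1$, which completes the induction.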
https://space.stackexchange.com/questions/35249/does-gps-spoofing-ever-come-from-space-how-are-spoofings-usually-detected

# Does GPS spoofing ever come from space? How are spoofings usually detected?
The BBC News article Study maps 'extensive Russian GPS spoofing' says:
(GPS spoofing) involves the state using strong radio signals to drown out reliable navigation data, says non-profit C4ADS.
The report by the think tank documents almost 10,000 separate GPS spoofing incidents conducted by Russia.
Most incidents affected ships, said C4ADS, but spoofing was also seen around airports and other locations.
C4ADS, or the Centre for Advanced Defence, is a research organisation that uses sophisticated data analysis techniques to investigate global security and conflict issues.
Its report drew on more than 12 months of work analysing Global Navigation Satellite Systems (GNSS) positioning data taken from several sources. These included:
• automatic route logging systems on ships
• low-earth satellite signals
• route histories taken from users of the Strava exercise app
• public reports of vessels, aircraft and vehicles going off course
The analysis showed Russia was "pioneering" the use of GPS spoofing techniques to "protect and promote its strategic interests", the report said.
Generally, said the research group, the spoofing was being done to deflect commercial drones from entering sensitive airspace.
The spoofing was concentrated around 10 key locations including the Crimea, Syria, as well as ports and airports in Russia.
Question: Does GPS spoofing ever come from space? A directional antenna system, or three antennas + three receivers could probably work out the direction of the incoming signal, but is this really how spoofings are detected?
Possibly related (I can't tell) GPS Receiver Autonomous Integrity Monitoring (RAIM) - parity space method
• I do not understand exactly what the question is about. The interferer acts locally. Within the line of sight. For example hsto.org/webt/ru/v1/el/ruv1el4dtx3nxvan4e3xeyfuvxa.jpeg – A. Rumlin Apr 3 '19 at 8:52
• An article about the work of this system: "Losses in the antenna of this device lead to a fairly large system noise figure (System Noise Floor). With the open sky, I saw only 40 dB with a little. And this is one of the signs of the coming of the demon, observed on ordinary mobile devices". habr.com/ru/post/337608 – A. Rumlin Apr 4 '19 at 5:54
Spoofing from a satellite would need a substantial amount of power, especially if you want to use a GEO satellite. Doing it from LEO would potentially be lower power than the GPS sats themselves but pose difficulties denying an area long enough to be useful, and make locating the cause easier by matching complaints against orbital elements. It would certainly be against FCC rules (if operating inside the US) and upset an organization with both global reach and ASAT capability.
More normally GPS jamming happens from a portable transmitter that just needs to drown out enough satellite signals within a kilometre or so to prevent a fix. I suspect finding those just involves looking at things like the Strava data sets, finding the circular holes and then deleting the lakes.
More complex jamming transmits strong fake satellite signals, and causes receivers to appear to teleport, so it is again detectable in data sets by looking at impossible speeds, or from complaints from users who missed the left turn at Albuquerque.
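The "impossible speeds" check is simple to sketch. The following hypothetical Python filter (illustrative, not taken from any of the cited reports) walks a list of (timestamp_s, lat, lon) fixes and flags legs whose implied speed is implausible:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def teleport_legs(fixes, max_kmh=200.0):
    """Return indices of legs whose implied speed is impossible for a ship or drone."""
    flagged = []
    for i in range(1, len(fixes)):
        t0, lat0, lon0 = fixes[i - 1]
        t1, lat1, lon1 = fixes[i]
        hours = (t1 - t0) / 3600.0
        if hours <= 0:
            continue
        speed = haversine_km(lat0, lon0, lat1, lon1) / hours
        if speed > max_kmh:
            flagged.append(i)
    return flagged

# Illustrative track: the vessel "teleports" tens of km in one minute on leg 2.
track = [(0, 44.56, 38.08), (60, 44.56, 38.09), (120, 44.21, 38.30), (180, 44.21, 38.31)]
print(teleport_legs(track))  # [2]
```

Real pipelines would also have to reconcile logging gaps and genuine data errors before calling anything spoofing.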
If you have the antenna real estate you can get multi antenna systems that look for low angle signals and flag up jammer warnings or shape the sensitivity to suppress them.
• Thanks, I'd thought that "spoofing" proper was only the second kind - resulting in real fixes that produced an apparently valid but incorrect position being reported. (spoof: to deceive or falsify) – uhoh Apr 3 '19 at 13:12
First, on the practical side, it is most likely that GPS spoofing will come from a terrestrial spoofing source rather than a LEO, much less a rogue GEO. This allows several exploitable factors to detect spoofed signals. To answer your question(s), no, they are unlikely to come from space, and second, direction finding is one of the possibilities.
• Direction finding - using something like a CRPA antenna, can tackle both jamming and spoofing, and so a directional approach is often the baseline preference. On the plus side, DF approaches rarely make changes to the receiver design (they just add additional front-end layers). This makes DF very attractive. It tackles jamming AND spoofing by creating a "spatial filter" via beamforming, 'steering' the beam in the direction of the GPS satellites, accepting in signals that only come from a direction it expects, and cancelling out signals from unlikely directions. Remember, that GPS receivers usually also hold an almanac of GPS satellite ephemerides so it has a rough idea of what the direction of each GPS satellite is already in - useful for a first-run initialisation.
More on direction-finding approaches here!
Other methods also include using and comparing the encrypted P(Y) code (a signal spoofer is unlikely to know the code sequence as it is kept secret). There is also work in signal-based authentication. Check out Logan Scott's work, on CHIMERA. In a nutshell of how it works (all credits to Logan Scott and AFRL):
1. CHIMERA inserts digitally encrypted signatures and watermarks them within the GPS L1C signal.
2. After a slight delay (6 seconds), the GPS satellite reveals the keys that generate those encrypted watermarks.
3. Every 3 minutes, the entire system then changes the key.
4. Since the receiver has already recorded the signal with its watermarks before the key is sent, spoofers cannot know the correct key ahead of time, in time to insert correct watermarks of their own.
5. This means that any spoofed signal can be identified, because the subsequent key will not match up with the spoofed watermarks, or there will be no watermarks at all.
More on CHIMERA here!
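The delayed-key trick is essentially TESLA-style broadcast authentication. A toy Python model (a deliberate simplification using HMAC, not the actual CHIMERA signal design) shows why recording the signal before the key is revealed defeats a spoofer:

```python
import hmac
import hashlib

def watermark(key: bytes, payload: bytes) -> bytes:
    """Keyed watermark over a navigation payload (HMAC stands in for the real scheme)."""
    return hmac.new(key, payload, hashlib.sha256).digest()

# Satellite side: transmit payload + watermark now, reveal the key only later.
key = b"secret-epoch-key"
payload = b"nav-message"
transmitted = (payload, watermark(key, payload))

# Receiver side: record first, verify once the key is disclosed.
recorded_payload, recorded_mark = transmitted
revealed_key = key  # arrives after the delay
ok = hmac.compare_digest(watermark(revealed_key, recorded_payload), recorded_mark)

# A spoofer transmitting during the delay cannot know the key yet,
# so its forged watermark will not verify under the revealed key.
forged_payload, forged_mark = b"fake-nav-message", b"\x00" * 32
bad = hmac.compare_digest(watermark(revealed_key, forged_payload), forged_mark)
print(ok, bad)  # True False
```

The security rests on the receiver having recorded the signal before the key release, exactly as in steps 2-5 above.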
GPS or any GNSS system spoofing is a real, and likely to-be-heavily-used threat in any period of hostility or tension and we are bound to see more techniques used to identify spoofed attacks in the years to come.
• +1 Wow thank you for a really great answer! The images in the "More on DF approaches" link don't work for me, but from the text and your mention of the almanac I can imagine that the formed beam is complex and ideally would have a maximum in the predicted direction of each satellite and a minimum near the horizon. Maybe another, fancier feature might determine the direction of signals near the horizon and put a minimum there. – uhoh Mar 12 at 3:50
• Thanks for the great feedback! I recently attended a GPS conference a few months ago and so I was sharing about what I learnt speaking to the real professionals there! :) – Sam Low Mar 12 at 14:05
Transmitters on GPS satellites are very weak (~60W) so by the time the signal reaches the Earth's surface, it is below the noise floor. That is; the signal is weaker than the normal level of background electromagnetic noise, making it hard to detect. This is why GPS signals can't usually be seen indoors without an external antenna, for example.
It is trivially easy to jam GPS signals because of the weak signal strength: all you need to do is generate a stronger signal in the same band. Even a small jammer powered from a car’s 12V socket can jam GPS in a radius tens of metres in size. It is not much more difficult to spoof them - either intentionally or accidentally. The L1 civilian band has no encryption or authentication, so a receiver will just accept whatever signal it finds.
Although space based spoofing is feasible, given how easy it is to do from the ground it seems entirely pointless going to the expense and trouble.
There are systems that can mitigate and detect interference (jamming or spoofing) - some use multiple antennae, some use a learned pattern of behaviour to detect anomalies. Receivers that can use multiple GNSS constellations (e.g. GLONASS uses slightly different frequencies to GPS) can also help.
• Could you expand on the comment about GPS signals being "way below the noise floor"? I always thought a signal needed to be above the noise floor in order to be parsed out of the noise. – Undo Apr 9 '19 at 5:06
• @Undo “The measure of the signal created from the sum of all the noise sources and unwanted signals within a measurement system”. I have updated the answer. – Darren Apr 9 '19 at 6:00
• Thanks for your answer! On the topic of GPS S/N, these are somewhat related (but don't directly address your points about noise): What exactly does C/No (dBHz) mean in u-Blox GPS data? and also How to detect potentially poor antenna placement from GPS data? – uhoh May 12 '19 at 0:13
• I've read that it is possible to parse it out of the noise floor with a good enough matched filter though, although I'm not too well versed with the entire filter design of it. Perhaps someone else can explain better :) – Sam Low Mar 12 at 2:11
https://wiki.net4sat.org/doku.php?id=opensand:emulated_satcom_features:physical:capacity:index

# Net4sat wiki
The capacity available on each link (Forward and Return in transparent mode, or Uplink and Downlink in regenerative mode) is determined in OpenSAND by two factors, the carrier bandwidth and the MODCODs used: the larger the carrier bandwidth on that link is, and/or the higher (more performing) the MODCODs used are, the larger the available capacity will be (and vice-versa).
Since the MODCOD used on each link may change dynamically (if VCM or ACM coding and modulation schemes are used), the capacity will not always be the same. However, the maximum and minimum values can be calculated.
Given a carrier symbol rate ($R$), the total carrier capacity ($C$) for a certain MODCOD ($m$) can be obtained as:
$$C = R \times modulated\_bits\_per\_symbol\left(m\right) \times coding\_rate\left(m\right)$$
The maximum carrier capacity is determined by the most performing MODCOD, whereas the minimum is determined by the most robust MODCOD, giving thus the range of the capacity of a given link.
In reality, however, the attainable data rate is smaller, for several reasons (protocol overhead, pilot symbols, etc.).
Coding rates in MODCOD names (for example QPSK 1/4) are not exact, since for simplicity they only specify the coding rate of the outer code. The real coding rate is slightly smaller when the inner code is considered.
Another way of determining the capacity $C$, considering the real coding rate, is using the following expression: $$C = BW_{effective} \times spectral\_efficiency\left(m\right)$$ where $BW_{effective} = R \times 1 sym^{-1}$ is the effective bandwidth (without considering the roll-off), and the spectral efficiency is a parameter of the MODCOD $m$.
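As a sanity check, both expressions are easy to evaluate. The sketch below uses nominal MODCOD parameters for illustration (the DVB-S2 QPSK 3/4 spectral efficiency of roughly 1.487 bit/symbol comes from the standard's tables, not from OpenSAND's configuration):

```python
def carrier_capacity_bps(symbol_rate_baud, bits_per_symbol, coding_rate):
    """First expression: C = R * modulated_bits_per_symbol(m) * coding_rate(m)."""
    return symbol_rate_baud * bits_per_symbol * coding_rate

# Nominal QPSK 3/4 (2 bits/symbol, outer coding rate 3/4) on a 20 Mbaud carrier:
nominal = carrier_capacity_bps(20e6, 2, 0.75)

# Second expression, using the MODCOD's real spectral efficiency
# (about 1.487 bit/symbol for DVB-S2 QPSK 3/4 once the inner code is counted):
effective = 20e6 * 1.487

print(round(nominal / 1e6, 2), round(effective / 1e6, 2))  # 30.0 29.74
```

The small difference between the two values illustrates the point above: the nominal coding rate slightly overstates the attainable capacity.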
## OpenSAND Exploitation
Since the $modulated\_bits\_per\_symbol$ and $coding\_rate$ values are determined by the MODCOD (fixed by the standard), the user can only change the carrier symbol rate to adjust the capacity (other than enabling/disabling different MODCODs). Refer to the carrier page for more information.
The maximum and minimum capacities available are displayed on the Resource Configuration tab on the OpenSAND Manager, under each link configuration. The capacities are obtained for each category, adding the capacities of all carriers of that category.
Bear in mind that the data rate on the return link will never exceed the allocations for each terminal. On the forward link, there is no such limitation, so the total capacity should be attained without additional configuration.
### Probes
There are several probes that concern the link capacities in OpenSAND, albeit with different units depending on the link, and only available on the gateways.
On the forward link, capacity probes are available for each spot and category (Standard and Premium). For each category, two probes are available for each carrier (and two for the sum of all carriers) inside the Down_Forward_capacity section:
• Available: The available capacity per frame (expressed in symbols per frame).
• Remaining: The remaining (not used) capacity per frame (expressed in symbols per frame).
On the return link, capacity probes are available for each spot and category (Standard and Premium). For each category, two probes are available for each carrier (and two for the sum of all carriers) inside the Up_Return_capacity section. In the section Up_Return_total_capacity the same two probes are also available for the sum of all categories:
• Available: The available capacity (expressed in symbols per second) in one Superframe.
• Remaining: The remaining (not used) capacity (expressed in symbols per second) in one Superframe.
## OpenSAND Software Design
Link capacities are calculated and used when scheduling the different protocols, and their implementation is detailed on each page. Please refer to the DVB-S2, DVB-RCS and DVB-RCS2 pages for more information.
https://www.amyschlesener.com/posts/2013/11/the-magic-of-phpmailer/

The magic of PHPMailer
November 10, 2013 - 2 minutes
Educational
Recently I updated the website for my club, the Association for Women in Computing at WWU, to send e-mails via PHPMailer. We have a contact form with the usual subject, content, name, etc. and want to send all that to our e-mail address. The previous e-mail system went through the school’s network and was very much broken, so it was in need of a good fix.
It was easier than I thought it would be, mostly just building some basic PHPMailer code. Add the script to your site, create a new instance of PHPMailer, and set the various properties on that instance. For example, if your mailer is $mail, you set the subject of the email like so: $mail->Subject ="E-mail subject here";
Then once you’re done adding all the properties, you call $mail->Send(), and you’re good to go! For the server, I ended up just creating a new Gmail e-mail address so I could use Gmail as a host. So our host property looked something like this: $mail->Host = "smtp.gmail.com";
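Pieced together, a minimal handler for such a contact form might look like the sketch below. It uses PHPMailer 6's current API (the 2013-era code in the post used $mail->Send() instead of send()), and the Gmail address and password are placeholders, not the club's real credentials:

```php
<?php
use PHPMailer\PHPMailer\PHPMailer;

require 'vendor/autoload.php';

$mail = new PHPMailer(true);            // throw exceptions on failure
$mail->isSMTP();
$mail->Host       = "smtp.gmail.com";
$mail->SMTPAuth   = true;
$mail->Username   = "awc.contact@gmail.com";  // placeholder account
$mail->Password   = "app-specific-password";  // placeholder credential
$mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
$mail->Port       = 587;

// Forward the contact form's fields to the club's own address.
$mail->setFrom("awc.contact@gmail.com", $_POST["name"]);
$mail->addAddress("awc.contact@gmail.com");
$mail->Subject = $_POST["subject"];
$mail->Body    = $_POST["content"];

$mail->send();
```

In production you would also validate and sanitize the form fields before handing them to the mailer.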
http://stats.stackexchange.com/questions/46597/dependent-bernoulli-trials

# Dependent Bernoulli trials
The probability of a sequence of n independent Bernoulli trials can be easily expressed as $$p(x_1,...,x_n|p_1,...,p_n)=\prod_{i=1}^np_i^{x_i}(1-p_i)^{1-x_i}$$ but what if the trials are not independent?
How would one express the probability to capture the dependence?
What is the dependence? E.g. Summing over the N trials must equal K? There must be an even number of 'true' results, etc. Once you define the kind of dependence it will be possible to write down the actual likelihood more concretely. – Nick Dec 27 '12 at 17:25
By the chain rule of probability, with no independence assumption needed: $$p(x_1,...,x_n) = \prod_i p(X_i=x_i|X_1=x_1,...,X_{i-1}=x_{i-1}).$$
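As one concrete instance of this factorization, suppose each trial's success probability depends only on the previous outcome (a two-state Markov dependence, chosen here purely for illustration). The chain-rule product is then easy to evaluate:

```python
def markov_bernoulli_prob(x, p_first, p_after_success, p_after_failure):
    """P(x_1..x_n) when P(X_i = 1) depends on the previous outcome x_{i-1}."""
    prob = 1.0
    p = p_first
    for xi in x:
        # Multiply in p(X_i = x_i | history), which here depends only on x_{i-1}.
        prob *= p if xi == 1 else (1.0 - p)
        p = p_after_success if xi == 1 else p_after_failure
    return prob

# P(1, 1, 0) = 0.5 * 0.8 * (1 - 0.8) = 0.08
print(round(markov_bernoulli_prob([1, 1, 0], 0.5, 0.8, 0.2), 10))  # 0.08
```

With p_after_success == p_after_failure == p_first this reduces to the independent product in the question.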
https://stats.stackexchange.com/questions/192915/does-the-umvue-have-to-be-a-minimal-sufficient-statistic

# Does the UMVUE have to be a minimal sufficient statistic?
I'm studying point estimation and I have found this question that seems pretty tricky to me.
If $T$ is a minimal sufficient statistic for $\theta$ with $E(T) = \tau(\theta)$, can you say that $T$ is also the UMVUE for $\tau(\theta)$?
Rao-Blackwell theorem states that an unbiased estimator $T$ for $\tau(\theta)$ can be improved using a sufficient statistic $U$ for $\theta$, i.e. $T^*=E[T|U]$ has a variance lower than the one of $T$.
Lehmann-Scheffé theorem states that $T$ must be a function of a complete sufficient statistic in order to be the unique UMVUE for $\tau(\theta)$.
But what about the fact that $T$ is minimal sufficient? Does this provide some results about $T$?
• Hold on a second, are you looking for MVUE or the UMVUE? The 'U' in UMVUE stands for Unique, so saying you are looking for the unique UMVUE is a little confusing. Jan 29 '16 at 13:03
• @JohnK Oh sorry, in our notation the U stays for Unbiased. The uniqueness derives from Lehmann-Scheffé theorem. Jan 29 '16 at 14:24
• Here is what I think, the MVUE definitely has to be a sufficient statistic, otherwise you can always get a better estimator by applying the Rao-Blackwell step. The same applies to a minimial sufficient statistic as by definition it is a function of all other sufficient statistics. Jan 29 '16 at 14:28
• Thus, we can state that being MVUE implies being a minimal sufficient statistic? Jan 30 '16 at 14:52
• @JohnK The 'U' as I know stands for 'uniform' and not 'unique'. UMVUE is always unique whenever it exists. You know this of course, but this wasn't conveyed properly I feel. Jun 27 '18 at 11:59
For a concrete example, consider $$X\sim U(\theta,\theta+1)$$ where $$\theta$$ is the parameter of interest. Here $$X$$ is minimal sufficient for $$\theta$$ but $$X$$ is not the UMVUE of its expectation. For details see this post.
https://gmatclub.com/forum/at-1-pm-ship-a-leaves-port-traveling-15-mph-three-hours-later-ship-241354.html
# At 1 PM, Ship A leaves port traveling 15 mph. Three hours later, Ship
Math Expert (Bunuel), 28 May 2017:
At 1 PM, Ship A leaves port traveling 15 mph. Three hours later, Ship B leaves the same port in the same direction traveling 25 mph. At what time does Ship B pass Ship A?
(A) 8:30 PM
(B) 8:35 PM
(C) 9 PM
(D) 9:15 PM
(E) 9:30 PM
Manager (MvArrow), 28 May 2017:
We can also use the GAP approach.
Ship A in 3 hrs travels 45 miles.
Ship B has to close that gap.
Rate(B) - Rate(A) = 25 - 15 = 10 mph
Then we can set up the following equation:
45 = 10T
T = 45/10 = 9/2, hence 4.5 hrs
So the time required is 1 PM + 3 hrs + 4.5 hrs = 8:30 PM, therefore A.
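As a sanity check on the arithmetic above (an illustrative sketch, not part of the original thread), the gap approach translates directly into code:

```python
def catch_up_time(head_start_h, v_lead, v_chase):
    """Hours for the chaser to close the gap built up during the head start."""
    gap = v_lead * head_start_h      # 15 mph * 3 h = 45 miles
    closing = v_chase - v_lead       # 25 - 15 = 10 mph
    return gap / closing

t = catch_up_time(3, 15, 25)
print(t)  # 4.5
# Ship B leaves at 4 PM, so it passes Ship A at 4 PM + 4.5 h = 8:30 PM.
```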
##### General Discussion
Director (Luckisnoexcuse), 28 May 2017:
Ship A travels 112.5 miles (15 × 7.5) in 7.5 hrs (from 1 PM to 8:30 PM); Ship B travels 112.5 miles (25 × 4.5) in 4.5 hrs.
Hence option A.
Director (mohshu), 29 May 2017:
MvArrow wrote: (the GAP approach quoted above)
Gud one..
+1 for this method
DS Forum Moderator, 29 May 2017:
Let them meet 'n' hours after 4 pm. From 1 to 4 pm, ship A has already travelled = 15*3 = 45 miles
So in 'n' hours, A has total travelled = (45 + 15n) miles
and B has travelled = 25n miles
Since they meet at that time, distances should be equal OR 45+15n = 25n
Solving we get n=4.5
So the ships meet 4.5 hours after 4 pm, OR at 8.30 pm
BSchool Forum Moderator, 29 May 2017:
Since ship A leaves three hours early and travels at 15 mph, it will have traveled 45 miles.
Now the speed of Ship B (which travels at 25 mph) relative to Ship A is 10 miles/hour.
Since the distance it needs to cover is 45 miles, the time taken is given by the formula
Time taken = Distance to cover / Relative speed
= $$\frac{45}{10}$$ = 4.5 hours.
Since the ship B leaves port at 4 PM, it would meet Ship A exactly 4.5 hours later(at 8:30 PM) (Option A)
Manager (MvArrow), 29 May 2017:
(replying to mohshu's +1 above) Thank you very much!
Intern, 30 May 2017:
Option A: 8:30 PM
A departs at 1 PM at 15 mph.
B departs at 4 PM (3 hrs later) at 25 mph. By the time B starts, A will have travelled 45 miles (15 mph × 3 h).
So, as A is moving away from B, the relative (closing) rate = 25 - 15 = 10 mph, and the distance to be covered is 45 miles.
Time = 45/10 hrs = 4.5
So, 4 PM + 4.5 hrs = 8:30 PM.
Target Test Prep Representative (Scott Woodbury-Stewart), 01 Jun 2017:
Bunuel wrote: (the original question above)
We can let the time traveled by Ship A = t + 3 and the time traveled by Ship B = t.
Since Ship A is traveling at a rate of 15 mph, the distance of Ship A = 15(t + 3) = 15t + 45, and since Ship B is traveling at a rate of 25 mph, the distance of Ship B = 25t.
Let’s now determine t:
Ship A’s distance = Ship B’s distance
15t + 45 = 25t
45 = 10t
45/10 = 9/2 = 4.5 hours = t
Recall that Ship B left port at 1 p.m + 3 hours = 4 p.m. Thus, Ship B passes Ship A at 4 p.m + 4.5 hours = 8:30 p.m.
Director, 13 Aug 2017:
Bunuel wrote: (the original question above)
15(T + 3) = 25T
15T + 45 = 25T
45 = 10T
T = 4.5
4 PM + 4 hr 30 min = 8:30 PM
A
https://www.vedantu.com/question-answer/in-an-ap-if-the-mth-term-is-n-and-nth-term-is-m-class-11-maths-cbse-5f5f90589427543f91cfe39b

# In an AP, if the ${m^{th}}$ term is n and the ${n^{th}}$ term is m, then find the ${p^{th}}$ term ($m \ne n$).
Hint: From the question we will get two equations. Subtracting them gives the value of d, and substituting d back into one equation gives the first term of the sequence. Substituting both into the general formula gives the answer.
We know that in arithmetic progression, the general formula is,
${a_n} = a + (n - 1)d$
Where, ${a_n}$ = ${n^{th}}$ term of AP
a = first term of the AP
d = common difference in AP
Thus ${m^{th}}$ term, ${a_m} = a + (m - 1)d$
In the question it is given that ${m^{th}}$ term is n
i.e. $a + (m - 1)d = n$………………….(1)
And ${n^{th}}$ term, ${a_n} = a + (n - 1)d$
It is also given in the question that ${n^{th}}$ term is m
I.e. $a + (n - 1)d = m$………………..(2)
Subtracting equation (2) from equation (1) we find the common difference of the arithmetic series.
Hence, $[a + (m - 1)d] - [a + (n - 1)d] = n - m$
$\Rightarrow a + (m - 1)d - a - (n - 1)d = n - m$
Cancelling a and –a in left hand side we get,
$(m - 1)d - (n - 1)d = n - m$
Taking d common in left hand side we get,
$(m - 1 - n + 1)d = n - m$
Cancelling -1 and +1 we get,
$(m - n)d = n - m$
$\Rightarrow d = \dfrac{{n - m}}{{m - n}}$
Multiplying -1 on both side we get,
$d = - 1$
Putting value of d in equation (2) we get,
$a + (n - 1)d = m$
$\Rightarrow a + (n - 1)\left( { - 1} \right) = m$
$\Rightarrow a - n + 1 = m$
$\Rightarrow a = m + n - 1$
We got the value of a and d.
For the ${p^{th}}$ term,
We will use the general formula of AP i.e.
${a_n} = a + (n - 1)d$
Putting n = p, a = m+n-1 and d = -1 in the above formula we get,
${a_p} = \left( {m + n - 1} \right) + \left( {p - 1} \right)\left( { - 1} \right)$
Expanding the right hand side of the equation we get,
${a_p} = m + n - 1 - p + 1$
Cancelling -1 and +1 in the right hand side we get,
${a_p} = m + n - p$
Thus the ${p^{th}}$ term is ${a_p} = m + n - p$.
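The result can be spot-checked numerically. The sketch below (illustrative values, not part of the original solution) builds the AP with a = m + n − 1 and d = −1 and confirms all three terms:

```python
def ap_term(a, d, k):
    """k-th term of an AP: a + (k - 1) * d."""
    return a + (k - 1) * d

m, n, p = 7, 3, 5         # arbitrary test values with m != n
a, d = m + n - 1, -1      # first term and common difference derived above

assert ap_term(a, d, m) == n   # m-th term is n
assert ap_term(a, d, n) == m   # n-th term is m
print(ap_term(a, d, p))        # m + n - p = 5
```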
Note: An arithmetic progression (arithmetic sequence) is a sequence in which the difference between any two consecutive terms is constant.
You can also subtract equation (1) from equation (2) to get the value of d.
Be cautious while doing the equations, because mistakes in minus and plus signs can change the whole answer.
https://binfalse.de/2013/01/11/sync-the-clock-wo-ntp/

The network time protocol (NTP) is a really smart and useful protocol to synchronize the time of your systems, but even if we are in two-thousand-whatever there are reasons why you need to seek alternatives...
You may now have some kind of »what the [cussword of your choice]« in mind, but I have just been in an ugly situation. All UDP traffic is dropped and I don't have permissions to adjust the firewall. And you might have heard about the consequences of time differences between servers. Long story short, there is a good solution to sync the time via TCP, using the Time Protocol and a tool called rdate.
## Time Master
First of all you need another server having a correct time (e.g. NTP sync'ed), which can be reached at port 37. Let's call this server $MASTER . To enable the Time Protocol on $MASTER you have to enable the time service in (x)inetd. For instance, to enable the TCP service for a current xinetd you could create a file in /etc/xinetd.d/time with the following contents:
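The file contents were lost in extraction; a sketch based on xinetd's stock time-stream service definition (details may differ on your distribution) would be:

```
service time
{
        disable         = no
        type            = INTERNAL
        id              = time-stream
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
}
```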
Such a file may already exist, so you just have to change the value of the disable -key to no . Still using inetd? I'm sure you'll find your way to enable the time server on your system :)
## Time Slave
On the client, which is not allowed to use NTP (wtfh!?), you need to install rdate:
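The install command did not survive extraction; on a Debian-like system (an assumption on my part) it would be something like:

```shell
apt-get install rdate
```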
Just call the following command to synchronize the time of the client with $MASTER : Since rdate immediately corrects the time of your system you need to be root to run this command. Finally, to readjust the time periodically you might want to install a cronjob. Beeing root call crontab -e to edit root's crontab and append a line like the following: This will synchronize the time of your client with the time of $MASTER every six hours. (Don't forget to substitute \$MASTER using your desired server IP or DNS.)
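The two snippets referenced in this paragraph were lost in extraction; reconstructed as a sketch (exact rdate flags vary between implementations):

```shell
# one-off synchronization with the time server on TCP port 37 (run as root)
rdate $MASTER

# root's crontab entry: re-sync every six hours
0 */6 * * * rdate $MASTER
```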
## Notes
Last but not least I want you to be aware that this workaround just keeps the difference in time between both systems below 0.5 secs. Compared to NTP, that's beyond all doubt very poor. Nevertheless, 0.5 secs delay is much better than several minutes or even hours!
If it is also not permitted to speak to port 37, you need to tunnel your connections, or you have to tell the time server to listen on another, more common port (e.g. 80, 443, or 993), as long as it is not already allocated by another service.
https://physics.stackexchange.com/questions/79659/how-does-volume-of-water-in-closed-system-relate-to-humidity-at-25c-degrees/79667

# How does volume of water in closed system relate to humidity at 25(C) degrees
If I have a closed system with a volume of 100L, at ~24(C) or ~75(F) degrees, what volume must be filled with water to reach a humidity of 90%?
I imagined if you fill it 100% with water, humidity is 100%? But I also imagine that with anything less than 100% filled with water, temperature becomes a limiting factor, and 90% humidity might not be achievable without raising the temperature.
I checked out a max humidity ratio table to see that at 25(C) the saturation pressure is 3130 pa, with a maximum humidity ratio of 0.019826 -kg(w)/kg(a) and that humidity ratio can be expressed with the partial pressure of water vapor:
x = 0.62198 pw / (pa - pw)
pw = partial pressure of water vapor in moist air (Pa, psi)
pa = atmospheric pressure of moist air (Pa, psi)
The maximum amount of water vapor in the air is achieved when pw = pws the saturation pressure of water vapor at the actual temperature.
But this is as far as I got, as my understanding of humidity (relative, specific, etc.) is negligible... any suggestions or just a rough estimate?
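No answer was posted here, but a rough ideal-gas estimate (my own sketch, using the 3130 Pa saturation pressure quoted above and assuming the sealed 100 L of air reaches equilibrium at ~25 °C and ~1 atm) suggests that only about 2 mL of liquid water needs to evaporate — far less than filling the chamber:

```python
# Mass of water vapour needed for 90 % relative humidity in 100 L at ~25 degC.
R = 8.314          # J/(mol*K), gas constant
T = 298.15         # K (25 degC)
V = 0.100          # m^3 (100 L)
M = 18.015e-3      # kg/mol, molar mass of water
p_ws = 3130.0      # Pa, saturation pressure from the table quoted above
p_w = 0.90 * p_ws  # Pa, partial pressure of vapour at 90 % RH

n_mol = p_w * V / (R * T)  # ideal gas law: n = pV / (RT)
mass_g = n_mol * M * 1e3   # roughly 2 g of vapour, i.e. about 2 mL of liquid
```

So temperature (via the saturation pressure) sets the ceiling on humidity, but the liquid volume needed is tiny: you only fail to reach 90 % RH if less than roughly 2 mL of water is available to evaporate.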
https://wumbo.net/notation/triangle/

# Triangle Notation
A triangle is denoted using the triangle symbol followed by three letters that represent the points of the triangle. For example, a triangle with vertices $A$, $B$, and $C$ is written as $\triangle ABC$.
In plain language, the expression $\triangle ABC$ can be read as the triangle formed by the three points $A$, $B$, and $C$.
https://ai.stackexchange.com/questions/4085/how-can-policy-gradients-be-applied-in-the-case-of-multiple-continuous-actions

# How can policy gradients be applied in the case of multiple continuous actions?
Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) are two cutting-edge policy gradient algorithms.
When using a single continuous action, normally, you would use some probability distribution (for example, Gaussian) for the loss function. The rough version is:
$$L(\theta) = \log(P(a_1)) A,$$
where $$A$$ is the advantage of rewards, and $$P(a_1)$$ is characterized by $$\mu$$ and $$\sigma^2$$ that come out of a neural network, as in the Pendulum environment here: https://github.com/leomzhong/DeepReinforcementLearningCourse/blob/69e573cd88faec7e9cf900da8eeef08c57dec0f0/hw4/main.py.
The problem is that I cannot find any paper on 2+ continuous actions using policy gradients (not actor-critic methods that use a different approach by transferring gradient from Q-function).
Do you know how to do this using TRPO for 2 continuous actions in LunarLander environment?
Is following approach correct for policy gradient loss function?
$$L(\theta) = (\log P(a_1) + \log P(a_2))\, A$$
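If the two action dimensions are sampled independently (a diagonal Gaussian policy), the joint log-probability does decompose into the sum above, since the density factorizes. A small numeric check (my own illustration, not from either paper):

```python
import numpy as np

def gaussian_logpdf(a, mu, sigma):
    # elementwise log density of N(mu, sigma^2) evaluated at a
    return -0.5 * np.log(2.0 * np.pi * sigma ** 2) - (a - mu) ** 2 / (2.0 * sigma ** 2)

mu = np.array([0.3, -0.7])    # per-action means from the policy network
sigma = np.array([0.5, 1.2])  # per-action standard deviations
a = np.array([0.1, -0.2])     # a sampled 2-D continuous action

log_joint = gaussian_logpdf(a, mu, sigma).sum()  # log P(a1, a2)
log_sum = (gaussian_logpdf(a[0], mu[0], sigma[0])
           + gaussian_logpdf(a[1], mu[1], sigma[1]))  # log P(a1) + log P(a2)
assert np.isclose(log_joint, log_sum)  # identical for a diagonal Gaussian

A = 1.7                  # advantage estimate (made-up number)
loss = log_joint * A     # the surrogate objective proposed above
```

If the covariance matrix is not diagonal, the joint log-density no longer splits into a per-action sum — which is where a full multivariate treatment comes in.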
1 Answer
As you have said, actions chosen by actor-critic methods typically come from a normal distribution, and it is the agent's job to find the appropriate mean and standard deviation based on the current state. In many cases this one distribution is enough, because only one continuous action is required. However, as domains such as robotics become more integrated with AI, situations where two or more continuous actions are required are a growing problem.
There are 2 solutions to this problem: The first and most common is that for every continuous action, there is a separate agent learning its own 1-dimensional mean and standard deviation. Part of its state includes the actions of the other agents as well to give context of what the entire system is doing. We commonly do this in my lab and here is a paper which describes this approach with 3 actor-critic agents working together to move a robotic arm.
The second approach is to have one agent find a multivariate (usually normal) distribution of a policy. Although in theory this approach could have a more concise policy distribution by "rotating" the distribution based on the co-variance matrix, it means that all of the values of the co-variance matrix must be learned as well. This increases the number of values that must be learned for $n$ continuous outputs from $2n$ (means and stddevs) to $n+n^2$ ($n$ means and an $n \times n$ co-variance matrix). This drawback has made this approach less popular in the literature.
This is a more general answer but should help you and others on their related problems.
• Jaden thanks for great answer. 1. I tried multi-agent architecture, but it is not very efficient. Takes much longer to converge. 2. Now multivariate distribution seems obvious to me too, thank you. – Evalds Urtans Sep 22 '17 at 8:48
• Depending on the application and architecture (if it is a deep net), you can have the agents share low level features and then have them branch off into their own value functions. Additionally, having 1 critic and multiple actors is also a way to increase the architecture. – Jaden Travnik Sep 22 '17 at 13:14
• At the moment I would like to apply your suggestions to TRPO (just policy gradient methods), not actor-critic. I am not very confident in gradient transfer from critic to actor - in many implementations I have seen it looks like that it should not work even though it does converge. – Evalds Urtans Sep 25 '17 at 11:55
• Sorry for this noob question: How is this applied in actor-critic methods (where the actor can perform multiple simultaneous continuous actions), where the actor has the policy function and gets trained by policy gradient method? @JadenTravnik Can you please explain that in the answer under a new heading? – Gokul NC Feb 12 '18 at 7:13
https://www.zib.de/mathematics-calendar/event?oid=25834474

Friday, August 2, 2019 - 15:15
Weierstraß-Institut
Mohrenstr. 39, 10117 Berlin, Erhard-Schmidt-Hörsaal, Erdgeschoss
MATHEON Special Guest Lecture
In this talk, two classes of problems in large scale data analysis and their optimization algorithms will be discussed. The first class focuses on composite convex program problems, where I introduce algorithms including a regularized semi-smooth Newton method, a stochastic semi-smooth Newton method and a parallel subspace correction method. The second class is on optimization with orthogonality constraints, particularly on parallelizable approaches for linear eigenvalue problems and nonlinear eigenvalue problems, and quasi-Newton type methods. Numerical results of applications, e.g., electronic structure calculations, $l_1$-regularized logistic regression problems, Lasso problems and Hartree-Fock total energy minimization problems, will be highlighted.
submitted by bonetti (cecilia.bonetti@wias-berlin.de, 030 20372583)
https://www.physicsforums.com/threads/connected-sum-of-orientable-manifolds-is-orientable.513040/

# Connected Sum of Orientable Manifolds is Orientable
1. Jul 9, 2011
### Bacle
Hi, All:
I am trying to show that the connected sum of orientable manifolds M, M' is orientable, i.e., can be given an orientation. I am using the perspective of simplicial homology.

Consider the perspective of simplicial homology, for orientable manifolds M, M', glued about cycles C, C' respectively. The idea is that we can use the original orientations and then select orientations on C, C' so that they cancel each other out when glued together, and then the remaining orientations on (M-C) and (M'-C') remain the same. Still, I guess I am assuming that the manifolds are simplicial complexes; I don't know if we need any additional condition like, e.g., C^1 or higher.
Assume WLOG that M, M' are both connected. If an m-manifold M is orientable, this means that the top cycle -- call it m' -- can be assigned a coherent orientation, so that m' is a cycle that does not bound (since m is the highest dimension), i.e., the net boundary of m' cancels out, as in the simplest case of a loop with boundary a-a=0. This means m', which represents M itself, is a non-trivial cycle, which generates the top homology class. If your coefficient ring is Z, then the top homology will be Z; consider going n times about the cycle. Now, the key is that the two orientable manifolds can be glued so that, at the circle of gluing, the total boundary cancels out, and the resulting manifold M#M' is still orientable. As a specific example, consider a square a,b,c,d with arrows going all in the same direction, so that the net boundary is (b-a)+(c-b)+(d-c)+(a-d)=(b-b)+(a-a)+(c-c)+(d-d)=0. Now glue a second square a',b',c',d' along a common edge (say (b,c) with (b',c')), but reverse the orientation of the edge (b',c') in M when gluing, and notice how the simplex resulting from the gluing also has net boundary zero.

Now, the key general point is that, at the cycle C where we collapse M with M', we change the orientation of C in either M or M', so that, along the common cycle where you are doing the gluing, the respective boundaries cancel each other out, and the remaining orientations of M-C and M'-C' remain the same, so that M#M' is orientable.
Does this Work?
2. Jul 22, 2011
### lavinia
This seems right.
If the manifold is orientable then the boundary of the fundamental cycle minus an n-simplex is the boundary of that removed n-simplex with the induced orientation. In the connected sum the two boundaries of the two removed n-simplices cancel if you choose them to have complementary orientations, and so again you get a fundamental cycle.
3. Jul 23, 2011
### Bacle
Thanks for your patience in going over a messy proof, Lavinia.
http://forum.matholympiad.org.bd/viewtopic.php?f=25&t=3967&p=17976&sid=e5f19b10c723c46ed6fd22d9ab04227d

# Comparing the inradii
For discussing Olympiad level Geometry Problems
Let $ABC$ be an acute triangle with circumcircle $\Gamma$. Let $A_1,B_1$ and $C_1$ be respectively the midpoints of the arcs $BAC,CBA$ and $ACB$ of $\Gamma$. Show that the inradius of triangle $A_1B_1C_1$ is not less than the inradius of triangle $ABC$.
Atonu Roy Chowdhury
Posts: 40
Joined: Fri Aug 05, 2016 7:57 pm
My solution
Trivial angle chasing yields that the angles of $\triangle A_1B_1C_1$ are $\frac{\angle A + \angle B}{2}$ , $\frac{\angle B + \angle C}{2}$ and $\frac {\angle C + \angle A}{2}$ . We know that $r = 4R \sin(\frac{A}{2})\sin(\frac{B}{2})\sin(\frac{C}{2})$. So, it remains to show that $\sin(\frac{A}{2})\sin(\frac{B}{2})\sin(\frac{C}{2}) \le \sin(\frac{A+B}{4}) \sin(\frac{B+C}{4}) \sin(\frac{C+A}{4})$
By AM-GM, $\sin(\frac{A}{4})\cos(\frac{B}{4}) + \sin(\frac{B}{4})\cos(\frac{A}{4}) \ge 2 \sqrt{ \sin(\frac{A}{4})\cos(\frac{B}{4}) \sin(\frac{B}{4})\cos(\frac{A}{4})}$. Similarly we get two other ineqs. By multiplying them, we get the result.
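The inequality can also be spot-checked numerically (my own addition): since $r = 4R \sin(\frac{A}{2})\sin(\frac{B}{2})\sin(\frac{C}{2})$ and $\triangle ABC$, $\triangle A_1B_1C_1$ share the circumradius $R$, it suffices to compare products of half-angle sines:

```python
import math
import random

def inradius_over_4R(angles):
    # r / (4R) = sin(A/2) * sin(B/2) * sin(C/2)
    return math.prod(math.sin(t / 2.0) for t in angles)

random.seed(1)
checked = 0
while checked < 1000:
    # random acute triangle: all three angles in (0, pi/2), summing to pi
    A = random.uniform(0.05, math.pi / 2)
    B = random.uniform(0.05, math.pi / 2)
    C = math.pi - A - B
    if not 0.05 < C < math.pi / 2:
        continue
    abc = inradius_over_4R([A, B, C])
    # angles of A1B1C1 are (A+B)/2, (B+C)/2, (C+A)/2, as derived above
    a1b1c1 = inradius_over_4R([(A + B) / 2, (B + C) / 2, (C + A) / 2])
    assert a1b1c1 >= abc - 1e-12
    checked += 1
```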
https://haifengl.wordpress.com/2014/07/17/the-regularized-em-algorithm/

In statistics, the method of maximum likelihood is widely used to estimate an unobservable population parameter that maximizes the log-likelihood function
$L(\Theta;\mathcal{X})=\sum_{i=1}^{n} \log p(x_i|\Theta)$
where the observations $\mathcal{X}=\{x_i|i=1,\ldots,n\}$ are independently drawn from the distribution $p(x)$ parameterized by $\Theta$. The Expectation-Maximization (EM) algorithm is a general approach to iteratively compute the maximum-likelihood estimates when the observations can be viewed as incomplete data and one assumes the existence of additional but missing data $\mathcal{Y}=\{y_i|i=1,\ldots,n\}$ corresponding to $\mathcal{X}$. The observations together with the missing data are called complete data.
The EM algorithm maximizes the log-likelihood of the incomplete data by exploiting the relationship between the complete data and the incomplete data. In each iteration, two steps, called E-step and M-step, are involved. In the E-step, the EM algorithm determines the expectation of log-likelihood of the complete data based on the incomplete data and the current parameter
$Q(\Theta|\Theta^{(t)})= E\left(\log p(\mathcal{X},\mathcal{Y}|\Theta)\bigl\lvert\mathcal{X},\Theta^{(t)}\right)$
In the M-step, the algorithm determines a new parameter maximizing $Q$
$\Theta^{(t+1)}=\arg\max_{\Theta}Q(\Theta|\Theta^{(t)})$
Each iteration is guaranteed to increase the likelihood, and finally the algorithm converges to a local maximum of the likelihood function.
Clearly, the missing data $\mathcal{Y}$ strongly affects the performance of the EM algorithm, since the optimal parameter $\Theta^{*}$ is obtained by maximizing $E\left(\log p(\mathcal{X},\mathcal{Y}|\Theta)\right)$. For example, the EM algorithm finds a local maximum of the likelihood function, which depends on the choice of $\mathcal{Y}$. Note that given the incomplete data $\mathcal{X}$, there are many possible specifications of the missing data. Sometimes a natural choice will be obvious. At other times there may be several different ways to define $\mathcal{Y}$. In these cases, how can we choose a suitable $\mathcal{Y}$ to make the solution more physically plausible? This question is not addressed in the EM algorithm because the likelihood function does not reflect any influence of the missing data.
Intuitively, we would like to choose the missing data that has a strong physical relation with the observations so that the reached local maximum of likelihood has good physical plausibility. The strong relationship between the missing data and the observations implies that the missing data has little uncertainty given the observations. In other words, the observations contain a lot of information about the missing data and we can infer the missing data from the observations with a small error rate.
The information about one object contained in another object can be measured by mutual information, based on Shannon information theory. Recall first the conditional entropy

$H(X|Y)=\sum_{y}p(y)H(X|Y=y)=-\sum_{x}\sum_{y}p(x,y)\log p(x|y)$

The relationship between entropy and mutual information

$I(X;Y)=H(X)-H(X\vert Y)=H(Y)-H(Y\vert X)$

demonstrates that mutual information measures the amount of information that one random variable contains about another one.
With mutual information as the regularizer, we have the regularized likelihood
$\widetilde{L}(\Theta;\mathcal{X})=L(\Theta;\mathcal{X})+\gamma I(X;Y|\Theta)$
where $X$ is the random variable of observations and $Y$ is the random variable of missing data. Because we usually do not know much about the missing data, we may naturally assume that $Y$ follows a uniform distribution and thus $H(Y)$ is a constant value given the range of $Y$. Since $I(X;Y)=H(Y)-H(Y\vert X)$, we may also use the following regularized likelihood
$\widetilde{L}(\Theta;\mathcal{X}) =L(\Theta;\mathcal{X})-\gamma H(Y|X;\Theta)$
The conditional entropy $H(Y|X)$ measures how uncertain we are of $Y$ on the average when we know $X$. In fact, $H(Y|X)=0$ if and only if $Y$ is a function of $X$. Thus, we expect $H(Y|X)$ to be small if the observations $\mathcal{X}$ and the missing data $\mathcal{Y}$ have a strong co-relation.
To optimize the regularized likelihood, we only need to slightly modify the M-step of the EM algorithm. Now it is
$\Theta^{(t+1)}=\arg\max_{\Theta}\widetilde{Q}(\Theta|\Theta^{(t)})$
where
$\widetilde{Q}(\Theta|\Theta^{(t)})=Q(\Theta|\Theta^{(t)})+\gamma I(X;Y|\Theta)$
or
$\widetilde{Q}(\Theta|\Theta^{(t)})=Q(\Theta|\Theta^{(t)})-\gamma H(Y|X;\Theta)$
The modified algorithm is called the regularized EM algorithm. And we can easily prove the convergence of the regularized EM algorithm in the framework of the proximal point algorithm.
We applied the regularized EM algorithm to fit the finite mixture model, which is of great use in machine learning. Due to space limits, we didn't present the derivations of Equations (19) and (22) in the paper. In case you are interested, here is the derivation.
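As a toy illustration (my own sketch — not the derivation of Equations (19) and (22)), the script below fits a two-component 1-D Gaussian mixture with ordinary EM updates and then evaluates the entropy-regularized objective $\widetilde{L}=L-\gamma H(Y|X;\Theta)$; in the regularized EM algorithm proper, the M-step itself would maximize this quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data from two well-separated Gaussians
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])

w = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])    # component means (deliberately poor init)
sigma = np.array([1.0, 1.0])  # component standard deviations
gamma = 0.1                   # regularization weight

def weighted_pdf(x, w, mu, sigma):
    # n x 2 matrix with entries w_k * N(x_i | mu_k, sigma_k^2)
    z = (x[:, None] - mu) / sigma
    return w * np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))

for _ in range(50):
    p = weighted_pdf(x, w, mu, sigma)
    r = p / p.sum(axis=1, keepdims=True)  # E-step: responsibilities p(y|x)
    nk = r.sum(axis=0)                    # M-step: standard EM updates
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

p = weighted_pdf(x, w, mu, sigma)
loglik = np.log(p.sum(axis=1)).sum()                # L(Theta; X)
r = p / p.sum(axis=1, keepdims=True)
H_cond = -(r * np.log(r + 1e-300)).sum() / len(x)   # empirical H(Y|X) per sample
regularized = loglik - gamma * len(x) * H_cond      # L - gamma * H(Y|X)
```

With well-separated components the responsibilities are nearly deterministic, so $H(Y|X)$ is small and the penalty barely changes the objective; with heavily overlapping components the penalty would bite, favoring parameters under which the observations determine the missing labels more sharply.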
https://www.jobilize.com/online/course/0-2-practice-tests-1-4-and-final-exams-by-openstax?qcr=www.quizover.com&page=22

# 0.2 Practice tests (1-4) and final exams (Page 23/36)
33 . The screening test has a 20 percent probability of a Type II error, meaning that 20 percent of the time, it will fail to detect TB when it is in fact present.
34 . Eighty percent of the time, the screening test will detect TB when it is actually present.
## 9.3: distribution needed for hypothesis testing
35 . The Student’s t -test.
36 . The normal distribution or z -test.
37 . The normal distribution with μ = p and σ = $\sqrt{\frac{pq}{n}}$
38 . $t_{24}$. You use the t -distribution because you don't know the population standard deviation, and the degrees of freedom are 24 because df = n – 1.
39 . $\overline{X} \sim N\left(0.95,\frac{0.051}{\sqrt{100}}\right)$
Because you know the population standard deviation, and have a large sample, you can use the normal distribution.
## 9.4: rare events, the sample, decision, and conclusion
40 . Fail to reject the null hypothesis, because α < p-value.
41 . Reject the null hypothesis, because α > p-value.
42 . H 0 : μ ≥ 29.0”
H a : μ <29.0”
43 . $t_{19}$. Because you do not know the population standard deviation, use the t -distribution. The degrees of freedom are 19, because df = n – 1.
44 . The test statistic is –4.4721 and the p -value is 0.00013 using the calculator function TTEST.
45 . With α = 0.05, reject the null hypothesis.
46 . With α = 0.05, the p -value is almost zero using the calculator function TTEST so reject the null hypothesis.
## 9.5: additional information and full hypothesis test examples
47 . The level of significance is five percent.
48 . two-tailed
49 . one-tailed
50 . H 0 : p = 0.8
H a : p ≠ 0.8
51 . You will use the normal test for a single population proportion because np and nq are both greater than five.
## 10.1: comparing two independent population means with unknown population standard deviations
52 . They are matched (paired), because you interviewed married couples.
53 . They are independent, because participants were assigned at random to the groups.
54 . They are matched (paired), because you collected data twice from each individual.
55 . $d=\frac{{\overline{x}}_{1}-{\overline{x}}_{2}}{{s}_{pooled}}=\frac{4.8-4.2}{1.6}=0.375$
This is a small effect size, because 0.375 falls between Cohen’s small (0.2) and medium (0.5) effect sizes.
56 . $d=\frac{{\overline{x}}_{1}-{\overline{x}}_{2}}{{s}_{pooled}}=\frac{5.2-4.2}{1.6}=0.625$
The effect size is 0.625. By Cohen’s standard, this is a medium effect size, because it falls between the medium (0.5) and large (0.8) effect sizes.
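Both values come from the same formula; as a quick check of the arithmetic (my own sketch):

```python
def cohens_d(x1_bar, x2_bar, s_pooled):
    # Cohen's d: standardized difference between two group means
    return (x1_bar - x2_bar) / s_pooled

assert abs(cohens_d(4.8, 4.2, 1.6) - 0.375) < 1e-9  # answer 55: small effect
assert abs(cohens_d(5.2, 4.2, 1.6) - 0.625) < 1e-9  # answer 56: medium effect
```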
57 . p -value<0.01.
58 . You will only reject the null hypothesis if you get a value significantly below the hypothesized mean of 110.
## 10.2: comparing two independent population means with known population standard deviations
59 . ${\overline{X}}_{1}-{\overline{X}}_{2}$ , i.e., the mean difference in amount spent on textbooks for the two groups.
60 . H 0 : ${\overline{X}}_{1}-{\overline{X}}_{2}$ ≤ 0
H a : ${\overline{X}}_{1}-{\overline{X}}_{2}$ >0
This could also be written as:
H 0 : ${\overline{X}}_{1}\le {\overline{X}}_{2}$
H a : ${\overline{X}}_{1}>{\overline{X}}_{2}$
61 . Using the calculator function 2-SampTtest, reject the null hypothesis. At the 5% significance level, there is sufficient evidence to conclude that the science students spend more on textbooks than the humanities students.
62 . Using the calculator function 2-SampTtest, reject the null hypothesis. At the 1% significance level, there is sufficient evidence to conclude that the science students spend more on textbooks than the humanities students.
## 10.3: comparing two independent population proportions
63 . H 0 : p A = p B
H a : p A ≠ p B
Source: OpenStax, Introductory statistics. OpenStax CNX. May 06, 2016 Download for free at http://legacy.cnx.org/content/col11562/1.18
https://bookdown.org/f_lennert/book-toolbox_css/agent-based-modeling.html | # Chapter 3 Agent-based modeling
R can be used in order to simulate agents’ behaviors in virtual environments. In the following I will introduce you to what this can look like. In the end, there are further resources where you can find more ABM examples in R and also some theoretical background.
## 3.1 Building an ABM: Schelling model
When designing an ABM from scratch, we need to take a bunch of things into account. A structural way to think about this is in terms of a flow chart.
In the first part, we need to create the agents and their world. We need to think about parameters that need to be specified and how they can be implemented in their code. Thereafter, the actual agent (inter)actions take place. In the end, we can evaluate the (macro) outcomes and check for robustness etc. In this part of the script, it’s going to be about the former two steps.
When writing an ABM in R, I would advise you to write functions for each step in the flow chart and run the entire thing in a for loop or while loop.
### 3.1.1 Recap: functions, if…else, loops, and matrices
Building blocks for ABMs in R are custom functions (since you want your agents to do things…), flow control (…under certain conditions…), and loops (…several times in a row). Matrices are handy to implement a cellular automaton.
#### 3.1.1.1 Functions
When you define functions in R, you need to follow a certain structure:
function_name <- function(argument_1, argument_2, argument_n) {
function body
}
• The function_name is the thing you will call (e.g., mean()). In general, it should be a verb, it should be concise, and it should be in snake_case.
• The arguments are what you need to provide the function with (e.g., mean(1:10)).
• The function body contains the operations which are performed on the arguments. It can contain other functions as well – which need to be defined beforehand (e.g., sum(1:10) / length(1:10)). It is advisable to split the function body into pieces as small as you can.
#### 3.1.1.2 Loops
With ABMs you often want to loop until a certain condition is met. This is where while loops come in handy.
The basic structure of while loops is as follows:
while (condition) {
code
}
Alternatively, you can also do a for loop with a pre-defined number of runs and a break condition:
for (i in seq_len(number_of_iterations)) {
do_something_i_times
if (break_condition_is_met) break
}
Note in both cases that you need to pre-allocate space for your loop to run smoothly, for instance by pre-defining a list with vector(mode = "list", length = number_of_iterations).
#### 3.1.1.3 Flow control
Sometimes you want your code to only run in specific cases. A generalized solution for conditionally running code in R are if statements. They look as follows:
if (conditional_statement evaluates to TRUE) {
do_something
}
They also have an extension – if…else:
if (conditional_statement evaluates to TRUE) {
do_something
} else {
do_something_else
}
#### 3.1.1.4 Matrix
A matrix is a multidimensional vector. We need it for building a cellular automaton like the one used by Schelling (1971). It is created using matrix(values, nrow = number_of_rows, ncol = number_of_columns).
matrix(1:10, nrow = 2) # by default, it's filled by columns
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 3 5 7 9
## [2,] 2 4 6 8 10
matrix(1:10, nrow = 2, byrow = TRUE) # can be also filled by rows
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 2 3 4 5
## [2,] 6 7 8 9 10
example <- matrix(sample(0:2, 10*10, replace = TRUE, prob = c(0.4, 0.3, 0.3)), ncol = 10) # probably more useful for a cellular automaton
A matrix can be visualized by transforming it to an image.
image(example, col = c("black","red","green"), axes = FALSE)
Note that you can index a matrix using [row, column].
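For comparison, the same kind of random grid can be built in Python (a sketch of my own, not part of the chapter's R code); note that Python indexes from 0 while R indexes from 1:

```python
import random

random.seed(1)
dim = 10
# 0 = empty, 1 = colour one, 2 = colour two, drawn with probabilities 0.4/0.3/0.3
grid = [random.choices([0, 1, 2], weights=[0.4, 0.3, 0.3], k=dim) for _ in range(dim)]

print(grid[2][5])  # index a single cell as grid[row][col]
```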
### 3.1.2 Schelling model
Let’s build a Schelling model together. The inspiration for parts of the code comes from this blog post – I tried to write the code in a cleaner fashion though.
The steps we will program today are the following:
• Initialization:
• Create field with n x n dimensions
• Create agents with certain attributes and put them on the field
• Determine happiness criterion
• Simulation
• Determine each agent’s happiness level
• Move unhappy agents to empty field
• Repeat until time is up or all agents are happy
#### 3.1.2.1 Initialization
1. Build a 25x25 matrix. It should contain 0s (for unoccupied field), 1s (color 1), and 2s (color 2). Make sure to wrap it in a function with different parameters for the share of different colors. Let R print an image() of it.
build_world <- function(dimensions, share_free, share_col_1, share_col_2) {
if (share_free + share_col_1 + share_col_2 != 1) stop("shares need to add up to 1.")
matrix(sample(0:2, dimensions^2, replace = TRUE, prob = c(share_free, share_col_1, share_col_2)),
ncol = dimensions)
}
image(build_world(dimensions = 25, share_free = 0.5, share_col_1 = 0.25, share_col_2 = 0.25), axes = FALSE)
1. Determine the initial happiness criterion as 0.4 – you will need to feed it to the executing functions later. The happiness criterion is the least share of neighbors surrounding an individual that is of the same color as the agent itself.
happiness_criterion <- 0.4
#### 3.1.2.2 Action
1. Build a function that determines each agent’s happiness (hint: you can use a nested for loop for looping through the matrix).
1. write a function to determine one agent’s neighbors (as determined through their coordinates [row, col] – Moore neighborhood, i.e., all 8 neighbors)
2. determine whether the agent is happy (is the share of same-value neighbors above the threshold? – hint: exclude the empty fields first and stop the function if a person does not have neighbors – then the person is automatically happy). Moreover, determine the share of same-value agents in each agent’s environment.
3. loop over all coordinates to check whether the agents are happy; store the results in a tibble with the following columns: row, column, happiness (NA if free spot, 0 if unhappy, 1 if happy)
4. write a function to determine the share of happy agents.
field_matrix <- build_world(dimensions = 25, share_free = 0.5, share_col_1 = 0.25, share_col_2 = 0.25)
coordinates <- c(2,1)
dimensions <- 25
#a
get_neighbors <- function(coordinates, dimensions) {
if (coordinates[1] > dimensions) stop("invalid value for coordinates[1]")
if (coordinates[2] > dimensions) stop("invalid value for coordinates[2]")
expand_grid(dim_1 = (coordinates[1] - 1):(coordinates[1] + 1),
dim_2 = (coordinates[2] - 1):(coordinates[2] + 1)) %>%
filter(dim_1 > 0,
dim_2 > 0,
dim_1 <= dimensions,
dim_2 <= dimensions,
!(dim_1 == coordinates[1] & dim_2 == coordinates[2]))
}
#b
determine_same_share <- function(coordinates, field_matrix) {
own_value <- field_matrix[coordinates[1], coordinates[2]]
if (own_value == 0) return(NA_real_)
neighbors <- get_neighbors(coordinates, dimensions = nrow(field_matrix))
map2(neighbors[[1]], neighbors[[2]], ~field_matrix[.x, .y]) %>%
reduce(c) %>%
enframe(name = NULL, value = "neighbor") %>%
mutate(own_value = own_value) %>%
filter(neighbor != 0) %>%
mutate(same_type = case_when(own_value == neighbor ~ TRUE,
TRUE ~ FALSE)) %>%
pull(same_type) %>%
mean()
}
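The same neighbourhood logic can be expressed compactly in Python (my own 0-based sketch under the same rules: empty cells and agents without occupied neighbours return None, which the model treats as happy):

```python
def get_neighbors(row, col, dim):
    """Moore neighbourhood: the up-to-8 cells around (row, col) on a dim x dim grid."""
    return [(r, c)
            for r in range(max(0, row - 1), min(dim, row + 2))
            for c in range(max(0, col - 1), min(dim, col + 2))
            if (r, c) != (row, col)]

def same_share(grid, row, col):
    """Share of occupied neighbours with the same colour; None for empty cells or no neighbours."""
    own = grid[row][col]
    if own == 0:
        return None
    occupied = [grid[r][c] for r, c in get_neighbors(row, col, len(grid))
                if grid[r][c] != 0]
    if not occupied:
        return None  # no occupied neighbours -> treated as happy
    return sum(v == own for v in occupied) / len(occupied)

grid = [[1, 2, 0],
        [1, 1, 2],
        [0, 2, 2]]
print(same_share(grid, 1, 1))  # occupied neighbours: 1, 2, 1, 2, 2, 2 -> 2/6
```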
# c
coordinate_tbl <- expand_grid(dim_1 = 1:25, dim_2 = 1:25) %>%
mutate(same_share = NA_real_,
happiness_ind = NA_real_)
for (i in seq_along(coordinate_tbl$same_share)) {
  coordinate_tbl$same_share[i] <- determine_same_share(coordinates = c(coordinate_tbl$dim_1[i], coordinate_tbl$dim_2[i]),
                                                       field_matrix = field_matrix)
  if (is.nan(coordinate_tbl$same_share[i])) coordinate_tbl$happiness_ind[i] <- 1
  if (is.nan(coordinate_tbl$same_share[i])) next
  if (is.na(coordinate_tbl$same_share[i])) next
  if (coordinate_tbl$same_share[i] >= happiness_criterion) {
    coordinate_tbl$happiness_ind[i] <- 1
  } else {
    coordinate_tbl$happiness_ind[i] <- 0
  }
}
# d
determine_happy_share <- function(coordinate_tbl) {
  mean(coordinate_tbl[["happiness_ind"]], na.rm = TRUE)
}
1. Make the unhappy cells “move” to a free spot. Make sure that the agents will be randomized before so that there is no spatial bias (hint: pingers::shuffle()).
unhappy_ones <- coordinate_tbl %>%
  filter(happiness_ind == 0) %>%
  pingers::shuffle()
free_spots <- coordinate_tbl %>%
  filter(is.na(happiness_ind)) %>%
  pingers::shuffle()
move_agents <- function(unhappy_ones, free_spots, field_matrix) {
  i <- 1
  while (i <= nrow(unhappy_ones)) {
    field_matrix[free_spots$dim_1[i], free_spots$dim_2[i]] <- field_matrix[unhappy_ones$dim_1[i], unhappy_ones$dim_2[i]]
    field_matrix[unhappy_ones$dim_1[i], unhappy_ones$dim_2[i]] <- 0
    if (i == nrow(free_spots)) {
      unhappy_ones %<>% slice(., (i + 1):nrow(.))
      free_spots <- which(field_matrix == 0, arr.ind = TRUE) %>%
        as_tibble() %>%
        select(dim_1 = row, dim_2 = col) %>%
        pingers::shuffle()
      i <- 1
    }
    i <- i + 1
  }
  field_matrix
}
1. Put the functions in a while loop that runs as long as you want or until all agents are satisfied.
get_own_value <- function(coordinate_a, coordinate_b, matrix_name) matrix_name[coordinate_a, coordinate_b]
run_schelling <- function(max_steps = 500, dimensions = 25, share_free = 0.2,
                          share_col_1 = 0.4, share_col_2 = 0.4, happiness_criterion = 0.4) {
  current_step <- 1
  share_unhappy <- 1
  field_matrix <- build_world(dimensions, share_free, share_col_1, share_col_2)
  report_tbl <- tibble(step = 1:max_steps, unhappy_ones = NA_integer_)
  #image_list <- vector(mode = "list", length = max_steps/3 + 1)
  agent_list <- vector(mode = "list", length = max_steps)
  coordinate_tbl <- expand_grid(dim_1 = 1:dimensions, dim_2 = 1:dimensions) %>%
    mutate(happiness_ind = NA_real_, same_share = NA_real_)
  while (current_step <= max_steps & share_unhappy > 0) {
    for (i in seq_along(coordinate_tbl$happiness_ind)) {
      coordinate_tbl$same_share[i] <- determine_same_share(coordinates = c(coordinate_tbl$dim_1[i], coordinate_tbl$dim_2[i]),
                                                           field_matrix = field_matrix)
      if (is.nan(coordinate_tbl$same_share[i])) coordinate_tbl$happiness_ind[i] <- 1
      if (is.nan(coordinate_tbl$same_share[i])) next
      if (is.na(coordinate_tbl$same_share[i])) next
      if (coordinate_tbl$same_share[i] >= happiness_criterion) {
        coordinate_tbl$happiness_ind[i] <- 1
      } else {
        coordinate_tbl$happiness_ind[i] <- 0
      }
    }
agent_list[[current_step]] <- coordinate_tbl %>%
mutate(own_value = map2_dbl(dim_1, dim_2, get_own_value, field_matrix))
share_unhappy <- 1 - mean(coordinate_tbl %>% filter(!is.na(same_share)) %>% pull(happiness_ind), na.rm = TRUE)
unhappy_ones <- coordinate_tbl %>%
filter(happiness_ind == 0) %>%
pingers::shuffle()
free_spots <- coordinate_tbl %>%
mutate(own_value = map2_dbl(dim_1, dim_2, get_own_value, field_matrix)) %>%
filter(own_value == 0) %>%
pingers::shuffle() %>%
select(1:2)
field_matrix <- move_agents(unhappy_ones, free_spots, field_matrix)
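For readers who want a self-contained reference point, here is a minimal end-to-end Schelling loop in Python. It is a deliberately simplified sketch of my own (random moves, no reporting tables), not a translation of run_schelling():

```python
import random

def run_schelling(dim=15, p_free=0.2, threshold=0.4, max_steps=200, seed=42):
    """Minimal Schelling loop: move every unhappy agent to a random free cell."""
    rng = random.Random(seed)
    grid = [[rng.choices([0, 1, 2],
                         weights=[p_free, (1 - p_free) / 2, (1 - p_free) / 2])[0]
             for _ in range(dim)] for _ in range(dim)]

    def happy(r, c):
        own = grid[r][c]
        neighbours = [grid[i][j]
                      for i in range(max(0, r - 1), min(dim, r + 2))
                      for j in range(max(0, c - 1), min(dim, c + 2))
                      if (i, j) != (r, c) and grid[i][j] != 0]
        if not neighbours:          # no occupied neighbours -> happy
            return True
        return sum(v == own for v in neighbours) / len(neighbours) >= threshold

    for step in range(max_steps):
        unhappy = [(r, c) for r in range(dim) for c in range(dim)
                   if grid[r][c] != 0 and not happy(r, c)]
        if not unhappy:
            return step             # converged: every agent is happy
        for r, c in unhappy:
            free = [(i, j) for i in range(dim) for j in range(dim) if grid[i][j] == 0]
            i, j = rng.choice(free)
            grid[i][j], grid[r][c] = grid[r][c], 0
    return max_steps

print(run_schelling())
```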
    distance_df$euclidean_vec[[i]] <- euclidean(actor_list[[distance_df$a[[i]]]], actor_list[[distance_df$b[[i]]]])
}
distance_df %<>% mutate(euclidean_normalized = (euclidean_vec/max(euclidean_vec)) %>% mean())
#### 3.2.1.6 Input data
Do you use empirically observed data to calibrate your model? Those data can affect state variables that might change over time. Parameters and everything else used to initially set up the world need to be specified in step 5, initialization, however.
Input data in Baldassarri and Bearman (2007): No.
#### 3.2.1.7 Submodels
In the final part, you need to go back to the schedule and describe each step and its submodels thoroughly. A thorough description encompasses its “equations, algorithms, parameters, and parameter values.” (Grimm et al. 2020, Supp. 1: 27) Also justify why you have made those design decisions. Include robustness checks that take into account scenarios the submodels might encounter.
Submodels in Baldassarri and Bearman (2007): In the following, I neither include robustness checks nor thorough or complete theoretical motivation. However, I include workable code that can be put in, e.g., a loop to run the model.
The first submodel is the selection of interaction partners: “for each actor at each time period t, we randomly selected from the population a number of potential interlocutors as a function of the actor’s overall level of interest. The number of people selected is proportional to the sum of the squared mean of interest over the four issues … the probability of interaction is proportional to their interest and an inverse function of the perceived ideological distance between the two.” (p. 791)
$$\lambda$$ is the distance between two actors. Initially, it is the demeaned euclidean distance (normalized by division by max(euclidean_distance)) between all actors.
As actors communicate with each other, they learn about their views and update their distance to their actual distance
$d_{ab}^{(t)}=\frac{\sqrt{\sum\limits_{i=1}^{4} (a_{i}^{(t)}-b_{i}^{(t)})^{2}}}{\max_{\{a,b\}\in N}\left[\sqrt{\sum\limits_{i=1}^{4}(a_{i}^{(t)}-b_{i}^{(t)})^{2}} \right]}$
In code, the initial distance is computed as follows:
euclidean <- function(a, b) sqrt(sum((a - b)^2)) %>% as.double()
The formula for computing the probability of interaction between two actors a and b then looks like this:
$\frac{\eta \times [(\sum\limits_{i=1}^{4}|a_{i}^{(t)}|)^2(\sum\limits_{i=1}^{4}|b_{i}^{(t)}|)^2]}{100} \times (1-\lambda_{ab}^{(t)})$
$$\eta$$ is a scaling factor (.005) that limits the number of interactions to a reasonable range. “In general, at time 1, actors have between 0 and 6 conversations, while at time 500, 0 to 12.” (p. 791)
Code-wise, this is implemented in the following manner:
1. potential discussants are drawn based on their interest (get_avg_interest()) and the distance between the respective agents; the resulting probability (see formula above) is used to draw a value from a Bernoulli distribution (either 0 or 1, purrr::rbernoulli(n = 1, p = formula above))
2. Agent-pairs that talk to each other and the ones that don’t are stored in separate tibbles. The interlocutor-tibble is pingers::shuffle()d.
get_avg_interest <- function(vec) vec %>% abs() %>% mean() %>% .^2
distance_df$prob <- NA_real_
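Under the stated assumptions, the interaction-probability logic can be sketched in Python as follows (my own loose paraphrase for illustration, not the paper's exact functional form):

```python
import math
import random

def euclidean(a, b):
    """Euclidean distance between two opinion vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interaction_prob(a, b, max_dist, eta=0.005):
    """Interest-driven, distance-discounted probability of interaction (sketch)."""
    interest = (sum(abs(x) for x in a) / len(a)) ** 2 + (sum(abs(x) for x in b) / len(b)) ** 2
    lam = euclidean(a, b) / max_dist      # normalised distance in [0, 1]
    return min(1.0, eta * interest * (1 - lam) / 100)

actors = [[0.5, -0.2, 0.8, 0.1],
          [0.4, 0.3, -0.6, 0.2],
          [-0.9, 0.7, 0.1, -0.4]]
max_dist = max(euclidean(p, q) for p in actors for q in actors if p is not q)

random.seed(0)
p = interaction_prob(actors[0], actors[1], max_dist)
talks = random.random() < p            # one Bernoulli draw, like purrr::rbernoulli
```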
draw_discussants <- function(actor_list, distance_df){
sum_squared_mean_interest <- map_dbl(actor_list, get_avg_interest)
  for (i in seq_along(distance_df$euclidean_vec)) {
    distance_df$prob[[i]] <- rbernoulli(1,
      (0.005 * (sum_squared_mean_interest[[distance_df$a[[i]]]] + sum_squared_mean_interest[[distance_df$b[[i]]]]) * 1 - distance_df$euclidean_normalized[[i]])/100)
  }
  list(interlocution = distance_df %>% filter(prob == 1) %>% pingers::shuffle(),
       no_interlocution = distance_df %>% filter(prob != 1))
}
discussion_list <- draw_discussants(actor_list, distance_df)
In the next step, the agents are interacting. The first step is to determine the topic they are deliberating upon. This is the one which is of greatest cumulative interest to them:
select_issue <- function(a, b, actor_list) {
  actor_list[[a]] %>%
    enframe(name = "id", value = "value_a") %>%
    bind_cols(actor_list[[b]] %>% enframe(name = NULL, value = "value_b")) %>%
    mutate(sum = value_a + value_b) %>%
    filter(sum == max(sum))
}
select_rest <- function(a, b, actor_list) {
  actor_list[[a]] %>%
    enframe(name = "id", value = "value_a") %>%
    bind_cols(actor_list[[b]] %>% enframe(name = NULL, value = "value_b")) %>%
    mutate(sum = value_a + value_b) %>%
    filter(sum != max(sum))
}
Once the topic is determined, they start talking and subsequently adapt their opinions ON THE TOPIC THEY ARE TALKING ABOUT.
The opinion change process is implemented as follows:
• if they share the opinion on the discussed topic (i.e., they have the same sign), their opinions are reinforced (reinforce_opinion())
• if they do not have the same signs (read: views) on the discussed topic yet agree on all the other topics, their opinions on the focal topic will move closer together (make_compromise())
• if neither of the former is the case, their own views on the topic will be reinforced and their overall distance increases (make_conflict())
reinforce_opinion <- function(actor_list, a, b, issue_id) {
  issue_a <- actor_list[[a]][[issue_id]]
  issue_b <- actor_list[[b]][[issue_id]]
  change_a <- (0.1 * (abs(issue_a) - abs(issue_b))/abs(issue_a)) %>% abs()
  change_b <- (0.1 * (abs(issue_b) - abs(issue_a))/abs(issue_b)) %>% abs()
  return_vec <- vector(mode = "double", length = 2L)
  if (issue_a > 0 & issue_b > 0) {
    return_vec[1] <- issue_a + change_a
    return_vec[2] <- issue_b + change_b
  } else {
    return_vec[1] <- issue_a - change_a
    return_vec[2] <- issue_b - change_b
  }
  return_vec
}
make_compromise <- function(actor_list, a, b, issue_id) {
  issue_a <- actor_list[[a]][[issue_id]]
  issue_b <- actor_list[[b]][[issue_id]]
  change_a <- (0.1 * (abs(issue_a) - abs(issue_b))/abs(issue_a)) %>% abs()
  change_b <- (0.1 * (abs(issue_b) - abs(issue_a))/abs(issue_b)) %>% abs()
  return_vec <- vector(mode = "double", length = 2L)
  if (issue_a > 0) return_vec[1] <- issue_a - change_a
  if (issue_a < 0) return_vec[1] <- issue_a + change_a
  if (issue_b > 0) return_vec[2] <- issue_b - change_b
  if (issue_b < 0) return_vec[2] <- issue_b + change_b
  return_vec
}
make_conflict <- function(actor_list, a, b, issue_id) {
  issue_a <- actor_list[[a]][[issue_id]]
  issue_b <- actor_list[[b]][[issue_id]]
  change_a <- (0.1 * (abs(issue_a) - abs(issue_b))/abs(issue_a)) %>% abs()
  change_b <- (0.1 * (abs(issue_b) - abs(issue_a))/abs(issue_b)) %>% abs()
  return_vec <- vector(mode = "double", length = 2L)
  if (issue_a > 0) return_vec[1] <- issue_a + change_a
  if (issue_a < 0) return_vec[1] <- issue_a - change_a
  if (issue_b > 0) return_vec[2] <- issue_b + change_b
  if (issue_b < 0) return_vec[2] <- issue_b - change_b
  return_vec
}
The rationale behind the reinforcement of views is based on prior research on group polarization: discourse between like-minded people usually leads to an amplification of their pre-existing views (see, e.g., Sunstein, Hastie, and Schkade 2007). In the case of disagreement on the focal issue, actors actively try to reduce dissonance. If they can agree on the remainder of the issues, dissonance reduction implies compromise and the actors will move closer with regard to the focal issue. If not, they will “stand their ground” and become even more convinced and more extreme in terms of their views on the salient issue.
opinion_change <- function(a, b, actor_list) {
  # get issue
  issue <- select_issue(a, b, actor_list) %>%
    mutate(same_sign = case_when(sign(value_a) == sign(value_b) ~ TRUE,
                                 TRUE ~ FALSE))
  # get rest
  rest <- select_rest(a, b, actor_list) %>%
    mutate(same_sign = case_when(sign(value_a) == sign(value_b) ~ TRUE,
                                 TRUE ~ FALSE))
  # communicate
  if (issue$same_sign == FALSE & sum(rest$same_sign) == 3) {
    issue$value_a <- make_compromise(actor_list, a, b, issue$id)[1]
    issue$value_b <- make_compromise(actor_list, a, b, issue$id)[2]
  }
  if (issue$same_sign == FALSE & sum(rest$same_sign) < 3) {
    issue$value_a <- make_conflict(actor_list, a, b, issue$id)[1]
    issue$value_b <- make_conflict(actor_list, a, b, issue$id)[2]
  }
  if (issue$same_sign == TRUE) {
issue$value_a <- reinforce_opinion(actor_list, a, b, issue$id)[1]
issue$value_b <- reinforce_opinion(actor_list, a, b, issue$id)[2]
}
issue
}
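The three update rules boil down to "move away from zero" versus "move toward zero" on the focal issue. Here is a simplified Python sketch of that core idea (the fixed step size is my own assumption; the chapter's change terms depend on both agents' positions):

```python
def update_opinion(x, mode, step=0.05):
    """Shift opinion x away from 0 ('extremize') or toward 0 ('moderate')."""
    sign = 1 if x >= 0 else -1
    return x + sign * step if mode == "extremize" else x - sign * step

def discuss(a, b, agree_on_rest=True):
    """Apply the rule that matches the sign pattern on the focal issue."""
    if (a >= 0) == (b >= 0):                      # same view: reinforce
        return update_opinion(a, "extremize"), update_opinion(b, "extremize")
    if agree_on_rest:                             # disagree here, agree elsewhere
        return update_opinion(a, "moderate"), update_opinion(b, "moderate")
    # conflict: both become more extreme in opposite directions
    return update_opinion(a, "extremize"), update_opinion(b, "extremize")

print(discuss(0.4, 0.6))    # like-minded pair drifts outward
print(discuss(0.4, -0.6))   # compromise: both move toward 0
```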
report_opinion_change <- function(discussion_list, actor_list) {
for (i in seq_along(discussion_list$interlocution$a)){
issue <- opinion_change(discussion_list$interlocution$a[i],
discussion_list$interlocution$b[i],
actor_list)
# report results
actor_list[[discussion_list$interlocution$a[i]]][issue$id] <- issue$value_a
actor_list[[discussion_list$interlocution$b[i]]][issue$id] <- issue$value_b
}
for (i in seq_along(discussion_list$interlocution$a)) {
discussion_list$interlocution$euclidean_vec[[i]] <- euclidean(actor_list[[discussion_list$interlocution$a[[i]]]], actor_list[[discussion_list$interlocution$b[[i]]]])
}
maximum_euclidean <- max(c(discussion_list$interlocution$euclidean_vec, discussion_list$no_interlocution$euclidean_vec))
discussion_list\$interlocution %<>% mutate(euclidean_normalized = (euclidean_vec/maximum_euclidean))
return(list(discussion_list %>% bind_rows(.id = "discussion"), actor_list))
}
#test-run
test <- report_opinion_change(discussion_list, actor_list)
After each deliberation, the new opinions are calculated and updated. Hence, an agent can change their opinion multiple times in each step, depending on how many interlocutors they have.
When the deliberation is finished, the euclidean distances between all agents are updated and the process starts afresh. Each round, the actors’ opinions and discussion lists are saved. The latter contains an indicator of whether actors had contact. Based on those two objects, eventual analyses can determine how the discussion networks and opinion distributions unfold.
### References
Baldassarri, Delia, and Peter S. Bearman. 2007. “Dynamics of Political Polarization.” American Sociological Review 72 (5): 784–811. https://doi.org/10.1177/000312240707200507.
Bruch, Elizabeth, and Jon Atwell. 2015. “Agent-Based Models in Empirical Social Research.” Sociological Methods & Research 44 (2): 186–221. https://doi.org/10.1177/0049124113506405.
Edmonds, Bruce, Christophe Le Page, Mike Bithell, Edmund Chattoe-Brown, Volker Grimm, Ruth Meyer, Cristina Montañola-Sales, Paul Ormerod, Hilton Root, and Flaminio Squazzoni. 2019. “Different Modelling Purposes.” Journal of Artificial Societies and Social Simulation 22 (3): 6. https://doi.org/10.18564/jasss.3993.
Grimm, Volker, Uta Berger, Finn Bastiansen, Sigrunn Eliassen, Vincent Ginot, Jarl Giske, John Goss-Custard, et al. 2006. “A Standard Protocol for Describing Individual-Based and Agent-Based Models.” Ecological Modelling 198 (1-2): 115–26. https://doi.org/10.1016/j.ecolmodel.2006.04.023.
Grimm, Volker, Steven F. Railsback, Christian E. Vincenot, Uta Berger, Cara Gallagher, Donald L. DeAngelis, Bruce Edmonds, et al. 2020. “The ODD Protocol for Describing Agent-Based and Other Simulation Models: A Second Update to Improve Clarity, Replication, and Structural Realism.” Journal of Artificial Societies and Social Simulation 23 (2): 7. https://doi.org/10.18564/jasss.4259.
Schelling, Thomas C. 1971. “Dynamic Models of Segregation.” The Journal of Mathematical Sociology 1 (2): 143–86. https://doi.org/10.1080/0022250X.1971.9989794.
Sunstein, Cass R., Reid Hastie, and David Schkade. 2007. “What Happened on Deliberation Day.” California Law Review 95 (915): 915–40.
1. As you can easily see, this parameter space becomes fairly huge and the endeavor becomes computationally quite expensive. An alternative might be Latin Hypercube Sampling.↩︎
https://physics.stackexchange.com/questions/22001/at-what-angle-does-a-single-atom-reflect-a-single-photon | # At what angle does a single atom “reflect” a single photon?
Does this question make sense in the quantum world?
Imagining a single photon (wave packet?) interacting with a single atom (its electrons etc) how do we currently describe/define the emitted photon in terms of its direction in relation to the incoming photon?
Now "scaling up" to a surface of atoms actually reflecting "light" according to the simple reflection rules like angle-in equals angle-out how do we manage to explain this effect in terms of the quantum world? How comes the probabilities work out for the out-going angle depending on the incoming-angle?
• Just remember that energy, momentum and angular momentum have to be conserved and you will get a spatial probability distribution. Of course, you have to take into account the recoil velocity of the atom as well. – Antillar Maximus Mar 9 '12 at 0:47
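To put a number on the recoil mentioned above (my own back-of-the-envelope addition, not from the thread): a single photon carries momentum p = h/λ, so for a rubidium atom scattering a 780 nm photon:

```python
# Recoil of a single atom scattering one photon: p = h / lambda.
h = 6.62607015e-34      # Planck constant, J*s
m_rb = 1.44316060e-25   # mass of a Rb-87 atom, kg
lam = 780e-9            # Rb D2 line wavelength, m

p_photon = h / lam
v_recoil = p_photon / m_rb
print(v_recoil)  # on the order of millimetres per second
```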
• I can only add this links Emergence of a measurement basis in atom-photon scattering, Y. Glickman, S. Kotler, N. Akerman and R. Ozeri, Science, 339, 1187 (2013) sciencemag.org/content/339/6124/1187 Reversal of photon scattering errors in atomic qubits, N. Akerman, S. Kotler, Y. Glickman and R. Ozeri, Phys. Rev. Lett. 109, 103601 (2012) arxiv.org/pdf/1111.1622.pdf I get there from livescience.com/… – tyoc213 Feb 19 '15 at 19:07
Not a full answer, but remember that any question about the path of a particle must be handled with a sum-over-paths (Feynman path integral) approach. I don't even think that $\angle i=\angle r$ is necessary for a single photon; it's only when we get multiple photons that interesting things happen.
https://hbsesolutions.com/hbse-7th-class-maths-solutions-chapter-12-ex-12-1/ | HBSE 7th Class Maths Solutions Chapter 12 Algebraic Expressions Ex 12.1
Haryana State Board HBSE 7th Class Maths Solutions Chapter 12 Algebraic Expressions Ex 12.1 Textbook Exercise Questions and Answers.
Haryana Board 7th Class Maths Solutions Chapter 12 Algebraic Expressions Exercise 12.1
Question 1.
Get the algebraic expressions in the following cases using variables, constants and arithmetic operations.
(i) Subtraction of z from y.
(ii) One half of the sum of number x and y.
(iii) The number z multiplied by itself.
(iv) One fourth of the product of number p and q.
(v) Number x and y both squared and added.
(vi) Number 5 added to three times the product of numbers m and n.
(vii) Product of numbers y and z subtracted from 10.
(viii) Sum of numbers a and b subtracted from their product.
Solution:
(i) y - z
(ii) $$\frac{1}{2}$$ (x + y)
(iii) z × z = z²
(iv) $$\frac{1}{4}$$ (p × q) = $$\frac{1}{4}$$ (pq) = $$\frac{pq}{4}$$
(v) x² + y²
(vi) 3mn + 5
(vii) 10 - yz
(viii) ab - (a + b)
Question 2.
(i) Identify the terms and their factors in the following expressions. Show the terms and factors by tree diagrams.
(a) x – 3
(b) 1 + x + x²
(c) y - y³
(d) 5xy² + 7x²y
(e) -ab + 2b² - 3a²
(ii) Identify terms and factors in the expressions given below:
(a) -4x + 5
(b) -4x + 5y
(c) 5y + 3y²
(d) xy + 2x²y²
(e) pq + q
(f) 1.2ab – 2.4b + 3.6a
(g) $$\frac{3}{4} x+\frac{1}{4}$$
(h) $$\frac{l+m}{5}$$
[Hint: Separate l and m terms]
(i) 0.1p² + 0.2q²
(j) $$\frac{3}{4}$$ (a – b) + $$\frac{7}{4}$$
[Hint: Open the brackets]
Solution:
(i) (a) x – 3: The expression x – 3 consists of two terms, x and –3.
(ii) (a) – 4x + 5.
The expression (-4x + 5) consists of two terms, -4x and 5. The term -4x is the product of -4 and x, and the term 5 has only one factor, 5 itself.
Terms are -4x and 5.
Factors are -4 and x of -4x, and 5 of 5.
(b) -4x + 5y
In the expression -4x + 5y, the terms are -4x and 5y, and the factors are -4 and x of -4x, and 5 and y of 5y.
(c) 5y + 3y²
In the expression 5y + 3y²
The terms are 5y and 3y², and the factors are 5 and y of 5y; 3, y and y of 3y².
(d) xy + 2x²y²
In the expression xy + 2x²y²
The terms are xy and 2x²y²
And the factors are x and y of xy, and 2, x, x, y and y of 2x²y².
(e) pq + q
In the expression pq + q.
The terms are pq and q.
The factors are p and q of pq, and q of q, because q has only one factor.
(f) In the expression 1.2ab – 2.4b + 3.6a
The terms are 1.2ab, -2.4b and 3.6a, and
the factors are 1.2, a and b of 1.2ab; -2.4 and b of -2.4b; 3.6 and a of 3.6a.
(g) $$\frac{3}{4} x+\frac{1}{4}$$
In this expression the terms are $$\frac{3}{4}$$x and $$\frac{1}{4}$$, and the factors are $$\frac{3}{4}$$ and x of $$\frac{3}{4}$$x.
(i) 0.1p² + 0.2q²
In this expression 0.1p² + 0.2q²
The terms are 0.1p² and 0.2q², and the factors are 0.1, p, p of 0.1p², and 0.2, q, q of 0.2q².
Question 3.
Identify the numerical coefficients of terms (other than constants) in the following expressions:
(i) 5 - 3t²
(ii) 1 + t + t² + t³
(iii) x + 2xy + 3y
(iv) 100m + 1000n
(v) -p²q² + 7pq
(vi) 1.2a + 0.8b
(vii) 3.14r²
(viii) 2(l + b)
(ix) 0.1y + 0.01y²
Solution:
| Expression | Terms | Numerical coefficient |
| --- | --- | --- |
| (i) 5 - 3t² | -3t² | -3 |
| (ii) 1 + t + t² + t³ | t, t², t³ | 1, 1, 1 |
| (iii) x + 2xy + 3y | x, 2xy, 3y | 1, 2, 3 |
| (iv) 100m + 1000n | 100m, 1000n | 100, 1000 |
| (v) -p²q² + 7pq | -p²q², 7pq | -1, 7 |
| (vi) 1.2a + 0.8b | 1.2a, 0.8b | 1.2, 0.8 |
| (vii) 3.14r² | 3.14r² | 3.14 |
| (viii) 2(l + b) = 2l + 2b | 2l, 2b | 2, 2 |
| (ix) 0.1y + 0.01y² | 0.1y, 0.01y² | 0.1, 0.01 |
Question 4.
(a) Identify terms which contain x and give the co-efficient of x.
(i) y²x + y
(ii) 13y² - 8yx
(iii) x + y + 2
(iv) 5 + z + zx
(v) 1 + x + xy
(vi) 12xy² + 25
(vii) 7x + xy²
(b) Identify terms which contain y² and give the co-efficient of y².
(i) 8 - xy²
(ii) 5y² + 7x
(iii) 2x²y - 15xy² + 7y²
Solution:
(a)
| Expression | Terms containing x | Coefficient of x |
| --- | --- | --- |
| (i) y²x + y | y²x | y² |
| (ii) 13y² - 8yx | -8yx | -8y |
| (iii) x + y + 2 | x | 1 |
| (iv) 5 + z + zx | zx | z |
| (v) 1 + x + xy | x, xy | 1, y |
| (vi) 12xy² + 25 | 12xy² | 12y² |
| (vii) 7x + xy² | 7x, xy² | 7, y² |
(b)
| Expression | Terms containing y² | Coefficient of y² |
| --- | --- | --- |
| (i) 8 - xy² | -xy² | -x |
| (ii) 5y² + 7x | 5y² | 5 |
| (iii) 2x²y - 15xy² + 7y² | -15xy², 7y² | -15x, 7 |
Question 5.
Classify into monomials, binomials and trinomials:
(i) 4y - 7z
(ii) y²
(iii) x + y - xy
(iv) 100
(v) ab - a - b
(vi) 5 - 3t
(vii) 4p²q - 4pq²
(viii) 7mn
(ix) z² - 3z + 8
(x) a² + b²
(xi) z² + z
(xii) 1 + x + x²
Solution:
(i) 4y - 7z is a binomial. It contains 2 terms.
(ii) y² is a monomial. It contains 1 term.
(iii) x + y - xy is a trinomial. It contains 3 terms.
(iv) 100 is a monomial. It contains 1 term.
(v) ab - a - b is a trinomial. It contains 3 terms.
(vi) 5 - 3t is a binomial. It contains 2 terms.
(vii) 4p²q - 4pq² is a binomial. It contains 2 terms.
(viii) 7mn is a monomial. It contains 1 term.
(ix) z² - 3z + 8 is a trinomial. It contains 3 terms.
(x) a² + b² is a binomial. It contains 2 terms.
(xi) z² + z is a binomial. It contains 2 terms.
(xii) 1 + x + x² is a trinomial. It contains 3 terms.
Question 6.
State whether the given pair of terms is of like or unlike terms.
(i) 1, 100
(ii) -7x, $$\frac{5}{2}$$x
(iii) -29x, -29y
(iv) 14xy, 42xy
(v) 4m²p, 4mp²
(vi) 12xz, 12x²z²
Solution:
| Pair | Factors | Like / Unlike |
| --- | --- | --- |
| (i) 1; 100 | 1; 100 | Like (both are constants) |
| (ii) -7x; $$\frac{5}{2}$$x | -7, x; $$\frac{5}{2}$$, x | Like |
| (iii) -29x; -29y | -29, x; -29, y | Unlike |
| (iv) 14xy; 42yx | 14, x, y; 42, x, y | Like |
| (v) 4m²p; 4mp² | 4, m, m, p; 4, m, p, p | Unlike |
| (vi) 12xz; 12x²z² | 12, x, z; 12, x, x, z, z | Unlike |
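The like/unlike decision can also be made mechanically: two terms are like exactly when their variable factors match, multiplicities included, with numerical coefficients ignored. A small Python sketch (the tuple encoding of a term is my own choice, not from the textbook):

```python
from collections import Counter

def are_like(term_a, term_b):
    """A term is encoded as (coefficient, list of variable factors).
    Two terms are 'like' when their variable factors match exactly,
    multiplicities included; the numerical coefficients are ignored."""
    return Counter(term_a[1]) == Counter(term_b[1])

# Two pairs from Question 6:
print(are_like((1, []), (100, [])))                          # (i) 1 and 100: True (like)
print(are_like((4, ['m', 'm', 'p']), (4, ['m', 'p', 'p'])))  # (v) 4m²p and 4mp²: False (unlike)
```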
Question 7.
Identify like terms in the following:
(a) -xy², -4yx², 8x², 2xy², 7y, -11x², -100x, -11yx, 20x²y, -6x², y, 2xy, 3x.
(b) 10pq, 7p, 8q, -p²q², -7qp, -100q, -23, 12q²p², -5p², 41, 2405p, 78qp, 13p²q, -9pq², qp², 701p².
Solution:
(a) Like terms, grouped: (-xy², 2xy²); (-4yx², 20x²y); (8x², -11x², -6x²); (7y, y); (-100x, 3x); (-11yx, 2xy).
(b) Like terms, grouped: (10pq, -7qp, 78qp); (7p, 2405p); (8q, -100q); (-p²q², 12q²p²); (-23, 41); (-5p², 701p²); (13p²q, qp²). | 2023-01-28 09:39:50 |
http://www.oelerich.org/ | # Serve git smart HTTP repositories with uWSGI and nginx
Jan. 13, 2014, 11:17 a.m. 5 comments Arch Linux Software
For a joint project of mine, I needed a way for my collaborators to access (clone, pull, push) the git repository that is hosted on my server, without adding SSH accounts for them. With git 1.6.6 a feature called smart HTTP was introduced, which allows working with the repository via HTTP.
I set up the CGI script for smart HTTP, git-http-backend, using uWSGI, and serve it (including basic authentication) via nginx. Let me show you how.
# Create scientific presentations with Inkscape, InkSlides and InkTex
Dec. 10, 2013, 4:22 p.m. 4 comments Misc Software
In this short article, I show you how to create nice scientific presentations using a combination of Inkscape and LaTeX, made possible by two small tools I wrote.
# One server monitoring solution to rule them all
Sept. 16, 2013, 3:30 p.m. 3 comments Python Server
In our research group we have an old and a new fileserver, each having multiple LVM volumes, of which some are backed up and some are not. Since we recently had several incidents of somebody accidentally filling a partition completely (and, of course, being on vacation), I was looking into server monitoring software to get a simple overview of the disk usage of our servers.
Now, all the stuff out there is either closed source or way too sophisticated for my use case. So I built my own monitoring application, which is arguably the simplest of all.
# script to cycle through pulseaudio sinks during playback
Aug. 22, 2013, 6:57 p.m. 0 comments Python
I am currently giving pulseaudio another try, because alsa is not really flexible when it comes to multiple outputs. I usually bind a keyboard shortcut to cycle through audio output devices, in order to quickly switch between headphones and speakers etc.
Here is a Python solution to do that with pulseaudio. It uses the CLI tool pacmd, which must be installed for the script to work. It just switches the default sink to the one after the current default and moves all sound inputs (i.e., the playback) to the new default.
```python
#!/usr/bin/env python3
import subprocess as sp
import re

# This script cycles pulseaudio sinks and changes the defaults.
# Audio playback is also moved to the new default sink. The script
# is intended to bind to some keyboard shortcut to cycle through
# outputs on the fly.
#
# Requirements: pacmd, python3

dev_out, _ = sp.Popen('pacmd list-sinks', shell=True, stdout=sp.PIPE).communicate()
inp_out, _ = sp.Popen('pacmd list-sink-inputs', shell=True, stdout=sp.PIPE).communicate()

devices = re.findall(r"(\*?) index: (\d+)", str(dev_out))
inputs = re.findall(r"index: (\d+)", str(inp_out))

# find the next default device, i.e., the one after the current default
found = False
next_device = devices[0][1]
for d in devices:
    if found:
        next_device = d[1]
        break
    found = (d[0] == "*")

# set default device and move inputs
sp.call(["pacmd", "set-default-sink", next_device])
for i in inputs:
    sp.call(["pacmd", "move-sink-input", i, next_device])
```
# Showing references on a timeline in LaTeX
Aug. 7, 2013, 3:57 p.m. 0 comments latex
This is a small script I made for visualizing biblatex references on a timeline. It uses biblatex and chronology.sty. The solution essentially comes from this TeX StackExchange question I posted earlier today.
```latex
\documentclass{scrartcl}
\usepackage{chronology}
\usepackage[landscape]{geometry}
\usepackage[backend=biber]{biblatex}
\addbibresource{refs.bib}

\DeclareCiteCommand{\eventcite}{}{
  \event{\thefield{year}}{
    \printnames{labelname},\space\printfield{year}
  }
}{}{}

\begin{document}
\begin{chronology}[5]{1960}{2013}{\textwidth}
  \eventcite{Milnor:Morse}
\end{chronology}
\end{document}
```
The \eventcite command extracts the year from the bib entry and uses it as an argument for the \event command of chronology.sty. Now, you can add references to the timeline as simply as \eventcite{bibkey}. | 2014-09-02 09:03:52 |
https://dsp.stackexchange.com/questions/20111/introduction-to-ip-laplacian-of-gaussian | # Introduction to IP : Laplacian of Gaussian?
I am very new to the concepts of computer vision (and image processing), and I am trying to understand the algorithms used for edge detection. One thing I'm currently struggling to figure out is the Laplacian of Gaussian.
In this case, how do you determine the values of x and y? I thought the centre of the kernel was the origin $((x,y) = (0,0))$, but it doesn't seem to give the right numbers.
Also, if there are any resources that you would recommend to go through, I would really appreciate them.
You are right about the $x$ and $y$ values. The center of the matrix is $(0,0)$ and the corner points are $(\pm 4,\pm 4)$. But they obviously wanted integer values in the matrix, so they simply scaled the LoG function by a factor of $482.75$ (just to get a decent range). Evaluating the function with this scale factor gives you (for the lower right quarter of the total matrix):
LoG = (matrix of scaled values not reproduced here)
Rounding should give you the final result. However, if you look closely, you'll see that it doesn't (at least not for all $(x,y)$ values). If you check the matrix on this page, you'll see that for the same $\sigma$ it is also different. So people make mistakes, and they just copy from each other. | 2019-10-16 23:23:35 |
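To see where such integer matrices come from, here is a sketch that samples, scales and rounds the LoG. The value σ = 1.4 is my assumption (it is the σ for which the factor 482.75 scales the centre sample at $(x,y)=(0,0)$ to magnitude 40); sign conventions for the kernel vary between references.

```python
import math

def log_value(x, y, sigma=1.4):
    """Laplacian of Gaussian:
    -1/(pi*sigma^4) * (1 - r^2/(2 sigma^2)) * exp(-r^2/(2 sigma^2))."""
    r2 = x * x + y * y
    return (-1.0 / (math.pi * sigma ** 4)
            * (1 - r2 / (2 * sigma ** 2))
            * math.exp(-r2 / (2 * sigma ** 2)))

def log_kernel(size=9, sigma=1.4, scale=482.75):
    """Sample the LoG at integer offsets from the kernel centre, so the centre
    is (x, y) = (0, 0) and the corners are (+/-4, +/-4), then scale and round."""
    half = size // 2
    return [[round(scale * log_value(x, y, sigma)) for x in range(-half, half + 1)]
            for y in range(-half, half + 1)]

k = log_kernel()
print(k[4][4])  # centre value: -40 with these parameters
print(k[0][0])  # corner (-4, -4): rounds to 0
```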
https://www.physicsforums.com/threads/e-x-integration.405288/ | # E^x integration
1. May 24, 2010
### frenzal_dude
1. The problem statement, all variables and given/known data
Hi, I need to find the spectrum of the following function:
$$i=I_0[e^{\frac{-0.01(cos(2\pi 1000t)+cos(2\pi 100000t))}{0.026}}-1]$$
2. Relevant equations
the Fourier Transform would be:
$$\int_{-\infty }^{\infty }I_0[e^{\frac{-0.01(cos(2\pi 1000t)+cos(2\pi 100000t))}{0.026}}-1]e^{-j2\pi ft}dt$$
3. The attempt at a solution
I'm not sure where to start, because I'm not sure how to take the integral of an exponential when there is a trig term in the exponent. Is this integral even possible, or would it diverge to infinity?
Hope you guys can help,
frenzal
2. May 28, 2010
### frenzal_dude
I think I worked it out! I need to express the exp(x) function as a Taylor series; since the exponent is small, the terms approach 0 as n gets larger (beyond, say, n = 3 they are negligible). So you can truncate the series at n = 3, and then the integration should be ok. | 2017-08-23 21:53:05 |
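That reasoning checks out numerically; here is a quick sanity check in pure Python (the sampling grid is my own choice). The exponent has magnitude at most 2 × 0.01/0.026 ≈ 0.77, so the Taylor remainder after the n = 3 term stays small:

```python
import math

a = 0.01 / 0.026  # ~0.385, the scale of the exponent in the expression above

def exact(x):
    return math.exp(x) - 1

def taylor3(x):
    # exp(x) - 1 truncated after n = 3: x + x^2/2! + x^3/3!
    return x + x ** 2 / 2 + x ** 3 / 6

worst = 0.0
for i in range(10001):
    t = i * 1e-7  # 0 .. 1 ms, fine enough to resolve the 100 kHz tone
    x = -a * (math.cos(2 * math.pi * 1000 * t) + math.cos(2 * math.pi * 100000 * t))
    worst = max(worst, abs(exact(x) - taylor3(x)))

print(worst < 0.02)  # True: the cubic truncation is already quite accurate
```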
http://gmatclub.com/forum/feedback-about-sub-forums-yay-or-nay-58967.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 30 Jul 2015, 10:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Feedback about sub-forums - yay or nay
CEO
Joined: 15 Aug 2003
Posts: 3467
Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:23
The sub-forums are an attempt to consolidate information and make it easy (I hope) for members to participate in other forums.
Of course, we serve all of you and if you do not like this, we can always go back.
We hang together. We fight together. We win together. OK OK.. you get the point.
thanks all.
Director
Joined: 18 Dec 2007
Posts: 983
Location: Hong Kong
Concentration: Entrepreneurship, Technology
Schools: Hong Kong University of Science and Technology (HKUST) - Class of 2010
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:26
Nay - the international applications part is easily lost; hence the HEC/INSEAD post ending up in the US part, which is not clearly indicated.
Director
Joined: 02 Jan 2008
Posts: 597
Location: Detroit, MI
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:31
I like the Knowledge Vault.
However, I agree with Togafoot, might as well merge the international forum into this one. There is a lot of discussion here, and I think the international forum may suffer from a lack of viewership. If it was merged here, some of those threads may get a little more attention...
~Sam
CEO
Joined: 15 Aug 2003
Posts: 3467
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:38
togafoot wrote:
nay, the international applications part is easily lost. - hence the HEC/INSEAD post ending up in the US part which is not clearly indicated.
So have the international forum as a separate forum - just like it was earlier?
A little chaos is expected - no problem with posts that end up in the wrong place; we can always move them around. We are obviously building this to make GMAT Club more useful to future applicants.
Director
Joined: 18 Dec 2007
Posts: 983
Location: Hong Kong
Concentration: Entrepreneurship, Technology
Schools: Hong Kong University of Science and Technology (HKUST) - Class of 2010
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:47
Separate, because there are generally more people interested in the US schools, so the international schools would get drowned out here (except maybe INSEAD and LBS).
CEO
Joined: 15 Aug 2003
Posts: 3467
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 12:48
togafoot wrote:
separate because there is generally more people interested in the US schools so the international schools will get drowned out here (except maybe INSEAD and LBS).
It's done.
CEO
Joined: 15 Aug 2003
Posts: 3467
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 14:23
bump.
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 15:36
I like how you moved the member profile, essay vault, and everything to the knowledge vault. That makes perfect sense and keeps everything neatly in one place.
I also vote for keeping Int'l separate to get more attention, as long as it's at the same level as the US B-school application forum.
I moved the HEC thing to the international forum.
_________________
****************************
GMAT Club Knowledge Vault:
http://gmatclub.com/forum/123
http://gmatclub.com/forum/128-t62555
Kryzak's Profile:
http://gmatclub.com/forum/111-t56286
Member Essays:
http://gmatclub.com/forum/103-t50969
Senior Manager
Joined: 24 Jul 2007
Posts: 290
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 15:47
Don't know if it's possible, but I would have loved it if the b-school admissions forum were organized as follows:
School name -> year -> round -> posts
Of course, the ability to create the top three levels should lie with the admins. And we can have moderators for each such school. Since a moderator typically hasn't applied to all the schools, it's pointless for a moderator to trawl all the forums to see what's out there. School-specific moderators would do a more focused job, and the responsibility would be delegated to more people - reducing load on existing moderators and improving interaction and the sense of belonging.
Organizing the forums this way would allow all posts for a given school to consolidate automatically over the years, and at the same time preserve the basic character of the forum.
CEO
Joined: 15 Aug 2003
Posts: 3467
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 16:08
Excellent points - thank you for the idea.
You can always bookmark a thread, watch a thread, or simply subscribe to a forum (all three features are available).
As far as community is concerned, there is a trade-off between how granular you want to go in terms of the number of forums and the participation you are going to get. Too many forums (which is what will happen if we classify by schools) will dilute the participation and the idea of community - members helping each other. This is not to discourage you or to reject your idea - this is a delicate balance. I may not be interested in applying to a school, but if I have lived in the area, I may still be able to help.
Obviously GMAT Club is always in flux - users demand features and we are more than happy to comply. The use of a forum as an archive has its limitations. If there are 100 posts in one thread, not every post has useful information. The wiki is something that can be a good way to archive information. It is definitely a launching pad for some of the ideas that you are suggesting. Eventually, we would like to do for b-schools what mahalo.com is attempting to do for search in general.
On a related note, you can review our MBA program database here:
http://gmatclub.com/wiki/Category:Schools
This database makes me feel a little better about the purpose of GMAT Club - my fear was that we would become a clique for applicants to the more popular schools.
Hope this helps.
pandeyrav wrote:
dont know if its possible but i would have loved if the b-school admissions forum was organized as follows:
School name -> year -> round -> posts
of course the ability to create the top three levels should lie with the admins. And we can have moderators for each such schools. since a moderator typically hasnt applied to all the schools, its pointless for a moderator to troll all forums to see whats out there. school specific moderators would do a more focussed job and the responsibility would be delegated to more people, reducing load on existing moderators, improving interaction and sense of belonging.
organizing the forums this way will allow all posts for a given school to automatically consolidate over years and at the same time preserve the basic character of the forum.
Senior Manager
Joined: 24 Jul 2007
Posts: 290
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 16:57
Prae, I can see your point.
GMAT Club Legend
Status: Um... what do you want to know?
Joined: 03 Jun 2007
Posts: 5464
Location: SF, CA, USA
Schools: UC Berkeley Haas School of Business MBA 2010
WE 1: Social Gaming
Re: Feedback about sub-forums - yay or nay [#permalink] 24 Jan 2008, 17:49
Pan, regarding having specific mods for specific schools, I don't think that is as necessary at the moment, since many of the mods already read ALL the threads (I'm close to one of those) and there are a LOT of us around. Of course, certain mods will take responsibility themselves for the schools they're applying to, while at other times members will do it themselves or ask mods to help them out.
It definitely builds community, and so far I don't think anything has really "slipped through the cracks", so to speak. But your ideas are definitely well thought out, and we thank you for that!
| 2015-07-30 18:28:55 |
http://taggedwiki.zubiaga.org/new_content/4966c766ae578ea66b93073a199b836f | Summation
Summation is the addition of a set of numbers; the result is their sum or total. An interim or present total of a summation process is termed the running total. The "numbers" to be summed may be natural numbers, complex numbers, matrices, or still more complicated objects. Summing an infinite sequence is a more subtle procedure, known as a series. Note that the term summation has a special meaning in the context of divergent series, related to extrapolation.
Notation
The summation of 1, 2, and 4 is 1 + 2 + 4 = 7. The sum is 7. Since addition is associative, it does not matter whether we interpret "1 + 2 + 4" as (1 + 2) + 4 or as 1 + (2 + 4); the result is the same, so parentheses are usually omitted in a sum. Finite addition is also commutative, so the order in which the numbers are written does not affect its sum. (For issues with infinite summation, see absolute convergence.)
If a sum has too many terms to be written out individually, the sum may be written with an ellipsis to mark out the missing terms. Thus, the sum of all the natural numbers from 1 to 100 is 1 + 2 + … + 99 + 100 = 5050.
Capital-sigma notation
Mathematical notation has a special representation for compactly representing summation of many similar terms: the summation symbol ∑ (U+2211), a large upright capital Sigma. This is defined thus:
$\sum_{i=m}^n x_i = x_m + x_{m+1} + x_{m+2} +\dots+ x_{n-1} + x_n.$
The subscript gives the symbol for an index variable, i. Here, i represents the index of summation; m is the lower bound of summation, and n is the upper bound of summation. Here i = m under the summation symbol means that the index i starts out equal to m. Successive values of i are found by adding 1 to the previous value of i, stopping when i = n. We could as well have used k instead of i, as in
$\sum_{k=2}^6 k^2 = 2^2+3^2+4^2+5^2+6^2 = 90$.
Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in
$\sum x_i^2$
which is informally equivalent to
$\sum_{i=1}^n x_i^2$.
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
$\sum_{0\le k< 100} f(k)$
is the sum of f(k) over all (integer) k in the specified range,
$\sum_{x\in S} f(x)$
is the sum of f(x) over all elements x in the set S, and
$\sum_{d|n}\;\mu(d)$
is the sum of μ(d) over all integers d dividing n.
(Remark: Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (i through q) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see x instead of k in the above formulae involving k. See also typographical conventions in mathematical formulae.)
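The generalized index sets above map directly onto Python generator expressions. In the following sketch, the summand k², the set {1, 2, 4} and the choice n = 12 are my own example values, not from the text:

```python
def divisors(n):
    """The positive divisors d of n, i.e. all d with d | n."""
    return [d for d in range(1, n + 1) if n % d == 0]

range_sum = sum(k * k for k in range(0, 100))  # sum over 0 <= k < 100
set_sum = sum(x for x in {1, 2, 4})            # sum over the elements of a set
divisor_sum = sum(divisors(12))                # sum over d | 12: 1+2+3+4+6+12

print(range_sum, set_sum, divisor_sum)  # 328350 7 28
```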
There are also ways to generalize the use of many sigma signs. For example,
$\sum_{\ell,\ell'}$
is the same as
$\sum_\ell\sum_{\ell'}.$
A similar notation is applied when it comes to finding multiplicative products; the same basic structure is used, with ∏, or the capital pi, replacing the ∑.
Programming language notation
Summations can also be represented in a programming language. Some languages use a notation for summation similar to the mathematical one. For example, this is Python:
sum(x[m:n+1])
and this is Fortran (or Matlab):
sum(x(m:n))
and this is J:
+/x
and this is Haskell:
foldr (+) 0 x
and this is Scheme:
(apply + x)
In other languages loops are used, as in the following Visual Basic/VBScript program:
Sum = 0
For I = M To N
Sum = Sum + X(I)
Next I
or the following C/C++/C#/Java code, which assumes that the variables m and n are defined as integer types no wider than int, such that m ≤ n, and that the variable x is defined as an array of values of integer type no wider than int, containing at least n − m + 1 defined elements:
int i;
int sum = 0;
for (i = m; i <= n; i++)
{ sum += x[i]; }
In some cases a loop can be written more concisely, as in this Perl code:
$sum = 0;
$sum += $x[$_] for ($m..$n);
or these alternative Ruby expressions:
x[m..n].inject{|a,b| a+b}
x[m..n].inject(0){|a,b| a+b}
or in C++, using its standard library:
std::accumulate(&x[m], &x[n + 1], 0)
when x is a built-in array or a std::vector.
Note that most of these examples begin by initializing the sum variable to 0, the identity element for addition. (See "special cases" below).
Also note that the traditional ∑ notation allows for the upper bound to be less than the lower bound. In this case, the index variable is initialized with the upper bound instead of the lower bound, and it is decremented instead of incremented. Since addition is commutative, this might also be accomplished by swapping the upper and lower bound and incrementing in a positive direction as usual.
Also note that the ∑ notation evaluates to a definite value, while most of the loop constructs used above are only valid in an imperative programming language's statement context, requiring the use of an extra variable to hold the final value. It is the variable which would then be used in a larger expression.
The exact meaning of ∑, and therefore its translation into a programming language, changes depending on the data type of the subscript and upper bound. In other words, ∑ is an overloaded symbol.
In the above examples, the subscript of ∑ was translated into an assignment statement to an index variable at the beginning of a for loop. But the subscript is not always an assignment statement. Sometimes the subscript sets up the iterator for a foreach loop, and sometimes the subscript is itself an array, with no index variable or iterator provided. Other times, the subscript is merely a Boolean expression that contains an embedded variable, implying to a human, but not to a computer, that every value of the variable for which the Boolean expression evaluates to true should be used.
In the example below:
$\sum_{x\in S} f(x)$
x is an iterator, which implies a foreach loop, but S is a set, which is an array-like data structure that can store values of mixed type. The summation routine for a set would have to account for the fact that it is possible to store non-numerical data in a set.
The return value of ∑ is a scalar in all examples given above.
Special cases
It is possible to sum fewer than 2 numbers:
• If one sums the single term x, then the sum is x.
• If one sums zero terms, then the sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if m = n in the definition above, then there is only one term in the sum; if m > n, then there is none.
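These conventions are built into Python's `sum`; a short check (the sample lists are mine, not the article's):

```python
xs = [5]
assert sum(xs) == 5        # a single term sums to itself
assert sum([]) == 0        # the empty sum is 0, the identity for addition

x = [1, 2, 3, 4]
m, n = 3, 1                # m > n: the index range is empty, so the sum is 0
assert sum(x[m:n + 1]) == 0
```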
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:
increasing function f:
$\int_{s=a-1}^{b} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a}^{b+1} f(s)\ ds.$
decreasing function f:
$\int_{s=a}^{b+1} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a-1}^{b} f(s)\ ds.$
For more general approximations, see the Euler–Maclaurin formula.
For functions that are integrable on the interval [a, b], the Riemann sum can be used as an approximation of the definite integral. For example, the following formula is the left Riemann sum with equal partitioning of the interval:
$\frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}n\right) \approx \int_a^b f(x)\ dx.$
The accuracy of such an approximation increases with the number n of subintervals, such that:
$\lim_{n\rightarrow \infty} \frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}n\right) = \int_a^b f(x)\ dx.$
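To make the approximation concrete, here is a small Python illustration (the integrand x² on [0, 1] is an arbitrary choice of mine, with known integral 1/3):

```python
def left_riemann(f, a, b, n):
    """Left Riemann sum with n equal subintervals of [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

f = lambda x: x * x              # integral of x^2 over [0, 1] is 1/3
for n in (10, 100, 1000):
    print(n, left_riemann(f, 0.0, 1.0, n))

# the approximation improves as the number of subintervals grows:
assert abs(left_riemann(f, 0.0, 1.0, 10_000) - 1 / 3) < 1e-4
```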
Identities
The following are useful identities:
$\sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n)$, where 'C' is a constant that distributes over the sum. (See Scalar multiplication)
$\sum_{i=s}^n f(C) = (n-s+1)f(C)$, where 'C' is a constant.
$\sum_{n=s}^t f(n) + \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) + g(n)\right]$
$\sum_{n=s}^t f(n) = \sum_{n=s+p}^{t+p} f(n-p)$
$\sum_{n=s}^j f(n) + \sum_{n=j+1}^t f(n) = \sum_{n=s}^t f(n)$
$\sum_{i=m}^n x = (n-m+1)x$
$\sum_{i=1}^n x = nx$, definition of multiplication where n is an integer multiplier to x
$\sum_{i=m}^n i = \frac{(n-m+1)(n+m)}{2}$ (see arithmetic series)
$\sum_{i=0}^n i = \sum_{i=1}^n i = \frac{n(n+1)}{2}$ (Special case of the arithmetic series)
$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}$
$\sum_{i=1}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} = \left[\sum_{i=1}^n i\right]^2$
$\sum_{i=1}^n i^4 = \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30} = \frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30}$
$\sum_{i=0}^n i^p = \frac{(n+1)^{p+1}}{p+1} + \sum_{k=1}^p\frac{B_k}{p-k+1}{p\choose k}(n+1)^{p-k+1}$
where Bk is the kth Bernoulli number.
$\sum_{i=m}^n x^i = \frac{x^{n+1}-x^m}{x-1}$ (see geometric series)
$\sum_{i=0}^n x^i = \frac{1-x^{n+1}}{1-x}$ (special case of the above where m = 0)
$\sum_{i=0}^n i 2^i = 2+2^{n+1}(n-1)$
$\sum_{i=0}^n \frac{i}{2^i} = \frac{2^{n+1}-n-2}{2^n}$
$\sum_{i=0}^n i x^i = \frac{x}{(1-x)^2} (x^n(n(x-1)-1)+1)$
$\sum_{i=0}^n i^2 x^i = \frac{x}{(1-x)^3} (1+x-(n+1)^2x^n+(2n^2+2n-1)x^{n+1}-n^2x^{n+2})$
$\sum_{i=0}^n {n \choose i} = 2^n$ (see binomial coefficient)
$\sum_{i=0}^{n-1} {i \choose k} = {n \choose k+1}$
$\left(\sum_i a_i\right)\left(\sum_j b_j\right) = \sum_i\sum_j a_ib_j$
$\left(\sum_i a_i\right)^2 = 2\sum_i\sum_{j<i} a_i a_j + \sum_i a_i^2$
$\sum_{n=a}^b f(n) = \sum_{n=b}^{a} f(n)$
$\sum_{n=s}^t f(n) = \sum_{n=-t}^{-s} f(-n)$
$\sum_{n=0}^t f(2n) + \sum_{n=0}^t f(2n+1) = \sum_{n=0}^{2t+1} f(n)$
$\sum_{n=0}^t \sum_{i=0}^{z-1} f(z\cdot n+i) = \sum_{n=0}^{z\cdot t+z-1} f(n)$
$\widehat{b}^{\left[\sum_{n=s}^t f(n) \right]} = \prod_{n=s}^t \widehat{b}^{f(n)}$ (See Product of a series)
$\sum_{n=s}^t \ln f(n) = \ln \prod_{n=s}^t f(n)$
$\lim_{t\rightarrow \infty} \sum_{n=a}^t f(n) = \sum_{n=a}^\infty f(n)$ (See Infinite limits)
$(a + b)^n = \sum_{i=0}^n {n \choose i}a^{(n-i)} b^i$, for binomial expansion
$\sum_{n=b+1}^{\infty} \frac{b}{n^2 - b^2} = \sum_{n=1}^{2b} \frac{1}{2n}$
$\left(\sum_{i=1}^{n} f_i(x)\right)^\prime = \sum_{i=1}^{n} f_i^\prime(x)$
$\lim_{n\to\infty}\sum_{i=0}^n f\left(a + \frac{b-a}{n}i\right)\cdot\frac{b-a}{n} = \int_a^b f(x) dx$
$2^{x-1} + \sum_{n=0}^{x-2} 2^n = x + \sum_{n=0}^{x-2} (2^n \cdot (x - 1 - n))$
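A few of the identities above are easy to spot-check numerically; the following Python snippet (with arbitrarily chosen n and x, purely illustrative) verifies the arithmetic-series, power-sum, geometric-series, and binomial-row identities:

```python
from math import comb

n = 10
ints = range(1, n + 1)
assert sum(ints) == n * (n + 1) // 2                        # arithmetic series
assert sum(i**2 for i in ints) == n * (n + 1) * (2*n + 1) // 6
assert sum(i**3 for i in ints) == (n * (n + 1) // 2) ** 2   # square of the arithmetic series

x = 3
assert sum(x**i for i in range(n + 1)) == (x**(n + 1) - 1) // (x - 1)  # geometric series
assert sum(i * 2**i for i in range(n + 1)) == 2 + 2**(n + 1) * (n - 1)

assert sum(comb(n, i) for i in range(n + 1)) == 2**n        # row sum of binomial coefficients
```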
Growth rates
The following are useful approximations (using theta notation):
$\sum_{i=1}^n i^c = \Theta(n^{c+1})$ for real c greater than −1
$\sum_{i=1}^n \frac{1}{i} = \Theta(\log n)$
$\sum_{i=1}^n c^i = \Theta(c^n)$ for real c greater than 1
$\sum_{i=1}^n \log(i)^c = \Theta(n \cdot \log(n)^{c})$ for nonnegative real c
$\sum_{i=1}^n \log(i)^c \cdot i^d = \Theta(n^{d+1} \cdot \log(n)^{c})$ for nonnegative real c, d
$\sum_{i=1}^n \log(i)^c \cdot i^d \cdot b^i = \Theta (n^d \cdot \log(n)^c \cdot b^n)$ for nonnegative real b > 1, c, d | 2020-02-27 17:07:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 55, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9402179718017578, "perplexity": 787.2512486611625}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00270.warc.gz"} |
https://docs.cardano.org/projects/adrestia/en/latest/key-concepts/utxo.html | # UTxO
In a UTxO-based blockchain, a Transaction is a binding between inputs and outputs.
input #1 >---* *---> output #1
\ /
input #2 >---*--------*
/ \
input #3 >---* *---> output #2
In a standard payment, outputs are a combination of:
• A value
• A reference (a.k.a address, a “proof” of ownership telling who owns the output).
input #1 >---* *---> (123, DdzFFzCqr...)
\ /
input #2 >---*--------*
/ \
input #3 >---* *---> (456, hswdEoQCp...)
Addresses are represented as encoded text strings. An address has a structure and a binary representation defined by the underlying blockchain. Yet, since they are often used in user-facing interfaces, addresses are usually encoded in a human-friendly format to facilitate sharing between users.
An address does not uniquely identify an output. Multiple transactions could send funds to the same output address, for example. It is possible, however, to uniquely identify an output by:
• Its host transaction id
• Its index within that transaction
This combination is also called an input. In other words, inputs are outputs of previous transactions.
*---------------- tx#42 ----------------------*
| |
(tx#14, ix#2) >-----------------* *--> (123, DdzFFqr...)--- (tx#42, ix#0)
| \ / |
(tx#41, ix#0) >-----------------*-----* |
| / \ |
(tx#04, ix#0) >----------------* *--> (456, hswdQCp...)--- (tx#42, ix#1)
| |
*---------------------------------------------*
Therefore, new transactions spend outputs of previous transactions, and produce new outputs that can be consumed by future transactions. An unspent transaction output (i.e., not used as an input of any transaction) is called a UTxO (Unspent Tx Output). UTxO represents an amount of money owned by a participant.
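The UTxO bookkeeping described above can be sketched in a few lines of Python (the types and the toy transactions below are illustrative only, not Cardano's actual data model):

```python
from collections import namedtuple

# An output pairs a value with an owning address; an input uniquely
# identifies an output by (host transaction id, index within that tx).
Output = namedtuple("Output", ["value", "address"])
Input = namedtuple("Input", ["tx_id", "index"])
Tx = namedtuple("Tx", ["tx_id", "inputs", "outputs"])

def utxo_set(txs):
    """UTxOs = all produced outputs minus those spent as inputs."""
    produced = {Input(t.tx_id, ix): out
                for t in txs for ix, out in enumerate(t.outputs)}
    spent = {i for t in txs for i in t.inputs}
    return {ref: out for ref, out in produced.items() if ref not in spent}

genesis = Tx("tx#00", [], [Output(579, "DdzFFzCqr...")])
tx42 = Tx("tx#42", [Input("tx#00", 0)],
          [Output(123, "DdzFFzCqr..."), Output(456, "hswdEoQCp...")])

utxos = utxo_set([genesis, tx42])
print(utxos)   # only tx#42's two outputs remain unspent
assert Input("tx#00", 0) not in utxos
assert utxos[Input("tx#42", 1)].value == 456
```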
## FAQ
### Where does the money come from? How do I make the first transaction?
When bootstrapping a blockchain, some initial funds can be distributed among an initial set of stakeholders. This is usually the result of an Initial Coin Offering or an agreement between multiple parties. In practice, this means that the genesis block of a blockchain may already contain some UTxOs belonging to various stakeholders.
Core nodes running the protocol and producing blocks are allowed to insert in every block minted (‘mined’) a special transaction, called a coinbase transaction. This transaction has no inputs and follows specific rules determined by the protocol. It is used as an incentive to encourage participants to engage in the protocol.
### What is the difference between an address and a public key?
In a simple system that would only support payment transactions, public keys could be substituted for addresses. In practice, addresses are meant to hold extra pieces of information that are useful for other aspects of the protocol. For instance, in the Cardano-Shelley era, addresses may also contain:
• A network discriminant tag, to distinguish addresses between a testnet and the mainnet and avoid unfortunate mistakes.
• A stake reference to take part in delegation.
Addresses may also be used to trigger smart contracts, in which case, they’ll refer to a particular script rather than a public key.
In a nutshell, a public key is a piece of information that enables a stakeholder to prove ownership of a particular UTxO, whereas an address is a data-structure that contain various pieces of information. A reference to a public key, for example. | 2020-10-29 20:33:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4259708821773529, "perplexity": 4692.400108567077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00308.warc.gz"} |
https://motls.blogspot.com/2010/08/rss-amsu-july-2010-warmest-july-by-2-mk.html | Tuesday, August 03, 2010 ... //
RSS AMSU: July 2010 as warm as July 1998
The new RSS AMSU data reveal that the global temperature anomaly in July 2010 was +0.608 °C which is 0.002 °C warmer than those +0.606 °C experienced in July 1998. ;-)
Clearly, a statistical tie.
The first 7 months of 2010 were about 0.07 °C cooler than the first seven months of 1998 (RSS AMSU). Repeating the statistical exercises I did a month ago (with UAH AMSU), we conclude that given the knowledge of the first 7 months, the probability that 2010 will end up being the hottest year on the RSS AMSU satellite record is still close to 10 percent.
The UAH AMSU competitors say that the July 2010 anomaly was +0.49 °C which is 0.03 °C cooler than July 1998 when it was +0.52 °C.
snail feedback (4) :
Solar Magnetic Burst
I wonder if these might be part of longer-term solar cycles that may contribute to the relationship between solar and volcanic activity.
UAH has July significantly less warm:
(Dr. Spencer's site):
2010 7 0.489
Shaviv has a new post up
I wonder if thies might be part of a longer term solar cycles.............. | 2020-09-18 18:08:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18239328265190125, "perplexity": 4243.824805189384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00444.warc.gz"} |
http://math.stackexchange.com/questions/173674/minimization-of-function-with-large-dimensions | # Minimization of function with large dimensions
Let's say we have a smooth function $f:\mathbb{R}^{1000000} \rightarrow \mathbb{R}$ which we want to minimize using a method from numerical optimization. Which method would we choose? Is the conjugate gradient method the best choice? What methods are better than others for the minimization of high-dimensional problems?
Thank you very much for your time!
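(An illustrative sketch, not part of the original thread.) For the conjugate gradient method asked about above, in the special case where $f$ is a quadratic form $f(x) = \tfrac12 x^T A x - b^T x$ with $A$ symmetric positive definite, the basic algorithm needs only matrix–vector products with $A$, which is why it scales to sparse, high-dimensional problems:

```python
def conjugate_gradient(A_mul, b, x0, tol=1e-12, maxiter=1000):
    """Minimize 0.5 x^T A x - b^T x (i.e. solve A x = b) using only the
    matrix-vector product A_mul; A must be symmetric positive definite."""
    x, r = list(x0), [bi - ai for bi, ai in zip(b, A_mul(x0))]
    p, rs = list(r), sum(ri * ri for ri in r)
    for _ in range(maxiter):
        if rs < tol:
            break
        Ap = A_mul(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# toy example: very sparse (diagonal) A = diag(1, 2, 3), b = (1, 2, 3);
# the exact minimizer is (1, 1, 1)
A_mul = lambda v: [1 * v[0], 2 * v[1], 3 * v[2]]
x = conjugate_gradient(A_mul, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
assert all(abs(xi - 1.0) < 1e-6 for xi in x)
```

For general smooth $f$, nonlinear variants (Fletcher–Reeves, Polak–Ribière) replace the exact quadratic line search with one based on $f$ and its gradient.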
The answer is "it depends". The basic conjugate gradient algorithm only works when the function $f$ is a quadratic form, and it works best when the problem is sparse. Is that the case here? There are also nonlinear conjugate gradient methods that work with numerical derivatives, that may work well for you. Randomization algorithms like simulated annealing can also work well in high dimensions. Can you give any more information about the problem? – Chris Taylor Jul 21 '12 at 19:16
I would echo @ChrisTaylor's remarks. The form of $f$ matters too. Do you have an explicit derivative? Is $f$ separable in some way, is it convex, etc... – copper.hat Jul 21 '12 at 21:40
I don't have any details on the form of $f$. I was just wondering if there are some preferred algorithms when you have functions of high dimensions. It's not a specific question about something particular, I was just wondering. – Chris Jul 22 '12 at 15:02 | 2014-12-22 12:24:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7513707876205444, "perplexity": 298.8565176574522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775222.147/warc/CC-MAIN-20141217075255-00089-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://lists.torproject.org/pipermail/tor-commits/2015-June/091761.html | # [tor-commits] [tech-reports/master] Add a bit more content to the tech report.
karsten at torproject.org karsten at torproject.org
Wed Jun 17 18:48:07 UTC 2015
commit 3f30d1c88cf13794f49f374dfdd6284847ed7779
Author: George Kadianakis <desnacked at riseup.net>
Date: Fri Jan 9 13:24:44 2015 +0200
Add a bit more content to the tech report.
- Another reason to worry about statistics.
- A risk section
---
2015/hidden-service-stats/hidden-service-stats.tex | 39 +++++++++++++++-----
1 file changed, 29 insertions(+), 10 deletions(-)
diff --git a/2015/hidden-service-stats/hidden-service-stats.tex b/2015/hidden-service-stats/hidden-service-stats.tex
--- a/2015/hidden-service-stats/hidden-service-stats.tex
+++ b/2015/hidden-service-stats/hidden-service-stats.tex
@@ -302,6 +302,15 @@ to enumerate available services.
While hiding the existence of a service is not the primary purpose of
hidden services, it's a security feature we don't want to give up easily.
+\paragraph{Unknown future attacks}
+
+Special care needs to be taken when designing and collecting
+statistics because in anonymity the attacker landscape changes
+continuously and attacks that are currently ineffective might become
+powerful in the future. Alternatively, in the future attackers might
+be able to acquire auxiliary data that can combine with statistics in
+such ways that allow attacks that would not have been possible before.
+
\subsection{Other aspects of gathering statistics}
There are certain aspects of any given statistic that should be
@@ -491,6 +500,12 @@ See ticket 13466 for details.
%
We would learn what fraction of clients and what fraction of services run
older tor versions (0.2.3.x or older).
+\\
+\textbf{Risks:}
+%
+As tor-0.2.3.x gets less common and only a few hidden services still
+use it, an adversary would be able to track their introduction points
+by checking which relays still report TAP clients on their statistics.
\subsubsection{Time from circuit purpose change to tearing down circuit}
\label{subsubsec:time_circ_purpose_change_to_teardown}
@@ -551,7 +566,7 @@ This statistic can also be used to analyze what fraction of services is
available for a short time only, and what fraction is available most of
the time.
-\subsubsection{Number of descriptor publish request (3.1.1.)}
+\subsubsection{Number of hidden service descriptors seen by directory (3.1.1.)}
\label{subsubsec:num_descriptor_publish}
\textbf{Details:}
@@ -573,14 +588,6 @@ services (botnets, chat protocols, etc.).
Also, learning the number of hidden services per directory will help us
find bugs in the hash ring code and also understand how loaded directories
are.
-FWIW, when \verb+rend-spec-ng.txt+ gets implemented, it will be harder for
-hidden service directories to learn the number of served services since
-the descriptor will be encrypted.
-However, directories will still be able to approximate the number of
-services by checking the amount of descriptors received per publishing
-period.
-If this ever becomes a problem we can imagine publishing fake descriptors
-to confuse the directories.
\\
\textbf{Risks:}
%
@@ -602,6 +609,17 @@ are published during certain times of day and certain days of the week,
which could correlate with daylight hours and/or working days in certain
parts of the world. This information could also be correlated with
network outages over time to narrow down the location of hidden services.
+\\
+\textbf{Notes:}
+%
+When \verb+rend-spec-ng.txt+ gets implemented, it will be harder for
+hidden service directories to learn the number of served services
+since the descriptor will be encrypted.
+However, directories will still be able to approximate the number of
+services by checking the amount of descriptors received per publishing
+period.
+If this ever becomes a problem we can imagine publishing fake
+descriptors
\subsubsection{Number of descriptor updates per service (3.1.2.)}
@@ -1555,4 +1573,5 @@ an objective way, ideally using the stated evaluation criteria.
\end{itemize}
\bibliography{hidden-service-stats}
-\end{document}
\ No newline at end of file
+\end{document}
+ | 2020-10-25 11:40:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5396257042884827, "perplexity": 8557.915464821679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00162.warc.gz"} |
https://techutils.in/blog/2018/02/14/stackbounty-confidence-interval-generalized-linear-model-logit-prediction-interval-probit-derivation-of-confidence-and-predictio/ | # #StackBounty: #confidence-interval #generalized-linear-model #logit #prediction-interval #probit Derivation of confidence and predictio…
### Bounty: 100
The derivation of the prediction interval for the linear model is quite simple: Obtaining a formula for prediction limits in a linear model .
How to derive the confidence and prediction intervals for the fitted values of the logit and probit regressions (and GLMs in general)?
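For what it's worth, a sketch of the standard Wald/delta-method construction for the fitted values (illustrative code, not a derivation): the linear predictor $\hat\eta = x^\top\hat\beta$ is asymptotically normal with variance $x^\top V x$, where $V$ is the estimated covariance of $\hat\beta$, so one forms the interval on the link scale and maps it through the inverse link.

```python
import math

def logistic(eta):                      # inverse link of the logit model
    return 1.0 / (1.0 + math.exp(-eta))

def logit_fit_ci(x, beta, cov, z=1.96):
    """95% Wald CI for a logit fitted value: build the interval on the
    linear-predictor scale, then map its endpoints through the inverse link."""
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    var = sum(x[i] * cov[i][j] * x[j]
              for i in range(len(x)) for j in range(len(x)))
    se = math.sqrt(var)
    return logistic(eta), (logistic(eta - z * se), logistic(eta + z * se))

# illustrative numbers only (intercept-only fit with unit variance):
p_hat, (lo, hi) = logit_fit_ci([1.0], [0.0], [[1.0]])
assert abs(p_hat - 0.5) < 1e-12 and lo < p_hat < hi
print(lo, hi)   # interval endpoints stay inside (0, 1) by construction
```

For a binary response the "prediction interval" question is different in kind, since a single outcome only takes the values 0 and 1; the interval above is for the mean (the fitted probability).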
| 2018-08-18 00:55:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9470775127410889, "perplexity": 5082.567029006059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213247.0/warc/CC-MAIN-20180818001437-20180818021437-00473.warc.gz"}
https://faculty.math.illinois.edu/~jpascale/courses/2018/595/ | # MATH 595: Homological Mirror Symmetry (Spring 2018)
## Basic information
Course meets
MWF 12:00–12:50 p.m. in 243 Altgeld Hall
Instructor: James Pascaleff
Email: jpascale@illinois.edu
Office: 341B Illini Hall
Office hours: Tuesdays 11:00-12:00
## Texts
For the example of mirror symmetry for $$\mathbb{P}^2$$, see
## Schedule and notes
The material in these lectures is mainly drawn from the following sources:
• Lectures 2–4: Weibel, Introduction to Homological Algebra.
• Lectures 5–27: Seidel, Fukaya Categories and Picard Lefschetz theory.
• Lectures 28–35: Seidel, Abstract analogues of flux as symplectic invariants, Chapter 2.
• Lectures 36–42: Seidel, Homological mirror symmetry for the quartic surface.
• W Jan 17: 1. Introduction to HMS, statement for the quartic surface. (got through top of page 4.)
• F Jan 19: 2. Triangulated categories. (got through most of page 4.)
• M Jan 22: 3. Cochain complexes of modules.
• W Jan 24: 4. Derived categories of modules.
• F Jan 26: 5. Differential graded categories.
• M Jan 29: 6. $$A_\infty$$-algebras.
• W Jan 31: 7. $$A_\infty$$-category theory.
• F Feb 2: 8. Associativity and Riemann surfaces.
• M Feb 5: 9. The moduli space of stable pointed disks.
• W Feb 7: 10. Moduli spaces and operads.
• F Feb 9: 11. Closed and open TFT.
• M Feb 12: 12. TFT from symplectic manifolds.
• W Feb 14: 13. Almost complex structures and pseudo-holomorphic curves.
• F Feb 16: 14. The Lagrangian Floer TFT.
• M Feb 19: 15. The Lagrangian Floer TFT, II.
• W Feb 21: 16. Gromov compactification.
• F Feb 23: 17. Proving relations using compactified moduli spaces.
• M Feb 26: 18. Proving relations, II.
• W Feb 28: 19. Examples of Floer cohomology.
• F Mar 2: 20. Example: the mirror of $$\mathbb{P}^2$$.
• M Mar 5: 21. Fukaya's $$A_\infty$$-category.
• W Mar 7: 22. Lagrangian Grassmannian.
• F Mar 9: 23. Graded Lagrangian submanifolds.
• M Mar 12: 24. Indices of graded Lagrangian intersections.
• W Mar 14: 25. Index theory and dimensions of moduli spaces.
• F Mar 16: Finish previous lecture.
• M Mar 26: 26. Spin, Pin, and orientations of moduli spaces.
• W Mar 28: 27. Fukaya categories away from characteristic 2.
• F Mar 30: 28. Beginning the case of the two-torus.
• M Apr 2: 29. A quiver algebra from the two-torus.
• W Apr 4: 30. Deformation and classification of $$A_\infty$$ structures.
• F Apr 6: 31. $$A_\infty$$ structures on $$Q$$.
• M Apr 9: 32. Twisted complexes and triangulated $$A_\infty$$-categories.
• W Apr 11: 33. Some twisted complexes over $$Q_p$$.
• F Apr 13: 34. Twisted complexes on the two-torus.
• M Apr 16: 35. Conclusion of HMS for the two-torus.
• W Apr 18: 36. Beginning the quartic surface.
• F Apr 20: 37. $$A_\infty$$ structures on $$Q_4$$.
• M Apr 23: 38. One-parameter deformation theory.
• W Apr 25: 39. Results for the mirror of the quartic surface.
• F Apr 27: 40. Affine, relative, and projective Fukaya categories.
• M Apr 30: 41. Lagrangian spheres in the quartic surface.
• W May 2: 42. Results for the quartic surface.
## Course Description
Homological Mirror Symmetry (HMS) is the study of the relations between three types of mathematical objects: $\text{symplectic manifolds} \longleftrightarrow \text{triangulated categories} \longleftrightarrow \text{algebraic varieties}$ For a symplectic manifold $$X$$, there is a triangulated category $$\mathcal{F}(X)$$ called the Fukaya category, and for an algebraic variety $$Y$$ there is a triangulated category $$\mathcal{D}(Y)$$ called the derived category. We then pose the problem of finding pairs $$X$$ and $$Y$$ such that $\mathcal{F}(X) \cong \mathcal{D}(Y)$ The origin of this relation is in theoretical particle physics, where the two categories are interpreted as collections of D-branes, and the relation expresses the duality between A-twisted topological string theory on $$X$$ and B-twisted topological string theory on $$Y$$.
The investigation of this relation raises many questions. How are the two sides actually defined? How do we compute the two sides, and what should the "answer" of such a computation look like? What general structure is present that constrains the problem? The goal of this course is to set up the machinery and understand the solution in a specific case: when $$X$$ is a hypersurface in projective space, including the quintic threefold, following Seidel and Sheridan. Topics to include:
• Categories: triangulated, differential graded, $$A_\infty$$.
• Algebraic varieties, categories of coherent sheaves.
• Symplectic manifolds, Lagrangian Floer cohomology, Fukaya categories.
• Case of surfaces, HMS for the two-torus, other relatively simple models.
• Hypersurfaces in projective space.
## Prerequisites
In order to have a good chance at learning something in this class, you should have a solid background in two things:
1. Abstract algebra
Particularly commutative rings and modules over them.
2. Differential topology
Smooth manifolds, vector bundles, tensors and differential forms.
If you are not familiar with these topics then that is where you should start. There are many books on these topics and you should find one that you like. The next things would be:
3. Homological algebra
Some classic books are Methods of Homological Algebra by Gelfand and Manin and An Introduction to Homological Algebra by Rotman and a book of the same title by Weibel.
4. Symplectic geometry
See Lectures on Symplectic Geometry by Ana Cannas da Silva and Introduction to Symplectic Topology by McDuff and Salamon.
All of the books mentioned above except for McDuff-Salamon are available as e-books through the UIUC library.
The more background you have, the better, but 1 and 2 are the minimum. I will still give introductions to 3 and 4 in the course. The course on Symplectic Geometry taught concurrently by Prof. Tolman would be helpful, but is not required. | 2023-03-28 00:15:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6748680472373962, "perplexity": 1667.7616446065456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00159.warc.gz"} |
https://www.groundai.com/project/theory-of-the-room-temperature-qhe-in-graphene/ | On the Room-Temperature QHE in Graphene
# On the Room-Temperature QHE in Graphene
S. Fujita Department of Physics, University at Buffalo, State University of New York,
Buffalo, NY 14260-1500, USA
A. Suzuki Department of Physics, Faculty of Science, Tokyo University of Science,
Shinjyuku-ku, Tokyo 162-8601, Japan
###### Abstract
The unusual quantum Hall effect (QHE) in graphene is described in terms of the composite (c-) bosons, which move with a linear dispersion relation. The “electron” (wave packet) moves easier in the direction of the honeycomb lattice than perpendicular to it, while the “hole” moves easier in . Since “electrons” and “holes” move in different channels, the particle densities can be high, especially when the Fermi surface has “necks”. The strong QHE arises from the phonon-exchange attraction in the neighborhood of the “neck” surfaces. The plateau observed for the Hall conductivity and the accompanying resistivity drop is due to the superconducting energy gap caused by the Bose-Einstein condensation of the c-bosons, each formed from a pair of one-electron–two-fluxons c-fermions by the phonon-exchange attraction. The half-integer quantization rule for the Hall conductivity, $\sigma_{xy} = \frac{2P-1}{2}\,\frac{4e^2}{h}$, $P = 1, 2, \cdots$, is derived.
###### keywords:
Quantum Hall effect; composite boson (fermion); superconducting energy gap; phonon exchange attraction
journal: arXiv.com
## 1 Introduction
In 2005 Novoselov et al. [1] discovered a quantum Hall effect (QHE) in graphene, a single sheet of graphite. Figure 1 is reproduced after Ref. 1, Fig. 4.
The longitudinal magnetoresistivity and the Hall conductivity in graphene at T and K are plotted as a function of the conduction electron (“electron” or “hole”) density in the scale of cm. The plateau values of the Hall conductivity are quantized in the units of
$\frac{4e^2}{h}$ (1)
within experimental errors, where $h$ is the Planck constant and $e$ the electron charge (magnitude). The longitudinal resistivity reaches zero at the middle of the plateaus. These two are the major signatures of the QHE in graphene.
In 2007 Novoselov et al. [2] reported a discovery of a room-temperature QHE in graphene. We reproduced their data in Figure 2 after Ref. 2, Fig. 1.
The Hall resistivity for “electrons” and “holes” indicates precise quantization within experimental errors at magnetic field 29 T and temperature 300 K. This is an extraordinary jump in the observation temperatures, since the QHE in the heterojunction GaAs/AlGaAs was reported below 0.5 K. Figure 2 is similar to Figure 1 although the abscissas are different, one in gate voltage and the other in carrier density, and hence the physical conditions are different. We give an explanation later. Notice that the quantization in appears in units of , which is a little strange since the most visible quantization for GaAs/AlGaAs appears in units of . We will resolve this mystery in the present work.
From the QHE behaviors in Figures 1 and 2, we observe that the quantization in the Hall conductivity occurs at a set of half-integer points:
$\dfrac{2P-1}{2}\left(\dfrac{4e^2}{h}\right), \qquad P = 1, 2, \cdots.$ (2)
The original authors 1 (); 2 () interpreted their data in terms of Dirac fermions. A great number of experimental and theoretical papers followed. The present work deals specifically with the quantization rule in Eq. (2). We shall show this quantization rule based on the c-particles (fermions, bosons) PTEP () model in the present work. We will defer discussion of Dirac fermions and the related matter. The preliminary results were reported in the conference proceedings 3a ().
## 2 Electron Dynamics in Graphene
The normal carriers in solids are "electrons" ("holes"), which spiral around the applied magnetic field counterclockwise (clockwise) viewed from the tip of the field vector $\mathbf{B}$. The "electrons" ("holes") are excited above (below) the metal's Fermi energy. These quasiparticles are quotation-marked throughout the text. Following Ashcroft and Mermin [3] we regard the conduction electrons as wave packets.
We consider graphene, which forms a 2D honeycomb lattice. The Wigner-Seitz (WS) unit cell [4], the rhombus (shaded) shown in Figure 3 (a), contains two C's. We showed in our earlier work [5] that graphene has "electrons" and "holes" based on the rectangular unit cell (dotted lines) shown in Figure 3 (b). We briefly review our calculations. We must choose the rectangular unit cell to establish the Bloch plane waves [7] in 2D. For a 1D space, there always exists a 1D $k$-space. If one introduces non-orthogonal axes, then one cannot use Fourier transformation. This difficulty was discussed in our previous work [6]. To establish the electron dynamics we need the orthogonal rectangular unit cell shown in Fig. 3 (b).
We assume that the “electron” (“hole”) wave packet has the charge () and a size of the rectangular unit cell, generated above (below) the Fermi energy . We showed 5 () that (a) the “electron” and the “hole” have different charge distributions and different effective masses, (b) that the “electrons” and “holes” move in different easy channels, (c) that the “electrons” and “holes” are thermally excited with different activation energies, and (d) that the “electron” activation energy is smaller than the “hole” activation energy :
$\varepsilon_1 < \varepsilon_2.$ (3)
The thermally activated electron densities are then given by
$n_j(T) = n_j\, e^{-\varepsilon_j / k_B T}, \qquad n_j = \text{constant},$ (4)
where $j = 1$ and $2$ represent the "electron" and "hole", respectively. In view of Eqs. (3) and (4), $n_1(T) > n_2(T)$. Hence the "electrons" are the majority carriers in graphene. Magnetotransport experiments by Zhang et al. [9] confirm that the "electrons" are the majority carriers in graphene.
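As a quick numerical sanity check of Eq. (4) (our illustration, not part of the original text; the prefactor and activation energies are hypothetical values, not fitted graphene parameters), the smaller activation energy $\varepsilon_1 < \varepsilon_2$ makes the "electron" density exceed the "hole" density at any finite temperature:

```cpp
#include <cmath>

// Thermally activated carrier density, Eq. (4): n_j(T) = n_j exp(-eps_j / kB T).
// Arguments: prefactor n_j [cm^-2], activation energy eps_j [eV], temperature [K].
double activated_density(double n_prefactor, double eps_eV, double T_kelvin) {
    const double kB_eV = 8.617333262e-5;   // Boltzmann constant [eV/K]
    return n_prefactor * std::exp(-eps_eV / (kB_eV * T_kelvin));
}
```

For equal prefactors, any $\varepsilon_1 < \varepsilon_2$ gives $n_1(T) > n_2(T)$, i.e. an "electron" majority, as stated above.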
## 3 Fractional Quantum Hall Effect
The fractional QHE was discovered by Tsui, Stormer and Gossard in 1982 [Tsui]. In 1983 Laughlin proposed the revolutionary idea [17] that fractional charges are carried by the elementary excitations of the fractional QHE system. A great number of papers followed [18, 22, 16, 19, 15, 20]. Ezawa wrote books with extensive references for students and researchers [14]. The prevalent theories [17, 18, 22, 16, 19, 15, 20] based on the Laughlin wave function [17] in the Schrödinger picture deal with the QHE at 0 K and immediately above. The system ground state, however, cannot carry a current. To interpret the experimental data it is convenient to introduce composite (c-) particles (bosons, fermions). C-bosons (c-fermions), each containing an electron and an odd (even) number of magnetic flux quanta (fluxons), were introduced by Zhang et al. [22] and others (Jain [16]) for the description of the fractional QHE (Fermi liquid). The c-particles will be regarded as quasiparticles (elementary excitations) existing in the system. A classical electron spirals around the applied static magnetic field. The state has a lower energy relative to the original electron energy because the spiraling current (vortex) is diamagnetic. The field-dressed (flux-attached) electron moves straight. Jain [16] established a close connection between the integer and the fractional QHE by introducing c-fermions. His c-fermions are essentially the same as our c-fermions. The type of mechanics (classical or quantum) does not change the energy sign: a c-fermion is in a negative-energy (bound) state. Fujita and Okamura [21] discussed the formation of a bound c-fermion and its connection with Jain's c-fermion. Jain did not include the c-bosons in his book [Jain]. We view the c-bosons as equally important as the c-fermions. A c-boson is also in a bound state.
Besides, c-bosons can be Bose-Einstein (BE) condensed, which generates a stabilizing (superconducting) energy gap in the excitation spectrum. All QHE states with distinctive Hall plateaus in heterojunction GaAs/AlGaAs are observed below the critical temperature $T_c$. The QHE in graphene observed at 300 K is an exception. It is desirable to treat the QHE below and above $T_c$ in a unified manner. The extreme accuracy with which each Hall plateau is observed means that the current density must be computed exactly, without averaging. In the prevalent theories [17, 18, 22, 16, 19, 15, 20], the electron-electron interaction and Pauli's exclusion principle are regarded as the cause of the QHE. Both are essentially repulsive and cannot account for the fact that the c-particles are bound, that is, in negative-energy states. Besides, the prevalent theories have limitations:
• The zero temperature limit is taken at the outset. Then the question why QHE is observed below 0.5 K in GaAs/AlGaAs cannot be answered. We better have a theory for all temperatures.
• The high-field limit is taken at the outset. The integer QHE at filling factor $\nu$ (Landau-level occupation number) is observed for small integers $\nu$ only. The question why the QHE for high $\nu$ (weak field) is not observed cannot be answered. We better describe the phenomena for all fields.
• The Hall resistivity value is obtained in a single stroke. To obtain $\rho_H$ we need two separate measurements, of the Hall field $E_H$ and of the current density $j$. We must calculate both and take the ratio $E_H/j$ to obtain $\rho_H$.
Fujita and Okamura [21] developed a quantum statistical theory based on the phonon-exchange attraction and used Laughlin's results to describe the fractional QHE. In the present work we complete the description without using Laughlin's fractional-charge idea, under the assumption that any c-fermion has the charge magnitude $e$. See the paper by Fujita, Suzuki and Ho [FSH] for more detail. There is a remarkable similarity between the QHE and high-temperature superconductivity (HTSC), both occurring in 2D systems, as pointed out by Laughlin [17a]. We regard the phonon-exchange attraction as the cause of both the QHE and superconductivity. Starting with a reasonable Hamiltonian, we calculate everything using standard statistical mechanics.
The countability concept of the fluxons, known as the flux quantization:
$B = \dfrac{N_\phi}{A}\,\dfrac{h}{e} \equiv n_\phi \Phi_0, \qquad n_\phi \equiv \dfrac{N_\phi}{A},$ (5)
where $A$ = sample area, $N_\phi$ = fluxon number (integer), and $\Phi_0 = h/e$ = flux quantum, is originally due to Onsager [23]. The magnetic (electric) field is an axial (polar) vector and the associated fluxon (photon) is a half-spin fermion (full-spin boson). The magnetic (electric) flux line cannot (can) terminate at a sink, which supports the fermionic (bosonic) nature of the associated fluxon (photon). No half-spin fermion can annihilate itself because of angular momentum conservation. The electron spin originates in the relativistic electron equation (Dirac's theory of the electron) [24]. The discrete (two) quantum numbers cannot change in the continuous limit, and hence the spin must be conserved. The countability and statistics of the fluxon are fundamental particle properties. We postulate that the fluxon is a half-spin fermion with zero mass and zero charge.
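As an illustrative computation (ours, not in the original text), Eq. (5) gives the fluxon density directly from the field, using CODATA values of $h$ and $e$; the field value used in the check below is arbitrary:

```cpp
#include <cmath>

// Fluxon density n_phi = B / Phi_0 from Eq. (5), with Phi_0 = h/e.
double fluxon_density(double B_tesla) {
    const double h = 6.62607015e-34;      // Planck constant [J s]
    const double e = 1.602176634e-19;     // elementary charge [C]
    const double Phi0 = h / e;            // flux quantum [Wb]
    return B_tesla / Phi0;                // fluxons per m^2
}
```

For example, $B = 10$ T corresponds to roughly $2.4 \times 10^{15}$ fluxons per m$^2$.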
We assume that the magnetic field is applied perpendicular to the 2D plane. The 2D Landau level energy,
$\varepsilon = \hbar\omega_c\left(N_L + \tfrac{1}{2}\right), \qquad \omega_c \equiv eB/m^*, \qquad N_L = 0, 1, 2, \cdots,$ (6)
where the states have a great degeneracy; $m^*$ is the effective mass of an "electron" and $\omega_c$ the cyclotron frequency. The Center-of-Mass (CM) of any c-particle moves as a fermion (boson): the occupation numbers at any CM momentum are limited to 0 or 1 (unlimited) if the composite contains an odd (even) number of elementary fermions. This rule is known as the Ehrenfest-Oppenheimer-Bethe (EOB) rule [24, 25, 26]. Hence the CM motion of the composite containing an electron and $Q$ fluxons is bosonic (fermionic) if $Q$ is odd (even). The system of c-bosons condenses below the critical temperature and exhibits a superconducting state, while the system of c-fermions shows Fermi-liquid behavior.
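Eq. (6) can likewise be evaluated numerically (our sketch, not in the original text; the field and the free-electron effective mass used in the check are assumption values, not graphene parameters):

```cpp
#include <cmath>

// Landau-level energies of Eq. (6): eps = hbar * omega_c * (N_L + 1/2),
// with omega_c = e B / m*. Returns the energy in joules.
double landau_energy(int NL, double B_tesla, double m_eff_kg) {
    const double hbar = 1.054571817e-34;            // [J s]
    const double e = 1.602176634e-19;               // [C]
    const double omega_c = e * B_tesla / m_eff_kg;  // cyclotron frequency [rad/s]
    return hbar * omega_c * (NL + 0.5);             // [J]
}
```

The ratio of successive levels, $(N_L + \tfrac{3}{2})/(N_L + \tfrac{1}{2})$, is independent of $B$ and $m^*$.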
A longitudinal phonon, acoustic or optical, generates a density wave, which affects the electron (fluxon) motion through the charge displacement (current). The exchange of a phonon between an electron and a fluxon generates an attractive transition.
Bardeen, Cooper and Schrieffer (BCS) 28 () assumed the existence of Cooper pairs 29 () in a superconductor, and wrote down a Hamiltonian containing the “electron” and “hole” kinetic energies and the pairing interaction Hamiltonian with the phonon variables eliminated. We start with a BCS-like Hamiltonian for the QHE: 21 ()
$H = \sum_{\mathbf{k}}{}'\sum_s \varepsilon^{(1)}_k n^{(1)}_{\mathbf{k}s} + \sum_{\mathbf{k}}{}'\sum_s \varepsilon^{(2)}_k n^{(2)}_{\mathbf{k}s} + \sum_{\mathbf{k}}{}'\sum_s \varepsilon^{(3)}_k n^{(3)}_{\mathbf{k}s} - \sum_{\mathbf{q}}{}'\sum_{\mathbf{k}}{}'\sum_{\mathbf{k}'}{}'\sum_s v_0 \big[B^{(1)\dagger}_{\mathbf{k}'\mathbf{q}s}B^{(1)}_{\mathbf{k}\mathbf{q}s} + B^{(1)\dagger}_{\mathbf{k}'\mathbf{q}s}B^{(2)\dagger}_{\mathbf{k}\mathbf{q}s} + B^{(2)}_{\mathbf{k}'\mathbf{q}s}B^{(1)}_{\mathbf{k}\mathbf{q}s} + B^{(2)}_{\mathbf{k}'\mathbf{q}s}B^{(2)\dagger}_{\mathbf{k}\mathbf{q}s}\big],$ (7)
where $n^{(j)}_{\mathbf{k}s}$ is the number operator for the "electron" ($j=1$) ["hole" ($j=2$), fluxon ($j=3$)] at momentum $\mathbf{k}$ and spin $s$ with the energy $\varepsilon^{(j)}_k$, with annihilation (creation) operators $c$ ($c^\dagger$) satisfying the Fermi anticommutation rules:
$\{c^{(i)}_{\mathbf{k}s}, c^{(j)\dagger}_{\mathbf{k}'s'}\} \equiv c^{(i)}_{\mathbf{k}s} c^{(j)\dagger}_{\mathbf{k}'s'} + c^{(j)\dagger}_{\mathbf{k}'s'} c^{(i)}_{\mathbf{k}s} = \delta_{\mathbf{k},\mathbf{k}'}\,\delta_{s,s'}\,\delta_{i,j}, \qquad \{c^{(i)}_{\mathbf{k}s}, c^{(j)}_{\mathbf{k}'s'}\} = 0.$ (8)
The fluxon number operator is represented by $n^{(3)}_{\mathbf{k}s} \equiv a^{\dagger}_{\mathbf{k}s} a_{\mathbf{k}s}$, with $a_{\mathbf{k}s}$ ($a^{\dagger}_{\mathbf{k}s}$) satisfying the anticommutation rules:
$\{a_{\mathbf{k}s}, a^{\dagger}_{\mathbf{k}'s'}\} = \delta_{\mathbf{k},\mathbf{k}'}\,\delta_{s,s'}, \qquad \{a_{\mathbf{k}s}, a_{\mathbf{k}'s'}\} = 0.$ (9)
The phonon exchange can create electron-fluxon composites, bosonic or fermionic, depending on the number of fluxons. We call the conduction-electron composite with an odd (even) number of fluxons c-boson (c-fermion). The electron (hole)-type c-particles carry negative (positive) charge. Electron (hole)-type Cooper-pair-like c-bosons are generated by the phonon-exchange attraction from a pair of electron (hole)-type c-fermions. The pair operators are defined by
$B^{(1)\dagger}_{\mathbf{k}\mathbf{q},s} \equiv c^{(1)\dagger}_{\mathbf{k}+\mathbf{q}/2,\,s}\, c^{(1)\dagger}_{-\mathbf{k}+\mathbf{q}/2,\,-s}$ for "electrons", $\qquad B^{(2)}_{\mathbf{k}\mathbf{q},s} \equiv c^{(2)}_{-\mathbf{k}+\mathbf{q}/2,\,-s}\, c^{(2)}_{\mathbf{k}+\mathbf{q}/2,\,s}$ for "holes". (10)
The prime on the summation in Eq. (7) means the restriction $0 < \varepsilon^{(j)}_k < \hbar\omega_D$, where $\omega_D$ is the Debye frequency. The pairing interaction terms in Eq. (7) conserve the charge. The term $-v_0 B^{(1)\dagger}_{\mathbf{k}'\mathbf{q}s} B^{(1)}_{\mathbf{k}\mathbf{q}s}$, where $v_0$ ($A$ = sample area) is the pairing strength, generates a transition between the electron-type c-fermion states. Similarly, the exchange of a phonon generates a transition between the hole-type c-fermion states, represented by $-v_0 B^{(2)}_{\mathbf{k}'\mathbf{q}s} B^{(2)\dagger}_{\mathbf{k}\mathbf{q}s}$. The phonon exchange can also pair-create (pair-annihilate) electron (hole)-type c-boson pairs, and the effects of these processes are represented by the cross terms in Eq. (7).
The Cooper pair is formed from two "electrons" (or "holes"). Likewise, the c-bosons may be formed by the phonon-exchange attraction from two like-charge c-fermions. If the density of the c-bosons is high enough, then the c-bosons will be BE-condensed and exhibit superconductivity.
The pairing interaction terms in Eq. (7) are formally identical with those in the generalized BCS Hamiltonian [30]. The only difference is that here we deal with c-fermions instead of conduction electrons.
The c-bosons, having the linear dispersion relation, can move in all directions in the plane with the constant speed $(2/\pi)v_F^{(j)}$ [21, 30]. The supercurrent is generated by c-bosons condensed monochromatically, running along the sample length. The supercurrent density (magnitude) $j$, calculated by the rule $j = e^* n_0 v_d$ (charge $\times$ density $\times$ drift velocity), is given by
$j \equiv e^* n_0 v_d = e^* n_0\, \dfrac{2}{\pi}\left| v^{(1)}_F - v^{(2)}_F \right|,$ (11)
where $e^*$ is the effective charge of the carriers. The Hall field (magnitude) equals $E_H = v_d B$. The magnetic flux is quantized as in Eq. (5). Hence we obtain
$\rho_H \equiv \dfrac{E_H}{j} = \dfrac{v_d B}{e^* n_0 v_d} = \dfrac{1}{e^* n_0}\, n_\phi \Phi_0 \equiv \dfrac{n_\phi}{e^* n_0}\left(\dfrac{h}{e}\right).$ (12)
Here, we assumed that the c-fermion containing an electron and an even number of fluxons has the charge magnitude $e$. For the integer QHE, $e^* = e$ and $n_0 = P\, n_\phi$, so we obtain $\rho_H = h/(Pe^2)$, explaining the plateau value observed for the integer QHE.
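A short numerical check (ours, not in the original text) of the plateau values implied by Eq. (12) with $e^* = e$ and $n_0 = P\,n_\phi$, i.e. $\rho_H = h/(Pe^2)$:

```cpp
#include <cmath>

// Integer-QHE plateau resistivity from Eq. (12): rho_H = h / (P e^2).
// P = 1 gives the von Klitzing resistance, about 25812.8 ohm.
double hall_resistivity_plateau(int P) {
    const double h = 6.62607015e-34;   // Planck constant [J s]
    const double e = 1.602176634e-19;  // elementary charge [C]
    return h / (P * e * e);            // [ohm]
}
```

The plateau values fall off as $1/P$, independent of sample details, which is why the quantization is so accurate.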
The supercurrent generated by equal numbers of c-bosons condensed monochromatically is neutral. This is reflected in the calculation in Eq. (11). The supercondensate whose motion generates the supercurrent must be neutral. If it had a charge, it would be accelerated indefinitely by the external field, because impurities and phonons cannot stop the supercurrent from growing. That is, the circuit containing a superconducting sample and a battery would be burnt out if the supercondensate were not neutral. In the calculation of $\rho_H$ in Eq. (12), we used the unaveraged drift velocity $v_d$, which is significant: only the unaveraged drift velocity cancels out exactly between numerator and denominator, leading to an exceedingly accurate plateau value.
We now extend our theory to include elementary fermions (electron, fluxon) as members of the c-fermion set. We can then treat the superconductivity and the QHE in a unified manner. The c-boson containing one electron and one fluxon can be used to describe the integer QHE.
Important pairings and their effects are listed below.
• a pair of conduction electrons, superconductivity
• a fluxon and c-fermions, QHE
• a pair of like-charge conduction electrons, each with two fluxons, QHE in graphene.
## 4 The Room Temperature QHE
The QHE behavior observed for graphene is remarkably similar to that for GaAs/AlGaAs. The physical conditions are different, however, since the gate voltage and the applied magnetic field are varied in the experiments. The present authors regard the QHE in GaAs/AlGaAs as a manifestation of superconductivity generated by the magnetic field. Briefly, the magnetoresistivity for a QH system reaches zero (superconducting) and the accompanying Hall resistivity generates a plateau by the Meissner effect. The QHE state is not easy to destroy because of the superconducting energy gap in the c-boson excitation spectrum. If an extra magnetic field is applied to the system at the optimum QHE state (the center of the plateau), then the system remains in the same superconducting state by expelling the extra field. If the field is reduced, then the system stays in the same state by sucking in extra field fluxes, thus generating a Hall conductivity plateau. In the graphene experiments, the gate voltage is varied. A little extra gate voltage relative to the optimum voltage (the center of the plateau) polarizes the system without changing the superconducting state, thus generating a Hall conductivity plateau. This state has an extra electric field energy:
$\dfrac{A}{2}\,\varepsilon_0 (\Delta E)^2,$ (13)
where $A$ is the sample area, $\varepsilon_0$ the dielectric constant, and $\Delta E$ the extra electric field, positive or negative depending on the field direction. If the gate voltage is further increased (or decreased), then it will eventually destroy the superconducting state, and the resistivity will rise from zero. A strong current generates a high magnetic field around it, which eventually destroys the supercurrent. This explains the flat plateau and the rise in resistivity from zero.
We now examine the data shown in Figure 2. We first observe that the right-left symmetry is broken: "electrons" and "holes" move in different channels with different masses, breaking the symmetry. The applied gate voltage induces the surface conduction electrons and hence changes the Fermi surface. A relatively high gate voltage of 20 V may bring the system to the van Hove singularity points, in the neighborhood of which the conduction electron densities are high. This is where the prominent QHE is observed. We note that such discussions are possible only with the rectangular unit cell model, and not with the WS unit cell model, which predicts a gapless semiconductor with electron-hole symmetry.
We wish to derive the quantization rule in Eq. (2). Let us first consider the case $P = 1$. The QHE requires a BEC of c-bosons. Its favorable environment is near the van Hove singularities, where the Fermi surface changes its curvature sign. For graphene, this happens when the 2D Fermi surface just touches the Brillouin zone boundary and "electrons" or "holes" are abundantly generated. The quantization rule given by Eq. (2) is realized if the c-bosons are formed from a pair of like-charge c-fermions, each containing a conduction electron and two (2) fluxons. By assumption, each c-fermion has the effective charge $e$:
$e^* = e \quad \text{for any c-fermion.}$ (14)
After studying the low-field QH states of c-fermions we obtain
$n^{(Q)}_\phi = n_e / Q, \qquad Q = 2, 4, \cdots,$ (15)
for the density of the c-fermions with $Q$ fluxons, where $n_e$ is the electron density. All fermionic QH states (points) lie on the classical-Hall straight line passing through the origin with a constant slope when plotted as a function of the inverse magnetic field. For higher fields the LL spacing is greater, and hence the c-fermion formation is more difficult if $Q$ is greater. The c-boson contains two (2) c-fermions. Using Eq. (12), we obtain
$\sigma_H \equiv \rho_H^{-1} = \dfrac{j}{E_H} = \dfrac{2e\, n_0 v_d}{v_d B} = \dfrac{2e\, n_0}{n_\phi \Phi_0} = \dfrac{2e^2}{h}.$ (16)
Here, the field at which the c-boson density $n_0$ equals the flux density $n_\phi$ is used. We note that the value obtained here is in agreement with the experiments shown in Fig. 1.
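The half-integer sequence of Eq. (2) is easy to tabulate numerically (our sketch, not in the original text); in units of $e^2/h$ it runs 2, 6, 10, …:

```cpp
// Half-integer quantization of Eq. (2):
// sigma_H = ((2P - 1) / 2) * (4 e^2 / h), returned here in units of e^2/h.
double sigma_H_over_e2h(int P) {
    return (2.0 * P - 1.0) / 2.0 * 4.0;   // dimensionless multiple of e^2/h
}
```

Successive plateaus are separated by $4e^2/h$, matching the step size quoted in Eq. (1).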
The QHE states with integers $P \geq 2$ are generated on the weaker-field side. Their strengths decrease with increasing $P$, as shown below. The magnetic field magnitude becomes smaller with increasing $P$. The LL degeneracy is proportional to $B$, and hence more LL's must be considered. First consider the case $P = 2$. Without the phonon-exchange attraction the electrons occupy the lowest two LL's with spin. The electrons at each level form fundamental (f) c-bosons. In the superconducting state the c-bosons occupy the monochromatically condensed state, which is separated by the superconducting gap from the continuum states (band), as shown in the right-hand figure in Fig. 4.
The c-boson density at each LL is one-half the density at $P = 1$, which is equal to the electron density fixed for the sample. Extending the theory to a general integer $P$, we have
$n_0 = n_e / P.$ (17)
This means that both the critical temperature $T_c$ and the energy gap are smaller, making the plateau width (a measure of the gap) smaller, in agreement with experiments. The c-bosons have lower energies than the conduction electrons. Hence, at extremely low temperatures the supercurrent due to the condensed c-bosons dominates the normal currents due to the conduction electrons and the non-condensed c-bosons, giving rise to the dip in the magnetoresistivity. The superconducting energy gap was obtained and discussed earlier; for completeness, the derivation of $\varepsilon_g$ is given in the Appendix. Thus, we have obtained Eq. (2) within the framework of our fractional QHE theory in terms of c-particles.
In summary, we established that
• The half-integer FQHE arises from the BEC of c-bosons, each formed from a pair of c-fermions carrying two fluxons each.
• The Hall conductivity is quantized at $\sigma_H = \frac{2P-1}{2}\left(\frac{4e^2}{h}\right)$, $P = 1, 2, \cdots$.
• The strengths of the plateaus become smaller with increasing $P$.
## Appendix: Temperature Dependent Energy Gap εg(T)
The c-bosons can be bound by the pairing interaction Hamiltonian in Eq. (7). The fundamental c-bosons (fc-bosons) can undergo Bose-Einstein condensation (BEC) below the critical temperature $T_c$. The fc-bosons are condensed at a single momentum along the sample length. Above $T_c$, they can move in all directions in the plane with the constant speed $(2/\pi)v_F$. The ground-state energy $w_0$ can be calculated by solving the Cooper-like equation:
$w_0 \Psi(\mathbf{k}) = \varepsilon_k \Psi(\mathbf{k}) - \dfrac{v_0}{(2\pi\hbar)^2} \int' d^2 k'\, \Psi(\mathbf{k}'),$ (A1)
where $\Psi(\mathbf{k})$ is the reduced wave function for the stationary fc-bosons; the prime on the integral sign means the restriction $0 < \varepsilon_{k'} < \hbar\omega_D$, with $\omega_D$ = Debye frequency. After simple calculations we obtain
$w_0 = -\dfrac{\hbar\omega_D}{\exp\{1/(v_0 D_0)\} - 1} < 0,$ (A2)
where $D_0$ is the density of states per spin at the Fermi energy. Note that the binding energy $|w_0|$ does not depend on the "electron" mass. Hence, the fc-bosons all have the same energy $w_0$.
At 0 K only stationary fc-bosons are generated. The ground state energy of the system of fc-bosons is
$W_0 = 2 N_0 w_0,$ (A3)
where $N_0$ is the fc-boson number.
At a finite temperature $T$ there are moving (non-condensed) fc-bosons, whose energies are obtained from [31]
$w^{(j)}_q \Psi(\mathbf{k}, \mathbf{q}) = \varepsilon^{(j)}_{|\mathbf{k}+\mathbf{q}|} \Psi(\mathbf{k}, \mathbf{q}) - \dfrac{v_0}{(2\pi\hbar)^2} \int' d^2 k'\, \Psi(\mathbf{k}', \mathbf{q}).$ (A4)
For small $q$, we obtain
$w^{(j)}_q = w_0 + \dfrac{2}{\pi}\, v^{(j)}_F |\mathbf{q}|,$ (A5)
where $v^{(j)}_F$ is the Fermi speed. The energy $w^{(j)}_q$ depends linearly on the momentum magnitude $q$.
The system of free massless bosons undergoes a BEC in 2D at the critical temperature $T_c$:
$k_B T_c = 1.945\, \hbar c\, n^{1/2},$ (A6)
where $c$ is the boson speed and $n$ the boson density. Briefly, the BEC occurs when the chemical potential vanishes at a finite temperature. The critical temperature $T_c$ can be determined from
$n = (2\pi\hbar)^{-2} \int d^2 p\, \left[ e^{\beta_c \varepsilon} - 1 \right]^{-1}, \qquad \beta_c \equiv (k_B T_c)^{-1}.$ (A7)
After expanding the integrand in powers of $e^{-\beta_c \varepsilon}$ and using $\varepsilon = cp$, we obtain
$n = 1.654\, (2\pi)^{-1} (k_B T_c / \hbar c)^2,$ (A8)
from which we obtain formula (A6). Substituting $c = (2/\pi)v_F$ in Eq. (A6), we obtain
$k_B T_c = 1.24\, \hbar v_F\, n_0^{1/2}, \qquad n_0 \equiv N_0 / A.$ (A9)
The interboson distance calculated from this equation is $r_0 \equiv n_0^{-1/2}$. The boson size estimated from Eq. (A9) with the uncertainty relation is a few times smaller than $r_0$. Thus the bosons do not overlap in space, and the free-boson model is justified.
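Eq. (A9) can be checked numerically (our sketch, not in the original text; the Fermi speed and boson density used in the check are hypothetical round numbers of the order usually quoted for graphene):

```cpp
#include <cmath>

// BEC critical temperature of Eq. (A9): kB Tc = 1.24 hbar vF sqrt(n0).
// Arguments: Fermi speed [m/s] and 2D boson density [m^-2]; returns Tc [K].
double critical_temperature(double vF, double n0) {
    const double hbar = 1.054571817e-34;  // [J s]
    const double kB = 1.380649e-23;       // [J/K]
    return 1.24 * hbar * vF * std::sqrt(n0) / kB;
}
```

With $v_F = 10^6$ m/s and $n_0 = 10^{16}$ m$^{-2}$ this gives $T_c \approx 950$ K, well above room temperature, consistent with the discussion below Eq. (A15).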
In the presence of the BE-condensate below $T_c$, the unfluxed electron carries the energy $E_k = (\varepsilon_k^2 + \Delta^2)^{1/2}$, where the quasielectron energy gap $\Delta$ is the solution of
$1 = v_0 D_0 \int_0^{\hbar\omega_D} d\varepsilon\, \dfrac{1}{(\varepsilon^2 + \Delta^2)^{1/2}} \left\{ 1 + \exp\!\big[ -\beta (\varepsilon^2 + \Delta^2)^{1/2} \big] \right\}^{-1}, \qquad \beta \equiv (k_B T)^{-1}.$ (A10)
Note that the gap $\Delta$ depends on the temperature $T$. At $T_c$ there is no condensate, and hence $\Delta$ vanishes.
The moving fc-boson below $T_c$ with the condensate background has the energy $\tilde{w}^{(j)}_q$, obtained from
$\tilde{w}^{(j)}_q \Psi(\mathbf{k}, \mathbf{q}) = E^{(j)}_{|\mathbf{k}+\mathbf{q}|} \Psi(\mathbf{k}, \mathbf{q}) - \dfrac{v_0}{(2\pi\hbar)^2} \int' d^2 k'\, \Psi(\mathbf{k}', \mathbf{q}),$ (A11)
where $E_k \equiv (\varepsilon_k^2 + \Delta^2)^{1/2}$ replaced $\varepsilon_k$ in Eq. (A4). We obtain
$\tilde{w}^{(j)}_q = \tilde{w}_0 + \dfrac{2}{\pi}\, v^{(j)}_F |\mathbf{q}| = w_0 + \varepsilon_g + \dfrac{2}{\pi}\, v^{(j)}_F q,$ (A12)
where $\tilde{w}_0$ is determined from
$1 = D_0 v_0 \int_0^{\hbar\omega_D} \dfrac{d\varepsilon}{|\tilde{w}_0| + (\varepsilon^2 + \Delta^2)^{1/2}}.$ (A13)
The energy difference
$\tilde{w}_0(T) - w_0 \equiv \varepsilon_g(T) > 0$ (A14)
represents the $T$-dependent energy gap between the moving and stationary fc-bosons. The energy $\tilde{w}_0$ is negative; otherwise, the fc-boson would break up. This limits $\varepsilon_g(T)$ to be less than $|w_0|$. The energy gap $\varepsilon_g(T)$ is greatest at 0 K and declines to zero as the temperature approaches $T_c$.
Given the experimental electron density and Fermi velocity, the critical temperature $T_c$ is expected to lie much above 300 K. The temperature 50 K can be regarded as a very low temperature relative to $T_c$. Hence the QH state has an Arrhenius-decay-type exponential stability factor:
$\exp\!\left[ -\varepsilon_g(T{=}0) / k_B T \right],$ (A15)
where $\varepsilon_g(T{=}0)$ is the zero-temperature energy gap.
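The stability factor (A15) can be illustrated numerically (our sketch, not in the original text; the gap values used in the check are hypothetical):

```cpp
#include <cmath>

// Arrhenius-type stability factor of Eq. (A15): exp[-eps_g(0) / kB T].
// A small factor means thermal excitation across the gap is rare,
// i.e. the QH state is stable.
double stability_factor(double eps_g0_eV, double T_kelvin) {
    const double kB_eV = 8.617333262e-5;  // Boltzmann constant [eV/K]
    return std::exp(-eps_g0_eV / (kB_eV * T_kelvin));
}
```

Once $\varepsilon_g(0)$ exceeds a few $k_B T$ the factor falls rapidly, which is why the plateaus survive to room temperature when the gap is large.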
## References
• (1) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, A. A. Firsov, Nature 438, 97 (2005).
• (2) K. S. Novoselov, Z. Jiang, Y. Zhang, S. V. Morozov, H. L. Stormer, U. Zeitler, J. C. Maan, G. S. Boebinger, P. Kim, A. K. Geim, Science 315, 1379 (2007).
• (3) S. Fujita, A. Suzuki, H. C. Ho, the paper submitted to PTEP.
• (4) S. Fujita and A. Suzuki, Journal of Physics: Conference Series 490, 012064 (2014).
• (5) N. W. Ashcroft, N. D. Mermin, Solid State Physics, (Saunders, Philadelphia,1976), p. 214.
• (6) E. Wigner, F. Seitz, Phys. Rev. 43, 804 (1933).
• (7) S. Fujita, A. Suzuki, J. Appl. Phys. 107, 013711 (2010).
• (8) S. Fujita, Y. Takato, A. Suzuki, Mod. Phys. Lett. B 25, 223 (2011).
• (9) S. Fujita, A. Jovaini, S. Godoy, A. Suzuki, Phys. Lett. A 376, 2808 (2012).
• (10) F. Bloch, Zeits. Phys. 52, 555 (1928).
• (11) Y. Zhang, Y.-W. Tan, H.L. Stormer, P. Kim, Nature 438, 201 (2005).
• (12) D. C. Tsui, H. L. Stormer, A. C. Gossard, Phys. Rev. Lett. 48, 1559 (1982).
• (13) R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983).
• (14) S. M. Girvin, A. H. MacDonald, Phys. Rev. Lett. 58, 1252 (1987).
• (15) S. C. Zhang, T. H. Hansson, S. Kivelson, Phys. Rev. Lett. 62, 82 (1989).
• (16) J. K. Jain, Phys. Rev. Lett. 63 (1989) 199; Phys. Rev. B 40, 8079 (1989); ibid. B 41, 7653 (1990); Surf. Sci. 263, 65 (1992).
• (17) N. Read, Phys. Rev. Lett. 62, 86 (1989).
• (18) B. I. Halperin, P. A. Lee, N. Read, Phys. Rev. B 47, 7312 (1993).
• (19) R. Shankar, G. Murthy, Phys. Rev. Lett. 79, 4437 (1997).
• (20) Z. F. Ezawa, Quantum Hall Effects, 2nd ed., (World Scientific, Singapore, 2008).
• (21) S. Fujita, Y. Okamura, Phys. Rev. B 69, 155313 (2004).
• (22) J. K. Jain, Composite Fermions, (Cambridge University Press, Cambridge, UK, 2007).
• (23) S. Fujita, A. Suzuki and H. C. Ho, arXiv:1304.7631v1 [cond-mat.mes-hall].
• (24) R. B. Laughlin, Science 242, 525 (1988).
• (25) L. Onsager, Phil. Mag. 43, 1006 (1952).
• (26) P. A. M. Dirac, Principles of Quantum Mechanics, 4th ed., (Oxford Univ. Press, Oxford, 1958). pp. 248–252, pp. 253–263, p. 267.
• (27) P. Ehrenfest, J. R. Oppenheimer, Phys. Rev. 37, 333 (1931).
• (28) H. A. Bethe, R. Jackiw, Intermediate Quantum Mechanics, 2nd ed., (Benjamin, New York, 1968), p. 23.
• (29) S. Fujita, S-P Gau, A. Suzuki, J. Korean Phys. Soc. 38, 456 (2001).
• (30) J. Bardeen, L. N. Cooper, J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).
• (31) L.N. Cooper, Phys. Rev. 104, 1189 (1956).
• (32) S. Fujita, K. Ito, S. Godoy, Quantum Theory of Conducting Matter, Superconductivity, (Springer, New York, 2009). pp. 73-75, pp. 77–79.
• (33) S. Fujita and A. Suzuki, Electrical Conduction in Graphene and Nanotubes, (Wiley-VCH, Weinheim, Germany, 2013). pp. 212–215.
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5076322 |
### #ActualBMW
Posted 09 July 2013 - 05:04 AM
Hi all,
In my voxel engine similar to Minecraft, I draw each side of each block separately, like this:
for(unsigned int i = 0; i < BLOCKS_PER_CHUNK; i++)
{
    if(blocks[i].type == BLOCKTYPE_AIR)
        continue;

    if(blocks[i].BackFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK) + 4, 2);
    }
    if(blocks[i].FrontFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK), 2);
    }
    if(blocks[i].LeftFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK) + 8, 2);
    }
    if(blocks[i].RightFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK) + 12, 2);
    }
    if(blocks[i].TopFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK) + 16, 2);
    }
    if(blocks[i].BottomFaceVisible)
    {
        g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, (i * VERTICES_PER_BLOCK) + 20, 2);
    }
}
Is it a bad idea to call DrawPrimitive so much like this? Should I use an index buffer and just make one call to DrawIndexedPrimitive?
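For reference, a sketch of what the batched version could look like. The `Block` struct and the 24-vertices-per-block layout below are assumptions chosen to match the post (6 faces × 4 strip vertices, face f of block i starting at vertex `i * VERTICES_PER_BLOCK + f * 4`); the strip order 0-1-2-3 becomes the two list triangles (0,1,2) and (2,1,3):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-block layout matching the post: 6 faces x 4 vertices,
// stored consecutively in the chunk's vertex buffer.
struct Block {
    bool air = false;
    bool faceVisible[6] = {};
};

constexpr std::uint32_t VERTICES_PER_BLOCK = 24;

// Collect two triangles (6 indices) per visible face into one index buffer.
// The whole buffer can then be drawn with a single indexed draw call
// (triangle list, primitive count = indices.size() / 3).
std::vector<std::uint32_t> buildIndexBuffer(const std::vector<Block>& blocks) {
    std::vector<std::uint32_t> indices;
    for (std::uint32_t i = 0; i < blocks.size(); ++i) {
        if (blocks[i].air) continue;
        for (std::uint32_t f = 0; f < 6; ++f) {
            if (!blocks[i].faceVisible[f]) continue;
            const std::uint32_t base = i * VERTICES_PER_BLOCK + f * 4;
            // Strip order 0-1-2-3 -> list triangles (0,1,2) and (2,1,3).
            const std::uint32_t quad[6] = {base, base + 1, base + 2,
                                           base + 2, base + 1, base + 3};
            indices.insert(indices.end(), quad, quad + 6);
        }
    }
    return indices;
}

// Tiny illustration: one air block (skipped) plus one block with 3 visible
// faces yields 3 * 6 = 18 indices.
std::size_t demoIndexCount() {
    std::vector<Block> blocks(2);
    blocks[0].air = true;
    blocks[1].faceVisible[0] = true;
    blocks[1].faceVisible[3] = true;
    blocks[1].faceVisible[5] = true;
    return buildIndexBuffer(blocks).size();
}
```

The resulting buffer can be drawn with one DrawIndexedPrimitive(D3DPT_TRIANGLELIST, …) per chunk, rebuilt only when the chunk changes, instead of up to six DrawPrimitive calls per block per frame.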
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2013_v50n1_37 | LOCAL AND GLOBAL EXISTENCE AND BLOW-UP OF SOLUTIONS TO A POLYTROPIC FILTRATION SYSTEM WITH NONLINEAR MEMORY AND NONLINEAR BOUNDARY CONDITIONS
Title & Authors
LOCAL AND GLOBAL EXISTENCE AND BLOW-UP OF SOLUTIONS TO A POLYTROPIC FILTRATION SYSTEM WITH NONLINEAR MEMORY AND NONLINEAR BOUNDARY CONDITIONS
Wang, Jian; Su, Meng-Long; Fang, Zhong-Bo;
Abstract
This paper deals with the behavior of positive solutions to the following nonlocal polytropic filtration system: $u_t = (|(u^{m_1})_x|^{p_1-1}(u^{m_1})_x)_x + u^{l_{11}} \int_0^a v^{l_{12}}(\xi,t)\,d\xi$ and $v_t = (|(v^{m_2})_x|^{p_2-1}(v^{m_2})_x)_x + v^{l_{22}} \int_0^a u^{l_{21}}(\xi,t)\,d\xi$ for $(x,t)$ in $[0,a] \times (0,T)$, with nonlinear boundary conditions $u_x|_{x=0}=0$, $u_x|_{x=a}=u^{q_{11}}v^{q_{12}}|_{x=a}$, $v_x|_{x=0}=0$, $v_x|_{x=a}=u^{q_{21}}v^{q_{22}}|_{x=a}$, and the initial data $(u_0, v_0)$, where $m_1, m_2 \geq 1$, $p_1, p_2 > 1$, and $l_{11}, l_{12}, l_{21}, l_{22}, q_{11}, q_{12}, q_{21}, q_{22} > 0$. Under appropriate hypotheses, the authors establish a local theory of the solutions by a regularization method and prove that the solution either exists globally or blows up in finite time by using a comparison principle.
Keywords
nonlinear boundary value problem;nonlinear memory;polytropic filtration system;global existence;blow-up;
Language
English
Cited by
References
1. G. Acosta and J. D. Rossi, Blow-up vs. global existence for quasilinear parabolic systems with a nonlinear boundary condition, Z. Angew. Math. Phys. 48 (1997), no. 5, 711-724.
2. H. W. Alt and S. Luckhaus, Quasilinear elliptic-parabolic differential equations, Math. Z. 183 (1983), no. 3, 311-341.
3. J. R. Anderson, Stability and instability for solutions of the convective porous medium equation with a nonlinear forcing at the boundary. I, II, J. Differential Equations 104 (1993), no. 2, 361-408.
4. F. Andreu, J. M. Mazon, J. Toledo, and J. D. Rossi, Porous medium equation with absorption and a nonlinear boundary condition, Nonlinear Anal. 49 (2002), no. 4, 541-563.
5. D. G. Aronson, The porous medium equation, Nonlinear diffusion problems (Montecatini Terme, 1985), 1-46, Lecture Notes in Math., 1224, Springer, Berlin, 1986.
6. R. S. Cantrell and C. Cosner, Diffusive logistic equations with indefinite weights: population models in disrupted environments. II, SIAM J. Math. Anal. 22 (1991), no. 4, 1043-1064.
7. Y. Chen, Semilinear blow-up in nonlocal reaction-diffusion systems with nonlinear memory, Nanjing Daxue Xuebao Shuxue Bannian Kan 23 (2006), no. 1, 121-128.
8. L. Du, Blow-up for a degenerate reaction-diffusion system with nonlinear nonlocal sources, J. Comput. Appl. Math. 202 (2007), no. 2, 237-247.
9. J. Filo, Diffusivity versus absorption through the boundary, J. Differential Equations 99 (1992), no. 2, 281-305.
10. J. Furter and M. Grinfeld, Local vs. nonlocal interactions in population dynamics, J. Math. Biol. 27 (1989), no. 1, 65-80.
11. O. A. Ladyzenskaja, V. A. Solonnikov, and N. N. Uralceva, Linear and Quasilinear Equations of Parabolic Type, Translations of Mathematical Monographs, Amer. Math. Soc., Providence, RI, 1968.
12. A. V. Lair and M. E. Oxley, A necessary and sufficient condition for global existence for a degenerate parabolic boundary value problem, J. Math. Anal. Appl. 221 (1998), no. 1, 338-348.
13. F. Li, Global existence and blow-up of solutions to a nonlocal quasilinear degenerate parabolic system, Nonlinear Anal. 67 (2007), no. 5, 1387-1402.
14. H. H. Lu and M. X. Wang, Global solutions and blow-up problems for a nonlinear degenerate parabolic system coupled via nonlocal sources, J. Math. Anal. Appl. 333 (2007), no. 2, 984-1007.
15. M. Muskat, The Flow of Homogeneous Fluids Through Porous Media, McGraw-Hill, 1937.
16. M. M. Porzio and V. Vespri, Holder estimates for local solutions of some doubly nonlinear degenerate parabolic equations, J. Differential Equations 103 (1993), no. 1, 146-178.
17. J. Wang and W. Gao, Existence of nontrivial nonnegative periodic solutions for a class of doubly degenerate parabolic equation with nonlocal terms, J. Math. Anal. Appl. 331 (2007), no. 1, 481-498.
18. M. X. Wang and Y. H. Wu, Global existence and blow up problems for quasilinear parabolic equations with nonlinear boundary conditions, SIAM J. Math. Anal. 24 (1993), no. 6, 1515-1521.
19. S. Wang, Doubly nonlinear degenerate parabolic systems with coupled nonlinear boundary conditions, J. Differential Equations 182 (2002), no. 2, 431-469.
20. X. Wu and W. Gao, Global existence and blow-up of solutions to an evolution p-Laplace system coupled via nonlocal sources, J. Math. Anal. Appl. 358 (2009), no. 2, 229-237.
21. Z. Q. Wu, J. N. Zhao, J. X. Yin, and H. L. Li, Nonlinear Diffusion Equations, World Scientific, Singapore, 2001.
22. S. N. Zheng and H. Su, A quasilinear reaction-diffusion system coupled via nonlocal sources, Appl. Math. Comput. 180 (2006), no. 1, 295-308.
23.
J. Zhou and C. Mu, Blow-up for a non-newtonian polytropic filtration system with nonlinear nonlocal source, Commun. Korean Math. Soc. 23 (2008), no. 4, 529-540. | 2018-04-23 19:03:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 19, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5367181301116943, "perplexity": 941.599861259868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946165.56/warc/CC-MAIN-20180423184427-20180423204427-00317.warc.gz"} |
http://mathoverflow.net/revisions/24262/list | Modular forms were actively studied by number theorists Hecke and Siegel in the 1930s, but it was not widely appreciated. Around the same time Hardy, in a series of lectures on Ramanujan's work delivered at Harvard in 1936, called modular forms -- as represented by Ramanujan's interest in the coefficients of the weight 12 form $\Delta(z)$ \Delta(q) = \sum_{n \geq 1} \tau(n)q^n$-- "one of the backwaters of mathematics". The study of modular forms basically died off in the 1940s and 1950s. It was revitalized by Weil, Shimura et al. in the 1960s. See the introduction to Lang's book on modular forms for some relevant historical remarks. [EDIT: As Emerton points out in his comment below, the full quote by Hardy is actually more complimentary, so let me include it here: "We may seem to be straying into one of the backwaters of mathematics, but the genesis of$\tau(n)$as a coefficient in so fundamental a function compels us to treat it with respect." This is at the start of Chapter X of Hardy's "Ramanjuan: Twelve Lectures on Subjects Suggested by his Life and Work."] 1 [made Community Wiki] Modular forms were actively studied by Hecke and Siegel in the 1930s, but it was not widely appreciated. Around the same time Hardy, in a series of lectures on Ramanujan's work delivered at Harvard in 1936, called modular forms -- as represented by Ramanujan's interest in the coefficients of$\Delta(z)\$ -- "one of the backwaters of mathematics". The study of modular forms basically died off in the 1940s and 1950s. It was revitalized by Weil, Shimura et al. in the 1960s. See the introduction to Lang's book on modular forms for some relevant historical remarks. 
| 2013-05-26 06:02:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7489789128303528, "perplexity": 586.6117552056568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706631378/warc/CC-MAIN-20130516121711-00041-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/201629/maximize-is-running-forever | Maximize is running forever
I'm trying to maximize the following function $f$ with respect to $r$ and $k$:
$$f=\frac{d^3+3 d^2 (k (r-1)-s)+3 d \left(k^2 ((r-3) r+1)+2 k r s-2 (r-1) s^2\right)+k^3 (r ((r-1) r+3)-1)-3 k^2 r^2 s+3 (r-1) s^3}{6 (r-1) s^2}$$
under the conditions $0<d<1$, $2d<s$, $d\le k\le s$, and $0\le r\le d/k$.
My Mathematica code is:
Maximize[{1/(6 (-1 + r) s^2) (d^3 + k^3 (-1 + r (3 + (-1 + r) r)) + 3 d^2 (k (-1 + r) - s) - 3 k^2 r^2 s + 3 (-1 + r) s^3 + 3 d (k^2 (1 + (-3 + r) r) + 2 k r s - 2 (-1 + r) s^2)), 0 < d < 1, 2 d < s, d <= k <= s, 0<=r<= d/k}, {r, k}]
And it is running forever. Can anyone help please?
• Try $s=1,2,\frac 1 3,\dots$ – user64494 Jul 5 at 20:00
• s = 1/3; Maximize[{. }, {r, k}] // ToRadicals // TeXForm gives the maximum $\frac{1}{18}\left(-27 d^3+54 d^2-27 d+4\right)$ for $0<d<\frac{1}{6}$ (and $-\infty$ otherwise), attained at $r\to 0$, $k\to \frac{1}{3}$ (both Indeterminate otherwise). – user64494 Jul 5 at 20:24
• s=2 and the above command gives the maximum $\frac{1}{24}\left(-d^3+12 d^2-36 d+32\right)$ for $0<d<1$ (and $-\infty$ otherwise), attained at $r\to 0$, $k\to 2$ (both Indeterminate otherwise). – user64494 Jul 5 at 20:26
• Thanks, user64494. Do you know if we can solve this without assigning a specific value to s? Why wouldn't Mathematica solve my original code? – ppp Jul 6 at 13:59
• No, I don't know it. My experiment suggests that the (finite) optimal solution is reached at $r=0,k=s$. I think Mma has problems with a complex nonlinear target function and a nonlinear constraint and two parameters. Don't hesitate to ask for further explanation in need. – user64494 Jul 6 at 17:39
One way to find the maxima is to calculate the derivatives and set them equal to zero. Here f[ ] is your function:
f[r_, k_] :=
1/(6 (-1 + r) s^2) (d^3 + k^3 (-1 + r (3 + (-1 + r) r)) +
3 d^2 (k (-1 + r) - s) - 3 k^2 r^2 s + 3 (-1 + r) s^3 +
3 d (k^2 (1 + (-3 + r) r) + 2 k r s - 2 (-1 + r) s^2));
dfdr = D[f[r, k], r];
dfdk = D[f[r, k], k];
Solve[dfdr == 0 && dfdk == 0 && 0 < d < 1 && 2 d < s && d <= k <= s &&
0 <= r <= d/k, {r, k}]
This seems to work, and gives a fairly long set of ConditionalExpressions in terms of Root objects. You should probably also check that the solution is a max and not a min or saddle point. | 2019-10-20 12:32:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 2286.3539282847237}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00031.warc.gz"} |
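Not part of the original thread: the closed-form results quoted in the comments (for s = 2 the maximum is $\frac{1}{24}(-d^3+12d^2-36d+32)$, attained at $r=0$, $k=2$) can be cross-checked numerically. The sketch below is mine, written for illustration: it transcribes the objective into Python and brute-forces it on a grid over the feasible set.

```python
def f(r, k, d, s):
    # Objective from the question, transcribed from the Mathematica input.
    return (d**3 + k**3 * (-1 + r * (3 + (-1 + r) * r))
            + 3 * d**2 * (k * (-1 + r) - s) - 3 * k**2 * r**2 * s
            + 3 * (-1 + r) * s**3
            + 3 * d * (k**2 * (1 + (-3 + r) * r) + 2 * k * r * s
                       - 2 * (-1 + r) * s**2)) / (6 * (-1 + r) * s**2)

def grid_max(d, s, n=100):
    """Brute-force maximum of f over d <= k <= s, 0 <= r <= d/k."""
    best_v, best_r, best_k = float("-inf"), None, None
    for i in range(n + 1):
        k = d + (s - d) * i / n
        for j in range(n + 1):
            r = (d / k) * j / n
            if abs(1 - r) < 1e-9:   # the objective is singular at r = 1
                continue
            v = f(r, k, d, s)
            if v > best_v:
                best_v, best_r, best_k = v, r, k
    return best_v, best_r, best_k

if __name__ == "__main__":
    # d = 0.5, s = 2: the comment predicts the max at r = 0, k = 2.
    print(grid_max(0.5, 2.0))
```

For d = 0.5, s = 2 the grid search lands on the boundary point r = 0, k = 2 with value 0.703125, matching the comment's formula.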
https://proofwiki.org/wiki/Abel%27s_Theorem/Historical_Note | # Abel's Theorem/Historical Note
Carl Gustav Jacob Jacobi remarked that it was the greatest discovery in integral calculus in the $19$th century. | 2020-09-26 21:22:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751955270767212, "perplexity": 2029.6187372033887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400245109.69/warc/CC-MAIN-20200926200523-20200926230523-00400.warc.gz"} |
https://datascience.stackexchange.com/questions/26669/how-does-k-fold-cross-validation-work | # How does k fold cross validation work?
You split the data into k subsamples. Train on k-1 of the subsamples, test on the kth subsample, and record the performance with some error metric.
Do this k times, once for each of the k subsamples, recording the error each time. Then choose the model with the lowest error? Is it the same as an ensemble technique?
Cross validation is a way to address this. Lets set $k=3$, so the data is split into three sets of 500 points (A, B and C). Use A & B to train a model, and get predictions for C with this model. Use B & C to train a model, to get predictions for A. Finally, use A & C to train a model, and get predictions for B. Now we have a prediction for every point in our labeled data that came from a model trained on different data. By averaging the performance of each of these models, we can end up with a better estimate of how well the model will perform on new data. | 2019-12-13 03:48:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4984964430332184, "perplexity": 467.382031931837}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548537.21/warc/CC-MAIN-20191213020114-20191213044114-00036.warc.gz"} |
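To make the averaging step concrete, here is a minimal pure-Python sketch of the k-fold procedure (function names are mine, chosen for illustration): deal the indices into k folds, train on k-1 of them, score on the held-out fold, and average the k scores.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and deal the indices into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(xs, ys, fit, error, k=3):
    """Average the test error of `fit` over the k train/test splits."""
    folds = k_fold_indices(len(xs), k)
    scores = []
    for held_out in range(k):
        train = [j for f in range(k) if f != held_out for j in folds[f]]
        test = folds[held_out]
        model = fit([xs[j] for j in train], [ys[j] for j in train])
        scores.append(error([model(xs[j]) for j in test],
                            [ys[j] for j in test]))
    return sum(scores) / k

# Toy "model" that predicts the training-set mean, scored by squared error.
fit_mean = lambda X, Y: (lambda x, m=sum(Y) / len(Y): m)
mse = lambda preds, truth: sum((p - t) ** 2
                               for p, t in zip(preds, truth)) / len(truth)

if __name__ == "__main__":
    xs = list(range(12)); ys = [2.0 * x for x in xs]
    print(cross_validate(xs, ys, fit_mean, mse, k=3))
```

Note that every point ends up in exactly one test fold, which is what gives the "prediction for every point from a model trained on different data" property described in the answer.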
https://mathhelpboards.com/threads/condition-for-recurrence-and-transience-of-mc.4386/ | # Condition for recurrence and transience of MC
#### alphabeta89
##### New member
Consider the following model.
$X_{n+1}$ given $X_n, X_{n-1},\ldots,X_0$ has a Poisson distribution with mean $\lambda=a+bX_n$ where $a>0$, $b\geq 0$. Show that $X=(X_n)_{n\in\mathbb{N}_0}$ is an irreducible M.C. and it is recurrent if $0\leq b <1$. In addition, it is transient if $b\geq 1$.
How do we approach this question? I was thinking of using the theorem below.
Suppose $S$ is irreducible, and $\phi\geq 0$ with $E_x\phi(X_1) \leq \phi(x)$ for
$x\notin F$, a finite set, and $\phi(x)\rightarrow \infty$ as $x\rightarrow \infty$, i.e., $\{x : \phi(x) \leq M\}$ is finite for any $M < \infty$, then the chain is recurrent.
However I have no idea of how to start. Thanks in advance. | 2021-01-15 23:23:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929224848747253, "perplexity": 5484.6268447412995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00624.warc.gz"} |
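Not an answer from the thread, but the dichotomy in the exercise can be probed empirically before attempting a proof: simulate $X_{n+1}\sim\mathrm{Poisson}(a+bX_n)$ and watch the chain keep returning to $0$ when $b<1$ but run off to infinity when $b>1$. A rough Python sketch (the sampler and all names are mine; this illustrates the behavior, it proves nothing):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative Poisson sampler; fine for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate(a, b, steps, x0=0, escape=200, seed=1):
    """Run X_{n+1} ~ Poisson(a + b*X_n); stop early past the escape level."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x = poisson(rng, a + b * x)
        path.append(x)
        if x > escape:
            break
    return path

if __name__ == "__main__":
    recurrent = simulate(a=1.0, b=0.5, steps=20000)
    transient = simulate(a=1.0, b=1.2, steps=100000, x0=5)
    print(max(recurrent), recurrent.count(0), len(transient))
```

With $b=0.5$ the mean recursion $E X_{n+1} = a + b\,E X_n$ settles near $a/(1-b)=2$ and the chain hits $0$ over and over; with $b=1.2$ the same recursion grows geometrically and the trajectory escapes.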
https://in.mathworks.com/help/dsp/ref/dsp.phaseextractor-system-object.html | # dsp.PhaseExtractor
Extract the unwrapped phase of a complex input
## Description
The dsp.PhaseExtractor System object™ extracts the unwrapped phase of a real or a complex input.
To extract the unwrapped phase of a signal input:
1. Create the dsp.PhaseExtractor object and set its properties.
2. Call the object with arguments, as if it were a function.
## Creation
### Description
phase = dsp.PhaseExtractor returns a phase extractor System object that extracts the unwrapped phase of an input signal.
phase = dsp.PhaseExtractor(Name,Value) returns a phase extractor System object with the specified property name set to the specified value.
## Properties
Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.
If a property is tunable, you can change its value at any time.
Specify if the phase is to be unwrapped only within the frame, as a logical scalar.
When you set this property to:
• false –– The object returns the unwrapped phase while ignoring boundaries between input frames.
• true –– The object treats each frame of input data independently, and resets the initial cumulative unwrapped phase value to zero each time a new input frame is received.
## Usage
### Description
example
p = phase(input) extracts the unwrapped phase, p, of the input signal. Each column of the input signal is treated as a separate channel. The System object unwraps the phase of each channel of the input signal independently over time.
### Input Arguments
Data input, specified as a vector or a matrix. This object supports variable-size input signals. That is, you can change the input frame size (number of rows) even after calling the algorithm. However, the number of channels (number of columns) must remain constant.
Data Types: single | double
Complex Number Support: Yes
### Output Arguments
Unwrapped phase of the input, returned as a vector or a matrix. The size and data type of the unwrapped phase output match the size and data type of the input signal.
Data Types: single | double
## Object Functions
To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:
release(obj)
step: Run System object algorithm
release: Release resources and allow changes to System object property values and input characteristics
reset: Reset internal states of System object
## Examples
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call to the function with the equivalent step syntax. For example, myObject(x) becomes step(myObject,x).
Create a dsp.SineWave System object™. Specify that the object generates an exponential output with a complex exponent.
sine = dsp.SineWave('Frequency',10,...
'ComplexOutput',true,'SamplesPerFrame',128);
Create a dsp.PhaseExtractor System object™. Specify that the object ignores frame boundaries when returning the unwrapped phase.
phase = dsp.PhaseExtractor('TreatFramesIndependently',false);
Extract the unwrapped phase of a sine wave. Plot the phase versus time using a timescope System object.
timeplot = timescope('PlotType','Line','SampleRate',1000,...
'TimeSpanSource','Property','TimeSpan',1.5,'YLimits',[0 80],...
'ShowGrid',true);
for ii = 1:10
sineOutput = sine();
phaseOutput = phase(sineOutput);
timeplot(phaseOutput)
end
Note: If you are using R2016a or an earlier release, replace each call to the object with the equivalent step syntax. For example, obj(x) becomes step(obj,x).
Create a dsp.TransferFunctionEstimator System object™.
tfe = dsp.TransferFunctionEstimator('FrequencyRange','centered');
Create a dsp.PhaseExtractor System object™. Specify that the object must treat each frame of data independently.
phase = dsp.PhaseExtractor('TreatFramesIndependently',true);
Create a dsp.IIRFilter System object™. Compute the transfer function of a third-order IIR filter. Use the butter function to generate coefficients for the filter.
[b,a] = butter(3,.3);
iir = dsp.IIRFilter('Numerator',b,'Denominator',a);
Extract the phase response of the transfer function. Plot using a dsp.ArrayPlot System object™.
sampleRate = 1e3;
phaseplot = dsp.ArrayPlot('PlotType','Line','XOffset',-sampleRate/2,...
'YLimits',[-15 0],...
'XLabel','Frequency (Hz)',...
'Title','System Phase response');
for ii = 1:100
% Generate input
input = 0.05*randn(1000,1);
% Pass through IIR filter
filterOutput = iir(input);
% Estimate transfer function
transferFunction = tfe(input,filterOutput);
% Plot transfer function phase
phaseOutput = phase(transferFunction);
phaseplot(phaseOutput);
end
## Algorithms
Consider an input frame of length N:
$\left(\begin{array}{c} x_{1}\\ x_{2}\\ \vdots \\ x_{N}\end{array}\right)$
The object acts on this frame and produces this output:
$\left(\begin{array}{c} \Phi_{1}\\ \Phi_{2}\\ \vdots \\ \Phi_{N}\end{array}\right)$
where:
$\Phi_{i}=\Phi_{i-1}+\operatorname{angle}\left(x_{i-1}^{*}\,x_{i}\right)$
Here, i runs from 1 to N. The angle function returns the phase angle in radians.
If the input signal consists of multiple frames:
• If you set TreatFramesIndependently to true, the object treats each frame independently. Therefore, in each frame, the object calculates the phase using the preceding formula where:
• $\Phi_0$ is 0.
• $x_0$ is 1.
• If you set TreatFramesIndependently to false, the object ignores boundaries between frames. Therefore, in each frame, the object calculates the phase using the preceding formula where:
• $\Phi_0$ is the last unwrapped phase from the previous frame.
• $x_0$ is the last sample from the previous frame.
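The recurrence above is easy to prototype outside MATLAB. Below is a Python sketch (names are mine, not part of the dsp API) that also carries $\Phi_0$ and $x_0$ across frames, mirroring the TreatFramesIndependently = false behavior:

```python
import cmath

def extract_phase(frame, phi0=0.0, x0=1 + 0j):
    """Phi_i = Phi_{i-1} + angle(conj(x_{i-1}) * x_i), per the recurrence above.

    Returns (phases, state) where state = (last phase, last sample) can be
    fed back in to continue unwrapping across frame boundaries.
    """
    phases = []
    phi, prev = phi0, x0
    for x in frame:
        phi += cmath.phase(prev.conjugate() * x)
        phases.append(phi)
        prev = x
    return phases, (phi, prev)

if __name__ == "__main__":
    # A complex exponential whose true phase ramps by 0.5 rad per sample.
    sig = [cmath.exp(1j * 0.5 * n) for n in range(16)]
    whole, _ = extract_phase(sig)
    # Streaming in two frames with carried state gives the same result.
    first, state = extract_phase(sig[:8])
    second, _ = extract_phase(sig[8:], *state)
    print(whole[-1], (first + second)[-1])
```

The per-sample difference angle stays within $(-\pi,\pi]$, so the accumulated sum keeps growing past $\pi$ instead of wrapping, which is exactly what distinguishes the unwrapped phase from `angle` applied samplewise.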
## Extended Capabilities
### Blocks
Introduced in R2014b
Get trial now | 2022-01-27 09:10:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3096897304058075, "perplexity": 4005.878325534335}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00054.warc.gz"} |
http://www.oalib.com/relative/283429 | Home OALib Journal OALib PrePrints Submit Ranking News My Lib FAQ About Us Follow Us+
Title Keywords Abstract Author All
Search Results: 1 - 10 of 100 matches for " "
Page 1 /100 Display every page 5 10 20 Item
F. Lam Physics , 2012, Abstract: The normal-mode analysis of the Reynolds-Orr energy equation governing the stability of viscous motion for general three-dimensional disturbances has been revisited. The energy equation has been solved as an unconstrained minimization problem for the Couette-Poiseuille flow. The minimum Reynolds number for every Couette-Poiseuille velocity profile has been computed and compared with those available in the literature. For fully three-dimensional disturbances, it is shown that the minimum Reynolds number is in general smaller than the corresponding two-dimensional counterpart for all the Couette-Poiseuille profiles except plane Couette flow.
Shalom Sadik Energy and Power Engineering (EPE) , 2018, DOI: 10.4236/epe.2018.109026 Abstract: Following previous work that discussed temperature fluctuations without a flowing medium, a physical model of temperature oscillations in a Couette-Poiseuille flow was built. The temperature distribution in the flow was calculated according to oscillation constraints on the upper and lower plates, and heat dissipation due to shear stresses in the fluid. The physical model deals with different temperature amplitudes and different frequency constraints on the upper and the lower plates. A physical superposition and complex numbers were used. It was shown that when the constraint frequency increases, its penetration capacity is reduced. Increasing the gap width between plates leads to increased fluid temperature values due to enlarged fluid velocity. Increasing thermal diffusivity increases the penetration intensity of the constraint temperatures.
Conference Papers in Science , 2013, DOI: 10.1155/2013/783510 Abstract: We present the problem of minimum time control of a particle advected in Couette and Poiseuille flows and solve it by using the Pontryagin maximum principle. This study is a first step of an effort aiming at the development of a mathematical framework for the control and optimization of dynamic control systems whose state variable is driven by interacting ODEs and PDEs which can be applied in the control of underwater gliders and mechanical fishes. 1. Introduction This paper represents a first step for the optimal control of dynamic systems whose state evolves through the interaction of ordinary differential equations and the partial differential equations, [1, 2], which will provide a sound basis for the design and control of new advanced engineering systems. In Figure 1, two representative examples of the class of applications are considered: (i) underwater gliders, that is, winged autonomous underwater vehicles (AUVs) which locomote by modulating their buoyancy and their attitude in its environment, and (ii) robotic fishes. Motion modeling of these two types of systems can be found in [3, 4] and [5], respectively. Figure 1: Underwater glider (a), robotic fish (b). In spite of the key roots of the Optimal Control Theory having been established in the sixties for control systems with dynamics given by ordinary differential equations, [6], its sophistication in multiple directions has been progressing unabated (see, among others, [7, 8]). However, there still remains a large gap in what concerns dynamic control systems driven by partial differential equations, [2], and it is largely inexistent for hybrid systems in the sense that the controlled dynamics involve both partial and ordinary differential equations. In this paper, we formulate and solve two optimal control problems. 
Each one of these problems corresponds to a particular solution of the incompressible Navier-Stokes equation in two spatial dimensions. These particular solutions are, respectively, the steady Couette and Poiseuille flows. The Couette flow is the steady laminar unidirectional and two-dimensional flow due to the relative motion of two infinite horizontal and parallel rigid plates [9]. The liquid between these two plates is driven by the viscous drag force originated by the uniform motion of the upper plate which moves in the x-direction with velocity (the lower plate is at rest). In this case, the velocity of such a flow has a linear profile and is given by with , the plates being distance units apart (Figure 2(a)). Figure 2: Linear (a) and quadratic velocity field (b). The
Physics , 2010, DOI: 10.1017/S0022112010001242 Abstract: We present a detailed study of the linear stability of plane Couette-Poiseuille flow in the presence of a cross-flow. The base flow is characterised by the cross flow Reynolds number, $R_{inj}$ and the dimensionless wall velocity, $k$. Squire's transformation may be applied to the linear stability equations and we therefore consider 2D (spanwise-independent) perturbations. Corresponding to each dimensionless wall velocity, $k\in[0,1]$, two ranges of $R_{inj}$ exist where unconditional stability is observed. In the lower range of $R_{inj}$, for modest $k$ we have a stabilisation of long wavelengths leading to a cut-off $R_{inj}$. This lower cut-off results from skewing of the velocity profile away from a Poiseuille profile, shifting of the critical layers and the gradual decrease of energy production. Cross-flow stabilisation and Couette stabilisation appear to act via very similar mechanisms in this range, leading to the potential for robust compensatory design of flow stabilisation using either mechanism. As $R_{inj}$ is increased, we see first destabilisation and then stabilisation at very large $R_{inj}$. The instability is again a long wavelength mechanism. Analysis of the eigenspectrum suggests the cause of instability is due to resonant interactions of Tollmien-Schlichting waves. A linear energy analysis reveals that in this range the Reynolds stress becomes amplified, the critical layer is irrelevant and viscous dissipation is completely dominated by the energy production/negation, which approximately balances at criticality. The stabilisation at very large $R_{inj}$ appears to be due to decay in energy production, which diminishes like $R_{inj}^{-1}$. Our study is limited to two dimensional, spanwise independent perturbations.
Physics , 2015, Abstract: We show possibility of the Plane Couette (PC) flow instability for Reynolds number Re>Reth=140. This new result of the linear hydrodynamic stability theory is obtained on the base of refusal from the traditionally used assumption on longitudinal periodicity of the disturbances along the direction of the fluid flow. We found that earlier existing understanding on the linear stability of this flow for any arbitrary large Reynolds number is directly related with an assumption on the separation of the variables of the spatial variability for the disturbance field and their periodicity in linear theory of stability. By the refusal from the pointed assumptions also for the Plane Poiseuille (PP) flow, we get a new threshold Reynolds value Reth=1040 that with 4% accuracy agrees with the experiment contrary to more than 500% discrepancy for the earlier known estimate Reth=5772 obtained in the frame of the linear theory but when using the "normal" disturbance form (S. A. Orszag, 1971).
Physics , 2010, DOI: 10.3934/krm.2011.4.361 Abstract: The steady state of a dilute gas enclosed between two infinite parallel plates in relative motion and under the action of a uniform body force parallel to the plates is considered. The Bhatnagar-Gross-Krook model kinetic equation is analytically solved for this Couette-Poiseuille flow to first order in the force and for arbitrary values of the Knudsen number associated with the shear rate. This allows us to investigate the influence of the external force on the non-Newtonian properties of the Couette flow. Moreover, the Couette-Poiseuille flow is analyzed when the shear-rate Knudsen number and the scaled force are of the same order and terms up to second order are retained. In this way, the transition from the bimodal temperature profile characteristic of the pure force-driven Poiseuille flow to the parabolic profile characteristic of the pure Couette flow through several intermediate stages in the Couette-Poiseuille flow are described. A critical comparison with the Navier-Stokes solution of the problem is carried out.
International Journal of Computational Mathematics , 2014, DOI: 10.1155/2014/631749 Abstract: The combined effect of viscous heating and convective cooling on Couette flow and heat transfer characteristics of water base nanofluids containing Copper Oxide (CuO) and Alumina (Al2O3) as nanoparticles is investigated. It is assumed that the nanofluid flows in a channel between two parallel plates with the channel’s upper plate accelerating and exchange heat with the ambient surrounding following the Newton’s law of cooling, while the lower plate is stationary and maintained at a constant temperature. Using appropriate similarity transformation, the governing Navier-Stokes and the energy equations are reduced to a set of nonlinear ordinary differential equations. These equations are solved analytically by regular perturbation method with series improvement technique and numerically by an efficient Runge-Kutta-Fehlberg integration technique coupled with shooting method. The effects of the governing parameters on the dimensionless velocity, temperature, skin friction, pressure drop and Nusselt number are presented graphically, and discussed quantitatively. 1. Introduction Studies related to laminar flow and heat transfer of a viscous fluid in the space between two parallel plates, one of which is moving relative to the other, have received the attention of several researchers due to their numerous industrial and engineering applications. This type of flow is named in honour of Maurice Marie Alfred Couette, a professor of physics at the French University of Angers in the late 19th century [1]. Couette flow has been used to estimate the drag force in many wall driven applications such as lubrication engineering, power generators and pumps, polymer technology, petroleum industry, and purification of crude oil. Literature survey indicates that interest in the Couette flows has grown during the past decades. 
Jana and Datta [2] examined the effects of Coriolis force on the Couette flow and heat transfer between two parallel plates in a rotating system. Singh [3] studied unsteady free convection flow of an incompressible viscous fluid between two vertical parallel plates, in which one is fixed and the other is impulsively started in its own plane. Kearsley [4] investigated the problem of steady state Couette flow with viscous heating. Jha [5] numerically examined the effects of magnetic field on Couette flow between two vertical parallel plates. The combined effects of variable viscosity and thermal conductivity on generalized Couette flow and heat transfer in the presence of transversely imposed magnetic field have been studied numerically by Makinde and
Asterios Pantokratoras Physics , 2007, Abstract: In the above paper by Bechtel, Cai, Rooney and Wang, Physics of Fluids, 2004, 16, 3955-3974 six different theories of a Newtonian viscous fluid are investigated and compared, namely, the theory of a compressible Newtonian fluid, and five constitutive limits of this theory: the incompressible theory, the limit where density changes only due to changes in temperature, the limit where density changes only with changes in entropy, the limit where pressure is a function only of temperature, and the limit of pressure a function only of entropy. The six theories are compared through their ability to model two test problems: (i) steady flow between moving parallel isothermal planes separated by a fixed distance with no pressure gradient in the flow direction (Couette flow), and (ii) steady flow between stationary isothermal parallel planes with a pressure gradient (Poiseuille flow). The authors found, among other, that the incompressible theory admits solutions to these problems of the plane Couette/Poiseuille flow form: a single nonzero velocity component in a direction parallel to the bounding planes, and velocity and temperature varying only in the direction perpendicular to the planes.
Statistics , 2000, DOI: 10.1023/A:1010317207358 Abstract: A model kinetic equation is solved exactly for a special stationary state describing nonlinear Couette flow in a low density system of inelastic spheres. The hydrodynamic fields, heat and momentum fluxes, and the phase space distribution function are determined explicitly. The results apply for conditions such that viscous heating dominates collisional cooling, including large gradients far from the reference homogeneous cooling state. Explicit expressions for the generalized transport coefficients (e.g., viscosity and thermal conductivity) are obtained as nonlinear functions of the coefficient of normal restitution and the shear rate.These exact results for the model kinetic equation are also shown to be good approximations to the corresponding state for the Boltzmann equation via comparison with direct Monte Carlo simulation for the latter
Physics , 2008, DOI: 10.1103/PhysRevB.78.024524 Abstract: An equation previously proposed to describe the evolution of vortex line density in rotating counterflow turbulent tangles in superfluid helium is generalized to incorporate nonvanishing barycentric velocity and velocity gradients. Our generalization is compared with an analogous approach proposed by Lipniacki, and with experimental results by Swanson et al. in rotating counterflow, and it is used to evaluate the vortex density in plane Couette and Poiseuille flows of superfluid helium.
https://electronics.stackexchange.com/questions/399667/isolate-a-digital-signal-that-changes-its-amplitude | # Isolate a digital signal that changes its amplitude
I have to isolate a digital signal that has the following characteristics:
• frequency: between 1 and 1000 Hz
• amplitude pk-pk: 12V
• offset: 0-6V
By "offset" I mean that the actual voltage swing may shift from 0-12V up to 6-18V. It seems (though I cannot be 100% sure) that the output circuit is something like an NPN BJT with a 5k pull-up resistor (and something else, otherwise I cannot explain this offset, which I measured with an oscilloscope).
The goal is to isolate it with an optocoupler and get a steady 0-12V signal.
Here my attempt:
simulate this circuit – Schematic created using CircuitLab
Because I have to replicate this section x16 times, I wonder if there's a smarter way to do the same using less components.
• So is the offset signal or noise? – Matt Young Oct 6 '18 at 14:12
• Do you want to preserve the shape of the input signal or just have an output pulse whenever the input rises? How do you distinguish between a rise in the offset voltage and a rise in the signal voltage...do you have some threshold voltage that separates a 1 from a 0? – Elliot Alderson Oct 6 '18 at 14:18
• The offset voltage changes slowly, let's say about 0.1 Hz. The offset might be defined as "noise" because I'm not interested in it. It would be better to preserve the original shape (close to 50%) because after I need to feed another device with it. – Mark Oct 6 '18 at 14:20
• Just to be clear: the signal is pretty close to a square wave w/ 50% DC and 12V pk-pk. – Mark Oct 6 '18 at 14:27
• You actually have a decoupling capacitor at the input of your circuit, you could just use a decoupling capacitor followed by a follower op amp – Damien Oct 6 '18 at 14:33
I mean the actual voltage may change from 0-12V to 6-18V
Use an analogue comparator circuit that triggers at 9 volts - anything above 9 volts produces a digital 1 output and anything below 9 volts is digital 0.
simulate this circuit – Schematic created using CircuitLab
Figure 1. (a) Simplest option. (b) Constant current sink option.
          0 ----- 6 ----- 12 ----- 18 V
Min       =================
Max               ==================
                       _______________
Output    ____________|
Figure 2. The voltage ranges of interest overlap so a mid-overlap switching point might work.
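The switching rule sketched in Figure 2 is easy to model in a few lines of Python (my own sketch, not part of the answer): with a 9 V threshold, the output decodes correctly at both offset extremes.

```python
def decode(v_in, threshold=9.0):
    """Ideal comparator: output 1 when the input is above the threshold."""
    return 1 if v_in > threshold else 0

# Worst cases from the question: a 0-12 V swing (no offset) and a 6-18 V swing (full offset)
for v_off, v_on in [(0.0, 12.0), (6.0, 18.0)]:
    assert decode(v_off) == 0   # "off" level always below 9 V
    assert decode(v_on) == 1    # "on" level always above 9 V
```

Because the lowest possible "on" level (12 V) never overlaps the highest possible "off" level (6 V), any threshold in the 6-12 V gap works; 9 V is simply the midpoint.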
By addition of D2 with the appropriate rating you may be able to get the opto-LED to turn on at > 9 V and this would work across the range of voltages of interest. The design problem is that the current through the LED will vary greatly between your low-voltage and high-voltage. You would need to do your calculations to see if this can be made to work. If you do this then don't forget to look at the current-transfer-ratio of the opto-isolator and see if you can get the required output voltage swing at low currents by increasing the value of R4.
Figure 1b adds in current limiting at 10 mA. That solves some of the problems of 1a but at the expense of complexity which you wish to avoid. You could replace the CC sink with a constant current diode (but watch the heat dissipation calculations).
• What is the zener for? The offset is 0 to 6V but is not 6V constant – Damien Oct 6 '18 at 14:40
• See Figure 2. The Zener prevents the opto-LED turning on until the input voltage > 6 V. 9 V switching threshold would probably be ideal as that is mid-way in the overlap region of the lowest to highest input signals. – Transistor Oct 6 '18 at 15:08
• This circuit wouldn't work for the purpose, because the offset voltage can be as low as 0.1V according to the description. – Damien Oct 6 '18 at 15:15
• If the offset is 0.1 V that means the input waveform switches between 0.1 V to 12.1 V at the input frequency. If the switching threshold is 9 V the output will switch with the input. – Transistor Oct 6 '18 at 15:28
• Nope. It's a digital signal. It's either off or on (but the off voltage can vary between 0 and 6 V and the on between 12 and 18 V). My circuit treats anything below 9 V as off and above as on. – Transistor Oct 6 '18 at 15:34
https://castingoutnines.wordpress.com/tag/math/ | # Tag Archives: Math
## Any questions about this video?
As part of preparing for our impending move from Indy to Grand Rapids, my family and I have made a couple of visits to the area. These by necessity combine business with pleasure, since our three kids (ages 2, 5, and 7) don’t handle extended amounts of business well. On the last visit, we spent some time at the Grand Rapids Children’s Museum, the second floor of which is full of stuff that could occupy children — and mathematicians — for hours. This “exhibit” was, for me, one of the most evocative. Have a look:
I asked this on Twitter a few days ago, but I’ll repost it here: In the spirit of Dan Meyer’s Any Questions? meme, what questions come to mind as you watch this? Particularly math, physics, etc. questions.
One other thing — just after I wrapped up the video on this, someone put one of the little discs rolling on the turntable and it did about a dozen graceful, perfect three-point hypocycloids before falling off the table.
Filed under Geometry, Math, Problem Solving
## The “golden moment”
We’re in final exams week right now, and last night students in the MATLAB course took their exam. It included some essay questions asking for their favorite elements of the course and things that might be improved in the course. I loved what one of my students had to say about the assignment in the course he found to be the most interesting, so I’ve gotten permission from him to share it. The lab problem he’s referring to was to write a MATLAB program to implement the bisection method for polynomials.
It is really hard to decide which project I found most interesting; there are quite a few of them. If I had to choose just one though, I would probably have to say the lab set for April 6. I was having a really hard time getting the program to work, I spent a while tweaking it this way and that way. But when you’re making a program that does not work yet, there is this sort of golden moment, a moment when you realize what the missing piece is. I remember that moment on my April 6 lab set. After I realized what it was, I could not type it in fast enough I was so excited just to watch the program work. After hitting the play button, that .3 seconds it takes for MATLAB to process the program felt like forever. I actually was devastated that I got an error, and thought I had done it all wrong once again, but then I remembered I had entered the error command so it would display an error. I actually started laughing out loud in the lab, quite obnoxiously actually.
Yes! As somebody once said, true learning consists in the debugging process. And that’s where the fun in learning happens to lie, too. Let’s give students as many shots as possible to experience this process themselves.
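For context, the lab assignment's algorithm looks roughly like this in Python. This is a generic sketch of bisection, not the student's MATLAB program:

```python
def bisect(f, a, b, tol=1e-8, max_iter=200):
    """Bisection: halve an interval [a, b] that brackets a sign change of f
    until the midpoint pins down a root to within tol."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2.0
        if f(mid) == 0.0 or (b - a) / 2.0 < tol:
            return mid
        if f(a) * f(mid) < 0:
            b = mid      # the sign change is in the left half
        else:
            a = mid      # the sign change is in the right half
    return (a + b) / 2.0

# Example: the polynomial x^3 - 2x - 5 has a root between 2 and 3
root = bisect(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```

The sign test in the loop is the heart of the method: it keeps whichever half-interval still brackets a sign change.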
## Understanding “understanding”
This past Saturday, I was grading a batch of tests that weren’t looking so great at the time, and I tweeted:
I do ask these two questions a lot in my classes, and despite what I tweeted, I will probably continue to do so. Sometimes when I do this, I get questions, and sometimes only silence. When it’s silence, I am often skeptical, but I am willing to let students have their end of the responsibility of seeking help when they need it and handling the consequences if they don’t.
But in many cases, such as with this particular test, the absence of questions leads to unresolved issues with learning, which compound themselves when a new topic is connected to the old one, compounded further when the next topic is reached, and so on. Unresolved questions are like an invasive species entering an ecosystem. Pretty soon, it becomes impossible even to ask or answer questions about the material in any meaningful way because the entire “ecosystem” of a student’s conceptual framework for a subject is infected with unresolved questions.
Asking if students understand something or if they have questions is, I am realizing, a poor way to combat this invasion. It’s not the students’ fault — though persistence in asking questions is a virtue more students could benefit from. The problem is that students, and teachers too, don’t really know what it means to “understand” something. We tend to base it on emotions — “I understand the Chain Rule” comes to mean “I have a feeling of understanding when I look at the Chain Rule” — rather than on objective measures. This explains the common student refrain of “It made sense when you did it in class, but when I tried it I didn’t know where to start”. Of course not! When you see an expert do a calculation, it feels good, but that feeling does not impart any kind of neural pathway towards your being able to do the same thing.
So what I mean by my tweet is that instead of asking “Do you understand?” or “Do you have any questions?” I am going to try in the future to give students something to do that will let me gauge their real understanding of a topic in an objective way. This could be a clicker question that hits at a main concept, or a quick and simple problem asking them to perform a calculation (or both). If a student can do the task correctly, they’re good for now on the material. If not, then they aren’t, and there is a question. Don’t leave it up to students to self-identify, and don’t leave it up to me to read students’ minds. Let the students do something simple, something appropriate for the moment, and see what the data say instead.
This may have the wonderful side effect of teaching some metacognition as well — to train students how to tell when they do or do not know something.
Filed under Education, Teaching
## Technology making a distinction but not a difference?
This article is the second one that I’ve done for Education Debate at Online Schools. It first appeared there on Tuesday this week, and now that it’s fermented a little I’m crossposting it here.
The University of South Florida‘s mathematics department has begun a pilot project to redesign its lower-level mathematics courses, like College Algebra, around a large-scale infusion of technology. This “new way of teaching college math” (to use the article’s language) involves clickers, lecture capture, software-based practice tools, and online homework systems. It’s an ambitious attempt to “teach [students] how to teach themselves”, in the words of professor and project participant Fran Hopf.
It’s a pilot project, so it remains to be seen if this approach makes a difference in improving the pass rates for students in lower-level math courses like College Algebra, which have been at around 60 percent. It’s a good idea. But there’s something unsettling about the description of the algebra class from the article:
Hopf stands in front of an auditorium full of students. Several straggle in 10 to 15 minutes late.
She asks a question involving an equation with x’s, h’s and k’s.
Silence. A few murmurs. After a while, a small voice answers from the back.
“What was that?” Hopf asks. “I think I heard the answer.”
Every now and then, Hopf asks the students to answer with their “clickers,” devices they can use to log responses to multiple-choice questions. A bar graph projected onto a screen at the front of the room shows most students are keeping up, though not all.
[…]
As Hopf walks up and down the aisles, she jots equations on a hand-held digital pad that projects whatever she writes on the screen. It allows her to keep an eye on students and talk to them face-to-face throughout the lesson.
Students start drifting out of the 75-minute class about 15 minutes before it ends. But afterward, Hopf is exuberant that a few students were bold enough to raise their hands and call out answers.
To be fair: This is a very tough audience, and the profs involved have their work cut out for them. The USF faculty are trying with the best of intentions to teach students something that almost assuredly none of them really want to learn, and this is exceedingly hard and often unrewarding work. I used to teach remedial algebra (well short of “college algebra”) at a two-year institution, and I know what this is like. I also know that the technology being employed here can, if used properly, make a real difference.
But if there’s one main criticism to make here, it’s that underneath the technology, what I’m seeing — at least in the snapshot in the article — is a class that is really not that different than that of ten or twenty years ago. Sure, there’s technology present, but all it seems to be doing is supporting the kinds of pedagogy that were already being employed before the technology, and yielded 60% pass rates. The professor is using handheld sketching devices — to write on the board, in a 250-student, 75-minute long lecture. The professor is using clickers to get student responses — but also still casting questions out to the crowd and receiving the de rigueur painful silence following the questions, and the clickers are not being used in support of learner-centered pedagogies like peer instruction. The students have the lectures on video — but they also still have to attend the lectures, and class time is still significantly instructor-centered. (Although apparently there’s no penalty for arriving 15 minutes late and leaving 15 minutes early. That behavior in particular should tell USF something about what really needs to change here.)
What USF seems not to have fully apprehended is that something about their remedial math system is fundamentally broken, and technology is neither the culprit nor the panacea. Moving from an instructor-centered model of learning without technology to an instructor-centered model of learning with technology is not going to solve this problem. USF should instead be using this technology to create disruptive change in how it delivers these courses by refocusing to a student-centered model of learning. There are baby steps here — the inclusion of self-paced lab activities is promising — but having 75-minute lectures (on college algebra, no less) with 225 students signals a reluctance to change that USF’s students cannot afford to keep.
## An M-file to generate easy-to-row-reduce matrices
In my Linear Algebra class we use a lot of MATLAB — including on our timed tests and all throughout our class meetings. I want to stress to students that using professional-grade technological tools is an essential part of learning a subject whose real-life applications closely involve the use of those tools. However, there are a few essential calculations in linear algebra, the understanding of which benefits from doing by hand. One of those calculations is row-reduction. Nobody does this by hand; but doing it by hand is useful for understanding elementary row operations and for getting a feel for the numerical processes that are going on under the hood. And it helps with understanding later concepts, notably that of the LU factorization of a matrix.
I have students take a mastery exam where they have to reduce a 3×5 or 4×6 matrix to reduced echelon form by hand. They are not allowed any technology on that exam. I’ve learned that making up good matrices for this exam is surprisingly tricky. My first attempt at writing the exam resulted in a nice-looking matrix whose reduced echelon form had mind-bendingly big fractions in it. I want the exam to be about row reduction and not fraction arithmetic, so I sat down this morning and wrote this MATLAB function called easyRR.m which automatically spits out $m \times n$ random integer matrices whose row-reduction process might involve fractions but which aren’t horrendous:
%% Function to create an mxn matrix that is easy to row-reduce by hand.
% Basic idea: Construct this matrix by building an LU factorization for it
% where both L and U have small integer values.
% R. Talbert, Feb 15, 2011
function A = easyRR(m,n)
%% Create the L in the LU factorization. This matrix encodes the elementary
%% row operations needed to get A to echelon form.
% Start with a random integer square matrix:
L = randi([-10, 10], [m,m]);
% Replace diagonal elements with 1's:
for i=1:m
L(i,i) = 1;
end
% Zero out all entries above the diagonal:
L = tril(L);
%% Now create the U in the LU factorization, using smaller integers so that
%% the back substitution phase isn't too bad.
% This creates an mxn random integer matrix and zeros out all entries below
% the diagonal.
U = triu(randi([-5,5], [m,n]));
%% The easy-to-reduce matrix is the product of L and U.
A = L*U;
Here’s a screenshot:
The fractions involved here have denominators no larger than 25, which is way more doable for students than what I had been having them work with (sorry, guys).
And, if you happen to have the Symbolic Toolbox for MATLAB, you can add the line latex(sym(A)) to the end and the function will spit out the $\LaTeX$ code for that matrix, for easy copy/paste into the exam.
Anyway, I thought this was useful and so I’m giving it away!
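For readers without MATLAB, the same construction translates to pure Python (my own sketch; the function name and seed parameter are hypothetical, not from the post):

```python
import random

def easy_rr(m, n, seed=None):
    """Pure-Python sketch of easyRR: build A = L*U with a unit lower-triangular
    L and a small-integer upper-triangular U, so A row-reduces by hand
    without horrendous fractions."""
    rnd = random.Random(seed)
    # Unit lower-triangular L: 1's on the diagonal, small integers below.
    L = [[1 if i == j else (rnd.randint(-10, 10) if j < i else 0)
          for j in range(m)] for i in range(m)]
    # Upper-triangular U with small integer entries.
    U = [[rnd.randint(-5, 5) if j >= i else 0
          for j in range(n)] for i in range(m)]
    # A = L * U, as a list of rows
    return [[sum(L[i][k] * U[k][j] for k in range(m)) for j in range(n)]
            for i in range(m)]

A = easy_rr(3, 5, seed=2011)   # a 3x5 integer matrix
```

Because A is assembled as L*U with 1's on L's diagonal, the multipliers that appear during hand row-reduction recover the small integers stored in L (barring an unlucky zero pivot in U).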
Filed under LaTeX, Linear algebra, Math, MATLAB, Teaching
## Another thought from Papert
Like I said yesterday, I’m reading through Seymour Papert’s Mindstorms: Children, Computers, and Powerful Ideas right now. It is full of potent ideas about education that are reverberating in my brain as I read it. Here’s another quote from the chapter titled “Mathophobia: The Fear of Learning”:
Our children grow up in a culture permeated with the idea that there are “smart people” and “dumb people.” The social construction of the individual is as a bundle of aptitudes. There are people who are “good at math” and people who “can’t do math.” Everything is set up for children to attribute their first unsuccessful or unpleasant learning experiences to their own disabilities. As a result, children perceive failure as relegating them either to the group of “dumb people” or, more often, to a group of people “dumb at x” (where, as we have pointed out, x often equals mathematics). Within this framework children will define themselves in terms of their limitations, and this definition will be consolidated and reinforced throughout their lives. Only rarely does some exceptional event lead people to reorganize their intellectual self-image in such a way as to open up new perspectives on what is learnable.
Haven’t all of us who teach seen this among the people in our classes? The culture in which our students grow up unnaturally, and incorrectly, breaks people into “good at math” or “bad at math”, and students who don’t have consistent, lifelong success will put themselves in the second camp, never to break out unless some “exceptional event” takes place. Surely each person has real limitations — I, for example, will never be on the roster of an NFL team, no matter how much I believe in myself — but when you see what students are capable of doing when put into a rich intellectual environment that provides them with challenges and support to meet them, you can’t help but wonder how many of those “limitations” are self-inflicted and therefore illusory.
It seems to me that we teachers are in the business of crafting and delivering “exceptional events” in Papert’s sense.
## Bound for New Orleans
Happy New Year, everyone. The blogging was light due to a nice holiday break with the family. Now we’re all back home… and I’m taking off again. This time, I’m headed to the Joint Mathematics Meetings in New Orleans from January 5 through January 8. I tend to do more with my Twitter account during conferences than I do with the blog, but hopefully I can give you some reporting along with some of the processing I usually do following good conference talks (and even some of the bad ones).
I’m giving two talks while in New Orleans:
• On Thursday at 3:55, I’m speaking on “A Brief Fly-Through of Cryptology for First-Semester Students using Active Learning and Common Technology” in the MAA Session on Cryptology for Undergraduates. That’s in the Great Ballroom E, 5th Floor Sheraton in case you’re there and want to come. This talk is about a 5-day minicourse I do as a guest lecturer in our Introduction to the Mathematical Sciences activity course for freshmen.
• On Friday at 11:20, I’m giving a talk called “Inverting the Linear Algebra Classroom” in the MAA Session on Innovative and Effective Ways to Teach Linear Algebra. That’s in Rhythms I, 2nd floor Sheraton. This talk is an outgrowth of a blog post I wrote back in the spring following my first non-MATLAB attempt at the inverted classroom approach, and it will touch on the inverted classroom model in general and how it can play out in Linear Algebra in particular.
Both sessions I’m speaking in are loaded with what look to be excellent talks, so I’m excited about participating. I’d be remiss if I didn’t mention that Gil Strang and David Lay are two of the organizers of the linear algebra setting, which is like a council of the linear algebra gods.
I’ll give Casting Out Nines readers a sneak peek at my two talks by telling you I’ve set up a web site that has the Prezis for both talks along with links to the materials I mention in the talks. And if you’re there in New Orleans, come by my talks if you have the slots free or just give me a ring on my Twitter and I’d love to meet up with you.
https://www.zbmath.org/?q=an%3A0619.46064 | # zbMATH — the first resource for mathematics
Interpolation with a parameter function. (English) Zbl 0619.46064
The (Lions-Peetre) real interpolation spaces $$\bar{A}_{\theta,q}$$ are defined using the function norm $$\Phi(\phi)=\left(\int_{0}^{\infty}\bigl(\phi(t)/t^{\theta}\bigr)^{q}\,dt/t\right)^{1/q}$$. By replacing $$t^{\theta}$$ with a more general (parameter) function $$\rho=\rho(t)$$ we obtain the spaces $$\bar{A}_{\rho,q}$$. In this paper we point out that most of the classical (and some new) theorems for the spaces $$\bar{A}_{\theta,q}$$ can also be formulated for the more general spaces $$\bar{A}_{\rho,q}$$. Sometimes we only need to adjust some recent results to the present situation, but sometimes we must give separate proofs of our statements. Every result is given in a form well suited to immediate applications. This paper can be seen as a follow-up and unification of several results of this kind in the literature.
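Written out, the substitution the review describes gives the generalized function norm (my transcription; the classical choice $$\rho(t)=t^{\theta}$$ recovers $$\bar{A}_{\theta,q}$$):

```latex
\Phi_{\rho}(\phi) \;=\; \left( \int_{0}^{\infty} \left( \frac{\phi(t)}{\rho(t)} \right)^{q} \frac{dt}{t} \right)^{1/q}
```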
##### MSC:
46M35 Abstract interpolation of topological vector spaces
46E30 Spaces of measurable functions ($$L^p$$-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.)
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-7-rates-of-change-in-the-natural-and-social-sciences-3-7-exercises-page-233/10

## Calculus: Early Transcendentals 8th Edition
a. $t = 0$ and $t = 5$ b. $t \approx 3.08$
$s = t^{4} - 4t^{3} - 20t^{2} + 20t, t\geq0$, and the given velocity is $s' = 20 \frac{m}{s}$ a. $s' = \frac{d(t^{4})}{dt} - \frac{d(4t^{3})}{dt} - \frac{d(20t^{2})}{dt} + \frac{d(20t)}{dt}$ $s' = 4t^{3} - 12t^{2} - 40t + 20$ Set the velocity equal to 20: $20 = 4t^{3} - 12t^{2} - 40t + 20$ $4t^{3} - 12t^{2} - 40t + 20-20 = 0$ $4t^{3} - 12t^{2} - 40t = 0$ Now factor out $4t$: $4t(t^{2} - 3t - 10)= 0$ Now factor the quadratic and set each factor to zero: $4t(t - 5)(t+2) = 0$ $t = 0, t = 5, t = -2$ We eliminate $t = -2$ because the problem says that $t\geq0$ So the answers are: $t = 0$ and $t = 5$ b. Find the time when the acceleration is 0: First find the second derivative of $s$ $s'(t) = 4t^{3} - 12t^{2} - 40t + 20$ $s''(t) = \frac{d(4t^{3})}{dt} - \frac{d(12t^{2})}{dt} - \frac{d(40t)}{dt} + \frac{d(20)}{dt}$ $s''(t) = 12t^{2} - 24t - 40$ Now factor out $4$: $s''(t) = 4(3t^{2} - 6t - 10)$ Now set equal to $0$: $4(3t^{2} - 6t - 10) = 0$ To solve, use the quadratic formula: $t = \frac{ - b \pm \sqrt {b^2 - 4ac} }{2a}$ with $a = 3$, $b = -6$ and $c = -10$: $t = \frac{ - (-6) \pm \sqrt {(-6)^2 - 4(3)(-10)} }{2(3)}$ $t = \frac{ 6 \pm \sqrt {36 + 120} }{6}$ $t = \frac{ 6 \pm \sqrt {156} }{6}$ Before simplifying we eliminate the negative solution because the problem says that $t\geq0$.
$t = \frac{ 6 + \sqrt {156} }{6}$ $t = \frac{ 6 + \sqrt {39(4)} }{6}$ $t = \frac{ 6 + 2\sqrt {39} }{6}$ $t = \frac{ 3 + \sqrt {39} }{3}\approx3.08$
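A quick numeric check of both answers (hypothetical code, not part of the original solution):

```python
def v(t):  # velocity s'(t)
    return 4*t**3 - 12*t**2 - 40*t + 20

def a(t):  # acceleration s''(t)
    return 12*t**2 - 24*t - 40

assert v(0) == 20 and v(5) == 20      # part a: velocity is 20 m/s
t = (3 + 39**0.5) / 3                 # part b: root of s''(t) = 0
print(round(t, 2))                    # 3.08
assert abs(a(t)) < 1e-9
```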
https://www.hackmath.net/en/math-problem/2282

# Triangle ABC
Triangle ABC has side lengths m-1, m-2, m-3. What must m be for the triangle to be
a) right-angled
b) acute-angled?
Result
m(a) = 6
m(b) = 7
#### Solution:
$(m-1)^2 = (m-2)^2 +(m-3)^2; \quad m>3 \ \\ m^2-2m+1 = m^2-4m+4+m^2-6m+9 \ \\ m^2-8m+12 = 0 \ \\ \ \\ m_{1,2} = \dfrac{ -b \pm \sqrt{ D } }{ 2a } = \dfrac{ 8 \pm \sqrt{ 16 } }{ 2 } = \dfrac{ 8 \pm 4 }{ 2 } = 4 \pm 2 \ \\ m_{1} = 6, \quad m_{2} = 2 \ \\ \ \\ \text{Since } m>3: \quad m(a) = 6$
Checkout calculation with our calculator of quadratic equations.
$b)$ For an acute triangle the longest side must satisfy $(m-1)^2 < (m-2)^2+(m-3)^2$, i.e. $m^2-8m+12 > 0$. Together with the triangle inequality $(m-2)+(m-3) > (m-1)$, i.e. $m>4$, this gives $m>6$.
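As a quick numeric check (hypothetical code, not part of the original solution): m = 6 gives sides 5, 4, 3, a right triangle, while m = 7 gives sides 6, 5, 4, an acute one.

```python
def triangle_type(m):
    a, b, c = m - 1, m - 2, m - 3   # a is the longest side
    assert b + c > a, "not a valid triangle"
    if a**2 == b**2 + c**2:
        return "right"
    return "acute" if a**2 < b**2 + c**2 else "obtuse"

print(triangle_type(6))  # right  (sides 5, 4, 3)
print(triangle_type(7))  # acute  (sides 6, 5, 4)
```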
https://www.jamesuanhoro.com/post/2018/05/07/simulating-data-from-regression-models/

# Simulating data from regression models
## 2018/05/07
Categories: stats rstats Tags: model-validation regression GLM
My preferred approach to validating regression models is to simulate data from them, and see if the simulated data capture relevant features of the original data. A basic feature of interest would be the mean. I like this approach because it is extendable to the family of generalized linear models (logistic, Poisson, gamma, …) and other regression models, say t-regression. It’s something Gelman and Hill cover in their regression text.1 Sadly, the default method of simulating data from regression models in R misses what one might consider an important source of model uncertainty - variance in estimated regression coefficients.
Your standard regression model assumes there are true/fixed parameters relating the predictors to the outcome. However, when we perform regression, we only estimate these parameters. Hence, regression software returns standard errors which represent coefficient uncertainty. All other things being equal, smaller sample sizes lead us to greater coefficient uncertainty meaning larger standard errors. The default method for simulating data from a model ignores this uncertainty. Is this a big problem? Maybe not so much. But it would be nice if this source of model uncertainty was not ignored.
I’ll demonstrate what I mean using an example.
## Demonstration
I’ll use Poisson regression to demonstrate this. I simulate two predictors, one continuous, xc, and one binary, xb. And use a small sample size of 50.
```r
library(MASS) # For multivariate normal distribution, handy later on
n <- 50
set.seed(18050518)
dat <- data.frame(xc = rnorm(n), xb = rbinom(n, 1, .5))
```
The coefficients will be .5 for xc and 1 for xb. I exponentiate the prediction and use the rpois() function to generate a Poisson-distributed outcome.
```r
# Exponentiate prediction and pass to rpois()
dat <- within(dat, y <- rpois(n, exp(.5 * xc + xb)))
summary(dat)
```
```
       xc                  xb             y
 Min.   :-2.903259   Min.   :0.00   Min.   :0.00
 1st Qu.:-0.648742   1st Qu.:0.00   1st Qu.:1.00
 Median :-0.011887   Median :0.00   Median :2.00
 Mean   : 0.006109   Mean   :0.38   Mean   :2.02
 3rd Qu.: 0.808587   3rd Qu.:1.00   3rd Qu.:3.00
 Max.   : 2.513353   Max.   :1.00   Max.   :7.00
```
Next is to run the model.
```r
summary(fit.p <- glm(y ~ xc + xb, poisson, dat))
```
```
Call:
glm(formula = y ~ xc + xb, family = poisson, data = dat)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.9065  -0.9850  -0.1355   0.5616   2.4264

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  0.20839    0.15826   1.317    0.188
xc           0.46166    0.09284   4.973 6.61e-07 ***
xb           0.80954    0.20045   4.039 5.38e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 91.087  on 49  degrees of freedom
Residual deviance: 52.552  on 47  degrees of freedom
AIC: 161.84

Number of Fisher Scoring iterations: 5
```
The estimated coefficients were not too distant from the population model, .21 for the intercept instead of 0, .46 instead of .5, and 0.81 instead of 1.
Next to simulate data from the model, I’d like 10,000 simulated datasets because why not? To capture the uncertainty in the regression coefficients, I assume the coefficients arise from a multivariate normal distribution with the estimated coefficients acting as means and the variance-covariance matrix of the regression coefficients as the variance-covariance matrix for the multivariate normal distribution.2
```r
coefs <- mvrnorm(n = 10000, mu = coefficients(fit.p), Sigma = vcov(fit.p))
```
Out of curiosity, I check how well the simulated coefficients match the original coefficients. First the means:
```r
coefficients(fit.p)
```
```
(Intercept)          xc          xb
  0.2083933   0.4616605   0.8095403
```
```r
colMeans(coefs) # means of simulated coefficients
```
```
(Intercept)          xc          xb
  0.2088947   0.4624729   0.8094507
```
Pretty good, and next the standard errors:
```r
sqrt(diag(vcov(fit.p)))
```
```
(Intercept)          xc          xb
 0.15825667  0.09284108  0.20044809
```
```r
apply(coefs, 2, sd) # standard deviation of simulated coefficients
```
```
(Intercept)          xc          xb
 0.16002806  0.09219235  0.20034148
```
Also pretty good.
Next step is to simulate data from the model. We do this by multiplying each row of the simulated coefficients by the original predictors. Then we pass the predictions to rpois() so it generates a Poisson distributed response:
```r
# One row per case, one column per simulated set of coefficients
sim.dat <- matrix(nrow = n, ncol = nrow(coefs))
fit.p.mat <- model.matrix(fit.p) # Obtain model matrix
# Cross product of model matrix by coefficients, exponentiate result,
# then use to simulate Poisson-distributed outcome
for (i in 1:nrow(coefs)) {
  sim.dat[, i] <- rpois(n, exp(fit.p.mat %*% coefs[i, ]))
}
rm(i, fit.p.mat) # Clean house
```
Now one is done with simulation, compare the simulated datasets to the original dataset on at least the mean and variance of the outcome:
```r
c(mean(dat$y), var(dat$y)) # Mean and variance of original outcome
```
```
[1] 2.020000 3.366939
```
```r
c(mean(colMeans(sim.dat)), mean(apply(sim.dat, 2, var))) # average of mean and var of 10,000 simulated outcomes
```
```
[1] 2.050724 4.167751
```
The average mean of the simulated outcomes was a little higher than that of the original data; the average variance was much higher. On average, one can expect the variance to be more off target than the mean. The variance will also be positively skewed with some extremely high values; at the same time, it is bounded at zero, so the median might be a better reflection of the center of the data:
```r
median(apply(sim.dat, 2, var))
```
```
[1] 3.907143
```
The median variance is much closer to the variance of the original outcome.
Here’s the distribution of the simulated means and variances:
```r
par(mfrow = c(1, 2))
hist(colMeans(sim.dat), main = "Means")
hist(apply(sim.dat, 2, var), main = "Variances")
par(mfrow = c(1, 1))
```
The above is how I would simulate data from a model and conduct basic checks. It could also be useful to plot histograms of a few of the 10,000 simulated datasets and compare those to a histogram of the original outcome. One could also test the mean difference on the outcome between xb = 1 and xb = 0 in the original data and in the simulated datasets. If the data were over-dispersed, variance comparisons as done above or looking at a few histograms would reveal the inadequacy of a Poisson model if capturing the variance was important. Whatever features the investigator considers important can be examined and compared in this manner.
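For readers outside R, here is a NumPy-only sketch of the same idea: draw coefficient vectors from a multivariate normal centered on the estimates with the coefficient covariance matrix, then simulate Poisson outcomes. The numbers are hypothetical stand-ins (rounded, with covariances assumed zero), not the fitted values from this post.

```python
import numpy as np

rng = np.random.default_rng(18050518)

# Hypothetical stand-ins for glm output: estimated coefficients and
# their variance-covariance matrix (intercept, xc, xb).
beta_hat = np.array([0.21, 0.46, 0.81])
vcov = np.diag([0.158, 0.093, 0.200]) ** 2   # assuming zero covariances

# Model matrix: intercept, one continuous and one binary predictor
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.binomial(1, 0.5, n)])

# Draw 10,000 coefficient vectors, then simulate one dataset per draw
coefs = rng.multivariate_normal(beta_hat, vcov, size=10_000)
sim = rng.poisson(np.exp(X @ coefs.T))       # one column per simulated dataset

print(sim.shape)  # (50, 10000)
```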
Back to base R, it has a simulate() function for doing the same thing:
```r
sim.default <- simulate(fit.p, 10000)
```
This code is equivalent to:
```r
sim.default <- replicate(10000, rpois(n, fitted(fit.p)))
```
fitted(fit.p) is the prediction on the response scale, or the exponentiated linear predictor since this is Poisson regression. Hence, we’d be using the single set of predicted values from the model to repeatedly create the simulated outcomes.
What does it suggest about recovery of data features:
```r
c(mean(colMeans(sim.default)), mean(apply(sim.default, 2, var)),
  median(apply(sim.default, 2, var)))
```
```
[1] 2.020036 3.931580 3.810612
```
The mean and variance are closer to the mean and variance of the original outcome than when one ignores coefficient uncertainty. This approach will always result in a lower variance than when one considers the uncertainty in regression coefficients. It is much faster and requires zero programming to implement, but I am not comfortable ignoring uncertainty in regression coefficients, making the model seem more adequate than it is.
Another problem is most packages that utilize lm() and glm() and simulate data from the model would probably not implement their own simulate() function. For example, DHARMa, a great package for simulation-based residual diagnostics, also relies on the simulate() function when evaluating GLMs.
1. Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
2. Gelman and Hill perform a procedure that is a little more complicated and has its basis in Bayesian inference. It is implemented in the sim() function in the arm package.
http://physics.stackexchange.com/questions/96381/higgs-field-the-vacuum-expectation-value

# Higgs field - the vacuum expectation value
I have already asked questions about the Higgs mechanism. But what still interests me is the following: the vacuum expectation value of the Higgs field, together with the Yukawa coupling constants, is responsible for the emergence of the elementary particle masses. What causes the spontaneous symmetry breaking? Is it right to simply write $\phi = v + h$? When I write $\phi = v + h$, is the symmetry broken, and is the vacuum expectation value thereby formed?
And am I right when I say that all fermion masses and gauge boson masses arise through $v$ (the vacuum expectation value), not through the Higgs boson?
Can you reword this sentence: "How come now to the spontaneous symmetry breaking and thus the emergence of the vacuum expectation value?" I don't understand it. – JeffDror Feb 2 at 10:00
I have edit my question. – user37415 Feb 2 at 10:13
Spontaneous symmetry breaking happens if the vacuum of the theory lies not at $\phi = 0$ but at $\phi = v$ (i.e. $V'(0) \neq 0$, but $V'(v) = 0$).
Therefore, you re-write $\phi(x) = v + h(x)$ (sometimes with factors of $1/\sqrt{2}$ floating around). Expanding the new dynamical field $h(x)$ around $h = 0$ is an expansion around the vacuum of the theory and can therefore be dealt with perturbatively. The shift $v$ now appears in all couplings of the old Higgs field $\phi$ and gives masses to the gauge bosons (through the gauge couplings) and the fermions (via Yukawa couplings). Furthermore, the new dynamical field $h$ obtains very much the same couplings that $\phi$ previously had.
Exactly. I put $V'(v) = 0$, which is the condition for a minimum at a field value $\Phi = v$. – Neuneck Mar 4 at 22:17
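To make this concrete, here is the standard textbook toy model for a single real scalar field with Yukawa coupling $y$ (an illustration added to this write-up, not part of the original exchange):

```latex
V(\phi) = -\tfrac{1}{2}\mu^2\phi^2 + \tfrac{1}{4}\lambda\phi^4,
\qquad
V'(\phi) = -\mu^2\phi + \lambda\phi^3 = 0
\;\Rightarrow\;
v = \mu/\sqrt{\lambda}

\mathcal{L}_{\mathrm{Yukawa}} = -y\,\phi\,\bar\psi\psi
\;\xrightarrow{\;\phi \,=\, v+h\;}\;
-\underbrace{y\,v}_{m_\psi}\,\bar\psi\psi \;-\; y\,h\,\bar\psi\psi
```

The fermion mass $m_\psi = yv$ indeed comes from the vacuum expectation value, while the remaining term $y\,h\,\bar\psi\psi$ describes the coupling of the fermion to the physical Higgs boson.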
https://www.studysmarter.us/textbooks/business-studies/operations-and-supply-chain-management-14th/manufacturing-processes/q13oq-b-a-graphics-reproduction-firm-has-four-automatic-equi/
Q13OQ (b).
Expert-verified
Found in: Page 255
### Operations And Supply Chain Management
Book edition 14th
Author(s) F. Robert Jacobs
Pages 800 pages
ISBN 9780078024023
# A graphics reproduction firm has four automatic equipment units, but each occasionally becomes inoperative because of the need for supplies, maintenance, or repair. Each unit requires service roughly twice each hour, or, more precisely, each unit of equipment runs an average of 30 minutes before needing service. Service times vary widely, ranging from a simple service (such as pressing a restart switch or repositioning paper) to more detailed equipment disassembly. The average service time, however, is minutes. Equipment downtime results in a loss of $20 per hour. The one equipment attendant is paid $6 per hour. Using finite queuing analysis, answer the following question: B. What is the average number of units still in operation?
In a queuing system, service time is defined as the time required to serve a customer.
## Operation

The reciprocal of the mean service time is called the mean service rate, expressed as the number of customers served during a fixed period of time. In practical terms, the service rate is the capacity of the server or machine in units per unit of time.
## Calculating the average number of units in operation using the formula

The average number of units in operation = Number of units - Number of down units

The number of down units = L + H, where L is the average number of units waiting for service and H is the average number of units being served.

The formula becomes:

The average number of units in operation = Number of units - (L + H)

= 4 - (0.256 + 0.535)

= 4 - 0.791

= 3.209
The average number of units in operation = 3.209
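The arithmetic can be checked in a couple of lines (hypothetical snippet; L and H are the finite-queuing figures quoted in the solution):

```python
N = 4        # number of equipment units
L = 0.256    # average number of units waiting for service
H = 0.535    # average number of units being served

in_operation = N - (L + H)
print(round(in_operation, 3))  # 3.209
```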
http://www.bot-thoughts.com/2012/12/getting-started-with-dspic33f.html

## Friday, December 14, 2012
### Getting started with dsPIC33F
Here's how I got started developing on a Microchip dsPIC33F.
I wanted to speed up the flame-detecting vision system on Pokey from 3fps to 30fps by upgrading from an 8-bit, 20MHz ATmega328P to a 16-bit, 80MHz dsPIC33F. The main performance boost comes from the dsPIC's very fast ADC.
Pokey with Game Boy Camera
I chose a dsPIC33FJ128GP802-I/P in a 28-PDIP package. It features 128KB flash, 16KB RAM, 16 remappable pins, a bunch of peripherals, DMA, and 1.1MSPS 10-bit ADC, among other cool features that make it a good choice for a low resolution grayscale vision system.
If you're just starting out, this Dangerous Prototypes Introduction will be quite helpful. I'll cover some of the same stuff in this post.
## MPLAB IDE
I'm using the MPLAB IDE v8.6.0 which features a pretty familiar interface not unlike AVRStudio, Netbeans, Eclipse, IAR EWARM, and the like.
The IDE features workspaces in which you develop projects. The left panel features a file explorer for the workspace. The bottom panel features various types of output.
Code is written in the big panel. Toolbar buttons are used to build code, program the MCU, manage files, project, and perform other routine tasks. As with AVR Studio and other standard IDEs, you can double-click on compiler errors to find the offending source.
Turning on line numbers is helpful, sometimes. Select Properties from the Edit menu, click the 'C' File Types tab, and click the Line Numbers check box.
## Programmer
I'm using the PICkit3. I took advantage of promotional pricing at the time. I figured, why hassle with a clone when I can increase my chances of success without spending much more.
They now cost around $45 so it's not exactly a bargain but it's acceptable compared to other programmers (I'm thinking particularly of the JTAG ICE MkII I've been using). You could also get a Microstick for the dsPIC which has a built in programmer and debugger and costs$25.
Taking baby steps, first is breadboarding the dsPIC. The hello world programs will come after that.
In the picture above, Pin 1 is the lower left. That's the !MCLR (reset) pin. Note the tiny 10K pullup resistor, partially hidden.
Pin 8 is VSS (ground). Pin 13 is VDD (3.3V) with a 0.1uF decoupling capacitor installed.
Pin 28 is the AVDD (analog 3.3V) pin, 27 is the AVSS pin (the brown ground wire is hidden somewhat). I put a 0.1uF decoupling capacitor here, too.
The 10uF capacitor at pins 19 and 20 bypass VCAP to VSS. And that's all you need. You don't even need an external crystal, although you could use one.
To program the chip you need power. I've used a Sparkfun FTDI breakout to provide 3.3V. It is also connected to UART1 which is setup for transmit on pin 25 (PB15), receive on pin 24 (PB14).
Here's the full pinout reproduced for educational purposes:
28-DIP dsPIC33 pinout
To program the device, connect the PICkit3. Pin 1 with the arrow connects to !MCLR, the green jumper above. Pin 2 is target VDD (telling the PICkit3 if the device is working), the red jumper at the bottom of the breadboard. Pin 3 is VSS, the bottom black jumper. Pin 4 is PGD (PGED1, dsPIC pin 4), the orange jumper. Pin 5 is PGC (PGEC1, dsPIC pin 5), the white jumper.
PICkit pinout
## Hello World
The first version of Hello World for an embedded system is blinking an LED. You can learn a lot about an MCU just by getting it to that point. An AVR is dead simple, an ARM7 like the one on my LPC2103 breakout board, is a nightmare--it took days. The dsPIC33F is pretty easy.
### MCU Setup
The dsPIC uses an interesting method of configuring the clock source, PLL, watchdog, and other options. One places a set of statements before your code. Then, in the main() routine, one sets up the PLL multipliers and divisors. The following code shows setting up a dsPIC33F for 80MHz operation.
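The embedded code snippet did not survive extraction; the sketch below is a reconstruction in the spirit of Microchip's oscillator examples, not the author's exact code. The configuration macro and register names (_FOSCSEL, _FOSC, PLLFBD, CLKDIV, OSCCON) come from the C30/XC16 device headers, and the math assumes the nominal 7.37 MHz internal FRC oscillator, which lands just under 80 MHz; an external crystal would be needed to hit 80 MHz exactly.

```c
#include <p33FJ128GP802.h>

// Configuration "fuse" statements placed before any code:
_FOSCSEL(FNOSC_FRCPLL);                          // start on internal FRC + PLL
_FOSC(FCKSM_CSECMD & OSCIOFNC_ON & POSCMD_NONE); // no external crystal
_FWDT(FWDTEN_OFF);                               // watchdog off

int main(void)
{
    // Fosc = Fin * M / (N1 * N2); Fcy = Fosc / 2
    // 7.37 MHz * 43 / (2 * 2) ~= 79.2 MHz -> Fcy ~= 39.6 MIPS
    PLLFBD = 41;                  // M  = PLLFBD + 2 = 43
    CLKDIVbits.PLLPOST = 0;       // N2 = 2
    CLKDIVbits.PLLPRE  = 0;       // N1 = 2
    while (OSCCONbits.LOCK != 1)  // wait for the PLL to lock
        ;
    // ... application code ...
    return 0;
}
```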
### LEDs and GPIO Pins
Three registers control GPIO on the dsPIC: a tri-state register, TRISx, which controls direction; a latch register, LATx; and a port register, PORTx. You can write to the PORTx register, which writes to the latch, or you can write to LATx directly. Reading from LATx reads the latch; reading from PORTx reads the actual pin state.
On the dsPIC33 I'm using, there's an A and B port. So you'd use TRISA and TRISB, PORTA and PORTB, LATA and LATB. Let's use RA0 for our LED.
All pins start out as analog inputs after a reset. To disable the analog functionality of a pin, write a 1 to the corresponding bit in the AD1PCFGL register.
Next, configure the RA0 pin as an output by writing a 0 to the TRISA register's bit for RA0 (a 1 makes it an input). In C, use TRISAbits.TRISA0 to reference the A0 bit of the TRISA register.
In general you can reference bits of any register like this. To turn the pin on, write a 1 to LATAbits.LATA0. To turn the pin off, write a 0 instead.
You can toggle a pin using LATAbits.LATA0 = ~LATAbits.LATA0.
A shorter notation for the bits above is to use LD1_TRIS, LD1_I, and LD1_O, defined as follows in HardwareProfile.h:
#define LD1_TRIS (TRISAbits.TRISA0)
#define LD1_I (PORTAbits.RA0)
#define LD1_O (LATAbits.LATA0)
### Delays
If you want to blink an LED in a simple loop you need a delay. Ideally one that is carefully and correctly timed. Microchip provides such a function, and macros, through the C30 compiler. In the C30 compiler directory under the folder src, unzip the libpic30.zip archive to the folder pic30 and find delay32.s.
I copied this into my project folder. I also had to copy null_signature.s in as well. Then, add both files to the project source.
Then, in your C program, define FCY, the instruction cycle frequency. If you're running at a high frequency, define it as a long long.
After this definition, include libpic30.h. Finally, you can call __delay_ms() with an integer parameter representing the number of milliseconds to delay. Other functions include __delay_us() for microsecond delays and __delay32(), which delays the specified number of instruction cycles.
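Putting the GPIO and delay pieces together, a minimal blink loop might look like this (a sketch based on the definitions above, not the author's original code; the FCY value matches the roughly 40 MIPS clock configured earlier):

```c
#define FCY 40000000ULL      // instruction cycle frequency, must precede the include
#include <libpic30.h>
#include "HardwareProfile.h" // defines LD1_TRIS / LD1_O as shown above

int main(void)
{
    AD1PCFGL = 0xFFFF;   // all analog-capable pins set to digital
    LD1_TRIS = 0;        // RA0 as output
    while (1) {
        LD1_O = ~LD1_O;  // toggle the LED
        __delay_ms(500);
    }
    return 0;
}
```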
### UART
The second version of Hello World actually prints the string "Hello World" over a serial connection. I like to play with UART next as it's useful for debugging, and you can learn a little about the MCU's peripherals without too much complexity. Set up the pins, the baud rate and maybe a couple other options, and then start sending data.
Setting baud rate with the UART peripheral can be done using the high precision baud rate generator for higher speeds or the low precision generator for lower speeds. Select which to use via the BRGH bit of the UxMODE register. Set it to 1 for high precision, 0 for low.
For the high precision generator, baud rate divisor, UxBRG, is calculated by (FCY/(4*baud)) - 1 where FCY is the frequency of the instruction cycle clock.
The low precision baud rate generator uses BRG = (FCY/(16*baud)) - 1
Here are the values I calculated given FCY=40000000ULL
| Baud   | BRG (BRGH=0) | BRG (BRGH=1) |
|--------|--------------|--------------|
| 1200   | 2082         | 8332         |
| 2400   | 1040         | 4165         |
| 4800   | 519          | 2082         |
| 9600   | 259          | 1040         |
| 19200  | 129          | 519          |
| 28800  | 85           | 346          |
| 38400  | 64           | 259          |
| 57600  | 42           | 172          |
| 115200 | 20           | 85           |
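The table can be reproduced with a few lines of Python (a hypothetical helper, not from the original post), applying the two formulas above with floor division:

```python
FCY = 40_000_000  # instruction cycle frequency in Hz

def brg(baud, brgh):
    """UxBRG divisor: FCY/(4*baud)-1 when BRGH=1, FCY/(16*baud)-1 when BRGH=0."""
    divisor = 4 if brgh else 16
    return FCY // (divisor * baud) - 1

for baud in (1200, 2400, 4800, 9600, 19200, 28800, 38400, 57600, 115200):
    print(baud, brg(baud, 0), brg(baud, 1))
```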
Once baud rate is set, clear the status register, UxSTA, enable the UART with the UARTEN bit of the UxMODE register, setting it to 1. Finally, clear the receive flag, UxRXIF bit of the IFS0 register.
You're not quite ready. The dsPIC offers a really cool capability, namely, mapping any peripheral to any pin. Even the ARMs I've worked with can't boast that kind of flexibility! So, let's set up remappable pin RP14 as the UART receive pin, and RP15 as the transmit. Set the pin assignment in the UxRXR_I and UxTX_O registers, respectively.
RP15_O is shorthand for RPOR7bits.RP15R and is defined in HardwareProfile.h
Finally, you're ready to send and receive. The bit U1STAbits.URXDA indicates data is ready to be read out of the U1RXREG data register. When it's time to transmit, put a byte in the U1TXREG register to send it. Here's the code to do all the above.
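The original code listing was lost in extraction; below is a hedged reconstruction that follows the steps in the text (115200 baud with BRGH=1 at FCY = 40 MHz, RP14/RP15 mapping as described). The peripheral-pin-select function code 3 for U1TX and the RPINR18 receive-mapping register are taken from the dsPIC33F datasheet, so treat this as a sketch rather than the author's exact code.

```c
void uart1_init(void)
{
    RPINR18bits.U1RXR = 14;     // U1RX input  <- RP14
    RPOR7bits.RP15R   = 3;      // RP15 output -> U1TX (function code 3)

    U1BRG = 85;                 // 115200 baud: 40 MHz / (4 * 115200) - 1
    U1MODEbits.BRGH   = 1;      // high-speed baud rate generator
    U1STA = 0;                  // clear the status register
    U1MODEbits.UARTEN = 1;      // enable the UART...
    U1STAbits.UTXEN   = 1;      // ...and its transmitter
    IFS0bits.U1RXIF   = 0;      // clear the receive flag
}

void uart1_putc(char c)
{
    while (U1STAbits.UTXBF) ;   // wait while the transmit buffer is full
    U1TXREG = c;
}

char uart1_getc(void)
{
    while (!U1STAbits.URXDA) ;  // wait for received data
    return U1RXREG;
}
```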
## That's All
So, that's about it. I'll post more articles as I progress, featuring other peripherals, tips, and tricks, and the like.
https://lw2.issarice.com/posts/5bd75cc58225bf0670374fb9/quasi-optimal-predictors | # Quasi-optimal predictors
post by Vanessa Kosoy (vanessa-kosoy) · 2015-12-25T14:17:05.000Z · score: 2 (2 votes) · LW · GW
## Contents
Definition 1
Theorem 1
Theorem 2
Theorem 3
Theorem 4
Theorem 5
Theorem 6
Definition 2
Theorem 7
Definition 3
Theorem 8
Theorem 9
Appendix
Lemma 1
Proof of Lemma 1
Lemma 2
Proof of Lemma 2
Proof of Theorem 1
Lemma 3
Proof of Lemma 3
Proof of Theorem 2
Proof of Theorem 4
Proof of Theorem 7
Lemma 4
Proof of Lemma 4
Proof of Theorem 6
Proof of Theorem 8
Proof of Theorem 9
In this post I define the concept of quasi-optimal predictors which is a weaker variant on the theme of optimal predictors. I explain the properties of quasi-optimal predictors that I currently understand (which are completely parallel to the properties of optimal predictors) and give an example where there is a quasi-optimal predictor but there is no optimal predictor.
All proofs are given in the appendix and are mostly analogous to proofs of corresponding theorems for optimal predictors.
# Definition 1
Given a distributional decision problem, a quasi-optimal predictor for is a family of polynomial size Boolean circuits s.t. for any family of polynomial size Boolean circuits we have
where .
# Theorem 1
Consider a distributional decision problem and a quasi-optimal predictor for . Suppose , are s.t.
Then:
# Theorem 2
Consider a word ensemble and , disjoint languages. Suppose is a quasi-optimal predictor for and is a quasi-optimal predictor for . Then, is a quasi-optimal predictor for .
# Theorem 3
Consider a word ensemble and , disjoint languages. Suppose is a quasi-optimal predictor for and is a quasi-optimal predictor for . Then, is a quasi-optimal predictor for .
# Theorem 4
Consider , distributional decision problems with respective quasi-optimal predictors and . Define as the family of circuits computing . Then, is a quasi-optimal predictor for .
# Theorem 5
Consider and a word ensemble. Assume is a quasi-optimal predictor for and is a quasi-optimal predictor for . Then is a quasi-optimal predictor for
# Theorem 6
Consider and a word ensemble. Assume . Assume is a quasi-optimal predictor for and is a quasi-optimal predictor for . Define as the circuit family computing
Then, is a quasi-optimal predictor for .
# Definition 2
Consider a word ensemble and two circuit families. We say is quasisimilar to relative to (denoted ) when .
# Theorem 7
Consider a distributional decision problem, a quasi-optimal predictor for and a polynomial size family. Then, is a quasi-optimal predictor for if and only if .
# Definition 3
Consider , distributional decision problems, a polynomial size family of circuits. is called a (non-uniform) strong pseudo-invertible reduction of to when there is a polynomial s.t. the following conditions hold:
(i)
(ii) There is s.t.
(iii) There is a polynomial and a family of polynomial size circuits s.t.
(iv) There are polynomial size circuits s.t.
# Theorem 8
Consider , distributional decision problems, a strong pseudo-invertible reduction of to and a quasi-optimal predictor for . Define as the family of circuits computing . Then, is a quasi-optimal predictor for .
# Theorem 9
Consider a one-to-one non-uniformly hard one-way function. Define . Then, is a quasi-optimal predictor for .
# Appendix
# Lemma 1
Consider a distributional decision problem and a family of polynomial size. Then, is a quasi-optimal predictor if and only if there is a function s.t.
(i) is non-decreasing in the second argument.
(ii) For any polynomial :
In the following, we will call functions satisfying conditions (i) and (ii) quasinegligible.
(iii) for any we have
Define
# Lemma 2
Consider a distributional decision problem and a corresponding quasi-optimal predictor. Then, there is a function s.t.
(i) is non-decreasing in the second and third arguments.
(ii) For all polynomials :
(iii) for all , and we have
# Proof of Lemma 2
Given , denote
Consider circuit computing the following function:
There is a polynomial s.t. . By Lemma 1,
for quasinegligible.
Integrating the inequality with respect to from to , we get
# Proof of Theorem 1
Define
Assume to the contrary that there is and an infinite set s.t.
Define as the circuits computing
is bounded by a polynomial since produces binary fractions of polynomial size; therefore, it is possible to compare them to the fixed numbers using a polynomial size circuit, even if the latter have infinite binary expansions.
We have
Define to be truncated to the first significant binary digit. Define as the circuits computing
By the assumption, has binary notation of bounded size, therefore is bounded by a polynomial.
Applying Lemma 2 we get
for vanishing at infinity.
Obviously , therefore
The expression on the left hand side is a quadratic polynomial in which attains its maximum at and has roots at and . is between and , but not closer to than . Therefore, the inequality is preserved if we replace by .
Substituting the equation for we get
Thus vanishes at infinity on , which is a contradiction.
# Lemma 3
Consider a distributional decision problem. If is a quasi-optimal predictor for then there are and a quasinegligible function s.t. for any we have
Conversely, suppose and is a polynomial size family for which there is a quasinegligible function s.t. for any we have
Define to be s.t. computing is equivalent to computing rounded to digits after the binary point. Then, is a quasi-optimal predictor.
# Proof of Lemma 3
Assume is an optimal predictor. Consider and where and . The function can be approximated by a circuit of size for some fixed polynomial , within rounding error s.t. . By Lemma 1,
where is quasinegligible. is bounded by a negligible function and therefore can be ignored by redefining . As in the proof of Theorem 1, can be dropped.
The expression on the left hand side is a quadratic polynomial in . Explicitly:
Moving to the right hand side and dividing both sides by we get
Take where is the rounding error. We get
Conversely, assume that for any
Consider . We have
can be computed by a circuit of size polynomial in and . Applying the assumption we get
where is quasinegligible. Noting that and we get
Observing that is bounded by a negligible function, we get the desired result.
# Proof of Theorem 2
Consider . We have
Using Lemma 3:
Therefore
Using Lemma 3 again we get the desired result.
# Proof of Theorem 4
We have
Therefore, for any
By Lemma 3, it is sufficient to show an appropriate bound for each of the terms on the right hand side. For the first term, we have
For any given , can be computed by a circuit with input of size polynomial in and . Applying Lemma 3 to , we get
where is a polynomial and is quasinegligible. Since is bounded by a polynomial in for , we get the bound we need.
For the second term, we have
For any given , can be computed by a circuit with input of size polynomial in , and . Applying Lemma 3 to , we get
Again, we got the required bound.
# Proof of Theorem 7
Assume is a quasi-optimal predictor. Applying Lemma 3 to predictor and circuits computing , we get
for some vanishing at infinity. Applying Lemma 3 to predictor and circuits computing , we get
for some vanishing at infinity. We have
Conversely, assume . Consider some . We have | 2019-06-26 05:02:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282382130622864, "perplexity": 1529.728529312705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000164.31/warc/CC-MAIN-20190626033520-20190626055520-00270.warc.gz"} |
https://bookdown.org/a_shaker/STM1001_Topic_2B_Sci/1-10-DataEntry.html

## 1.10 Preparing software for data entry
Most statistical software (including R, jamovi, and SPSS) uses the same approach for collating the data^1^:
• Each row represents one unit of analysis. Hence, the number of rows will equal the number of units of analysis.
• Each column represents one variable. Hence, the number of columns will equal the number of variables. There may also be a column of identifying information, such as the person's name.
In statistical software, the names of the variables are not placed in a separate row above the data (say, in Row 1), as might happen when using a spreadsheet. Instead, the names of the variables become the names of the columns.
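This row/column convention can be sketched with Python's standard library (the data file below is entirely made up for illustration; R, jamovi, and SPSS arrange data the same way):

```python
import csv
import io

# A small, made-up data file: the first line holds the variable names,
# which become the column names; every other line is one unit of analysis.
raw = io.StringIO(
    "Name,Height,Sex\n"
    "Ava,171.0,F\n"
    "Ben,182.5,M\n"
    "Cai,165.2,M\n"
)

rows = list(csv.DictReader(raw))
print(len(rows))             # 3 rows = 3 units of analysis
print(list(rows[0].keys()))  # ['Name', 'Height', 'Sex'] = the column (variable) names
```

Note that the header line is consumed as field names, not treated as a row of data — exactly the convention described above.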
Example 1.29 (Preparing statistical software) In Sect. 1.8, this RQ was posed:
Among Australian teenagers with a common cold, is the average duration of cold symptoms shorter for teens given a daily dose of echinacea compared to teenagers given no medication?
For this RQ, the variables are (Examples 1.22 and 1.23):
• 'Duration of cold symptoms' (response), and
• 'Type of treatment' (explanatory).
To set up the software for data entry:
• The number of rows of data would be the number of people in the study.
• The number of columns would be two: one column to record the duration of each individual's cold symptoms, and the other to record whether the individual received a dose of echinacea or received no medication.
In addition, there may be a column recording the name or ID of each individual.
The variable names (say, Duration and Treatment) would not be in a row of their own; they would be the column names (Fig. 1.5).
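With invented values, the data entry for this example can be sketched in plain Python — one record per person (one row per unit of analysis), with one field per variable plus an optional ID:

```python
# One row per person in the study; the two variables are Duration (response)
# and Treatment (explanatory). All values here are invented for illustration.
data = [
    {"ID": 1, "Duration": 5.5, "Treatment": "Echinacea"},
    {"ID": 2, "Duration": 7.0, "Treatment": "None"},
    {"ID": 3, "Duration": 8.5, "Treatment": "None"},
    {"ID": 4, "Duration": 6.0, "Treatment": "Echinacea"},
]

# Mean duration of cold symptoms in each treatment group:
for group in ("Echinacea", "None"):
    values = [row["Duration"] for row in data if row["Treatment"] == group]
    print(group, sum(values) / len(values))
# Echinacea 5.75
# None 7.75
```

Grouping by the explanatory column and summarising the response column is exactly the comparison the RQ asks for.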
While spreadsheets (such as Excel) can be used for analysing data, significant problems can, and do, emerge when spreadsheets are used this way. Great care is needed when using spreadsheets for data analysis!
### References
IBM Corp. 2016. IBM SPSS Statistics for Windows, Version 24.0. Armonk, NY: IBM Corp.
R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
The jamovi Project. n.d. jamovi (Version 1.0) [Computer Software]. https://www.jamovi.org.
1. Though there are exceptions for some types of analyses.