Dataset columns (name: dtype, min to max or class count):

url: stringlengths, 6 to 1.61k
fetch_time: int64, 1,368,856,904B to 1,726,893,854B
content_mime_type: stringclasses, 3 values
warc_filename: stringlengths, 108 to 138
warc_record_offset: int32, 9.6k to 1.74B
warc_record_length: int32, 664 to 793k
text: stringlengths, 45 to 1.04M
token_count: int32, 22 to 711k
char_count: int32, 45 to 1.04M
metadata: stringlengths, 439 to 443
score: float64, 2.52 to 5.09
int_score: int64, 3 to 5
crawl: stringclasses, 93 values
snapshot_type: stringclasses, 2 values
language: stringclasses, 1 value
language_score: float64, 0.06 to 1
url: https://www.geeksforgeeks.org/maximum-subsequence-sum-of-at-most-k-distant-adjacent-elements/
fetch_time: 1,713,882,891,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2024-18/segments/1712296818711.23/warc/CC-MAIN-20240423130552-20240423160552-00315.warc.gz
warc_record_offset: 715,926,253
warc_record_length: 53,580
# Maximum subsequence sum of at most K-distant adjacent elements

Last Updated : 24 Jan, 2023

Given an array arr[] of N integers and an integer K (1 <= K <= N), the task is to find the maximum subsequence sum in the array such that adjacent elements of the subsequence are at most K apart in the original array. In other words, if i and j are the indices of two consecutive elements of the subsequence in the original array, then |i - j| <= K.

Examples:

Input: arr[] = {1, 2, -2, 4, 3, 1}, K = 2
Output: 11
Explanation: The subsequence with maximum sum is {1, 2, 4, 3, 1} (difference between indices <= 2).

Input: arr[] = {4, -2, -2, -1, 3, -1}, K = 2
Output: 5
Explanation: The subsequence with maximum sum is {4, -2, 3} (difference between indices <= 2).

Naive approach: Generate all possible subsets of the array and, for each subset, check whether every pair of adjacent chosen elements is at most K apart in index. If so, compare its sum with the largest sum obtained so far and keep the larger of the two.

Efficient approach: This problem can be solved with dynamic programming. Create a table dp[], where dp[i] stores the largest possible sum of a valid subsequence ending at index i.
• If the current element is the first element of the subsequence, then dp[i] = arr[i].
• Otherwise, look at the previous results in a window of K indices behind i: dp[i] = arr[i] + dp[x] for the best x in [i-K, i-1].
• Choosing whichever case gives the larger sum, the final recurrence is dp[i] = max(arr[i], arr[i] + dp[x]) where i-K <= x <= i-1.

To find the maximum dp value in the window, either iterate from dp[i-1] down to dp[i-K] (overall time O(N*K)), or maintain the window's dp values in an ordered map, which reduces the complexity to O(N*log(K)).

Below is the implementation of the above approach.

## C++

```cpp
// C++ program to find the maximum sum subsequence
// such that two adjacent elements have at most a
// difference of K in their indices
#include <iostream>
#include <map>
using namespace std;

int max_sum(int arr[], int N, int K)
{
    // DP array: dp[i] = maximum sum of a valid
    // subsequence ending at index i
    int dp[N];

    // Ordered map holding the dp values of the
    // last K indices (value -> count)
    map<int, int> mp;

    // Initializing dp[0] = arr[0] and inserting
    // it into the map
    dp[0] = arr[0];
    mp[dp[0]]++;

    // Initializing the final answer with dp[0]
    int ans = dp[0];

    for (int i = 1; i < N; i++) {

        // Largest dp value currently in the window
        auto it = mp.end();
        it--;
        dp[i] = max(it->first + arr[i], arr[i]);

        // Inserting the new dp value into the map
        mp[dp[i]]++;

        // Deleting dp[i - K] from the map once the
        // window size would exceed K
        if (i >= K) {
            mp[dp[i - K]]--;

            // Erase the key if the count of dp[i - K]
            // becomes zero
            if (mp[dp[i - K]] == 0)
                mp.erase(dp[i - K]);
        }

        // Calculating the final answer
        ans = max(ans, dp[i]);
    }
    return ans;
}

// Driver code
int main()
{
    int arr[] = { 1, 2, -2, 4, 3, 1 };
    int N = sizeof(arr) / sizeof(int);
    int K = 2;
    cout << max_sum(arr, N, K);
    return 0;
}
```

## Java

```java
// Java program to find the maximum sum subsequence
// such that two adjacent elements have at most a
// difference of K in their indices
import java.util.*;

class GFG {
    static int max_sum(int[] arr, int N, int K)
    {
        // DP array: dp[i] = maximum sum of a valid
        // subsequence ending at index i
        int[] dp = new int[N];

        // TreeMap keeps the window's dp values ordered,
        // so lastKey() is the window maximum
        TreeMap<Integer, Integer> mp = new TreeMap<>();

        dp[0] = arr[0];
        mp.merge(dp[0], 1, Integer::sum);

        int ans = dp[0];

        for (int i = 1; i < N; i++) {
            dp[i] = Math.max(mp.lastKey() + arr[i], arr[i]);
            mp.merge(dp[i], 1, Integer::sum);

            // Evict dp[i - K] once the window size
            // would exceed K
            if (i >= K) {
                int cnt = mp.merge(dp[i - K], -1, Integer::sum);
                if (cnt == 0)
                    mp.remove(dp[i - K]);
            }

            ans = Math.max(ans, dp[i]);
        }
        return ans;
    }

    // Driver code
    public static void main(String[] args)
    {
        int[] arr = { 1, 2, -2, 4, 3, 1 };
        int N = arr.length;
        int K = 2;
        System.out.println(max_sum(arr, N, K));
    }
}

// This code is contributed by divyesh072019
```

## Python3

```python
# Python3 program to find the maximum sum subsequence
# such that two adjacent elements have at most a
# difference of K in their indices

def max_sum(arr, N, K):

    # dp[i] = maximum sum of a valid subsequence
    # ending at index i
    dp = [0 for i in range(N)]

    # Dict holding the dp values of the last K
    # indices (value -> count)
    mp = dict()

    dp[0] = arr[0]
    mp[dp[0]] = 1

    ans = dp[0]

    for i in range(1, N):

        # Largest dp value currently in the window
        # (linear scan of the keys; an ordered
        # container would give O(log K) here)
        dp[i] = max(max(mp) + arr[i], arr[i])
        mp[dp[i]] = mp.get(dp[i], 0) + 1

        # Evict dp[i - K] once the window size
        # would exceed K
        if i >= K:
            mp[dp[i - K]] -= 1
            if mp[dp[i - K]] == 0:
                del mp[dp[i - K]]

        ans = max(ans, dp[i])

    return ans

# Driver code
arr = [1, 2, -2, 4, 3, 1]
N = len(arr)
K = 2
print(max_sum(arr, N, K))
```

Output:

`11`

Time Complexity: O(N*log(K))
Auxiliary Space: O(N), for the dp array and the map mp.
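The ordered map is only ever consulted for the maximum dp value in the window, so a monotonic deque can replace it and bring the total cost down to O(N). A minimal Python sketch (the function name max_sum_deque and the deque formulation are mine, not part of the original article):

```python
from collections import deque

def max_sum_deque(arr, K):
    # dp[i] = best subsequence sum ending at index i, where the previously
    # chosen index (if any) lies within [i - K, i - 1].
    n = len(arr)
    dp = [0] * n
    dq = deque()  # indices with decreasing dp values (front = window max)
    best = float('-inf')
    for i in range(n):
        # Drop indices that fell out of the window [i - K, i - 1]
        while dq and dq[0] < i - K:
            dq.popleft()
        # Extending a previous subsequence only helps if its sum is positive
        prev = dp[dq[0]] if dq else 0
        dp[i] = arr[i] + max(prev, 0)
        best = max(best, dp[i])
        # Keep dp values in the deque in decreasing order
        while dq and dp[dq[-1]] <= dp[i]:
            dq.pop()
        dq.append(i)
    return best
```

On the article's examples this returns 11 and 5, matching the expected outputs.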
token_count: 3,405
char_count: 9,662
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.953125
int_score: 4
crawl: CC-MAIN-2024-18
snapshot_type: latest
language: en
language_score: 0.887356
url: https://mathsgee.com/10519/certain-african-converted-american-article-articles-bought?show=13127
fetch_time: 1,621,113,153,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2021-21/segments/1620243991378.52/warc/CC-MAIN-20210515192444-20210515222444-00060.warc.gz
warc_record_offset: 388,042,154
warc_record_length: 18,386
At a certain point in time the South African rand (R) is converted to the American dollar (\$) at a rate of \$1.00 = R6.50. If an article costs \$15 in the USA, what is the number of articles that can be bought for R2 535?

## 1 Answer

Best answer: If \$1 = R6.50, then \$15 = R97.50, which means each article costs R97.50 in rands. Therefore the number of articles that can be bought for R2 535 is

Number of articles = total amount available in rands / cost of each article in rands $= \dfrac{R\,2535}{R\,97.50} = 26$ articles.
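The arithmetic in the answer above can be checked mechanically; a small Python sketch (the helper name articles_affordable is mine):

```python
def articles_affordable(budget_rand, price_usd, rate_rand_per_usd):
    # Convert the dollar price to rands, then divide the budget by it.
    price_rand = price_usd * rate_rand_per_usd  # $15 -> R97.50
    return int(budget_rand // price_rand)

print(articles_affordable(2535, 15, 6.50))  # -> 26
```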
token_count: 245
char_count: 815
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.671875
int_score: 4
crawl: CC-MAIN-2021-21
snapshot_type: latest
language: en
language_score: 0.872021
url: https://www.askiitians.com/forums/Mechanics/a-body-of-mass-40-kg-stands-on-a-weighing-machine_191148.htm
fetch_time: 1,585,908,598,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2020-16/segments/1585370510846.12/warc/CC-MAIN-20200403092656-20200403122656-00018.warc.gz
warc_record_offset: 801,657,829
warc_record_length: 30,525
Grade: 9

A body of mass 40 kg stands on a weighing machine in an accelerating lift. The reading on the scale of the weighing machine is 300 N. Find the magnitude and direction of the acceleration. (g = 9.8 m/s^2)

2 years ago

## Answers : (1)

Arun, 23781 Points

Dear student,

m = 40 kg, scale reading N = 300 N, g = 9.8 m/s^2. The scale reads the normal force on the body.

The body's weight is mg = 40 × 9.8 = 392 N. Since the reading (300 N) is less than the weight, the net force on the body points downward, so the lift is accelerating downward. Taking the downward acceleration as a:

N = m(g − a)
300 = 40(9.8 − a)
a = 9.8 − 300/40 = 9.8 − 7.5 = 2.3 m/s^2

So the lift's acceleration has magnitude 2.3 m/s^2 and is directed downward. (If the lift were accelerating upward, the reading would be N = m(g + a) > mg, which contradicts the given 300 N.)

Regards, Arun (askIITians forum expert)

2 years ago
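A quick numeric check of the scale-reading relation N = m(g + a), using g = 9.8 m/s^2 as the problem states (the helper name is mine; the returned a is signed, with positive meaning upward):

```python
def lift_acceleration(mass_kg, scale_newtons, g=9.8):
    # The scale reads the normal force N = m * (g + a),
    # where a > 0 means the lift accelerates upward.
    return scale_newtons / mass_kg - g

a = lift_acceleration(40, 300)
# a is negative: magnitude 2.3 m/s^2, directed downward,
# consistent with the reading (300 N) being below the weight (392 N).
```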
token_count: 543
char_count: 1,895
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.484375
int_score: 3
crawl: CC-MAIN-2020-16
snapshot_type: latest
language: en
language_score: 0.70752
url: https://www.leansigmacorporation.com/xbar-r-charts-with-jmp/
fetch_time: 1,656,360,268,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00114.warc.gz
warc_record_offset: 938,680,526
warc_record_length: 10,106
# Xbar R Charts with JMP

### Xbar R Chart

The Xbar-R chart is a control chart for continuous data with a constant subgroup size between two and ten.

• The Xbar chart plots the average of each subgroup as a data point.
• The R chart plots the difference between the highest and lowest values within each subgroup as a data point.

The Xbar chart monitors the process mean and the R chart monitors the variation within subgroups. The Xbar chart is valid only if the R chart is in control. The underlying distribution assumed by the Xbar-R chart is the normal distribution.

### Xbar Chart Equations

Xbar chart:

• Data point: Xbar_j, the average of the m observations in subgroup j
• Center line: Xbarbar = (Xbar_1 + ... + Xbar_k) / k
• Control limits: Xbarbar ± A2 * Rbar

Where:

• m is the subgroup size
• k is the number of subgroups
• A2 is a constant depending on the subgroup size.

### R Chart Equations

R chart (Range chart):

• Data point: R_j = (highest value in subgroup j) − (lowest value in subgroup j)
• Center line: Rbar = (R_1 + ... + R_k) / k
• Upper control limit: D4 * Rbar
• Lower control limit: D3 * Rbar

Where:

• m is the subgroup size and k is the number of subgroups
• D3 and D4 are constants depending on the subgroup size.

### Use JMP to Plot Xbar-R Charts

Data File: "Xbar-R" tab in "Sample Data.xlsx"

### Steps to plot Xbar-R charts in JMP:

1. Analyze -> Quality & Process -> Control Chart -> Xbar (Fig 1.1)
2. Select Measurement in Process
3. Select Subgroup ID in Sample Label
4. Be sure to check the Xbar and R check boxes (Fig 1.2)
5. Click OK (Fig 1.3)

### Xbar-R Charts Diagnosis

Since the R chart is in control, the Xbar chart is valid (Fig 1.4). In both charts no data points fail any tests for special causes (i.e., all data points fall between the control limits and spread around the center line in a random pattern). We conclude that the process is in control.
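The Xbar and R chart limits described above are straightforward to compute directly. A Python sketch using the standard SPC constants for subgroup sizes 2 through 5 (the function name xbar_r_limits is mine; JMP computes these same quantities internally):

```python
# Standard control-chart constants by subgroup size m (from SPC tables).
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}
D3 = {2: 0.0,   3: 0.0,   4: 0.0,   5: 0.0}
D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114}

def xbar_r_limits(subgroups):
    # subgroups: list of equally sized lists of measurements
    m = len(subgroups[0])
    k = len(subgroups)
    xbars = [sum(s) / m for s in subgroups]          # one data point per subgroup
    ranges = [max(s) - min(s) for s in subgroups]    # R chart data points
    xbarbar = sum(xbars) / k                         # Xbar center line
    rbar = sum(ranges) / k                           # R center line
    return {
        "xbar_cl": xbarbar,
        "xbar_ucl": xbarbar + A2[m] * rbar,
        "xbar_lcl": xbarbar - A2[m] * rbar,
        "r_cl": rbar,
        "r_ucl": D4[m] * rbar,
        "r_lcl": D3[m] * rbar,
    }
```

For example, subgroups [[1, 2], [3, 5], [2, 2]] give Xbarbar = 2.5 and Rbar = 1, so the Xbar limits are 2.5 ± 1.880.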
token_count: 434
char_count: 1,788
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 2.53125
int_score: 3
crawl: CC-MAIN-2022-27
snapshot_type: latest
language: en
language_score: 0.813523
url: http://www.chegg.com/homework-help/introductory-algebra-10th-edition-chapter-r.5-solutions-9780321269478
fetch_time: 1,472,123,842,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2016-36/segments/1471982293150.24/warc/CC-MAIN-20160823195813-00081-ip-10-153-172-175.ec2.internal.warc.gz
warc_record_offset: 372,213,281
warc_record_length: 17,200
Introductory Algebra (10th Edition), Solutions for Chapter R.5

SAMPLE SOLUTION

• Step 1 of 2

Write the given expression in exponential notation. Definition of exponential notation: for any natural number n greater than or equal to 2, b^n = b · b · ... · b (n factors). Here the exponent is n and the base is b.

• Step 2 of 2

Now, by the definition of exponential notation, 5 · 5 · 5 · 5 = 5^4. This is read "five to the power four": the number 4 is the exponent and 5 is the base.

Corresponding Textbook: Introductory Algebra | 10th Edition. ISBN-13: 9780321269478; ISBN-10: 0321269470. Author: Marvin L. Bittinger. Alternate ISBNs: 9780321305985, 9780321305992, 9780321306005, 9780321690166.
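The definition of exponential notation can be mirrored in code as repeated multiplication; a tiny Python sketch (the helper name power is mine):

```python
def power(base, exponent):
    # Repeated multiplication, per the definition b^n = b * b * ... * b (n factors)
    result = 1
    for _ in range(exponent):
        result *= base
    return result

print(power(5, 4))  # -> 625, i.e. "five to the power four"
```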
token_count: 252
char_count: 955
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.078125
int_score: 3
crawl: CC-MAIN-2016-36
snapshot_type: latest
language: en
language_score: 0.716465
url: http://www.chegg.com/homework-help/statistics-for-business-and-economics-8th-edition-chapter-5-problem-38e-solution-9780132745659
fetch_time: 1,455,372,206,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2016-07/segments/1454701166650.78/warc/CC-MAIN-20160205193926-00023-ip-10-236-182-209.ec2.internal.warc.gz
warc_record_offset: 325,028,011
warc_record_length: 17,327
# TEXTBOOK SOLUTIONS FOR Statistics for Business and Economics 8th Edition

STEP-BY-STEP SOLUTION:

• Step 1 of 4

Continuous random variable: a random variable X is called continuous if it takes values over intervals, so its range of values is not finite.

Probability density function of a continuous random variable X: the probability density function, denoted by f(x), satisfies the following properties:

1) f(x) >= 0 for all values of x.
2) Integrating f(x) over the whole range of values of X gives 1.
3) The probability that X lies between two constants a and b is calculated as P(a < X < b) = the integral of f(x) from a to b.
4) The cumulative distribution function, denoted by F(x), is the area under the probability density function from the minimum value up to x: F(x) = the integral of f(t) from the minimum value to x.

• Chapter , Problem is solved.

Corresponding Textbook: Statistics for Business and Economics | 8th Edition. ISBN-13: 9780132745659; ISBN-10: 0132745658.
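Properties 2 and 3 of a probability density function can be verified numerically for a concrete density. A Python sketch using the triangular density f(x) = 2x on [0, 1] (my example, not the textbook's) and a midpoint Riemann sum:

```python
def f(x):
    # Triangular density: f(x) = 2x on [0, 1], zero elsewhere
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(f, a, b, steps=100_000):
    # Midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

total = integrate(f, 0, 1)    # property 2: should be ~1
p = integrate(f, 0.2, 0.5)    # property 3: P(0.2 < X < 0.5)
# Exact values: 1 and 0.5**2 - 0.2**2 = 0.21
```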
token_count: 294
char_count: 1,330
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.015625
int_score: 3
crawl: CC-MAIN-2016-07
snapshot_type: latest
language: en
language_score: 0.86196
url: https://conversietopper.nl/rhinogold-5-7-crack-fullbfdcm-free/
fetch_time: 1,660,101,052,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00622.warc.gz
warc_record_offset: 197,156,370
warc_record_length: 21,853
Conversietopper – Website laten maken # Rhinogold 5 7 Crack Fullbfdcm Free ### Gratis prijsopgave Rhinogold 5 7 Crack Fullbfdcm Free Rhinogold 5 7 Crack Fullbfdcm support phone chat time recording video recording of message audiuonimnarubo · Mr. Wong’s review of Tantalize 5 is outstanding.Q: Finding the closest point on the gradient Given the 2d-transformed point $T(x,y)$, I’m trying to find the point $(x_0,y_0)$ on the gradient $abla(w)$ that is the closest to the transformed point (or vice-versa) using iterations. What I know from the gradient $$abla(w) = \frac{\partial w}{\partial x}\frac{\partial x}{\partial x_0} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial x_0} + \frac{\partial w}{\partial x}\frac{\partial x}{\partial y_0} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial y_0}$$ Is that if we perform this transformation $T(x_0,y_0)$ where $(x_0,y_0)$ is on the gradient, then we can use any linear mapping and the result will be the closest point on the gradient $(x_0,y_0)$. So, for example, if $T(x,y) = [x_0 + ax, y_0+by]$, I can choose $$\begin{cases} x_0 = 0 \\ x = x_0 \\ a = \frac{1}{x_0} \\ a=0 \end{cases}$$ and get $(x,y) = (0,b)$ as the closest point on the gradient. However, this would be tedious to apply to every other $T(x,y)$ I may be applying. Is there a better way to find $(x_0,y_0)$ that is on the gradient $abla(w)$ and that also works for any arbitrary $T(x,y)$? A: I presume the gradient in question is the one for the problem $u=f(T(x,y))$ given by \$f(t)=\frac{\partial u}{\partial x} \frac{\partial T}{\ I have downloaded this old version, but I’m kinda lost. Dont know the steps to follow, and even. rhinogold 5 7 crack fullbfdcm · Revista Música de Caeira (Portuguese) 2-3.04. Rhinogold 5 7 crack fullbfdcm · Murad Resimler (Turkish) 2-3.04. P2P software for Windows 7. Rhinogold 5 7 crack fullbfdcm · Ruta Buenos Aires 1-3.04.. BLACKLISTED GAMES BOOK (321) TO DOWNLOAD NFO.. 
rhinogold 5 7 crack fullbfdcm · Vertigine (Italian) 2-3.04.. Rhinogold 5 7 crack fullbfdcm · Le Télé. rhinogold 5 7 crack fullbfdcm · Revista Música de Caeira (Portuguese) 2-3.04. rar aiko sangenis sa Rhinogold 5 7 crack fullbfdcm · Murad Resimler (Turkish) 2-3.04. HERE ARE THE VODAFONE CANADA. rhinogold 5 7 crack fullbfdcm · Macarone (Portuguese) 2-3.04. » « APKPure APK Tool 1.7.12. Rhinogold 5 7 crack fullbfdcm · Anime (Japanese) 2-3.04. new chtigcombestgold’s Ownd. MickieDead is back and this time it is 100% FREE! Thanks for your time and support. . rhinogold 5 7 crack fullbfdcm · Le Télé. Rhinogold 5 7 crack fullbfdcm · Vertigine (Italian) 2-3.04. Chat. rhinogold 5 7 crack fullbfdcm · le Télé. For the. rhinogold 5 7 crack fullbfdcm · Madras Cafe movie . Download. rhinogold 5 7 crack fullbfdcm · Murad Resimler (Turkish) 2-3.04. kimin This is a list of d0c515b9f4 ⚹Why walk through Asia with a video camera when you can drive. rhinogold 5 7 crack fullbfdcm. Your account has been verified as a crackexploit.com user and has a verified email address which. rhinogold 5 7 crack fullbfdcm. · a, We will process. rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm whalexpress.com: 1103457 rqrgq.rar: Uranium P2P paid 40450. webml: 103516. rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm rhinogold 5 7 crack fullbfdcm Crack7.rar PDF For Nexus 7 (13.0.2): 21 ⚹ Crack7.rar PDF For Nexus 7 (13.0.2): 22 ⚹. rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm BitCoin Cracked – How to Catch Rich People online |. rhinogold 5 7 crack fullbfdcm. ·. rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfdcm video screenshot taken in mac osx. rhinogold 5 7 crack fullbfdcm. Episode 3.. rhinogold 5 7 crack fullbfdcm Crack7.rar PDF For Nexus 7 (13.0.2): 26 ⚹ Crack7.rar PDF For Nexus 7 (13.0.2): 27 ⚹. rhinogold 5 7 crack fullbfdcm . 
rhinogold 5 7 crack fullbfdcm Crack7.rar PDF For Nexus 7 (13.0.2): 28 ⚹. rhinogold 5 7 crack fullbfdcm . rhinogold 5 7 crack fullbfd All latest cracks. Get all. rhinogold 5 7 crack fullbfdcm Manos’ hair: the style course full version keygen. Hackety Hack 5.14.8 APK + Mod (Unlimited Money. rhinogold 5 7 crack fullbfdcm. Any of this should be possible, although that part is a little more tricky. You just have to find my email address on Google and email me. It’s a private message on my Google accounts forums. All the information I have on this topic is right there in that thread. I will be very sad to hear you never get your phone fixed. Cobra Viper 7 Killer – New.docx or Email me I can also email you directly with all the information. It will say where it came from. Now you can show it to your friends to show them that they are wrong. A search on Google, Yahoo, or Bing or whatever shows it is an open.docx file with your phone number on it. If they ask and it says you used it to call our company, say this: “I will be happy to prove this to you, as Google on my phone said I used it to call your company and we are connected now.” If they ask and it says you used it to send a text message to our company, say this: “I will be happy to prove this to you
token_count: 1,788
char_count: 5,252
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.484375
int_score: 3
crawl: CC-MAIN-2022-33
snapshot_type: latest
language: en
language_score: 0.721294
url: http://acm.scu.edu.cn/soj/problem/3200/
fetch_time: 1,571,566,690,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00160.warc.gz
warc_record_offset: 8,089,297
warc_record_length: 1,115
```
Time Limit: 1000 MS    Memory Limit: 65536 K

Description

In order to sharpen their basic arithmetic skills, kids often try to represent numbers using match sticks. As one is only given a limited number of matches, one student is curious how high he can count with his. Each one-digit number is represented as follows:

- number 8 uses seven matches.
- numbers 0, 6 and 9 each use six matches.
- numbers 2, 3 and 5 each use five matches.
- numbers 4 and 7 each use four matches.
- number 1 uses two matches.

Given the number of matches he has at his disposal, can you tell the smallest positive integer that cannot be represented?

Input

The first line of the input is an integer giving the number of test cases. Each test case consists of a single line containing one integer N, the number of matches (1 <= N <= 10^5). There is a blank line before each test case.

Output

For each test case, output the answer on one line: the smallest positive integer that cannot be represented.

Sample Input

3

1

9

11

Sample Output

1
20
28

Source
```
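A brute-force sketch of the problem above in Python: tally the match cost of each digit, then scan upward for the first number that needs more matches than are available. This is fine for small inputs like the samples; scanning every integer is too slow near the upper bound N = 10^5, where a constructive argument over digit costs is needed instead.

```python
# Matches needed per digit, as listed in the problem statement
MATCHES = {'0': 6, '1': 2, '2': 5, '3': 5, '4': 4,
           '5': 5, '6': 6, '7': 4, '8': 7, '9': 6}

def cost(n):
    # Total matches needed to lay out the decimal digits of n
    return sum(MATCHES[d] for d in str(n))

def smallest_unrepresentable(n_matches):
    # First positive integer whose match cost exceeds the budget
    x = 1
    while cost(x) <= n_matches:
        x += 1
    return x

for n in (1, 9, 11):
    print(smallest_unrepresentable(n))  # -> 1, 20, 28 (the sample outputs)
```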
token_count: 259
char_count: 1,069
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 2.609375
int_score: 3
crawl: CC-MAIN-2019-43
snapshot_type: latest
language: en
language_score: 0.89629
url: https://plainmath.net/17736/without-graphing-determine-function-represents-exponential-determination
fetch_time: 1,627,845,945,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2021-31/segments/1627046154219.62/warc/CC-MAIN-20210801190212-20210801220212-00233.warc.gz
warc_record_offset: 485,823,762
warc_record_length: 7,580
Question

Without graphing, determine whether the function $$y = 0.3(1.25)^x$$ represents exponential growth or decay. State how you made the determination.

Exponential growth and decay

For $$a > 0$$, the exponential function $$y = ab^x$$ is a growth function if $$b > 1$$ and a decay function if $$0 < b < 1$$. Since $$1.25 > 1$$, $$y = 0.3(1.25)^x$$ is an exponential growth function.
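The b > 1 test in the determination above can be written as a tiny Python predicate (the function name is mine):

```python
def is_growth(a, b):
    # y = a * b**x grows when a > 0 and b > 1, decays when a > 0 and 0 < b < 1
    return a > 0 and b > 1

print(is_growth(0.3, 1.25))  # -> True: exponential growth
print(is_growth(0.3, 0.50))  # -> False: 0 < b < 1 is decay
```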
token_count: 130
char_count: 469
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.390625
int_score: 3
crawl: CC-MAIN-2021-31
snapshot_type: latest
language: en
language_score: 0.741108
url: https://www.altsci.com/3/s/Every-uniformly-continuous-function-between-metric
fetch_time: 1,582,341,770,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00503.warc.gz
warc_record_offset: 645,532,630
warc_record_length: 8,751
Page "Uniform continuity" ¶ 1 from Wikipedia ## Some Related Sentences Every and uniformly Every contraction mapping is Lipschitz continuous and hence uniformly continuous ( for a Lipschitz continuous function, the constant k is no longer necessarily less than 1 ). Every entire function can be represented as a power series that converges uniformly on compact sets. Every continuous function on a compact set is uniformly continuous. Every topological group can be viewed as a uniform space in two ways ; the left uniformity turns all left multiplications into uniformly continuous maps while the right uniformity turns all right multiplications into uniformly continuous maps. * Every Lipschitz continuous map is uniformly continuous, and hence a fortiori continuous. Every special uniformly continuous real-valued function defined on the metric space is uniformly approximable by means of Lipschitz functions. Every experiment in such a free-falling environment has the same results as it would for an observer at rest or moving uniformly in deep space, far from all sources of gravity. * Every uniformly convergent sequence of bounded functions is uniformly bounded. Every and continuous * Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint ( the Freyd adjoint functor theorem ). Every character is automatically continuous from A to C, since the kernel of a character is a maximal ideal, which is closed. * Every continuous map from a compact space to a Hausdorff space is closed and proper ( i. e., the pre-image of a compact set is compact. * Pseudocompact: Every real-valued continuous function on the space is bounded. Every continuous map f: X → Y induces an algebra homomorphism C ( f ): C ( Y ) → C ( X ) by the rule C ( f )( φ ) = φ o f for every φ in C ( Y ). Every space filling curve hits some points multiple times, and does not have a continuous inverse. 
* Every separable metric space is isometric to a subset of C (), the separable Banach space of continuous functions → R, with the supremum norm. Every continuous function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors. Every embedding is injective and continuous. Every map that is injective, continuous and either open or closed is an embedding ; however there are also embeddings which are neither open nor closed. * Every compact Hausdorff space of weight at most ( see Aleph number ) is the continuous image of ( this does not need the continuum hypothesis, but is less interesting in its absence ). Every place south of the Antarctic Circle experiences a period of twenty-four hours ' continuous daylight at least once per year, and a period of twenty-four hours ' continuous night time at least once per year. Every and function : Every set has a choice function. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. ** Every surjective function has a right inverse. : Every effectively calculable function is a computable function. Every effectively calculable function ( effectively decidable predicate ) is general recursive italics Every effectively calculable function ( effectively decidable predicate ) is general recursive. Every bijective function g has an inverse g < sup >− 1 </ sup >, such that gg < sup >− 1 </ sup > = I ; Every holomorphic function can be separated into its real and imaginary parts, and each of these is a solution of Laplace's equation on R < sup > 2 </ sup >. Every holomorphic function is analytic. Every completely multiplicative function is a homomorphism of monoids and is completely determined by its restriction to the prime numbers. 
Every polynomial P in x corresponds to a function, ƒ ( x ) Every primitive recursive function is a general recursive function. Every time another object or customer enters the line to wait, they join the end of the line and represent the “ enqueue ” function. Every function is a method and methods are always called on an object. Every type that is a member of the type class defines a function that will extract the data from the string representation of the dumped data. Every output of an encoder can be described by its own transfer function, which is closely related to the generator polynomial. Every and between Every morning early, in the summer, we searched the trunks of the trees as high as we could reach for the locust shells, carefully detached their hooked claws from the bark where they hung, and stabled them, a weird faery herd, in an angle between the high roots of the tulip tree, where no grass grew in the dense shade. Every year both clubs play the " Klassieker " (" The Classic "), a derby match between the teams from the two largest cities of the Netherlands. Disraeli wrote a personal letter to Gladstone, asking him to place the good of the party above personal animosity: " Every man performs his office, and there is a Power, greater than ourselves, that disposes of all this ..." In responding to Disraeli Gladstone denied that personal feelings played any role in his decision then and previously to accept office, while acknowledging that there were differences between him and Derby " broader than you may have supposed. Every information exchange between living organisms — i. e. transmission of signals that involve a living sender and receiver can be considered a form of communication ; and even primitive creatures such as corals are competent to communicate. Every six months the presidency rotates between the states, in an order predefined by the Council's members, allowing each state to preside over the body. 
During the Renaissance, there arose a critical attitude that sharply distinguished between apostolic tradition and what George Every calls " subsidiary mythology "— popular legends surrounding saints, relics, the cross, etc .— suppressing the latter. George Every discusses the connection between the cosmic center and Golgotha in his book Christian Mythology, noting that the image of Adam's skull beneath the cross appears in many medieval representations of the crucifixion. Every module over a division ring has a basis ; linear maps between finite-dimensional modules over a division ring can be described by matrices, and the Gaussian elimination algorithm remains applicable. Every node has a location, which is a number between 0 and 1. Every Mac made between 1986 and 1998 has a SCSI port on the back, making external expansion easy ; also, " toaster " Compact Macs did not have easily accessible hard drive bays ( or, in the case of the Mac Plus, any hard drive bay at all ), so on those models, external SCSI disks were the only reasonable option. Every two months IMU publishes an electronic newsletter, IMU-Net, that aims to improve communication between IMU and the worldwide mathematical community by reporting on decisions and recommendations of the Union, major international mathematical events and developments, and on other topics of general mathematical interest. British playwright Tom Stoppard wrote Every Good Boy Deserves Favour about the relationship between a patient and his doctor in one of these hospitals. Every citation he found described an encounter between males where one party, the master, physically abused another, the slave. * Every Lie group is parallelizable, and hence an orientable manifold ( there is a bundle isomorphism between its tangent bundle and the product of itself with the tangent space at the identity ) Every homomorphism f: G → H of Lie groups induces a homomorphism between the corresponding Lie algebras and. 
Every LORAN chain in the world uses a unique Group Repetition Interval, the number of which, when multiplied by ten, gives how many microseconds pass between pulses from a given station in the chain. * Every episode of James Joyce's modernist novel Ulysses ( 1922 ) has an assigned theme, technique and correspondences between its characters and those of Homer's Odyssey. Every quark in the universe does not attract every other quark in the above distance independent manner, since colour-confinement implies that the strong force acts without distance-diminishment only between pairs of single quarks, and that in collections of bound quarks ( i. e., hadrons ), the net colour-charge of the quarks cancels out, as seen from far away. Every smooth ( or differentiable ) map φ: M → N between smooth ( or differentiable ) manifolds induces natural linear maps between the corresponding tangent spaces: They draw three conclusions from Austin: ( 1 ) A performative utterance does not communicate information about an act second-hand — it is the act ; ( 2 ) Every aspect of language (" semantics, syntactics, or even phonematics ") functionally interacts with pragmatics ; ( 3 ) There is no distinction between language and speech. Every year the mayor and the 24 échevins would swear an oath of allegiance " between the hands " of the king or his representative, usually the lieutenant général or the sénéchaussée. Every car uses either a 5. 0 L Ford " Boss 302 " SVO or a 5. 0 L Chevrolet small block race-engine ( depending on the make )-capable of producing between 460 and 485 kW ( 620 — 650 bhp ) of power, but generally quoted as a little over 450 kW ( 600 bhp ) in race trim. Every morphism in a concrete category whose underlying function is injective is a monomorphism ; in other words, if morphisms are actually functions between sets, then any morphism which is a one-to-one function will necessarily be a monomorphism in the categorical sense.
2,003
9,763
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2020-10
longest
en
0.925117
http://financialhighway.com/compounding-interest-best-investment-strategy-invest-early/
1,471,986,183,000,000,000
text/html
crawl-data/CC-MAIN-2016-36/segments/1471982290442.1/warc/CC-MAIN-20160823195810-00069-ip-10-153-172-175.ec2.internal.warc.gz
90,934,372
17,195
# Compounding Interest: Best Investment Strategy -Invest Early Compounding is often either ignored or forgotten by most people when it comes to investing. Compound interest is when you earn interest on top of interest. We have discussed compound interest earlier; however, when discussing investing, the magic of compounding can never be overstated. The earlier you invest, the more you can benefit from compounding interest; this is exactly why investing early is so critical. People often ask what the best investing strategy is; the answer is to invest early! How does Compounding benefit me? As mentioned previously, compounding is when interest is earned on top of interest. Example: You have an investment of \$1000 that pays 10% interest 1st payment: \$100 (\$1000 X 0.1) 2nd payment: \$110 (\$1100 X 0.1) 3rd payment: \$121 (\$1210 X 0.1) 4th payment: \$133.10 (\$1331 X 0.1) Did you notice how the interest increased over time? That's because the interest payment is added back to the principal, and the new interest payment is calculated on the new amount, hence compounding the interest. The above example is fairly simple and is intended to explain the concept of compounding interest; let's take a look at the real-life impact of compounding. I have made a compounding spreadsheet that can calculate compounding interest over a long period of time; in this calculator I have included five scenarios. You can download the compounding calculator and plug in your numbers to try different cases; for now we'll look at the following cases. We have five individuals with different investment strategies; let's see who has the best investment strategy. George: Invests \$2000/yr consistently for 20 years, stops contributing after 20 years, and lets compounding do the work. Frank: Frank is a little slow, so he starts investing 20 years AFTER George; he consistently invests \$2000/yr for 20 years.
Lisa: Lisa, like George, invests \$2000/year consistently, but unlike George she contributes for 40 years. Toni: Toni missed the first 2 years, but after that he invests \$2000/year for the next 38 years. Rebecca: Rebecca has a different style; she skips the first 20 years, but then contributes DOUBLE the amount (\$4000/year) for the next 20 years. Fast-forward forty years into the future and let's see whose investment strategy was the best: George value: \$317K              Total invested: \$40K Frank value: \$82K                   Total invested: \$40K Lisa value: \$399K                    Total invested: \$80K Toni value: \$345K                   Total invested: \$76K Rebecca value: \$163K             Total invested: \$80K Let's skip the obvious results and look at a couple of more interesting outcomes. Lisa and Rebecca both invested \$80K, but Lisa's portfolio is worth more than double Rebecca's. They both invested \$80K, so why would Lisa's portfolio be worth so much more? That is the power of compounding! For an even more staggering comparison, let's look at the following case: Lisa vs Toni: Lisa only invested \$4000 (5%) more than Toni did, but her portfolio is worth \$54K (15%) more than Toni's. Why? The magic of compounding. Do not underestimate the power of compounding interest; the best investment strategy is to invest early. What are your thoughts on compounding interest? ### 17 Responses to Compounding Interest: Best Investment Strategy -Invest Early 1. See, I find people know and believe in compounding interest, but they miss a couple of things about it. Yes, it is a great and powerful thing. People tend to forget equity returns. Throw in a down year and it tends to mess up a nice compound interest curve. Now throw in 2008 returns and see what happens. Does it hurt? 2. @EW Yes, years like 2008 hurt, but investors need to focus on long-term returns rather than just a single year.
Yes, I know 2008 wiped out about 10 years of equity returns, but after every down market there is a bull market. 3. So you lose 60% of the value. You do realize you need a 150% rate of return just to get back to where you were. That's one hell of a bull market. If it's wiping out 10 years of returns, is it a good way to estimate future values? Financial planners use compound interest curves all the time to show growth of investments without regard to the way in which equities fluctuate. It's scary to me that people are misinformed into a false sense of security about the future of their money. How does it make you feel? By the way, which 10 years does it wipe out on a compound interest curve? 4. Lose 60% of your value? If you lost 60% of your portfolio while the market lost about 40%, then you are obviously doing something very wrong. Financial planners use compounded interest and ignore market fluctuations because nobody knows exactly how much the market will fluctuate; just as there are bad years like last year, there will be good years, and you cannot possibly take all of that into account when trying to estimate your growth. So you take an average, usually 6-8% (yes, we'll have years with -20%, but we'll also have years with +20%); that is the best way one can estimate growth. That is why it is important for investors to educate themselves and ask questions; nobody can control the markets, all you can control is your emotions. Although markets are down for the past 10 years, if you look back long term, say 20 years, the S&P is still up over 200% since August 1989, and this includes 3 bear markets. If someone is scared of losing money in the markets, maybe they should stay away; eventually, though, inflation will eat away purchasing power. 5. The market lost 40% in the calendar year of 2008. If you go from the market close on Jan 3rd 2008 of 1447 to the market close of Mar 9th 2009 of 678, you get a drop of about 55%. If you use intraday values it gets worse.
So yes, the S&P 500 lost nearly 60% of its value. Just so you know, if you gain 20% one year and lose 20% the next year you aren't even. You are still down 4%, without even taking into account inflation. That puts you pretty far off from 6-8% every year. I do have two questions, however: 1. Is the only way to combat inflation using the market? So are people helpless without the market? 2. If financial planners ignore the unknown (market fluctuations), as you say, wouldn't that be setting people up for failure? There must be a better way, right? Or is praying for no fluctuations the way to succeed? PS – if I'm bothering you let me know and I'll go away. 6. B Simple says: You make a great point about why you should do it and how it benefits you. But the key is showing people how to do it early so they can benefit from compounding. The easiest and simplest way to do it is through a retirement plan where you are making regular contributions. If you automate the process, you can really benefit from compounding. 7. @EW 1. You are no bother. Maybe you keep missing the point: I said financial planners, and pretty much everyone in the investment world, use an AVERAGE expected rate of return, and often this number is between 6-8%. Yes, we did have a bad year, but we also have had good years and will have them again. Again, forget just one year; look at the S&P 500 over the past 20 or 30 years (long-term investing). Over the past 20 years the S&P AVERAGED over 10% with an annualized return of over 8%, and over the past 30 years it AVERAGED 12% with an annualized return of 11%. These include 3 bear markets, including the tech bubble and the recent bear market. So even though people lost 40% or more in each of those bear markets, the market still returned an AVERAGE of over 8%. 8. Thanks for the mention. As I noted in my post, calculations such as this almost always overstate the effect of compounding because inflation is also compounding and working against the investor.
I would never advocate not saving early. However, IMO, it also makes sense, perhaps more so, to pay down all debt first. People start out their financial lives under massive amounts of debt — mortgage debt, student loans, etc. Paying this down provides a high, guaranteed, after-tax rate of return — oftentimes better than what stocks can provide. 9. I understand what you are saying about averages. You are right about averages. My question is: are averages the best way to do financial planning? Are they accurate for future growth? Or are you setting people up for financial failure? In a lot larger scheme, can you use this simple math on/with money? My opinion/point/fact (whichever you choose to call it) is that you can't. That math is not money. I wrote a post in July about it here: http://evolutionofwealth.com/2009/07/29/money-is-not-math/ I use your average annual rates of return over 20 years and it fails at predicting actual values in accounts. I look forward to hearing your thoughts/criticisms. PS – I hope you are enjoying this conversation as much as I am. 10. Retirement Savior says: I would say that the average 401k (allocated 60% stocks / 40% bonds) probably lost around 25% in 2008. And while the average US annual return captured the US moving from an emerging market to the most powerful nation on earth, it also captured the Great Depression, the 70's bear market and the tech crash, and still provided 10% returns. A portfolio allocated between the major asset classes and rebalanced periodically should still provide a reasonable rate of return over every 30-40 year period. And that is not even taking into account active strategies. 11. Claire Elstun Coffel says: Where can I invest for compound interest safely? I invested with Bulow Funds and they stole my money. I used Liberty Reserve for an international transaction. I did not recover a dime. Claire 12.
Barbara Fussmuller says: "Example: You have an investment of \$1000 that pays 10% interest" I understand the value of compound interest. It's a great thing, if one can get an interest rate worth a darn. But I'm really tired of example after example being posted online showing regular folks getting annual rates of 7-10%. Sure, it illustrates the value of compound interest. But this isn't 1983 here. The best CD rate I can currently find is still under 2%. So, while using high interest rates is a good way to show the effect of compounding, the examples are so unrealistic as to be nearly worthless. • That is a good point, Barbara; however, we are not talking CD rates here but long-term equity returns. I used 10% to make things easier, but 8% return is reasonable for a diversified portfolio over the long term. 13. Barbara Fussmuller says: "8% return is reasonable for a diversified portfolio" is not interest. Describing how a diversified portfolio might grow over time is not describing the effect of compound interest. Entirely different matters. • Again, the point is not INTEREST; it is to show the power of compounding. I don't want to run and explain a Monte Carlo simulation and how different asset classes can play out under different assumptions, etc. Portfolio construction, analysis and management is a fairly complicated matter. The point is very simple: start investing early. Just because one cannot get 10% interest on their savings account does not mean one cannot benefit from compounding. The "analysis" relates to long-term portfolios and not your daily savings account. • John says: Oh, you can benefit from compounding… you will get \$50 in a year from the current interest rates (0.00% – 1.95%). It's not really a benefit; it's actually a loss because you could be using your money in a better investment. 14.
dripman says: Wouldn't investing in a DRIP (a dividend reinvestment plan) in a Roth IRA instrument eventually achieve the compounding (in the number of shares) you are looking to achieve? If it's not subject to capital gains taxes, income taxes or even transaction fees on the reinvestments, it is going to compound pretty well after 25-30 years, right? What is wrong with this approach?
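The article's five scenarios are easy to reproduce in code. The post does not state the return rate its spreadsheet assumes; a 7% annual return (an assumption on my part) reproduces its figures closely, treating each contribution as made at the end of the year:

```python
def future_value(contribution_years, annual, rate=0.07, horizon=40):
    # Value at the end of `horizon` years of `annual` dollars invested at
    # the end of each year in `contribution_years` (0-based), where each
    # contribution then compounds at `rate` per year until the horizon.
    return sum(annual * (1 + rate) ** (horizon - year - 1)
               for year in contribution_years)

scenarios = {
    "George":  (range(0, 20), 2000),   # first 20 years only
    "Frank":   (range(20, 40), 2000),  # last 20 years only
    "Lisa":    (range(0, 40), 2000),   # all 40 years
    "Toni":    (range(2, 40), 2000),   # skips the first 2 years
    "Rebecca": (range(20, 40), 4000),  # double amount, last 20 years
}
for name, (years, amount) in scenarios.items():
    print(f"{name:8s} invested ${amount * len(years):>6,} "
          f"-> ${future_value(years, amount):,.0f}")
```

Under this assumption George ends near \$317K and Lisa near \$399K, matching the article; the Lisa-vs-Rebecca gap on the same \$80K invested is the head start doing the compounding.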
2,729
11,986
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.78125
3
CC-MAIN-2016-36
latest
en
0.954795
https://chem.libretexts.org/Core/Analytical_Chemistry/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry/Nernst_Equation
1,500,965,357,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549425082.56/warc/CC-MAIN-20170725062346-20170725082346-00238.warc.gz
609,702,125
18,376
# Nernst Equation Skills to Develop • Explain and distinguish the cell potential and standard cell potential. • Calculate cell potentials from known conditions (Nernst Equation). • Calculate the equilibrium constant from cell potentials. ### Nernst Equation Electrochemistry deals with cell potential as well as the energy of chemical reactions. The energy of a chemical system drives the charges to move, and the driving force gives rise to the cell potential of a system called a galvanic cell. The energy aspect is also related to chemical equilibrium. All these relationships are tied together in the concept of the Nernst equation. Walther H. Nernst (1864-1941) received the Nobel prize in 1920 "in recognition of his work in thermochemistry". His contribution to chemical thermodynamics led to the well-known equation correlating chemical energy and the electric potential of a galvanic cell or battery. #### Electric Work and Gibbs Free Energy Energy takes many forms: mechanical work (potential and kinetic energy), heat, radiation (photons), chemical energy, nuclear energy (mass), and electric energy. A summary is given regarding the evaluation of electric energy, as this is related to electrochemistry. ##### Electric Work Energy drives all changes, including chemical reactions. In a redox reaction, the energy released due to the movement of charged particles gives rise to a potential difference. The maximum potential difference is called the electromotive force (EMF), ΔE, and the maximum electric work W is the product of the charge q in Coulombs (C) and the potential ΔE in Volts (= J/C): $$W = q\, \Delta E \quad \mathrm{(J = C \times J/C)}$$ Note that the EMF ΔE is determined by the nature of the reactants and electrolytes, not by the size of the cell or the amounts of material in it. The amount of reactants is proportional to the charge and available energy of the galvanic cell.
##### Gibbs Free Energy The Gibbs free energy ΔG is the negative of the maximum electric work, \begin{align} \ce \Delta G &= - W\\ &= - q \:\ce \Delta E \end{align} A redox reaction equation represents definite amounts of reactants in the formation of also definite amounts of products. The number (n) of electrons in such a reaction equation is related to the amount of charge transferred when the reaction is completed. Since each mole of electrons has a charge of 96485 C (known as Faraday's constant, F), $$q = n F$$ and, $$\ce \Delta G = - n F \:\ce \Delta E$$ At standard conditions, $$\ce \Delta G^\circ = - n F \:\ce \Delta E^\circ$$ ##### The General Nernst Equation The general Nernst equation correlates the Gibbs free energy ΔG and the EMF of a chemical system known as the galvanic cell. For the reaction $$\ce{a\, A + b\, B \rightleftharpoons c\, C + d\, D}$$ and $$Q = \mathrm{\dfrac{[C]^c [D]^d}{[A]^a [B]^b}}$$ It has been shown that $$\Delta G = \Delta G^\circ + R T \ln Q$$ and $$\Delta G = - n F\,\Delta E$$ Therefore $$- n F \,\Delta E = - n F \,\Delta E^\circ + R T \ln Q$$ where R, T, Q and F are the gas constant (8.314 J mol⁻¹ K⁻¹), temperature (in K), reaction quotient, and Faraday constant (96485 C), respectively. Thus, we have $$\Delta E = \Delta E^\circ - \dfrac{R T}{n F} \ln \mathrm{\dfrac{[C]^c [D]^d}{[A]^a [B]^b}}$$ This is known as the Nernst equation. The equation allows us to calculate the cell potential of any galvanic cell at any concentrations. Some examples are given in the next section to illustrate its application. It is interesting to note the relationship between equilibrium and the Gibbs free energy at this point. When a system is at equilibrium, ΔE = 0 and Qeq = K. Therefore, we have $$\Delta E^\circ = \dfrac{R T}{n F} \ln \mathrm{\dfrac{[C]^c [D]^d}{[A]^a [B]^b}},\: \textrm{(for equilibrium concentrations)}$$ Thus, the equilibrium constant and ΔE° are related.
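At 298 K the factor RT/F, multiplied by ln 10 to convert the natural logarithm to base 10, evaluates to about 0.0592 V — the constant that appears in the room-temperature form of the Nernst equation. A quick numeric check (a sketch using standard values of R and F):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.15     # 25 degrees C in kelvin

# Converting ln Q to log10 Q multiplies RT/F by ln(10).
prefactor = R * T / F * math.log(10)
print(f"{prefactor:.4f} V")   # ~0.0592 V
```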
##### The Nernst Equation at 298 K At any specific temperature, the Nernst equation derived above can be reduced to a simple form. For example, at the standard condition of 298 K (25°C), the Nernst equation becomes $$\Delta E = \Delta E^\circ - \dfrac{0.0592\: \textrm V}{n} \log \mathrm{\dfrac{[C]^c [D]^d}{[A]^a [B]^b}}$$ Please note that log is the logarithm function base 10, and ln the natural logarithm function. For the cell $$\ce{Zn \,|\, Zn^2+ \,||\, H+ \,|\, H2 \,|\, Pt}$$ we have a net chemical reaction of $$\ce{Zn_{\large{(s)}} + 2 H+ \rightarrow Zn^2+ + H_{2\large{(g)}}}$$ and the standard cell potential ΔE° = 0.763 V. If the concentrations of the ions are not 1.0 M, and the $$\ce{H2}$$ pressure is not 1.0 atm, then the cell potential ΔE may be calculated using the Nernst equation: $$\Delta E = \Delta E^\circ - \dfrac{0.0592\: \ce V}{n} \log \ce{\dfrac{P(H2) [Zn^2+]}{[H+]^2}}$$ with n = 2 in this case, because the reaction involves 2 electrons. The numerical value is 0.0592 only when T = 298 K; this constant is temperature dependent. Note that the activity of the solid $$\ce{Zn}$$ is taken as 1. If the $$\ce{H2}$$ pressure is 1 atm, the term $$\ce{P(H2)}$$ may also be omitted. The expression for the argument of the log function follows the same rules as those for the expression of equilibrium constants and reaction quotients. Indeed, the argument of the log function is the expression for the equilibrium constant K, or reaction quotient Q. When a cell is at equilibrium, ΔE = 0.00 and the expression becomes an equilibrium constant K, which bears the following relationship: $$\log K = \dfrac{n \,\Delta E^\circ}{0.0592}$$ where ΔE° is the difference of the standard potentials of the half cells involved. A battery with any nonzero voltage is not at equilibrium. The Nernst equation also indicates that you can build a battery simply by using the same material for both cells, but with different concentrations.
Cells of this type are called concentration cells. Example 1 Calculate the EMF of the cell $$\mathrm{Zn_{\large{(s)}} \,|\, Zn^{2+}\: (0.024\: M) \,||\, Zn^{2+}\: (2.4\: M) \,|\, Zn_{\large{(s)}}}$$ SOLUTION $$\mathrm{Zn^{2+}\: (2.4\: M) + 2 e^- \rightarrow Zn \hspace{30px} Reduction}\\ \mathrm{\underline{Zn \rightarrow Zn^{2+}\: (0.024\: M) + 2 e^- \hspace{15px} Oxidation \hspace{35px}}}\\ \mathrm{Zn^{2+}\: (2.4\: M) \rightarrow Zn^{2+}\: (0.024\: M)},\:\:\: \Delta E^\circ = 0.00 \leftarrow \textrm{Net reaction}$$ Using the Nernst equation: \begin{align} \Delta E &= 0.00 - \dfrac{0.0592}{2} \log \dfrac{0.024}{2.4}\\ &= (-0.296)(-2.0)\\ &= \textrm{0.0592 V} \end{align} DISCUSSION Understandably, the $$\ce{Zn^2+}$$ ions try to move from the concentrated half cell to the dilute solution. That driving force gives rise to 0.0592 V. From here, you can also calculate the energy of dilution. If you write the equation in the reverse direction, $$\mathrm{Zn^{2+}\: (0.024\: M) \rightarrow Zn^{2+}\: (2.4\: M)}$$, its voltage will be -0.0592 V. At equilibrium the concentrations in the two half cells will have to be equal, in which case the voltage will be zero. Example 2 Show that the voltage of an electric cell is unaffected by multiplying the reaction equation by a positive number.
SOLUTION Assume that you have the cell $$\ce{Mg \,|\, Mg^2+ \,||\, Ag+ \,|\, Ag}$$ and the reaction is: $$\ce{Mg + 2 Ag+ \rightarrow Mg^2+ + 2 Ag}$$ Using the Nernst equation $$\Delta E = \Delta E^\circ - \dfrac{0.0592}{2} \log \ce{\dfrac{[Mg^2+]}{[Ag+]^2}}$$ If you multiply the equation of reaction by 2, you will have $$\ce{2 Mg + 4 Ag+ \rightarrow 2 Mg^2+ + 4 Ag}$$ Note that there are 4 electrons involved in this equation, and n = 4 in the Nernst equation: $$\Delta E = \Delta E^\circ - \dfrac{0.0592}{4} \log \ce{\dfrac{[Mg^2+]^2}{[Ag+]^4}}$$ which can be simplified as $$\Delta E = \Delta E^\circ - \dfrac{0.0592}{2} \log \ce{\dfrac{[Mg^2+]}{[Ag+]^2}}$$ Thus, the cell potential ΔE is not affected. Example 3 The standard cell potential ΔE° for the reaction $$\ce{Fe + Zn^2+ \rightarrow Zn + Fe^2+}$$ is -0.353 V. If a piece of iron is placed in a 1 M $$\ce{Zn^2+}$$ solution, what is the equilibrium concentration of $$\ce{Fe^2+}$$? SOLUTION The equilibrium constant K may be calculated using \begin{align} K &= 10^{\large{(n \,\Delta E^\circ)/0.0592}}\\ &= 10^{-11.93}\\ &= 1.2\times10^{-12}\\ &= \mathrm{[Fe^{2+}]/[Zn^{2+}]} \end{align} Since $$\mathrm{[Zn^{2+}] = 1\: M}$$, it is evident that $$\ce{[Fe^2+]} = 1.2\times10^{-12}\textrm{ M}$$.
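The worked examples above can be checked numerically. This sketch evaluates Example 1 (the Zn concentration cell) and Example 3 (the equilibrium constant from ΔE°):

```python
import math

R, F = 8.314, 96485.0          # J/(mol*K), C/mol

def cell_potential(E0, n, Q, T=298.15):
    # Nernst equation: E = E0 - (RT/nF) ln Q
    return E0 - (R * T / (n * F)) * math.log(Q)

# Example 1: Zn concentration cell, E0 = 0, n = 2, Q = 0.024 / 2.4
E = cell_potential(0.0, 2, 0.024 / 2.4)
print(f"E = {E:.4f} V")        # ~0.0592 V, as in Example 1

# Example 3: K = 10^(n * E0 / 0.0592) with E0 = -0.353 V, n = 2
K = 10 ** (2 * (-0.353) / 0.0592)
print(f"K = {K:.2e}")          # ~1.2e-12, so [Fe2+] ~ 1.2e-12 M
```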
Example 4 From the standard cell potentials, calculate the solubility product for the following reaction: $$\ce{AgCl \rightarrow Ag+ + Cl-}$$ SOLUTION There are $$\ce{Ag+}$$ and $$\ce{AgCl}$$ involved in the reaction, and from the table of standard reduction potentials, you will find: $$\ce{AgCl + e^- \rightarrow Ag + Cl-},\hspace{15px} E^\circ = \mathrm{0.2223\: V} \tag{1}$$ Since this equation does not contain the species $$\ce{Ag+}$$, you need, $$\ce{Ag+ + e^- \rightarrow Ag}, \hspace{15px} E^\circ = \mathrm{0.799\: V} \tag{2}$$ Subtracting (2) from (1) leads to, $$\ce{AgCl \rightarrow Ag+ + Cl-} \hspace{15px} \Delta E^\circ = - 0.577\textrm{ V}$$ Let Ksp be the solubility product, and employ the Nernst equation, \begin{align} \log K_{\ce{sp}} &= \dfrac{-0.577}{0.0592} = -9.75\\ K_{\ce{sp}} &= 10^{-9.75} = 1.8\times10^{-10} \end{align} This is the value that you have been using in past tutorials. Now you know that Ksp is not always measured from its solubility. ### Questions 1. In the lead storage battery, $$\ce{Pb \,|\, PbSO4 \,|\, H2SO4 \,|\, PbSO4,\: PbO2 \,|\, Pb}$$ would the voltage change if you changed the concentration of $$\ce{H2SO4}$$? (yes/no) 2. Choose the correct Nernst equation for the cell $$\ce{Zn_{\large{(s)}} \,|\, Zn^2+ \,||\, Cu^2+ \,|\, Cu_{\large{(s)}}}$$. 1. $$\Delta E = \Delta E^\circ - 0.0296 \log\left(\ce{\dfrac{[Zn^2+]}{[Cu^2+]}}\right)$$ 2. $$\Delta E = \Delta E^\circ - 0.0296 \log\left(\ce{\dfrac{[Cu^2+]}{[Zn^2+]}}\right)$$ 3. $$\Delta E = \Delta E^\circ - 0.0296 \log\left(\ce{\dfrac{Zn}{Cu}}\right)$$ 4. $$\Delta E = \Delta E^\circ - 0.0296 \log\left(\ce{\dfrac{Cu}{Zn}}\right)$$ 3. The standard cell potential ΔE° is 1.100 V for the cell $$\ce{Zn_{\large{(s)}} \,|\, Zn^2+ \,||\, Cu^2+ \,|\, Cu_{\large{(s)}}}$$. If $$\mathrm{[Zn^{2+}] = 0.01\: M}$$ and $$\mathrm{[Cu^{2+}] = 1.0\: M}$$, what is ΔE or the EMF? 4.
The logarithm of the equilibrium constant, log K, of the net cell reaction of the cell $$\ce{Zn_{\large{(s)}} \,|\, Zn^2+ \,||\, Cu^2+ \,|\, Cu_{\large{(s)}}} \hspace{15px} \Delta E^\circ = \mathrm{1.100\: V}$$ is 1. 1.100 / 0.0291 2. -1.10 / 0.0291 3. 0.0291 / 1.100 4. -0.0291 / 1.100 5. 1.100 / 0.0592 ### Solutions Hint... The net cell reaction is $$\ce{Pb + PbO2 + 2 HSO4- + 2 H+ \rightarrow 2 PbSO4 + 2 H2O}$$ and the Nernst equation is $$\Delta E = \Delta E^\circ - \left(\dfrac{0.0592}{2}\right)\log\ce{\dfrac{1}{[HSO4- ]^2[H+]^2}}$$. Hint... The cell as written has Reduction on the right: $$\ce{Cu^2+ + 2 e^- \rightarrow Cu}$$ Oxidation on the left: $$\ce{Zn \rightarrow Zn^2+ + 2 e^-}$$ The net reaction of the cell is $$\ce{Zn_{\large{(s)}} + Cu^2+ \rightarrow Cu_{\large{(s)}} + Zn^2+}$$ Hint... A likely wrong result is 1.041 V. The term that modifies ΔE is $$-\left(\dfrac{0.059}{n}\right)\log\ce{\dfrac{[Zn^2+]}{[Cu^2+]}}$$ (n = 2 in this case). Understandably, if the concentration of $$\ce{Zn^2+}$$ is low, there is more tendency for the reaction $$\ce{Zn \rightarrow Zn^2+ + 2 e^-}$$. $$0 = 1.100 - 0.0296 \log \left(\ce{\dfrac{[Zn^2+]}{[Cu^2+]}}\right)$$
3,949
11,620
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5
4
CC-MAIN-2017-30
latest
en
0.911169
http://kldns.net/error-propagation/standard-deviation-vs-error-propagation.html
1,508,233,053,000,000,000
text/html
crawl-data/CC-MAIN-2017-43/segments/1508187821017.9/warc/CC-MAIN-20171017091309-20171017111309-00727.warc.gz
189,123,831
5,994
## Fix Standard Deviation Vs Error Propagation (Solved) # Standard Deviation Vs Error Propagation doi:10.1007/s00158-008-0234-7. ^ Hayya, Jack; Armstrong, Donald; Gressis, Nicolas (July 1975). "A Note on the Ratio of Two Normally Distributed Variables". $$f_k = \sum_i^n A_{ki} x_i \quad\text{or}\quad \mathbf{f} = \mathbf{A}\mathbf{x}$$ The uncertainty in the weighings cannot reduce the s.d. I would like to illustrate my question with some example data. Clearly I can get a brightness for the star by calculating an average weighted by the inverse squares of the errors on the individual measurements, but how can I get the If you have the time to help me get my thoughts straight; in a situation where the sample sizes had been equal, my proposed method above would have been correct, right? Table 1: Arithmetic Calculations of Error Propagation Type1 Example Standard Deviation ($$\sigma_x$$) Addition or Subtraction $$x = a + b - c$$ $$\sigma_x= \sqrt{ {\sigma_a}^2+{\sigma_b}^2+{\sigma_c}^2}$$ (10) Multiplication or Division $$x = a \times \dfrac{b}{c}$$ $$\dfrac{\sigma_x}{x}= \sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}$$ ## Error Propagation Calculator doi:10.1287/mnsc.21.11.1338. The best you can do is to estimate that σ. Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems. I really appreciate your help. What's needed is a less biased estimate of the SDEV of the population. JCGM 102: Evaluation of Measurement Data - Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" - Extension to Any Number of Output Quantities (PDF) (Technical report). Error Propagation Excel From your responses I gathered two things. Since Rano quotes the larger number, it seems that it's the s.d. Error Propagation Physics Retrieved 2016-04-04. 
^ "Propagation of Uncertainty through Mathematical Operations" (PDF). If the statistical probability distribution of the variable is known or can be assumed, it is possible to derive confidence limits to describe the region within which the true value of the variable lies. The uncertainty in the weighings cannot reduce the s.d. Any insight would be very appreciated. Error Propagation Average SOLUTION To actually use this percentage to calculate unknown uncertainties of other variables, we must first define what uncertainty is. I would like to illustrate my question with some example data. In matrix notation, [3] $$\mathrm{\Sigma}^{\mathrm{f}} = \mathrm{J}\,\mathrm{\Sigma}^{\mathrm{x}}\,\mathrm{J}^{\top}.$$ ## Error Propagation Physics Some error propagation websites suggest that it would be the square root of the sum of the absolute errors squared, divided by N (N=3 here). But of course! Error Propagation Calculator A way to do so is by using a Kalman filter: http://en.wikipedia.org/wiki/Kalman_filter In your case, for your two measurements a and b (and assuming they both have the same size), you Error Propagation Chemistry Starting with a simple equation: $x = a \times \dfrac{b}{c} \tag{15}$ where $$x$$ is the desired result with a given standard deviation, and $$a$$, $$b$$, and $$c$$ are experimental variables, each JSTOR2281592. ^ Ochoa1,Benjamin; Belongie, Serge "Covariance Propagation for Guided Matching" ^ Ku, H. I know I can determine the propagated error doing: $$SD=\sqrt{SD_A^2+SD_B^2}$$ but how can I propagate standard errors (since I'm dealing with averages of measurements) instead of standard deviations? 
Now that we have done this, the next step is to take the derivative of this equation to obtain: (dV/dr) = (∆V/∆r)= 2cr We can now multiply both sides of the Error Propagation Definition what really are: Microcontroller (uC), System on Chip (SoC), and Digital Signal Processor (DSP)? In this case, expressions for more complicated functions can be derived by combining simpler functions. p.5. http://kldns.net/error-propagation/standard-deviation-using-propagation-error.html In your particular case when you estimate SE of $C=A-B$ and you know $\sigma^2_A$, $\sigma^2_B$, $N_A$, and $N_B$, then $$\mathrm{SE}_C=\sqrt{\frac{\sigma^2_A}{N_A}+\frac{\sigma^2_B}{N_B}}.$$ Please note that another option that could potentially sound reasonable is ISBN0470160551.[pageneeded] ^ Lee, S. Error Propagation Calculus Resistance measurement A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R rano, May 27, 2012 May 27, 2012 #9 viraltux rano said: ↑ But I guess to me it is reasonable that the SD in the sample measurement should be propagated to ## Journal of Sound and Vibrations. 332 (11): 2750–2776. The mean of this transformed random variable is then indeed the scaled Dawson's function 2 σ F ( p − μ 2 σ ) {\displaystyle {\frac {\sqrt {2}}{\sigma }}F\left({\frac {p-\mu }{{\sqrt Propagation of Error http://webche.ent.ohiou.edu/che408/S...lculations.ppt (accessed Nov 20, 2009). haruspex, May 28, 2012 May 28, 2012 #17 TheBigH Hi everyone, I am having a similar problem, except that mine involves repeated measurements of the same same constant quantity. Propagation Of Errors Pdf Retrieved 2016-04-04. ^ "Strategies for Variance Estimation" (PDF). Everyone who loves science is here! The uncertainty u can be expressed in a number of ways. 
Advanced Astrophotography Digital Camera Buyer’s Guide: Real Cameras Anyon Demystified So I Am Your Intro Physics Instructor 11d Gravity From Just the Torsion Constraint Name the Science Photo Solving the Cubic his comment is here But in this case the mean ± SD would only be 21.6 ± 2.45 g, which is clearly too low. JSTOR2629897. ^ a b Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". I should not have to throw away measurements to get a more precise result. GUM, Guide to the Expression of Uncertainty in Measurement EPFL An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx' uncertainties package, a program/library for transparently For example, repeated multiplication, assuming no correlation gives, f = A B C ; ( σ f f ) 2 ≈ ( σ A A ) 2 + ( σ B more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed JSTOR2629897. ^ a b Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". If you could clarify for me how you would calculate the population mean ± SD in this case I would appreciate it. ISSN0022-4316. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view 2. The uncertainty u can be expressed in a number of ways. But I note that the value quoted, 24.66, is as though what's wanted is the variance of weights of rocks in general. (The variance within the sample is only 20.1.) That sigma-squareds) for convenience and using Vx, Vy, Ve, VPx, VPy, VPe with what I hope are the obvious meanings, your equation reads: VPx = VPy - VPe If there are m These correspond to SDEV and SDEVP in spreadsheets. 
In general this problem can be thought of as going from values that have no variance to values that have variance. The value of a quantity and its error are then expressed as an interval x ± u. Retrieved 3 October 2012. ^ Clifford, A. Please try the request again. The standard error of the mean of the first group is 0.1, and of the second it is 1. These instruments each have different variability in their measurements. Let's say we measure the radius of an artery and find that the uncertainty is 5%. Has an SRB been considered for use in orbit to launch to escape velocity?
2,041
8,236
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2017-43
longest
en
0.815975
http://oeis.org/A045923
1,582,797,799,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875146681.47/warc/CC-MAIN-20200227094720-20200227124720-00178.warc.gz
103,903,255
4,318
The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation. Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!) A045923 Number of irreducible representations of symmetric group S_n for which every matrix has determinant 1. 2 1, 1, 1, 2, 2, 7, 7, 10, 10, 34, 40, 53, 61, 103, 112, 143, 145, 369, 458, 579, 712, 938, 1127, 1383, 1638, 2308, 2754, 3334, 3925, 5092, 5818, 6989, 7759, 12278, 14819, 17881, 21477, 25887, 30929, 36954, 43943, 52918, 62749, 74407, 87854, 104534, 122706, 144457 (list; graph; refs; listen; history; text; internal format) OFFSET 1,4 COMMENTS Irreducible representations of S_n contained in the special linear group were first considered by L. Solomon (unpublished). REFERENCES R. P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge University Press, Cambridge and New York, 1999, Exercise 7.55. LINKS Amritanshu Prasad, Table of n, a(n) for n = 1..999 A. Ayyer, A. Prasad and S. Spallone, Representations of symmetric groups with non-trivial determinant, arXiv:1604.08837 [math.RT] (2016). FORMULA a(n) = A000041(n) - A272090(n). - Amritanshu Prasad, May 11 2016 EXAMPLE a(5)=2, since only the irreducible representations indexed by the partitions (5) and (3,2) are contained in the special linear group. MATHEMATICA b[1] = 0; b[n_] := Module[{bb, e, pos, k, r}, bb = Reverse[IntegerDigits[n, 2]]; e = bb[[1]]; pos = DeleteCases[Flatten[Position[bb, 1]], 1] - 1; r = Length[pos]; Do[k[i] = pos[[i]], {i, 1, r}]; 2^Sum[k[i], {i, 2, r}] (2^(k[1] - 1) + Sum[2^((v + 1) (k[1] - 2) - v (v - 1)/2), {v, 1, k[1] - 1}] + e 2^(k[1] (k[1] - 1)/2)) ]; a[n_] := PartitionsP[n] - b[n]; Array[a, 50] (* Jean-François Alcover, Aug 09 2018, after Amritanshu Prasad *) CROSSREFS Cf. A000041, A272090. 
Sequence in context: A064288 A054085 A021443 * A306238 A318086 A244049 Adjacent sequences:  A045920 A045921 A045922 * A045924 A045925 A045926 KEYWORD nonn,nice AUTHOR EXTENSIONS a(31)-a(48) from Amritanshu Prasad, May 11 2016 STATUS approved Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent The OEIS Community | Maintained by The OEIS Foundation Inc. Last modified February 27 04:56 EST 2020. Contains 332299 sequences. (Running on oeis4.)
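The Mathematica program in the entry translates fairly directly to Python. The sketch below is an unofficial translation; the `partitions` helper (counting partitions of n, i.e. A000041) is a standard recursion of my own, not part of the entry. It reproduces the opening terms:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, k=1):
    # number of partitions of n into parts >= k
    if n == 0:
        return 1
    return sum(partitions(n - p, p) for p in range(k, n + 1))

def b(n):
    # Translation of the Mathematica helper b[n] above
    if n == 1:
        return 0
    bits = [i for i in range(n.bit_length()) if (n >> i) & 1]
    e = 1 if bits[0] == 0 else 0          # is n odd?
    pos = [i for i in bits if i >= 1]     # exponents k_1 < k_2 < ...
    k1 = pos[0]
    return 2 ** sum(pos[1:]) * (
        2 ** (k1 - 1)
        + sum(2 ** ((v + 1) * (k1 - 2) - v * (v - 1) // 2)
              for v in range(1, k1))
        + e * 2 ** (k1 * (k1 - 1) // 2)
    )

def a(n):
    # a(n) = A000041(n) - A272090(n), per the FORMULA line
    return partitions(n) - b(n)

print([a(n) for n in range(1, 9)])  # [1, 1, 1, 2, 2, 7, 7, 10]
```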
858
2,365
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2020-10
latest
en
0.641342
http://gmatclub.com/forum/my-gmat-progress-122260.html?fl=similar
1,484,968,528,000,000,000
text/html
crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00344-ip-10-171-10-70.ec2.internal.warc.gz
122,594,422
50,118
My GMAT Progress : General GMAT Questions and Strategies Check GMAT Club Decision Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack It is currently 20 Jan 2017, 19:15 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track Your Progress every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # My GMAT Progress new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message Manager Joined: 10 Sep 2011 Posts: 52 Location: Georgia GMAT 1: 540 Q44 V21 GPA: 3.68 WE: Public Relations (Non-Profit and Government) Followers: 0 Kudos [?]: 18 [4] , given: 22 My GMAT Progress [#permalink] ### Show Tags 22 Oct 2011, 05:37 4 This post received KUDOS HI guys!! need advice...i started from 5 September GMAT and i have covered 5 strategies..also i am making GMAT CLub tests and some Kaptests... how do you think am i too slow?? i am gonna take exam around at 15 December... _________________ Flying from shadow to the light GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 58 Kudos [?]: 787 [4] , given: 234 Re: My GMAT Progress [#permalink] ### Show Tags 24 Oct 2011, 05:24 4 This post received KUDOS You're not really telling us much. Are you improving (score, approach to problems, timing, etc.)? 
Two months is enough for some people, but going over six months is not advisable as one can back track and forget previously covered material. Sept-Dec is not that long and should be fine. _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. | GC Member Write Ups GMAT Club Premium Membership - big benefits and savings Manager Joined: 10 Sep 2011 Posts: 52 Location: Georgia GMAT 1: 540 Q44 V21 GPA: 3.68 WE: Public Relations (Non-Profit and Government) Followers: 0 Kudos [?]: 18 [3] , given: 22 Re: My GMAT Progress [#permalink] ### Show Tags 24 Oct 2011, 08:52 3 This post received KUDOS i don't when about my progress but i think i am doing pretty well...today i did OG PS questions and i made 5mistake from 100...but it was too easy i think.... _________________ Flying from shadow to the light Re: My GMAT Progress   [#permalink] 24 Oct 2011, 08:52 Similar topics Replies Last post Similar Topics: 2 How to progress my studies ?? 9 20 Sep 2014, 07:21 2 Please Rate My Progress 8 01 Sep 2012, 02:01 4 GMAT retake - My plan and the progress 26 12 Sep 2011, 21:20 My GMAT prep progress....test in 2 weeks 7 28 Nov 2010, 22:45 1 Analysis of my GMAT prep progress 4 31 Aug 2009, 23:34 Display posts from previous: Sort by # My GMAT Progress new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Moderators: WaterFlowsUp, HiLine Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
1,004
3,672
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2017-04
latest
en
0.86199
https://oeis.org/A119730/internal
1,579,425,425,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579250594333.5/warc/CC-MAIN-20200119064802-20200119092802-00249.warc.gz
593,381,175
2,986
The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation. Thanks to everyone who made a donation during our annual appeal! To see the list of donors, or make a donation, see the OEIS Foundation home page. Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!) A119730 Primes p such that p+1, p+2, p+3, p+4 and p+5 have equal number of divisors. 0 %I %S 13781,19141,21493,50581,142453,152629,253013,298693,307253,346501, %T 507781,543061,845381,1079093,1273781,1354501,1386901,1492069,1546261, %U 1661333,1665061,1841141,2192933,2208517,2436341,2453141,2545013 %N Primes p such that p+1, p+2, p+3, p+4 and p+5 have equal number of divisors. %e 13781 is OK since 13782, 13783, 13784, 13785 and 13786 all have 8 divisors: %e {1,2,3,6,2297,4594,6891,13782}, {1,7,11,77,179,1253,1969,13783}, %e {1,2,4,8,1723,3446,6892,13784}, {1,3,5,15,919,2757,4595,13785} and %e {1,2,61,113,122,226,6893,13786}. %t Select[Prime@Range[1000000],DivisorSigma[0,#+1]==DivisorSigma[0,#+2]==DivisorSigma[0,#+3]==DivisorSigma[0,#+4]==DivisorSigma[0,#+5]&] %t endQ[n_]:= Length[Union[DivisorSigma[0, (n + Range[5])]]]==1; Select[Prime[ Range[ 200000]],endQ] (* _Harvey P. Dale_, Jan 16 2019 *) %Y Cf. A008329, A049234. %K nonn %O 1,1 %A _Zak Seidov_, Jul 29 2006 Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent The OEIS Community | Maintained by The OEIS Foundation Inc. Last modified January 19 04:09 EST 2020. Contains 331031 sequences. (Running on oeis4.)
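The defining property is easy to verify directly for the first term, p = 13781, by counting divisors with plain trial division:

```python
def num_divisors(n):
    # count divisors of n by trial division up to sqrt(n)
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

p = 13781
counts = [num_divisors(p + k) for k in range(1, 6)]
print(counts)  # [8, 8, 8, 8, 8]
```

This matches the divisor lists given in the example above: p+1 through p+5 all have 8 divisors.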
633
1,656
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2020-05
latest
en
0.688837
https://www.vlsroulette.com/index.php?topic=13547.0
1,722,750,745,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640389685.8/warc/CC-MAIN-20240804041019-20240804071019-00581.warc.gz
842,413,517
8,249
The American wheel Started by John1234, December 10, 2009, 07:57:52 PM 0 Members and 1 Guest are viewing this topic. John1234 There seems to be very little discussion on the possibility of a flaw in the American wheel. Over the last month I looked into a lot of possibilities. Here is a quick system that I came up with Use these numbers only. Look at the wheel that I posted and look at the board. 0,00, 7,8,9, 10, 11, 12 and 25,26,27,28,29,30 ON THE AMERICAN WHEEL. these numbers are not circled on the board. Now look at where they sit on the wheel. I am not exactly sure about the most effective way to play these numbers. One Idea that I tried with a bit of success was this: Wait for one of the numbers to hit. the number will be the trigger. Bet the splits. So you will cover all 12 numbers plus the split of 0 and 00. You will bet until you lose. When you lose you wait for a new trigger. Progression is 1,1,2,3,4 then Idk what. I suck at designing progressions. The letters on the wheel are as follows: A= column 1 B = column 2 C= column 3 The two pictures are attachments. elmo give me 5-10 minutes john1234 and I will show you a good way to play. elmo o.k. here we go! Wait for the 7-12 or 25-30 to come out 4 times at least. What must NOT come out is the 1-6 or 31-36. If the 13-18 or 19-24 come out, just ignore them, so as an example. 11 13 19 7 14 20 25 13 30 o.k. there you have had 9 spins and the 7-12 and 25-30 have come out the required 4 times. The 1-6 and 31-36 have went missing which is excactly what we were looking for. Now we will use an 8 step progression but in a unique way. put 1 unit on the 1-3 street. put 1 unit on the 4-6 street. put 1 unit on the 13-18 double street. put 1 unit on the 19-24 double street. put 1 unit on the 31-33 street. put 1 unit on the 34-36 street. Now to make a profit, we are hoping the 1-3, 4-6, 31-33, or 34-36 street appears but if the 13-18 double street or 19-24 double street appear, we get returned our chips. 
So in effect we are either going to win 6 or draw even. The only thing that can beat us is if the 7-12 double street or 25-30 double street appear, in which case we go to step 2 in the progression of a total of 8 steps. This is a good way of playing because even if we dont hit our 1-3, 4-6, 31-33 or 34-36 street for a profit, we will many times just get our stake back and not have to move up 1 in the progression. What this all means is that for us to lose completely, we are going to have to witness a dozen (which our 4 potential winning streets are) not coming out for something like over 20+ spins. Remember just for the qualifying in this instance, it took 9 spins and we saw no 1-3, 4-6, 31-33 or 34-36. And now we can play an 8 step progression where even if the 13-18 or 19-24 show up a lot, we are always going to be returned our chips and we do not have to move up 1 in the progression. So it is possible you could go through 25-30 spins and still only be at level 3 or 4 in the progression when you hit your winner. It is fair to say that you will very seldom ever lose a game playing this way and you will build up some really nice profits. John1234 Thanks for showing me a way to play using these numbers. I am not too familiar with designing systems for roulette so it is always nice to have someone like you help out. I am actually focusing on baccarat right now but I always like to have a roulette system in my back pocket for when I get board playing baccarat. I"ll have to look more into what you suggested, it looks very good. Here's an idea for you John1234.  When a number hits on the second column (trigger), bet columns 1 and 3 with a 1, 3, 9 progression.  Wheel-order, columns 1 and 3 are connected on the wheel-order while column 2 is almost totally seperate. kattila Elmo, which would be  the next 7 steep progression? Thanks. -
1,101
3,862
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.53125
4
CC-MAIN-2024-33
latest
en
0.943553
https://www.queryhome.com/puzzle/41257/drawn-random-randomly-chosen-what-maximum-probability-drawn
1,721,612,349,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763517805.92/warc/CC-MAIN-20240722003438-20240722033438-00048.warc.gz
814,653,169
24,901
# A ball is drawn at random from a randomly chosen bag. What is the maximum probability the drawn ball is red? 182 views Each of three bags b1, b2, b3 contains only red and blue balls in the quantities: Bag b1 red = 2 balls blue = x^2 – 6x + 13 balls Bag b2 red = 3 balls blue = x^2 – 6x + 12 balls Bag b3 red = 4 balls blue = x^2 – 6x + 11 balls A ball is drawn at random from a randomly chosen bag. What is the maximum probability the drawn ball is red? posted Dec 3, 2021 Similar Puzzles +1 vote A bag contains 5 white, 4 red and 3 black balls. A ball is drawn at random from the bag, what is the probability that it is not black?
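For the first puzzle, note that every bag holds the same total, x^2 − 6x + 15 balls, so P(red) = 3/(x^2 − 6x + 15), which is largest when the quadratic is smallest (x = 3, total 6 per bag, giving 1/2). A brute-force check with exact arithmetic confirms this:

```python
from fractions import Fraction

def p_red(x):
    # bag contents as stated in the puzzle
    blues = [x*x - 6*x + 13, x*x - 6*x + 12, x*x - 6*x + 11]
    reds = [2, 3, 4]
    # bag chosen uniformly, then a ball uniformly from that bag
    return sum(Fraction(1, 3) * Fraction(r, r + b)
               for r, b in zip(reds, blues))

best = max((p_red(x), x) for x in range(0, 13))
print(best)  # (Fraction(1, 2), 3)
```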
205
673
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2024-30
latest
en
0.929501
https://brainmass.com/math/calculus-and-analysis/center-of-mass-of-a-four-dimensional-pyramid-145096
1,481,305,380,000,000,000
text/html
crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00085-ip-10-31-129-80.ec2.internal.warc.gz
830,709,991
16,957
# Center of mass of a four-dimensional pyramid Find the center of mass of a four-dimensional pyramid. Again, no calculus allowed. #### Solution Preview Answer is as follows. Thank you for using Brainmass. ========================================== We will use the knowledge of the 1D, 2D, 3D pyramid's center of masses to find the center of mass of a 4D ... #### Solution Summary I have deduced the center of mass of a four-dimensional pyramid using the center of mass of one, two and three dimensional pyramids.
124
551
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2016-50
latest
en
0.841713
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-2-section-2-5-an-introduction-to-problem-solving-exercise-set-page-165/11
1,537,554,522,000,000,000
text/html
crawl-data/CC-MAIN-2018-39/segments/1537267157351.3/warc/CC-MAIN-20180921170920-20180921191320-00332.warc.gz
750,218,974
12,840
## Introductory Algebra for College Students (7th Edition) The number is $37$. Let $x$ = the unknown number Then "five times a number" means $5x$ "seven subtracted from five times a number" means $5x-7$ Thus, the equation that represents the situation is: $5x-7=178$ Add $7$ to both sides of the equation to obtain: $5x=185$ Divide both sides of the equation by $5$ to obtain: $x=\dfrac{185}{5} \\x=37$
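The same two steps can be mirrored in a couple of lines of Python:

```python
rhs = 178 + 7   # add 7 to both sides: 5x = 185
x = rhs // 5    # divide both sides by 5
print(x)        # 37
```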
122
403
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2018-39
longest
en
0.863556
https://www.jiskha.com/display.cgi?id=1291962745
1,503,436,231,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886112682.87/warc/CC-MAIN-20170822201124-20170822221124-00611.warc.gz
920,072,706
3,944
physics posted by . A toy train is on a track which has a circumference of 10 meters. The train takes 20 seconds to complete one revolution around the track. • physics - You posted this question twice. They tell you how far it travels in 20 seconds. Divide the circumference by that time to get the speed. The average velocity will depend upon the time interval of the measurement. For any multiple of 20 seconds, you end up at the same place. That should be a clue. Similar Questions 1. physics A toy train track is mounted on a large wheel that is free to turn with negligible friction about a vertical axis (see the figure). A toy train of mass m = 2.5 kg is placed on the track and, with the system initially at rest, the electrical … 2. physics A toy train is on a track which has a circumference of 10 meters. The train takes 20 seconds to complete one revolution around the track. What is its average speed? 3. Physics Somehow TRAIN A, travelling at 36 m/s, is accidentally sidetracked onto the train track for TRAIN A. The TRAIN A engineer spots TRAIN A 100 m ahead on the same track and travelling in the same direction. The engineer jams on the brakes … 4. physics A toy train rolls around a horizontal 1.0-m-diameter track. The coefficient of rolling friction is 0.15. What is the magnitude of the train's angular acceleration after it is released? 5. Physics A track is mounted on a large wheel that is free to turn with negligible friction about a vertical axis. A toy train of mass m = 0.140 is placed on the track and, with the system initially at rest, the train's electrical power is turned … 6. Physics I'm not really sure on this one... Am i correct? 7. math Sean has 8-inch pieces of toy train track and Ruth has 18-inches of train track. How many of each piece would each child need to build tracks that are equal in length? 8. Physics A 1.11 kg toy train rolls around a circular horizontal track. 
If the train has an angular acceleration of -2.2 rad/s^2 and is released with an angular speed of 17.0 rpm, what time is required for the train to come to a complete stop? 9. Physics A 1.96 kg toy train rolls around a circular horizontal track. If the train has an angular acceleration of -1.95 rad/s^2 and is released with an angular speed of 33.0 rpm, what time is required for the train to come to a complete stop? 10. maths A train moving on a circular track with a diameter of 14,642 km, takes 50 minutes to complete one revolution. calculate the peripheral velocity of the train in km/h More Similar Questions
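Two of the questions above reduce to one-line calculations: the average speed is circumference divided by period, and for a constant angular deceleration the stopping time is t = omega_0 / |alpha| (converting rpm to rad/s first). A quick check:

```python
import math

# Average speed of the toy train on the 10 m track
circumference = 10.0   # m
period = 20.0          # s
speed = circumference / period
print(speed)  # 0.5  (m/s)

# Stopping time for the train released at 17.0 rpm with
# |alpha| = 2.2 rad/s^2
omega0 = 17.0 * 2 * math.pi / 60   # rpm -> rad/s
t_stop = omega0 / 2.2
print(round(t_stop, 2))  # 0.81  (s)
```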
602
2,557
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2017-34
latest
en
0.938393
https://forum.effectivealtruism.org/posts/gMxTEMvh8RttX9Nt4/uncertainty-and-sensitivity-analyses-of-givewell-s-cost
1,725,898,221,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00405.warc.gz
241,800,403
159,089
# 86 (The same content is broken up into three posts and given a very slightly different presentation on my blog.) # Overview GiveWell models the cost-effectiveness of its top charities. Because the input parameters are uncertain (How much moral weight should we give to increasing consumption? What is the current income of a typical GiveDirectly recipient?), the resulting cost-effectiveness estimates are also fundamentally uncertain. By performing uncertainty analysis, we get a better sense of just how uncertain the results are. Uncertainty analysis is also the first step on the route to sensitivity analysis. Sensitivity analysis reveals which input parameters each charity's cost-effectiveness estimate is most sensitive to. That kind of information helps us target future investigations (i.e. uncertainty reduction). The final step is to combine the individual charity cost-effectiveness estimates into one giant model. By performing uncertainty and sensitivity analysis on this giant model, we get a better sense of which input parameters have the most influence on the relative cost-effectiveness of GiveWell's top charities—i.e. how the charities rank against each other. A key feature of the analysis outlined above and performed below is that it requires the analyst to specify their uncertainty over each input parameter. Because I didn't want all of the results here to reflect my idiosyncratic beliefs, I instead pretended that each input parameter is equally uncertain. This makes the results "neutral" in a certain sense, but it also means that they don't reveal much about the real world. To achieve real insight, you need to adjust the input parameters to match your beliefs. You can do that by heading over to the Jupyter notebook, editing the parameters in the second cell, and clicking "Runtime > Run all". This limitation means that all the ensuing discussion is more akin to an analysis template than a true analysis. 
# Uncertainty analysis of GiveWell's cost-effectiveness estimates

## Section overview

GiveWell produces cost-effectiveness models of its top charities. These models take as inputs many uncertain parameters. Instead of representing those uncertain parameters with point estimates—as the cost-effectiveness analysis spreadsheet does—we can (should) represent them with probability distributions. Feeding probability distributions into the models allows us to output explicit probability distributions on the cost-effectiveness of each charity.

## GiveWell's cost-effectiveness analysis

GiveWell, an in-depth charity evaluator, makes its detailed spreadsheet models available for public review. These spreadsheets estimate the value per dollar of donations to their 8 top charities: GiveDirectly, Deworm the World, Schistosomiasis Control Initiative, Sightsavers, Against Malaria Foundation, Malaria Consortium, Helen Keller International, and the END Fund. For each charity, a model is constructed taking input values to an estimated value per dollar donated to that charity. The inputs to these models vary from parameters like "malaria prevalence in areas where AMF operates" to "value assigned to averting the death of an individual under 5". Helpfully, GiveWell isolates the input parameters it deems most uncertain. These can be found in the "User inputs" and "Moral weights" tabs of their spreadsheet. Outsiders interested in the top charities can reuse GiveWell's model but supply their own perspective by adjusting the values of the parameters in these tabs. For example, if I go to the "Moral weights" tab and run the calculation with a value of 0.1 for doubling consumption for one person for one year—instead of the default value of 1—I see the effect of this modification on the final results: deworming charities look much less effective since their primary effect is on income.
## Uncertain inputs

GiveWell provides the ability to adjust these input parameters and observe the altered output because the inputs are fundamentally uncertain. But our uncertainty means that picking any particular value as input for the calculation misrepresents our state of knowledge. From a subjective Bayesian point of view, the best way to represent our state of knowledge about the input parameters is with a probability distribution over the values the parameter could take. For example, I could say that a negative value for increasing consumption seems very improbable to me but that a wide range of positive values seem about equally plausible. Once we specify a probability distribution, we can feed these distributions into the model and, in principle, we'll end up with a probability distribution over our results. This probability distribution on the results helps us understand the uncertainty contained in our estimates and how literally we should take them.

### Is this really necessary?

Perhaps that sounds complicated. How are we supposed to multiply, add and otherwise manipulate arbitrary probability distributions in the way our models require? Can we somehow reduce our uncertain beliefs about the input parameters to point estimates and run the calculation on those? One candidate is to take the single most likely value of each input and use that value in our calculations. This is the approach the current cost-effectiveness analysis takes (assuming you provide input values selected in this way). Unfortunately, the output of running the model on these inputs is necessarily a point value and gives no information about the uncertainty of the results. Because the results are probably highly uncertain, losing this information and being unable to talk about the uncertainty of the results is a major loss.
A second possibility is to take lower bounds on the input parameters and run the calculation on these values, and to take the upper bounds on the input parameters and run the calculation on these values. This will produce two bounding values on our results, but it's hard to give them a useful meaning. If the lower and upper bounds on our inputs describe, for example, a 95% confidence interval, the lower and upper bounds on the result don't (usually) describe a 95% confidence interval.

### Computers are nice

If we had to proceed analytically, working with probability distributions throughout, the model would indeed be troublesome and we might have to settle for one of the above approaches. But we live in the future. We can use computers and Monte Carlo methods to numerically approximate the results of working with probability distributions while leaving our models clean and unconcerned with these probabilistic details. Guesstimate is a tool you may have heard of that works along these lines and bills itself as "A spreadsheet for things that aren't certain".

## Analysis

We have the beginnings of a plan then. We can implement GiveWell's cost-effectiveness models in a Monte Carlo framework (PyMC3 in this case), specify probability distributions over the input parameters, and finally run the calculation and look at the uncertainty that's been propagated to the results.

### Model

The Python source code implementing GiveWell's models can be found on GitHub[1]. The core models can be found in cash.py, nets.py, smc.py, worms.py and vas.py.

### Inputs

For the purposes of the uncertainty analysis that follows, it doesn't make much sense to infect the results with my own idiosyncratic views on the appropriate values of the input parameters. Instead, what I have done is uniformly taken GiveWell's best guess and added and subtracted 20%. These upper and lower bounds then become the 90% confidence interval of a log-normal distribution[2].
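To make the mechanics concrete, here is a minimal sketch of fitting a log-normal to a 90% confidence interval and propagating samples through a model via Monte Carlo. It uses plain NumPy rather than PyMC3, and the two-input toy model and its parameter names are illustrative assumptions, not GiveWell's actual model:

```python
import numpy as np

Z_90 = 1.6448536269514722  # Phi^{-1}(0.95): a 90% CI spans +/- this many sds

def lognormal_params_from_ci(low, high):
    """Return (mu, sigma) of the underlying normal for a log-normal
    distribution whose 90% confidence interval is [low, high]."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * Z_90)
    return mu, sigma

rng = np.random.default_rng(0)
n = 100_000

# Best guess 0.1 with +/-20% bounds, as in the text: 90% CI of [0.08, 0.12]
moral_weight = rng.lognormal(*lognormal_params_from_ci(0.08, 0.12), n)
# A second, equally illustrative uncertain input
outcome_per_dollar = rng.lognormal(*lognormal_params_from_ci(0.8, 1.2), n)

# Toy model: pushing samples through it propagates the input uncertainty,
# so the output is a full distribution rather than a point estimate
value_per_dollar = moral_weight * outcome_per_dollar
print(np.percentile(value_per_dollar, [5, 50, 95]))
```

The same pattern scales to the real models: sample every uncertain input once per draw, run the model, and summarize the resulting output samples.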
For example, if GiveWell's best guess for a parameter is 0.1, I used a log-normal with a 90% CI from 0.08 to 0.12. While this approach screens off my influence, it also means that the results of the analysis will primarily tell us about the structure of the computation rather than informing us about the world. Fortunately, there's a remedy for this problem too. I have set up a Jupyter notebook[3] with all the input parameters to the calculation, which you can manipulate before rerunning the analysis. That is, if you think the moral weight given to increasing consumption ought to range from 0.8 to 1.5 instead of 0.8 to 1.2, you can make that edit and see the corresponding results. Making these modifications is essential for a realistic analysis because we are not, in fact, equally uncertain about every input parameter.

It's also worth noting that I have considerably expanded the set of input parameters receiving special scrutiny. The GiveWell cost-effectiveness analysis is (with good reason—it keeps things manageable for outside users) fairly conservative about which parameters it highlights as eligible for user manipulation. In this analysis, I include any input parameter which is not tautologically certain. For example, "Reduction in malaria incidence for children under 5 (from Lengeler 2004 meta-analysis)" shows up in the analysis which follows but is not highlighted in GiveWell's "User inputs" or "Moral weights" tab. Even though we don't have much information with which to second-guess the meta-analysis, the value it reports is still uncertain and our calculation ought to reflect that.

### Results

Finally, we get to the part that you actually care about, dear reader: the results.
Given input parameters which are each distributed log-normally with a 90% confidence interval spanning ±20% of GiveWell's best estimate, here are the resulting uncertainties in the cost-effectiveness estimates:

Probability distributions of value per dollar for GiveWell's top charities

For reference, here are the point estimates of value per dollar using GiveWell's values for the charities:

GiveWell's cost-effectiveness estimates for its top charities

| Charity | Value per dollar |
|---|---|
| GiveDirectly | 0.0038 |
| The END Fund | 0.0222 |
| Deworm the World | 0.0738 |
| Schistosomiasis Control Initiative | 0.0378 |
| Sightsavers | 0.0394 |
| Malaria Consortium | 0.0326 |
| Helen Keller International | 0.0223 |
| Against Malaria Foundation | 0.0247 |

I've also plotted a version in which the results are normalized—I divided the results for each charity by that charity's expected value per dollar. Instead of showing the probability distribution on the value per dollar for each charity, this normalized version shows the probability distribution on the percentage of that charity's expected value that it achieves. This version of the plot abstracts from the actual value per dollar and emphasizes the spread of uncertainty. It also reëmphasizes the earlier point that—because we use the same spread of uncertainty for each input parameter—the current results are telling us more about the structure of the model than about the world. For real results, go try the Jupyter notebook!

Probability distributions for percentage of expected value obtained with each of GiveWell's top charities

## Section recap

Our preliminary conclusion is that all of GiveWell's top charities' cost-effectiveness estimates have similar uncertainty, with GiveDirectly being a bit more certain than the rest. However, this is mostly an artifact of pretending that we are exactly equally uncertain about each input parameter.
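The normalization just described is a per-charity division by the mean. A small sketch with made-up stand-in samples (the real draws would come from the full Monte Carlo models; the spreads here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in Monte Carlo draws of value per dollar (illustrative, not real output)
samples = {
    "GiveDirectly": rng.lognormal(np.log(0.0038), 0.1, 10_000),
    "Deworm the World": rng.lognormal(np.log(0.0738), 0.4, 10_000),
}

# Express each draw as a percentage of that charity's expected value per
# dollar, putting every charity on a common scale that emphasizes spread
normalized = {name: 100 * s / s.mean() for name, s in samples.items()}

for name, s in normalized.items():
    lo, hi = np.percentile(s, [5, 95])
    print(f"{name}: 90% of draws fall in {lo:.0f}%-{hi:.0f}% of expected value")
```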
# Sensitivity analysis of GiveWell's cost-effectiveness estimates

## Section overview

In the previous section, we introduced GiveWell's cost-effectiveness analysis, which uses a spreadsheet model to take point estimates of uncertain input parameters to point estimates of uncertain results. We adjusted this approach to take probability distributions on the input parameters and in exchange got probability distributions on the resulting cost-effectiveness estimates. But this machinery lets us do more. Now that we've completed an uncertainty analysis, we can move on to sensitivity analysis. The basic idea of sensitivity analysis is, when working with uncertain values, to see which input values most affect the output when they vary. For example, if you have the equation $y = 10 x_1 + x_2$ and each of $x_1$ and $x_2$ varies uniformly over the range from 5 to 10, $y$ is much more sensitive to $x_1$ than to $x_2$. A sensitivity analysis is practically useful in that it can offer you guidance as to which parameters in your model it would be most useful to investigate further (i.e. to narrow their uncertainty).

Visual (scatter plot) and delta moment-independent sensitivity analysis on GiveWell's cost-effectiveness models show which input parameters the cost-effectiveness estimates are most sensitive to. Preliminary results (given our input uncertainty) show that some input parameters are much more influential on the final cost-effectiveness estimates for each charity than others.

## Visual sensitivity analysis

The first kind of sensitivity analysis we'll run is just to look at scatter plots comparing each input parameter to the final cost-effectiveness estimates. We can imagine these scatter plots as the result of running the following procedure many times[4]: sample a single value from the probability distribution for each input parameter and run the calculation on these values to determine a result value. If we repeat this procedure enough times, it starts to approximate the true values of the probability distributions.
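This sampling procedure can be sketched on the toy equation from the overview (the coefficients and ranges are the illustrative ones, not anything from GiveWell's models); a simple correlation coefficient stands in for eyeballing directionality in a scatter plot:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Both inputs uniform on [5, 10]; the model is the illustrative y = 10*x1 + x2
x1 = rng.uniform(5, 10, n)
x2 = rng.uniform(5, 10, n)
y = 10 * x1 + x2

# Every input varies simultaneously, so each input's relationship to y
# is assessed amid variation in all the others
corr_x1 = np.corrcoef(x1, y)[0, 1]
corr_x2 = np.corrcoef(x2, y)[0, 1]
print(corr_x1, corr_x2)  # y tracks x1 far more tightly than x2
```

Plotting `x1` against `y` would show a tight upward band; `x2` against `y` would look like a nearly structureless cloud.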
(One nice feature of this sort of analysis is that we see how the output depends on a particular input even in the face of variations in all the other inputs—we don't hold everything else constant. In other words, this is a global sensitivity analysis.)

(Caveat: We are again pretending that we are equally uncertain about each input parameter and the results reflect this limitation. To see the analysis results for different input uncertainties, edit and run the Jupyter notebook.)

### Direct cash transfers

#### GiveDirectly

Scatter plots showing sensitivity of GiveDirectly's cost-effectiveness to each input parameter

The scatter plots show that, given our choice of input uncertainty, the output is most sensitive (i.e. the scatter plot for these parameters shows the greatest directionality) to the input parameters:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| value of increasing ln consumption per capita per annum | Moral | Determines final conversion between empirical outcomes and value |
| transfer as percent of total cost | Operational | Determines cost of results |
| return on investment | Opportunities available to recipients | Determines stream of consumption over time |
| baseline consumption per capita | Empirical | Diminishing marginal returns to consumption mean that baseline consumption matters |

### Deworming

Some useful and non-obvious context for the following is that the primary putative benefit of deworming is increased income later in life.

#### The END Fund

Scatter plots showing sensitivity of the END Fund's cost-effectiveness to each input parameter

Here, it's a little harder to identify certain factors as more important. It seems that the final estimate is (given our input uncertainty) the result of many factors of medium effect.
It does seem plausible that the output is somewhat less sensitive to these factors:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to the END Fund shift around other money |

#### Deworm the World

Scatter plots showing sensitivity of Deworm the World's cost-effectiveness to each input parameter

Again, it's a little harder to identify certain factors as more important. It seems that the final estimate is (given our input uncertainty) the result of many factors of medium effect. It does seem plausible that the output is somewhat less sensitive to these factors:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Deworm the World shift around other money |

#### Schistosomiasis Control Initiative

Scatter plots showing sensitivity of the Schistosomiasis Control Initiative's cost-effectiveness to each input parameter

Again, it's a little harder to identify certain factors as more important. It seems that the final estimate is (given our input uncertainty) the result of many factors of medium effect.
It does seem plausible that the output is somewhat less sensitive to these factors:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Schistosomiasis Control Initiative shift around other money |

#### Sightsavers

Scatter plots showing sensitivity of Sightsavers' cost-effectiveness to each input parameter

Again, it's a little harder to identify certain factors as more important. It seems that the final estimate is (given our input uncertainty) the result of many factors of medium effect. It does seem plausible that the output is somewhat less sensitive to these factors:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Sightsavers shift around other money |

### Seasonal malaria chemoprevention

#### Malaria Consortium

Scatter plots showing sensitivity of Malaria Consortium's cost-effectiveness to each input parameter

The scatter plots show that, given our choice of input uncertainty, the output is most sensitive (i.e.
the scatter plot for these parameters shows the greatest directionality) to the input parameters:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| direct mortality in high transmission season | Empirical | Fraction of overall malaria mortality during the peak transmission season and amenable to SMC |
| internal validity adjustment | Methodological | How much do we trust the results of the underlying SMC studies |
| external validity adjustment | Methodological | How much do the results of the underlying SMC studies transfer to new settings |
| coverage in trials in meta-analysis | Historical/methodological | Determines how much coverage an SMC program needs to achieve to match studies |
| value of averting death of a young child | Moral | Determines final conversion between empirical outcomes and value |
| cost per child targeted | Operational | Affects cost of results |

### Vitamin A supplementation

#### Helen Keller International

Scatter plots showing sensitivity of Helen Keller International's cost-effectiveness to each input parameter

The scatter plots show that, given our choice of input uncertainty, the output is most sensitive to the input parameters:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| relative risk of all-cause mortality for young children in programs | Causal | How much do VAS programs affect mortality |
| cost per child per round | Operational | Affects cost of results |
| rounds per year | Operational | Affects cost of results |

### Bednets

#### Against Malaria Foundation

Scatter plots showing sensitivity of Against Malaria Foundation's cost-effectiveness to each input parameter

The scatter plots show that, given our choice of input uncertainty, the output is most sensitive (i.e.
the scatter plot for these parameters shows the greatest directionality) to the input parameters:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| num LLINs distributed per person | Operational | Affects cost of results |
| cost per LLIN | Operational | Affects cost of results |
| deaths averted per protected child under 5 | Causal | How effective is the core activity |
| lifespan of an LLIN | Empirical | Determines how many years of benefit accrue to each distribution |
| net use adjustment | Empirical | Determines benefits from LLIN as mediated by proper and improper use |
| internal validity adjustment | Methodological | How much do we trust the results of the underlying studies |
| percent of mortality due to malaria in AMF areas vs trials | Empirical/historical | Affects size of the problem |
| percent of pop. under 5 | Empirical | Affects size of the problem |

## Delta moment-independent sensitivity analysis

If eyeballing plots seems a bit unsatisfying to you as a method for judging sensitivity, not to worry. We also have the results of a more formal sensitivity analysis. This method is called delta moment-independent sensitivity analysis. $\delta_i$ (the delta moment-independent sensitivity indicator of parameter $X_i$) "represents the normalized expected shift in the distribution of [the output] provoked by [that input]". To make this meaning more explicit, we'll start with some notation/definitions. Let:

1. $X_1, \ldots, X_n$ be the random variables used as input parameters
2. $Y = f(X_1, \ldots, X_n)$ so that $f$ is a function from $\mathbb{R}^n$ to $\mathbb{R}$ describing the relationship between inputs and outputs—i.e. GiveWell's cost-effectiveness model
3. $p_Y$ be the density function of the result $Y$—i.e. the probability distributions we've already seen showing the cost-effectiveness for each charity
4. $p_{Y \mid X_i}$ be the conditional density of $Y$ with one of the parameters $X_i$ fixed—i.e. a probability distribution for the cost-effectiveness of a charity while pretending that we know one of the input values precisely

With these in place, we can define $\delta_i$. It is:

$$\delta_i = \frac{1}{2} \, \mathbb{E}_{X_i}\!\left[\int \left| p_Y(y) - p_{Y \mid X_i}(y) \right| \, dy\right]$$
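Before unpacking this definition, here is a rough numerical sketch of estimating it (my own toy implementation—real analyses would typically use a library such as SALib): quantile slices of the input stand in for "fixing" $X_i$, and histograms stand in for the densities. The toy model reuses the earlier illustrative $y = 10x_1 + x_2$:

```python
import numpy as np

def delta_mi(x, y, n_slices=20, n_bins=50):
    """Crude estimate of the delta moment-independent indicator of x for y.

    Slices x into quantile bins (standing in for conditioning on x), compares
    the conditional histogram of y in each slice to the unconditional one,
    and averages half the total absolute density difference.
    """
    edges = np.histogram_bin_edges(y, bins=n_bins)
    widths = np.diff(edges)
    p_y, _ = np.histogram(y, bins=edges, density=True)
    cuts = np.quantile(x, np.linspace(0, 1, n_slices + 1))
    shifts = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        in_slice = (x >= lo) & (x <= hi)
        p_cond, _ = np.histogram(y[in_slice], bins=edges, density=True)
        shifts.append(0.5 * np.sum(np.abs(p_y - p_cond) * widths))
    return float(np.mean(shifts))

rng = np.random.default_rng(3)
x1 = rng.uniform(5, 10, 100_000)
x2 = rng.uniform(5, 10, 100_000)
y = 10 * x1 + x2

d1, d2 = delta_mi(x1, y), delta_mi(x2, y)
print(d1, d2)  # fixing x1 shifts the output distribution far more than x2
```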
The inner integral $\int \left| p_Y(y) - p_{Y \mid X_i}(y) \right| \, dy$ can be interpreted as the total area between probability density function $p_Y$ and probability density function $p_{Y \mid X_i}$. This is the "shift in the distribution of $Y$ provoked by $X_i$" we mentioned earlier. Overall then, $\delta_i$ says:

• pick one value for $X_i$ and measure the shift in the output distribution from the "default" output distribution
• do that for each possible value of $X_i$ and take the expectation

Some useful properties to point out:

• $\delta_i$ ranges from 0 to 1
• If the output is independent of the input, $\delta_i$ for that input is 0
• The sum of $\delta_i$ for each input considered separately isn't necessarily 1 because there can be interaction effects

In the plots below, for each charity, we visualize the delta sensitivity (and our uncertainty about that sensitivity) for each input parameter.

### Direct cash transfers

#### GiveDirectly

Delta sensitivities for each input parameter in the GiveDirectly cost-effectiveness calculation

Comfortingly, this agrees with the results of our scatter plot sensitivity analysis. For convenience, I have copied the table from the scatter plot analysis describing the most influential inputs:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| value of increasing ln consumption per capita per annum | Moral | Determines final conversion between outcomes and value |
| transfer as percent of total cost | Operational | Affects cost of results |
| return on investment | Opportunities available to recipients | Determines stream of consumption over time |
| baseline consumption per capita | Empirical | Diminishing marginal returns to consumption mean that baseline consumption matters |

### Deworming

#### The END Fund

Delta sensitivities for each input parameter in the END Fund cost-effectiveness calculation

Comfortingly, this again agrees with the results of our scatter plot sensitivity analysis[5].
For convenience, I have copied the table from the scatter plot analysis describing the least influential inputs:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to the END Fund shift around other money |

#### Deworm the World

Delta sensitivities for each input parameter in the Deworm the World cost-effectiveness calculation

For convenience, I have copied the table from the scatter plot analysis describing the least influential inputs:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Deworm the World shift around other money |

#### Schistosomiasis Control Initiative

Delta sensitivities for each input parameter in the Schistosomiasis Control Initiative cost-effectiveness calculation

For convenience, I have copied the table from the scatter plot analysis describing the least influential inputs:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Schistosomiasis Control Initiative shift around other money |

#### Sightsavers

Delta sensitivities for each input parameter in the Sightsavers cost-effectiveness calculation

For convenience, I have copied the table from the scatter plot analysis describing the least influential inputs:

Highlighted input factors to which result is minimally sensitive

| Input | Type of uncertainty | Meaning/(un)importance |
|---|---|---|
| num yrs between deworming and benefits | Forecast | Affects how much discounting of future income streams must be done |
| duration of long-term benefits | Forecast | The length of time for which a person works and earns income |
| expected value from leverage and funging | Game theoretic | How much does money donated to Sightsavers shift around other money |

#### Deworming comment

That we get substantially identical results in terms of delta sensitivities for each deworming charity is not surprising: the structure of each calculation is the same and (for the sake of not tainting the analysis with my idiosyncratic perspective) the uncertainty on each input parameter is the same.

### Seasonal malaria chemoprevention

#### Malaria Consortium

Delta sensitivities for each input parameter in the Malaria Consortium cost-effectiveness calculation

Again, there seems to be good agreement between the delta sensitivity analysis and the scatter plot sensitivity analysis, though there is perhaps a bit of reordering among the top factors.
For convenience, I have copied the table from the scatter plot analysis describing the most influential inputs:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| internal validity adjustment | Methodological | How much do we trust the results of the underlying SMC studies |
| direct mortality in high transmission season | Empirical | Fraction of overall malaria mortality during the peak transmission season and amenable to SMC |
| cost per child targeted | Operational | Affects cost of results |
| external validity adjustment | Methodological | How much do the results of the underlying SMC studies transfer to new settings |
| coverage in trials in meta-analysis | Historical/methodological | Determines how much coverage an SMC program needs to achieve to match studies |
| value of averting death of a young child | Moral | Determines final conversion between outcomes and value |

### Vitamin A supplementation

#### Helen Keller International

Delta sensitivities for each input parameter in the Helen Keller International cost-effectiveness calculation

Again, there's broad agreement between the scatter plot analysis and this one. This analysis perhaps makes the crucial importance of the relative risk of all-cause mortality for young children in VAS programs even more obvious.
For convenience, I have copied the table from the scatter plot analysis describing the most influential inputs:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| relative risk of all-cause mortality for young children in programs | Causal | How much do VAS programs affect mortality |
| cost per child per round | Operational | Affects the total cost required to achieve effect |
| rounds per year | Operational | Affects the total cost required to achieve effect |

### Bednets

#### Against Malaria Foundation

Delta sensitivities for each input parameter in the Against Malaria Foundation cost-effectiveness calculation

Again, there's broad agreement between the scatter plot analysis and this one. For convenience, I have copied the table from the scatter plot analysis describing the most influential inputs:

Highlighted input factors to which result is highly sensitive

| Input | Type of uncertainty | Meaning/importance |
|---|---|---|
| num LLINs distributed per person | Operational | Affects the total cost required to achieve effect |
| cost per LLIN | Operational | Affects the total cost required to achieve effect |
| deaths averted per protected child under 5 | Causal | How effective is the core activity |
| lifespan of an LLIN | Empirical | Determines how many years of benefit accrue to each distribution |
| net use adjustment | Empirical | Affects benefits from LLIN as mediated by proper and improper use |
| internal validity adjustment | Methodological | How much do we trust the results of the underlying studies |
| percent of mortality due to malaria in AMF areas vs trials | Empirical/historical | Affects size of the problem |
| percent of pop. under 5 | Empirical | Affects size of the problem |

## Section recap

We performed visual (scatter plot) sensitivity analyses and delta moment-independent sensitivity analyses on GiveWell's top charities. Conveniently, these two methods generally agreed as to which input factors had the biggest influence on the output.
For each charity, we found that there were clear differences in the sensitivity indicators for different inputs. This suggests that certain inputs are better targets than others for uncertainty reduction. For example, the overall estimate of the cost-effectiveness of Helen Keller International's vitamin A supplementation program depends much more on the relative risk of all-cause mortality for children in VAS programs than it does on the expected value from leverage and funging. If the cost of investigating each were the same, it would be better to spend time on the former. An important caveat to remember is that these results still reflect my fairly arbitrary (but scrupulously neutral) decision to pretend that we are equally uncertain about each input parameter. To remedy this flaw, head over to the Jupyter notebook and tweak the input distributions.

# Uncertainty and sensitivity analysis of GiveWell's ranking

## Section overview

In the last two sections, we performed uncertainty and sensitivity analyses on GiveWell's charity cost-effectiveness estimates. Our outputs were, respectively:

• probability distributions describing our uncertainty about the value per dollar obtained for each charity and
• estimates of how sensitive each charity's cost-effectiveness is to each of its input parameters

One problem with this is that we are not supposed to take the cost-effectiveness estimates literally. Arguably, the real purpose of GiveWell's analysis is not to produce exact numbers but to assess the relative quality of each charity evaluated. Another issue is that by treating each cost-effectiveness estimate as independent we underweight parameters which are shared across many models. For example, the moral weight that ought to be assigned to increasing consumption shows up in many models. If we consider all the charity-specific models together, this input seems to become more important.
Our solution to these problems will be to use distance metrics on the overall charity rankings. By using distance metrics across these multidimensional outputs, we can perform uncertainty and sensitivity analysis to answer questions about:

• how uncertain we are about the overall relative cost-effectiveness of the charities
• which input parameters this overall relative cost-effectiveness is most sensitive to

## Metrics on rankings

Our first step on the path to a solution is to abstract away from particular values in the cost-effectiveness analysis and look at the overall rankings returned. That is, we want to transform:

GiveWell's cost-effectiveness estimates for its top charities

| Charity | Value per $10,000 donated |
|---|---|
| GiveDirectly | 38 |
| The END Fund | 222 |
| Deworm the World | 738 |
| Schistosomiasis Control Initiative | 378 |
| Sightsavers | 394 |
| Malaria Consortium | 326 |
| Against Malaria Foundation | 247 |
| Helen Keller International | 223 |

into:

GiveWell's top charities ranked from most cost-effective to least

• Deworm the World
• Sightsavers
• Schistosomiasis Control Initiative
• Malaria Consortium
• Against Malaria Foundation
• Helen Keller International
• The END Fund
• GiveDirectly

But how do we usefully express probabilities over rankings[6] (rather than probabilities over simple cost-effectiveness numbers)? The approach we'll follow below is to characterize a ranking produced by a run of the model by computing its distance from the reference ranking listed above (i.e. GiveWell's current best estimate). Our output probability distribution will then express how far we expect to be from the reference ranking—how much we might learn about the ranking with more information on the inputs. For example, if the distribution is narrow and near 0, that means our uncertain input parameters mostly produce results similar to the reference ranking.
If the distribution is wide and far from 0, that means our uncertain input parameters produce results that are highly uncertain and not necessarily similar to the reference ranking.

### Spearman's footrule

What is this mysterious distance metric between rankings that enables the above approach? One such metric is called Spearman's footrule distance. It's defined as:

$$F(\sigma, \tau) = \sum_{i} |\sigma(i) - \tau(i)|$$

where:

- $\sigma$ and $\tau$ are rankings,
- $i$ varies over all the elements of the rankings and
- $\sigma(i)$ returns the integer position of item $i$ in ranking $\sigma$.

In other words, the footrule distance between two rankings is the sum over all items of the (absolute) difference in positions for each item. (We also add a normalization factor so that the distance ranges from 0 to 1 but omit that trivia here.) So the distance between A, B, C and A, B, C is 0; the (unnormalized) distance between A, B, C and C, B, A is 4; and the (unnormalized) distance between A, B, C and B, A, C is 2.

### Kendall's tau

Another common distance metric between rankings is Kendall's tau. It's defined as:

$$K(\sigma, \tau) = \sum_{\{i, j\} \in P} \bar{K}_{i,j}(\sigma, \tau)$$

where:

- $\sigma$ and $\tau$ are again rankings,
- $i$ and $j$ are items in $P$, the set of unordered pairs of distinct elements in $\sigma$ and $\tau$ and
- $\bar{K}_{i,j}(\sigma, \tau) = 0$ if $i$ and $j$ are in the same order (concordant) in $\sigma$ and $\tau$ and $\bar{K}_{i,j}(\sigma, \tau) = 1$ otherwise (discordant).

In other words, the Kendall tau distance looks at all possible pairs across items in the rankings and counts up the ones where the two rankings disagree on the ordering of these items. (There's also a normalization factor that we've again omitted so that the distance ranges from 0 to 1.) So the distance between A, B, C and A, B, C is 0; the (unnormalized) distance between A, B, C and C, B, A is 3; and the (unnormalized) distance between A, B, C and B, A, C is 1.

### Angular distance

One drawback of the above metrics is that they throw away information in going from the table with cost-effectiveness estimates to a simple ranking.
What would be ideal is to keep that information and find some other distance metric that still emphasizes the relationship between the various numbers rather than their precise values. Angular distance is a metric which satisfies these criteria. We can regard the table of charities and cost-effectiveness values as an 8-dimensional vector. When our output produces another vector of cost-effectiveness estimates (one for each charity), we can compare this to our reference vector by finding the angle between the two[7].

## Results

### Uncertainties

To recap, what we're about to see next is the result of running our model many times with different sampled input values. In each run, we compute the cost-effectiveness estimates for each charity and compare those estimates to the reference ranking (GiveWell's best estimate) using each of the tau, footrule and angular distance metrics. Again, the plots below are from running the analysis while pretending that we're equally uncertain about each input parameter. To avoid this limitation, go to the Jupyter notebook and adjust the input distributions.

Probability distributions of value per dollar for each of GiveWell's top charities and probability distributions for the distance between model results and the reference results

We see that our input uncertainty does matter even for these highest-level results—there are some input values which cause the ordering of best charities to change. If the gaps between the cost-effectiveness estimates had been very large or our input uncertainty had been very small, we would have expected essentially all of the probability mass to be concentrated at 0 because no change in inputs would have been enough to meaningfully change the relative cost-effectiveness of the charities.

### Visual sensitivity analysis

We can now repeat our visual sensitivity analysis but using our distance metrics from the reference as our outcome of interest instead of individual cost-effectiveness estimates.
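As an aside, all three distance computations described above are short enough to sketch directly. This is a toy illustration of the general definitions (omitting the normalization factors), not the notebook's actual code:

```python
import math
from itertools import combinations

def footrule(a, b):
    # Sum over items of the absolute difference in positions.
    return sum(abs(a.index(x) - b.index(x)) for x in a)

def kendall_tau(a, b):
    # Count pairs whose relative order differs between the two rankings.
    return sum(1 for x, y in combinations(a, 2)
               if (a.index(x) - a.index(y)) * (b.index(x) - b.index(y)) < 0)

def angular(u, v):
    # Angle between two cost-effectiveness vectors, in radians.
    dot = sum(x * y for x, y in zip(u, v))
    norms = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return math.acos(min(1.0, max(-1.0, dot / norms)))

# The worked examples from the text (unnormalized):
print(footrule(["A", "B", "C"], ["C", "B", "A"]))     # 4
print(kendall_tau(["A", "B", "C"], ["C", "B", "A"]))  # 3

# Doubling every estimate leaves the angle at ~0, the scale-invariance
# noted in footnote 7:
print(angular([38, 222, 738], [76, 444, 1476]))
```

Dividing the footrule result by its maximum possible value and the tau result by n(n-1)/2 gives the normalized 0-to-1 versions mentioned above.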
What these plots show is how sensitive the relative cost-effectiveness of the different charities is to each of the input parameters used in any of the cost-effectiveness models (so, yes, there are a lot of parameters/plots). We have three big plots, one for each distance metric—footrule, tau and angle. In each plot, there's a subplot corresponding to each input factor used anywhere in GiveWell's cost-effectiveness analysis.

Scatter plots showing sensitivity of the footrule distance with respect to each input parameter

Scatter plots showing sensitivity of the tau distance with respect to each input parameter

Scatter plots showing sensitivity of the angular distance with respect to each input parameter

(The banding in the tau and footrule plots is just an artifact of those distance metrics returning integers (before normalization) rather than reals.)

These results might be a bit surprising at first. Why are there so many charity-specific factors with apparently high sensitivity indicators? Shouldn't input parameters which affect all models have the biggest influence on the overall result? Also, why do so few of the factors that showed up as most influential in the charity-specific sensitivity analyses from last section make it to the top?

However, after reflecting for a bit, this makes sense. Because we're interested in the relative performance of the charities, any factor which affects them all equally is of little importance here. Instead, we want factors that have a strong influence on only a few charities. When we go back to the earlier charity-by-charity sensitivity analysis, we see that many of the input parameters we identified as most influential were shared across charities (especially across the deworming charities). Non-shared factors that made it to the top of the charity-by-charity lists—like the relative risk of all-cause mortality for young children in VAS programs—show up somewhat high here too.
But it's hard to eyeball the sensitivity when there are so many factors and most are of small effect. So let's quickly move on to the delta analysis.

### Delta moment-independent sensitivity analysis

Again, we'll have three big plots, one for each distance metric—footrule, tau and angle. In each plot, there's an estimate of the delta moment-independent sensitivity for each input factor used anywhere in GiveWell's cost-effectiveness analysis (and an indication of how confident that sensitivity estimate is).

Delta sensitivities for each input parameter in the footrule distance analysis

Delta sensitivities for each input parameter in the tau distance analysis

Delta sensitivities for each input parameter in the angular distance analysis

So these delta sensitivities corroborate the suspicion that arose during the visual sensitivity analysis—charity-specific input parameters have the highest sensitivity indicators. The other noteworthy result is that which charity-specific factors are the most influential depends somewhat on which distance metric we use. The two rank-based metrics—tau and footrule distance—both suggest that the final charity ranking (given these inputs) is most sensitive to the worm intensity adjustment and cost per capita per annum of Sightsavers and the END Fund. These input parameters are a bit further down (though still fairly high) in the list according to the angular distance metric.

#### Needs more meta

It would be nice to check that our distance metrics don't produce totally contradictory results. How can we accomplish this? Well, the plots above already order the input factors according to their sensitivity indicators... That means we have rankings of the sensitivities of the input factors and we can compare the rankings using Kendall's tau and Spearman's footrule distance.
If that sounds confusing, hopefully the table clears things up:

Using Kendall's tau and Spearman's footrule distance to assess the similarity of sensitivity rankings generated under different distance metrics

| Delta sensitivity rankings compared | Tau distance | Footrule distance |
| --- | --- | --- |
| Tau and footrule | 0.358 | 0.469 |
| Tau and angle | 0.365 | 0.516 |
| Angle and footrule | 0.430 | 0.596 |

So it looks like the three rankings have middling agreement. Sensitivities according to tau and footrule agree the most while sensitivities according to angle and footrule agree the least. The disagreement probably also reflects random noise since the confidence intervals for many of the variables' sensitivity indicators overlap. We could presumably shrink these confidence intervals and reduce the noise by increasing the number of samples used during our analysis.

To the extent that the disagreement isn't just noise, it's not entirely surprising—part of the point of using different distance metrics is to capture different notions of distance, each of which might be more or less suitable for a given purpose. But the divergence does mean that we'll need to carefully pick which metric to pay attention to depending on the precise questions we're trying to answer. For example, if we just want to pick the single top charity and donate all our money to that, factors with high sensitivity indicators according to footrule distance might be the most important to pin down. On the other hand, if we want to distribute our money in proportion to each charity's estimated cost-effectiveness, angular distance is perhaps a better metric to guide our investigations.

## Section recap

We started with a couple of problems with our previous analysis: we were taking cost-effectiveness estimates literally and looking at them independently instead of as parts of a cohesive analysis. We addressed these problems by redoing our analysis while looking at distance from the current best cost-effectiveness estimates.
We found that our input uncertainty is consequential even when looking only at the relative cost-effectiveness of the charities. We also found that input parameters which are important but unique to a particular charity often affect the final relative cost-effectiveness substantially.

Finally, we have the same caveat as last time: these results still reflect my fairly arbitrary (but scrupulously neutral) decision to pretend that we are equally uncertain about each input parameter. To remedy this flaw and get results which are actually meaningful, head over to the Jupyter notebook and tweak the input distributions.

# Recap

GiveWell models the cost-effectiveness of its top charities with point estimates in a spreadsheet. We insisted that working with probability distributions instead of point estimates more fully reflects our state of knowledge. By performing uncertainty analysis, we got a better sense of how uncertain the results are (e.g. GiveDirectly is the most certain given our inputs). After uncertainty analysis, we proceeded to sensitivity analysis and found that indeed there were some input parameters that were more influential than others. The most influential parameters are likely targets for further investigation and refinement.

The final step we took was to combine the individual charity cost-effectiveness estimates into one giant model. By looking at how far (using three different distance metrics) these results deviated from the current overall cost-effectiveness analysis, we accomplished two things. First, we confirmed that our input uncertainty is indeed consequential—there are some plausible input values which might reorder the top charities in terms of cost-effectiveness. Second, we identified which input parameters (given our uncertainty) have the highest sensitivity indicators and therefore are the best targets for further scrutiny.
We also found that this final sensitivity analysis was fairly sensitive to which distance metric we use so it's important to pick a distance metric tailored to the question of interest.

Finally, throughout, I reminded you that this is more of a template for an analysis than an actual analysis because we pretended to be equally uncertain about each input parameter. To get a more useful analysis, you'll have to edit the input uncertainty to reflect your actual beliefs and run the Jupyter notebook.

# Appendix

## Sobol indices for per-charity cost-effectiveness

I also did a variance-based sensitivity analysis with Sobol indices. Those plots follow. The variable order in each plot is from the input parameter with the highest sensitivity to the input parameter with the lowest sensitivity. That makes it straightforward to compare the ordering of sensitivities according to the delta moment-independent method and according to the Sobol method. We see that there is broad—but not perfect—agreement between the methods.
Sobol sensitivities for each input parameter in the GiveDirectly cost-effectiveness calculation

Sobol sensitivities for each input parameter in the END Fund cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Deworm the World cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Schistosomiasis Control Initiative cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Sightsavers cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Malaria Consortium cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Helen Keller International cost-effectiveness calculation

Sobol sensitivities for each input parameter in the Against Malaria Foundation cost-effectiveness calculation

## Sobol indices for relative cost-effectiveness of charities

The variable order in each plot is from the input parameter with the highest sensitivity to the input parameter with the lowest sensitivity. That makes it straightforward to compare the ordering of sensitivities according to the delta moment-independent method and according to the Sobol method. We see that there is broad—but not perfect—agreement between the different methods.

Sobol sensitivities for each input parameter in the footrule distance analysis

Sobol sensitivities for each input parameter in the tau distance analysis

Sobol sensitivities for each input parameter in the angular distance analysis

1. Unfortunately, the code implements the 2019 V4 cost-effectiveness analysis instead of the most recent V5 because I just worked off the V4 tab I'd had lurking in my browser for months and didn't think to check for a new version until too late. I also deviated from the spreadsheet in one place because I think there's an error (Update: The error will be fixed in GiveWell's next publicly-released version). ↩︎
2. Log-normal strikes me as a reasonable default distribution for this task: because its support is (0, +∞), which fits many of our parameters well (they're all positive but some are actually bounded above by 1); and because "A log-normal process is the statistical realization of the multiplicative product of many independent random variables", which also seems reasonable here. ↩︎
3. When you follow the link, you should see a Jupyter notebook with three "cells". The first is a preamble setting things up. The second has all the parameters with lower and upper bounds. This is the part you want to edit. Once you've edited it, find and click "Runtime > Run all" in the menu. You should eventually see the notebook produce a series of plots. ↩︎
4. This is, in fact, approximately what Monte Carlo methods do, so this is a very convenient analysis to run. ↩︎
5. I swear I didn't cheat by just picking the results on the scatter plot that match the delta sensitivities! ↩︎
6. If we just look at the probability for each possible ranking independently, we'll be overwhelmed by the number of permutations and it will be hard to find any useful structure in our results. ↩︎
7. The angle between the vectors is a better metric here than the distance between the vectors' endpoints because we're interested in the relative cost-effectiveness of the charities and how those change. If our results show that each charity is twice as effective as in the reference vector, our metric should return a distance of 0 because nothing has changed in the relative cost-effectiveness of each charity. ↩︎

# Comments

I'd find it useful if you could summarize your main takeaways from the analysis in nontechnical language.

You looked at the overall recap and saw the takeaways there? e.g.
Sensitivity analysis indicates that some inputs are substantially more influential than others, and there are some plausible values of inputs which would reorder the ranking of top charities.

These are sort of meta-conclusions though and I'm guessing you're hoping for more direct conclusions. That's sort of hard to do. As I mention in several places, the analysis depends on the uncertainty you feed into it. To maintain "neutrality", I just pretended to be equally uncertain about each input. But, given this, any simple conclusions like "The AMF cost-effectiveness estimates have the most uncertainty." or "The relative cost-effectiveness is most sensitive to the discount rate." would be misleading at best. The only way to get simple conclusions like that is to feed input parameters you actually believe in to the linked Jupyter notebook. Or I could put in my best guesses as to inputs and draw simple conclusions from that. But then you'd be learning about me as much as you'd be learning about the world as you see it. Does that all make sense? Is there another kind of takeaway that you're imagining?

Despite your reservations, I think it would actually be very useful for you to input your best guess inputs (and it's likely to be more useful for you to do it than an average EA, given you've thought about this more). My thinking is this. I'm not sure I entirely followed the argument, but I took it that the thrust of what you're saying is "we should do uncertainty analysis (use Monte Carlo simulations instead of point estimates) as our cost-effectiveness might be sensitive to it". But you haven't shown that GiveWell's estimates are sensitive to a reliance on point estimates (have you?), so you haven't (yet) demonstrated it's worth doing the uncertainty analysis you propose after all. :) More generally, if someone says "here's a new, really complicated methodology we *could* use", I think it's incumbent on them to show that we *should* use it, given the extra effort involved.
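One way to see why reliance on point estimates can matter in principle: for any non-linear model, the model evaluated at the mean inputs differs from the mean of the model over sampled inputs (Jensen's inequality). A toy sketch with assumed numbers, unrelated to the actual GiveWell models:

```python
import random

random.seed(0)

# Toy non-linear "model": value per dollar = 1 / cost, with cost uncertain.
# The log-normal here is an assumed stand-in, not a GiveWell input.
costs = [random.lognormvariate(0, 0.5) for _ in range(100_000)]

mean_cost = sum(costs) / len(costs)
value_at_mean = 1 / mean_cost                        # point-estimate answer
mean_value = sum(1 / c for c in costs) / len(costs)  # Monte Carlo answer

# Because 1/x is convex, the Monte Carlo mean exceeds the point estimate:
print(value_at_mean, mean_value)
```

For a linear model the two would coincide exactly (linearity of expectation), so the question is whether GiveWell's models are non-linear enough, and the inputs uncertain enough, for the gap to matter.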
Thanks for your thoughts.

> the thrust of what you're saying is "we should do uncertainty analysis (use Monte Carlo simulations instead of point estimates) as our cost-effectiveness might be sensitive to it"

Yup, this is the thrust of it.

> But you haven't shown that GiveWell's estimates are sensitive to a reliance on point estimates (have you?)

I think I have---conditionally. The uncertainty analysis shows that, if you think the neutral uncertainty I use as input is an acceptable approximation, substantially different rankings are within the bounds of plausibility. If I put in my own best estimates, the conclusion would still be conditional. It's just that instead of being conditional upon "if you think the neutral uncertainty I use as input is an acceptable approximation" it's conditional upon "if you think my best estimates of the uncertainty are an acceptable approximation". So the summary point there is that there's really no way to escape conditional conclusions within a subjective Bayesian framework. Conclusions will always be of the form "Conclusion C is true if you accept prior beliefs B". This makes generic, public communication hard (as we're seeing!), but offers lots of benefits too (which I tried to demonstrate in the post---e.g. an explicit quantification of uncertainty, a sense of which inputs are most influential).

> here's a new, really complicated methodology we could use

If I've given the impression that it's really complicated, I think I might have misled. One of the things I really like about the approach is that you pay a relatively modest fixed cost and then you get this kind of analysis "for free". By which I mean the complexity doesn't infect all your actual modeling code. For example, the GiveDirectly model here actually reads more clearly to me than the corresponding spreadsheet because I'm not constantly jumping around trying to figure out what the cell reference (e.g. B23) means in formulas.
Admittedly, some of the stuff about delta moment-independent sensitivity analysis and different distance metrics is a bit more complicated. But the distance metric stuff is specific to this particular problem---not the methodology in general---and the sensitivity analysis can largely be treated as a black box. As long as you understand what the properties of the resulting number are (e.g. ranges from 0-1, 0 means independence), the internal workings aren't crucial.

> I think it would actually be very useful for you to input your best guess inputs (and it's likely to be more useful for you to do it than an average EA, given you've thought about this more)

Given the responses here, I think I will go ahead and try that approach. Though I guess even better would be getting GiveWell's uncertainty on all the inputs (rather than just the inputs highlighted in the "User weights" and "Moral inputs" tabs). Sorry for adding even more text to what's already a lot of text :). Hope that helps.

Did you ever get round to running the analysis with your best guess inputs? If that revealed substantial decision uncertainty (and especially if you were very uncertain about your inputs), I'd also like to see it run with GiveWell's inputs. They could be aggregated distributions from multiple staff members, elicited using standard methods, or in some cases perhaps 'official' GiveWell consensus distributions. I'm kind of surprised this doesn't seem to have been done already, given obvious issues with using point estimates in non-linear models. Or do you have reason to believe the ranking and cost-effectiveness ratios would not be sensitive to methodological changes like this? This approach seems to be being neglected by GiveWell, and not taken up by others in this space. (I don't have time to write a full review).

Thanks for this (somewhat overwhelming!) analysis.
I tried to do something similar a few years back, and am pretty enthusiastic about the idea of incorporating more uncertainty analysis into cost effectiveness estimates, generally. One thing (that I don't think you mentioned, though I'm still working through the whole post) this allows you to do is use techniques from Modern Portfolio Theory to create giving portfolios with similar altruistic returns and lower downside risk. I'd be curious to see if your analysis could be used in a similar way.

Oh, very cool! I like the idea of sampling from different GiveWell staffers' values (though I couldn't do that here since I regarded essentially all input parameters as uncertain instead of just the highlighted ones). I hadn't thought about the MPT connection. I'll think about that more.

@cole_haus: I really like this approach. One thing that is not clear to me:

- Do you work at or with GiveWell?
- Did you share or discuss this work with anyone at GiveWell?

I think it's something they should be attuned to, and I'd like to see them go more in the direction of open, transparent, and cleanly-coded models. By the way, I added a few comments and suggestions here and in your blog using hypothes.is, a little browser plugin. I like to do that as you can add comments (even small ones) as highlights/annotations directly in the text.

Further opinions/endorsement... I think "this approach" (making uncertainty explicit) is important, necessary, and correct... I'd pair it with "letting the user specify parameters/distributions over moral uncertainty things" (and perhaps even subjective beliefs about different types of evidence). I think (epistemic basis -- mostly gut feeling) it will likely make a difference in how charities and interventions rank against each other. At first pass, it may lead to 'basically the same ranking' (or at least, not a strong change).
But I suspect that if it is made part of a longer-term careful practice, some things will switch order, and this is meaningful. It will also enable evaluation of a wider set of charities/interventions. If we make uncertainty explicit, we can feel more comfortable evaluating cases where there is much less empirical evidence. So I think 'some organization' should be doing this, and I expect this will happen soon; whether that is GiveWell doing it or someone else.

@cole_haus I'm working on a new modeling tool - https://causal.app. If you think it's useful I could try to replicate your analysis with Causal. The advantage would be that the output is an interactive web site where users can play with the assumptions (e.g. plug in different distributions). If there's another analysis that you think might be a better fit I could also build that. I just think that Causal could be useful for the EA community :)

How did this/how is this going? I'm chatting with Taimur of Causal tomorrow and I wanted to bring this up.

How did the chat go? I wonder if porting GiveWell cost-effectiveness models to causal.app might make them more understandable.

He was very positive about it and willing to move forward on it. I didn't/don't have all the bandwidth to follow up as much as I'd like to, but maybe someone else could do. (And I'd hope to turn back to this at some point.) I think this could be done in addition to and in complement to HazelFire's work. Note that the HazelFire effort is using the Squiggle language. I've been following up and encouraging them as well. I hope that we can find a way to leverage the best features of each of these tools, and also bring in domain knowledge.

The link has an extra '.' - https://www.causal.app/

Looks neat, good luck!

Do the expected values of the output probability distributions equal the point estimates that GiveWell gets from their non-probabilistic estimates? If not, how different are they?
More generally, are there any good write-ups about when and how the expected value of a model with multiple random variables differs from the same model filled out with the expected value of each of its random variables? (I didn't find the answer skimming through, but it might be there already--sorry!)

Short version:

> Do the expected values of the output probability distributions equal the point estimates that GiveWell gets from their non-probabilistic estimates?

No, but they're close.

> More generally, are there any good write-ups about when and how the expected value of a model with multiple random variables differs from the same model filled out with the expected value of each of its random variables?

Don't know of any write-ups unfortunately, but the linearity of expectation means that the two are equal if and (generally?) only if the model is linear.

Long version: When I run the Python versions of the models with point estimates, I get:

| Charity | Value/$ |
| --- | --- |
| GiveDirectly | 0.0038 |
| END | 0.0211 |
| DTW | 0.0733 |
| SCI | 0.0370 |
| Sightsavers | 0.0394 |
| Malaria Consortium | 0.0316 |
| HKI | 0.0219 |
| AMF | 0.0240 |

The (mostly minor) deviations from the official GiveWell numbers are due to:

1. Different handling of floating point numbers between Google Sheets and Python
2. Rounded/truncated inputs
3. A couple of models calculated the net present value of an annuity based on payments at the end of each period instead of the beginning--I never got around to implementing this
4. Unknown errors

When I calculate the expected values of the probability distributions given the uniform input uncertainty, I get:

| Charity | Value/$ |
| --- | --- |
| GiveDirectly | 0.0038 |
| END | 0.0204 |
| DTW | 0.0715 |
| SCI | 0.0354 |
| Sightsavers | 0.0383 |
| Malaria Consortium | 0.0300 |
| HKI | 0.0230 |
| AMF | 0.0231 |

I would generally call these values pretty close. It's worth noting though that the procedure I used to add uncertainty to inputs doesn't produce input distributions that have the original point estimate as their expected value.
By creating a 90% CI at ±20% of the original value, the CI is centered around the point estimate but since log-normal distributions aren't symmetric, the expected value is not precisely at the point estimate. That explains some of the discrepancy. The rest of the discrepancy is presumably from the non-linearity of the models (e.g. there are some logarithms in the models). In general, the linearity of expectation means that the expected value of a linear model of multiple random variables is exactly equal to the linear model of the expected values. For non-linear models, no such rule holds. (The relatively modest discrepancy between the point estimates and the expected values suggests that the models are "mostly" linear.)

Fabulous! This is extremely good to know and it's also quite a relief!

Yes, how does the posterior mode differ from GiveWell's point estimates, and how does this vary as a function of the input uncertainty (confidence interval length)?

Thanks for putting this together, it's really interesting! Based on this analysis, it seems the worm wars may have been warranted after all. I worked on a related project a few years ago, but I was mainly looking for evidence of a "return" to altruistic risk taking. I had a hard time finding impact estimates that quantified their uncertainty, but eventually found a few sources that might interest you. I listed all the standalone sources here, then tried to combine them in a meta-analysis here. I don't have access to most of the underlying models though, so I don't think it's possible to incorporate the results into your sensitivity analysis. I also don't have much of a background in statistics so take the results with a grain of salt.

Some quick points:

> we see how the output depends on a particular input even in the face of variations in all the other inputs—we don't hold everything else constant. In other words, this is a global sensitivity analysis.

- I'm a bit confused.
In the GiveDirectly case for 'value of increasing consumption', you're still holding the discount rate constant, right?

- To address the recurring caveat, I wonder if we could plot the posterior mode/stdev against the input confidence interval length. Basically, taking GiveWell's point estimate as the prior mean, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters.

More to come!

> I'm a bit confused. In the GiveDirectly case for 'value of increasing consumption', you're still holding the discount rate constant, right?

Nope, it varies. One way you can check this intuitively is: if the discount rate and all other parameters were held constant, we'd have a proper function and our scatter plot would show at most one output value for each input.

> taking GiveWell's point estimate as the prior mean, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters.

There are (at least) two versions I can think of:

1. Adjust all the input uncertainties in concert. That is, spread all the point estimates by ±20% or all by ±30%, etc. This would be computationally tractable, but I'm not sure it would get us too much extra. I think the key problem with the current approach which would remain is that we're radically more uncertain about some of the inputs than the others.
2. Adjust all the input uncertainties individually. That is, spread point estimate 1 by ±20%, point estimate 2 by ±10%, etc. Then, spread point estimate 1 by ±10%, spread point estimate 2 by ±20%, etc. Repeat for all combinations of spreads and inputs. This would actually give us somewhat useful information, but would be computationally intractable given the number of input parameters.
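To make the first option concrete, here is one way to spread a point estimate by a given percentage, fitting a log-normal whose 90% CI matches the spread (a sketch of the general technique, not the notebook's actual code):

```python
import math

Z90 = 1.6448536269514722  # z-score bounding a central 90% interval

def lognormal_from_ci(point, spread=0.20):
    # Fit (mu, sigma) of a log-normal whose 90% CI runs from
    # point * (1 - spread) to point * (1 + spread).
    lo, hi = point * (1 - spread), point * (1 + spread)
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * Z90)
    return mu, sigma

mu, sigma = lognormal_from_ci(100.0)
mean = math.exp(mu + sigma ** 2 / 2)
# The fitted mean lands near, but not exactly on, the point estimate,
# which is the asymmetry discussed earlier in the thread.
print(mean)
```

Sweeping `spread` over, say, 0.1 to 0.5 and re-running the analysis would trace out how the results depend on the overall level of input uncertainty.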
But I came across one question, What is the remainder of the division of 14414*14416*14418 by 14? Is there any easy way to solve this? Thank you all in advance and sorry if I ask a very silly question. even if you don't know modular arithmetic you can do this: $14414=14k+8, \, 14416=14l+10, \, 14418=14m+12 \Rightarrow 14414 \cdot 14416 \cdot 14418=(14k+8)(14l+10)(14m+12)$. Expand this thing to get $14414 \cdot 14416 \cdot 14418=14p+8 \cdot 10 \cdot 12=14q+8$, where $p,q$ are integers. 8. Thanks dude. I thought the answer would be 3. In the page I was reading(this) the author was describing about congruency for addition and multiplication. I thought for division 0/0≡1mod2 0/1≡0mod2 1/0≡0mod2 1/1≡1mod2 Since even*even*even/even is even/even and 0/0≡1mod2, I thought answer is 3. Since the options given were 8 3 12 10 6 Since 3 is the only odd number I thought 3 would be the right answer. How ever there is little confusion in your post. How did you exactly said that 8 is the right answer. It can be either 10 or 12. Could you please explain. 9. Originally Posted by sorv1986 14, it leaves remainder as 12. so we get 14414*14416*14418≡ 8*10*12≡80*12≡10*12≡8(mod 14) Hi Sorv1986, how did you say that 8 is the right answer. Could you please explain 10. Originally Posted by krishna3264 Thanks dude. I thought the answer would be 3. In the page I was reading(this) the author was describing about congruency for addition and multiplication. I thought for division 0/0≡1mod2 i don't what do you mean by this. 0/0???? 0/1≡0mod2 1/0≡0mod2 1/1≡1mod2 Since even*even*even/even is even/even and 0/0≡1mod2, I thought answer is 3. Since the options given were 8 3 12 10 6 Since 3 is the only odd number I thought 3 would be the right answer. How ever there is little confusion in your post. How did you exactly said that 8 is the right answer. It can be either 10 or 12. Could you please explain. ... 11. @abhishek, I just thought divisions will be represented like that. My guess was wrong.
All I want to know is, how you people are telling 8 is the right answer? 12. Originally Posted by krishna3264 @abhishek, I just thought divisions will be represented like that. My guess was wrong. All I want to know is, how you people are telling 8 is the right answer? i have solved it in post #7. what step is not clear to you? 13. Is modular arithmetic just a way of finding remainders? 14. It's actually quite a bit more than that, but that is something that it can be used for. 15. Originally Posted by krishna3264 Hi Sorv1986, how did you say that 8 is the right answer. Could you please explain a≡b(mod n) if and only if n divides (b-a). so when 80 is divided by 14, the remainder will be 10 (as 80 = 14×5 + 10). similarly the rests. would like to know which part of the solution is foggy?
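The whole thread boils down to a two-line check: reducing each factor mod 14 before multiplying (the approach in posts #6 and #7) agrees with the brute-force product.

```python
def product_mod(factors, m):
    """Multiply a list of integers mod m, reducing after every step."""
    result = 1
    for f in factors:
        # 14414 % 14 == 8, 14416 % 14 == 10, 14418 % 14 == 12
        result = (result * (f % m)) % m
    return result

brute = (14414 * 14416 * 14418) % 14
clever = product_mod([14414, 14416, 14418], 14)
print(brute, clever)  # both are 8
```

The second form is the one that scales: the intermediate values never exceed m², no matter how large the factors are.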
1,403
4,928
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.953125
4
CC-MAIN-2017-22
longest
en
0.952665
https://ktbssolutions.com/kseeb-solutions-for-class-8-maths-chapter-2-ex-2-2/
1,656,119,904,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103033925.2/warc/CC-MAIN-20220625004242-20220625034242-00149.warc.gz
396,592,224
13,213
# KSEEB Solutions for Class 8 Maths Chapter 2 Linear Equations in One Variable Ex 2.2 You can download KSEEB Solutions for Class 8 Maths Chapter 2 Linear Equations in One Variable Ex 2.2 Questions and Answers, which help you revise the complete syllabus. ## KSEEB Solutions for Class 8 Maths Chapter 2 Linear Equations in One Variable Ex 2.2 Question 1. If you subtract $$\frac{1}{2}$$ from a number and multiply the result by $$\frac{1}{2}$$, you get $$\frac{1}{8}$$. What is the number? Solution: Let the required number = x Subtracting $$\frac{1}{2}$$ from the number, we get = (x – $$\frac{1}{2}$$) According to the question, $$\frac{1}{2}$$(x – $$\frac{1}{2}$$) = $$\frac{1}{8}$$ ⇒ x – $$\frac{1}{2}$$ = $$\frac{1}{4}$$ ⇒ x = $$\frac{1}{4}$$ + $$\frac{1}{2}$$ = $$\frac{3}{4}$$ The required number is $$\frac{3}{4}$$. Question 2. The perimeter of a rectangular swimming pool is 154 m. Its length is 2 m more than twice its breadth. What are the length and the breadth of the pool? Solution: Let the breadth of swimming pool = x mts Length of pool = (2x + 2) mts Perimeter of rectangle = 2(l + b) = 2 [(2x + 2) + x] = 2 [2x + 2 + x] = 2[3x + 2] mts Perimeter of rectangular swimming pool = 154 ⇒ 2 (3x + 2) = 154 ⇒ 3x + 2 = $$\frac{154}{2}$$ ⇒ 3x + 2 = 77 ⇒ 3x = 77 – 2 ⇒ 3x = 75 ⇒ x = 25 Hence, Breadth = 25 m Length = 2 × 25 + 2 = 50 + 2 = 52 m Question 3. The base of an isosceles triangle is $$\frac{4}{3}$$ cm. The perimeter of the triangle is 4$$\frac{2}{15}$$ cm. What is the length of either of the remaining equal sides? Solution: Let the equal sides of an isosceles triangle be AB = AC Let equal sides = x cm Base BC = $$\frac{4}{3}$$ cm Perimeter of ΔABC = AB + BC + AC = x + $$\frac{4}{3}$$ + x = (2x + $$\frac{4}{3}$$) cm But Perimeter of ΔABC = 4$$\frac{2}{15}$$ = $$\frac{62}{15}$$ cm ⇒ 2x + $$\frac{4}{3}$$ = $$\frac{62}{15}$$ ⇒ 2x = $$\frac{62}{15}$$ – $$\frac{20}{15}$$ = $$\frac{42}{15}$$ = $$\frac{14}{5}$$ ⇒ x = $$\frac{7}{5}$$ Hence, each equal side is $$\frac{7}{5}$$ cm Question 4. Sum of two numbers is 95. If one exceeds the other by 15, find the numbers. Solution: Let one number = x 2nd number = (x + 15) According to question, x + (x + 15) = 95 ⇒ x + x + 15 = 95 ⇒ 2x + 15 = 95 ⇒ 2x = 95 – 15 ⇒ 2x = 80 ⇒ x = 40 (x + 15) = 40 + 15 = 55 Hence, numbers are 40, 55.
Question 5. Two numbers are in the ratio 5 : 3. If they differ by 18, what are the numbers? Solution: Let the ratio be x The numbers be 5x and 3x 5x – 3x = 18 ⇒ 2x = 18 ⇒ x = 9 Numbers are 5x = 5 × 9 = 45 3x = 3 × 9 = 27 Question 6. Three consecutive integers add up to 51. What are these integers? Solution: Let first integer = x Second integer = (x + 1) Third integer = (x + 2) According to question, x + x + 1 + x + 2 = 51 ⇒ 3x + 3 = 51 ⇒ 3x = 51 – 3 ⇒ 3x = 48 ⇒ x = 16 First integer = 16 Second integer = 16 + 1 = 17 Third integer = 16 + 2 = 18 Question 7. The sum of three consecutive multiples of 8 is 888. Find the multiples. Solution: Let First multiple of 8 be = 8x Second multiple be = (8x + 8) Third multiple be = 8x + 8 + 8 = 8x + 16 According to question, 8x + 8x + 8 + 8x + 16 = 888 24x + 24 = 888 24x = 888 – 24 24x = 864 x = 36 8x = 8 × 36 = 288 8x + 8 = 288 + 8 = 296 8x + 16 = 8 × 36 + 16 = 288 + 16 = 304 The numbers are 288, 296, 304. Question 8. Three consecutive integers are such that when they are taken in increasing order and multiplied by 2, 3, and 4, respectively, they add up to 74. Find these numbers. Solution: Let First integer = x Second integer = (x + 1) Third integer = (x + 2) Since, first integer is multiple of 2 ∴ 2 is factor of x Hence, number = 2x Second number = 3(x + 1) Third number = 4(x + 2) According to question, 2x + 3(x + 1) + 4(x + 2) = 74 ⇒ 2x + 3x + 3 + 4x + 8 = 74 ⇒ 9x + 11 = 74 ⇒ 9x = 74 – 11 ⇒ 9x = 63 ⇒ x = 7 First integer = 7 Second integer = (7 + 1) = 8 Third integer = (7 + 2) = 9 Question 9. The ages of Rahul and Haroon are in the ratio 5 : 7. Four years later the sum of their ages will be 56 years. What are their present ages? 
Solution: Let the ratio be x Rahul’s present age = 5x yrs Haroon’s present age = 7x yrs After 4 years, Rahul’s age = (5x + 4) yrs Haroon’s age = (7x + 4) yrs According to question, (5x + 4) + (7x + 4) = 56 ⇒ 5x + 4 + 7x + 4 = 56 ⇒ 12x + 8 = 56 ⇒ 12x = 56 – 8 ⇒ 12x = 48 ⇒ x = 4 Rahul’s present age = 5 × 4 = 20 years Haroon’s present age = 7 × 4 = 28 years Question 10. The number of boys and girls in a class is in the ratio 7 : 5. The number of boys is 8 more than the number of girls. What is the total class strength? Solution: Let the ratio be = x The number of boys = 7x Number of girls = 5x According to question, 7x – 5x = 8 ⇒ 2x = 8 ⇒ x = 4 Number of boys = 7 × 4 = 28 Number of girls = 5 × 4 = 20 Total strength of class = 28 + 20 = 48 Question 11. Baichung’s father is 26 years younger than Baichung’s grandfather and 29 years older than Baichung. The sum of the ages of all three is 135 years. What is the age of each one of them? Solution: Let age of Baichung = x years Baichung’s father age = (x + 29) years Baichung’s grandfather’s age = (x + 29 + 26) = (x + 55) years According to question, x + x + 29 + x + 55 = 135 ⇒ 3x + 84 = 135 ⇒ 3x = 135 – 84 ⇒ 3x = 51 ⇒ x = 17 Baichung’s age = 17 years Father’s age = 17 + 29 = 46 years Grandfather’s age = 17 + 55 = 72 years Question 12. Fifteen years from now Ravi’s age will be four times his present age. What is Ravi’s present age? Solution: Let present age of Ravi = x years 15 years later Ravi’s age = (x + 15) years According to question, 4x = (x + 15) ⇒ 4x – x = 15 ⇒ 3x = 15 ⇒ x = 5 Ravi’s age = 5 years. Question 13. A rational number is such that when you multiply it by $$\frac{5}{2}$$ and add $$\frac{2}{3}$$ to the product, you get $$-\frac{7}{12}$$. What is the number? Solution: Let the number be x When x is multiplied by $$\frac{5}{2}$$ = $$\frac{5}{2} x$$ According to question, $$\frac{5}{2} x$$ + $$\frac{2}{3}$$ = $$-\frac{7}{12}$$ ⇒ $$\frac{5}{2} x$$ = $$-\frac{7}{12}$$ – $$\frac{2}{3}$$ = $$-\frac{15}{12}$$ = $$-\frac{5}{4}$$ ⇒ x = $$-\frac{5}{4}$$ × $$\frac{2}{5}$$ = $$\frac{-1}{2}$$ Rational number = $$\frac{-1}{2}$$ Question 14. Lakshmi is a cashier in a bank. She has currency notes of denominations Rs. 100, Rs. 50 and Rs.
10, respectively. The ratio of the number of these notes is 2 : 3 : 5. The total cash with Lakshmi is Rs. 4,00,000. How many notes of each denomination does she have? Solution: Let the ratio = x Rs. 100, Rs. 50 and Rs. 10 ratio = 2 : 3 : 5 Number of Rs. 100 notes = 2x Number of Rs. 50 notes = 3x Number of Rs. 10 notes = 5x Value of Rs. 100 notes = 100 × 2x = 200x Value of Rs. 50 notes = 50 × 3x = 150x Value of Rs. 10 notes = 10 × 5x = 50x According to question, 200x + 150x + 50x = 4,00,000 ⇒ 400x = 4,00,000 ⇒ x = $$\frac{4,00,000}{400}$$ = 1000 Number of Rs. 100 notes = 1000 × 2 = 2000 Number of Rs. 50 notes = 1000 × 3 = 3000 Number of Rs. 10 notes = 1000 × 5 = 5000 Question 15. I have a total of Rs. 300 in coins of denomination Re. 1, Rs. 2, and Rs. 5. The number of Rs. 2 coins is 3 times the number of Rs. 5 coins. The total number of coins is 160. How many coins of each denomination are with me? Solution: Total number of coins = 160 Let Re. 1 coins = x Rs. 2 coins = y Rs. 5 coins = z Number of Rs. 2 coins = 3 × Number of Rs. 5 coins y = 3z According to question, x + y + z = 160 ⇒ x + 3z + z = 160 ⇒ x + 4z = 160 ………. (1) Value of Re. 1 coins = 1 × x = x Value of Rs. 2 coins = 2 × y = 2y Value of Rs. 5 coins = 5 × z = 5z x + 2y + 5z = 300 ⇒ x + 2 (3z) + 5z = 300 ⇒ x + 6z + 5z = 300 ⇒ x + 11z = 300 ⇒ x = (300 – 11z) ……….. (2) Substituting the value of x in equation 1 300 – 11z + 4z = 160 ⇒ 300 – 7z = 160 ⇒ 300 – 160 = 7z ⇒ 140 = 7z ⇒ z = 20 y = 3z = 3 × 20 = 60 x + y + z = 160 ⇒ x + 20 + 60 = 160 ⇒ x = 160 – 80 ⇒ x = 80 Hence, Re. 1 coins = 80 Rs. 2 coins = 60 Rs. 5 coins = 20 Question 16. The organisers of an essay competition decide that a winner in the competition gets a prize of Rs. 100 and a participant who does not win gets a prize of Rs. 25. The total prize money distributed is Rs. 3,000. Find the number of winners, if the total number of participants is 63.
Solution: Total number of participants = 63 Let number of winners = x Number who does not win = (63 – x) Prize for each winner = Rs. 100 Prize for non-winner = Rs. 25 According to question, 100x + 25 (63 – x) = 3000 ⇒ 100x + 1575 – 25x = 3000 ⇒ 75x + 1575 = 3000 ⇒ 75x = 3000 – 1575 ⇒ 75x = 1425 ⇒ x = 19 Number of winners = 19
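Several of these answers are easy to sanity-check by brute force. For Question 15, for instance, a quick search over all coin counts (a check on the worked solution, not part of the original text) recovers the unique answer:

```python
# Question 15: x coins of Re.1, y of Rs.2, z of Rs.5, with y = 3z,
# x + y + z = 160 coins in total and x*1 + y*2 + z*5 = Rs.300 in value.
solutions = [
    (x, 3 * z, z)
    for z in range(161)
    for x in range(161)
    if x + 3 * z + z == 160 and x + 2 * (3 * z) + 5 * z == 300
]
print(solutions)  # [(80, 60, 20)]
```

The single solution (80, 60, 20) matches the algebraic answer: 80 one-rupee, 60 two-rupee, and 20 five-rupee coins.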
3,166
7,920
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.71875
5
CC-MAIN-2022-27
latest
en
0.809632
http://mathforum.org/library/drmath/view/61214.html
1,498,319,420,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320264.42/warc/CC-MAIN-20170624152159-20170624172159-00533.warc.gz
251,960,955
3,673
Associated Topics || Dr. Math Home || Search Dr. Math ### Density of a Stone ```Date: 09/14/2002 at 10:54:02 From: Amanda Subject: Density Hello, Dr. Math. Can you please explain this question to me? A stone having a mass of 160 g displaces the level of water in a cylinder by 30 mL. Find the density of the stone. Thank you. ``` ``` Date: 09/14/2002 at 11:20:44 From: Doctor Ian Subject: Re: Density Hi Amanda, 'Density' is a little like speed, in that it relates two quantities. Suppose there are two cars, and one goes 2 miles in one minute, while the other goes only 1 mile. The speed of the first is 2 miles per minute, and the speed of the second is 1 mile per minute. The compound units 'miles per minute' are used to indicate a particular speed. Now think about two stones with exactly the same volume. (One way to measure the volume of something is to dunk it in water, and see how much of the water has to move out of the way to make room for it.) Let's say each stone displaces two liters of water. The first stone has a mass of 10 kg, and the other has a mass of 20 kg. The second one packs more mass into the same space. When that happens, we say that the second one is 'more dense', or has 'greater density' than the first, in the same way that we say that one of the cars is faster than the other. The units of density are 'mass per volume'. So we'd say the first stone has a density of 10 kg per 2 liters, while the second has a density of 20 kg per 2 liters. We would normally simplify that, 20 kg / 2 liters = (20/2) kg/liter = 10 kg/liter in the same way that we might simplify the calculation of a speed, 30 miles / 15 minutes = (30/15) miles/minute = 2 miles/minute Note that in both cases, you can use various units for the individual quantities. Speed might be measured in miles per minute, kilometers per second, centimeters per week, and so on. Similarly, density might be measured in grams per liter, kilograms per milliliter, ounces per gallon, and so on. 
There is another way that density is like speed. When you know a distance and a time, you can compute a speed: 10 miles / 2 hours = (10/2) miles/hour = 5 miles/hour But from a speed and a distance, you can compute a time: 10 miles / (5 miles/hour) = (10/5) hours = 2 hours And from a speed and a time, you can compute a distance: 2 hours * 5 miles/hour = (2*5) miles = 10 miles Similarly, if you have a mass and a volume, you can compute a density: density = mass / volume But if you have a density and a mass, you can compute a volume: mass / density = volume And if you have a density and a volume, you can compute a mass: density * volume = mass It's really just the same kind of idea applied to two different kinds of measurements. I hope this helps. Write back if you'd like to talk more - Doctor Ian, The Math Forum http://mathforum.org/dr.math/ ```
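Doctor Ian's three relationships are one formula rearranged. Applied to Amanda's stone (160 g displacing 30 mL), they look like this:

```python
def density(mass_g, volume_ml):
    """density = mass / volume, here in g/mL."""
    return mass_g / volume_ml

rho = density(160, 30)       # the stone: about 5.33 g/mL
mass_back = rho * 30         # density * volume = mass
volume_back = 160 / rho      # mass / density = volume
print(round(rho, 2))  # 5.33
```

Recovering the original 160 g and 30 mL from `rho` is exactly the "speed, distance, time" rearrangement described above.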
876
3,354
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2017-26
longest
en
0.929432
https://blackwhiteblue.newsone.com/rutamo49585.html
1,638,838,048,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00481.warc.gz
199,673,921
6,743
# Question: What is an example of a group? Contents The definition of group is to collect two or more people or things together. An example of group is separating ten people into two sets of five people. A group is defined as a collection, or a number of people or things. An example of a group is six people eating dinner together at a table. ## What is group and its examples? Definition and Examples of Groups. Definition 21.1. A group is a nonempty set G equipped with a binary operation ∗ : G×G → G satisfying the following axioms: (i) Closure: if a, b ∈ G, then a ∗ b ∈ G. ## What are four examples of groups? Types of Groups are: Formal Group, Informal Group, Managed Group, Process Group, Semi-Formal Groups, Goal Group, Learning Group, Problem-Solving Group. ## What is considered a group? A group is a set of people who have the same interests or aims, and who organize themselves to work or act together. A group is a set of people, organizations, or things which are considered together because they have something in common. ## What are the examples of out group? An out-group, conversely, is a group someone doesn't belong to; often we may feel disdain or competition in relationship to an out-group. Sports teams, unions, and sororities are examples of in-groups and out-groups; people may belong to, or be an outsider to, any of these. ## What is an example of group communication? The term group communication refers to the messages that are exchanged by group members. For example, a soccer team can be considered to be a group, but one would not expect a soccer team to exist or compete with other soccer teams without exchanging messages. ## How do we classify groups? Defining and Classifying Groups: Formal groups -- those defined by the organization's structure, with designated work assignments establishing tasks. Informal groups -- alliances that are neither formally structured nor organizationally determined. Command group -- determined by the organization chart.
## Is a group abelian? All cyclic groups are Abelian, but an Abelian group is not necessarily cyclic. All subgroups of an Abelian group are normal. In an Abelian group, each element is in a conjugacy class by itself, and the character table involves powers of a single element known as a group generator. ## What is an example of small group communication? The term “small group communication” refers to communication that occurs within groups of three to 15 people. Typically, an organizer arranges a small group for a specific purpose. For example, a class reunion organizer may limit the planning committee to a group of 12 alumni. ## How do you classify finite groups? The classification of finite simple groups is a theorem stating that every finite simple group belongs to one of the following families: a cyclic group with prime order; an alternating group of degree at least 5; a simple group of Lie type; or one of the 26 sporadic simple groups.
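The claim that every cyclic group is Abelian can be verified computationally for a small case; Z_6 under addition mod 6 is used here purely as an example.

```python
n = 6
elements = list(range(n))

def op(a, b):
    """The group operation of Z_6: addition modulo 6."""
    return (a + b) % n

# Abelian: op(a, b) == op(b, a) for every pair of elements.
abelian = all(op(a, b) == op(b, a) for a in elements for b in elements)

# Cyclic: some element generates the whole group; 1 does here.
generated_by_1 = {(1 * k) % n for k in range(n)}
cyclic = generated_by_1 == set(elements)
print(abelian, cyclic)  # True True
```

The converse check (Abelian but not cyclic) would need a group like Z_2 × Z_2, which no single element generates.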
622
2,949
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3
3
CC-MAIN-2021-49
latest
en
0.941967
https://cstheory.stackexchange.com/questions/10635/halting-problem-uncomputable-sets-common-mathematical-proof
1,723,065,897,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640713269.38/warc/CC-MAIN-20240807205613-20240807235613-00489.warc.gz
154,478,498
42,240
# Halting problem, uncomputable sets: common mathematical proof? It is known that with a countable set of algorithms (characterised by a Gödel number), we cannot compute (build a binary algorithm which checks belonging) all subsets of N. A proof could be summarized as: if we could, then the set of all subsets of N would be countable (we could associate to each subset the Gödel number of the algorithm which computes it). As this is false, it proves the result. This is a proof I like as it shows that the problem is equivalent to the subsets of N not being countable. Now I'd like to prove that the halting problem is not solvable using only this same result (uncountability of N subsets), because I guess those are very close problems. Is it possible to prove it this way? • Clearly both results can be proved by using the same technique (diagonalization). However, I do not think that it is possible to prove the undecidability of the halting problem just by using the uncountability of the family of subsets of ℕ, because the former is about the comparison between RE and R, both of which are countable families of subsets of ℕ. Commented Mar 11, 2012 at 12:30 • There are only countably many programs with access to the halting oracle, again characterized by a Godel number. However, the halting problem IS among this countable set. Commented Mar 11, 2012 at 13:09 The halting theorem, Cantor's theorem (the non-isomorphism of a set and its powerset), and Goedel's incompleteness theorem are all instances of the Lawvere fixed point theorem, which says that for any cartesian closed category, if there is an epimorphic map $e : A \to (A \Rightarrow B)$ then every $f : B \to B$ has a fixed point.
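Both results lean on the same diagonal trick the first comment mentions. A toy sketch (with an obviously finite list of membership tests standing in for an enumeration of decidable sets):

```python
def diagonal(enumeration):
    """Return a membership test that differs from the n-th set at input n."""
    return lambda n: not enumeration[n](n)

# Two example "decidable sets of N", given by their characteristic functions.
evens = lambda n: n % 2 == 0
squares = lambda n: int(n ** 0.5) ** 2 == n
enum = [evens, squares]

d = diagonal(enum)
# d disagrees with each enumerated set on that set's own index,
# so d itself cannot appear anywhere in the enumeration.
```

The halting-problem proof runs the same construction with "the n-th program halts on input n" in place of the enumerated sets.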
405
1,709
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2024-33
latest
en
0.957862
http://openstudy.com/updates/4f17291de4b0aeb795f5502a
1,444,233,394,000,000,000
text/html
crawl-data/CC-MAIN-2015-40/segments/1443737867468.81/warc/CC-MAIN-20151001221747-00244-ip-10-137-6-227.ec2.internal.warc.gz
227,624,084
11,140
## Icewinifredd 3 years ago Please help me out asap! i can't figure this one out! argh! The x-intercept of a line is –5 and the y-intercept of the line is –2. What is the equation of the line? 1. Godsgirl what equation are u trying to get? 2. Icewinifredd the one with the x and y-intercepts. i just don't get it!! 3. Godsgirl do u have any answers they give u?? 4. Icewinifredd no. this is what it says: The x-intercept of a line is –5 and the y-intercept of the line is –2. What is the equation of the line. how do i figure this out? I'm trying to redo my test for more credit since i failed this one!! 5. Godsgirl whats the slope 6. Icewinifredd doesn't say anything. That's all it says. THAT"S why I'm so confused!! 7. Icewinifredd oh wait. it DOES have answers. sorry. Here they are, but i have to show work: 1.) y=-5/2x-5 2.)y=2/5x+2 3.) y=5/2x-2 4.) y=-2/5x-2 8. Godsgirl well what you do is y-y,=m(x-x,) so you would get y+2=2/5 (x+5) Then you would get y+2=2/5x+2 After that you would subtract 2 from both sides Your answers should be y=2/5x 9. Godsgirl Its b 10. Icewinifredd thanks. how did u get the answer? 11. Godsgirl my work show how i got it 12. Icewinifredd 13. Godsgirl 14. Icewinifredd ok! thanks so much! God bless you!! 15. Godsgirl God bless you to!!!
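For what it's worth, the slope can be checked mechanically: the line passes through the x-intercept point (−5, 0) and the y-intercept point (0, −2), and the two-point formula gives a negative slope, which points at option 4 (y = −2/5 x − 2) rather than option 2.

```python
x1, y1 = -5.0, 0.0   # x-intercept point
x2, y2 = 0.0, -2.0   # y-intercept point

m = (y2 - y1) / (x2 - x1)  # slope = -2/5
b = y2                     # y-intercept = -2, so y = -2/5 x - 2
print(m, b)
```

Both intercept points satisfy y = m·x + b, which is the quickest way to confirm the final equation.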
439
1,302
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.71875
4
CC-MAIN-2015-40
longest
en
0.910033
https://oeis.org/A104231
1,653,382,945,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00243.warc.gz
489,248,200
3,886
A104231 Triangle read by rows, based on the morphism f: 1->2, 2->3, 3->{3,3,5,4}, 4->5, 5->6, 6->{6,6,2,1}. First row is 1. If current row is a,b,c,..., then the next row is a,b,c,...,f(a),f(b),f(c),... 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 3, 3, 5, 4, 1, 2, 2, 3, 2, 3, 3, 3, 3, 5, 4, 2, 3, 3, 3, 3, 5, 4, 3, 3, 3, 5, 4, 3, 3, 5, 4, 3, 3, 5, 4, 3, 3, 5, 4, 6, 5, 1, 2, 2, 3, 2, 3, 3, 3, 3, 5, 4, 2, 3, 3, 3, 3, 5, 4, 3, 3, 3, 5, 4, 3, 3, 5, 4, 3, 3, 5, 4, 3, 3, 5, 4, 6, 5, 2, 3, 3, 3, 3, 5, 4, 3, 3, 3, 5, 4, 3 OFFSET 0,3 COMMENTS This substitution was suggested by looking at output of the symbols of an actual Kenyon border tiling program. LINKS Richard Kenyon, The Construction of Self-Similar Tilings MATHEMATICA s[n_] := n /. {1 -> 2, 2 -> 3, 3 -> {3, 3, 5, 4}, 4 -> 5, 5 -> 6, 6 -> {6, 6, 2, 1}}; t[a_] := Join[a, Flatten[s /@ a]]; Flatten[ NestList[t, {1}, 5]] CROSSREFS Cf. A073058, A103748. Sequence in context: A180094 A333870 A103748 * A105111 A105112 A105113 Adjacent sequences: A104228 A104229 A104230 * A104232 A104233 A104234 KEYWORD nonn,tabf AUTHOR Roger L. Bagula, Apr 02 2005 STATUS approved
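The MATHEMATICA program translates directly; a Python sketch of the same morphism, mirroring the `NestList` construction, reproduces the data section.

```python
# Substitution rules from the sequence definition.
SUB = {1: [2], 2: [3], 3: [3, 3, 5, 4], 4: [5], 5: [6], 6: [6, 6, 2, 1]}

def next_row(row):
    # Next row = current row followed by the substitution of each symbol.
    return row + [x for a in row for x in SUB[a]]

rows = [[1]]
for _ in range(5):          # same depth as NestList[t, {1}, 5]
    rows.append(next_row(rows[-1]))

flat = [x for row in rows for x in row]
print(flat[:18])  # [1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 3, 3, 5, 4]
```

The first 18 flattened terms are the triangle's rows {1}, {1,2}, {1,2,2,3}, {1,2,2,3,2,3,3,3,3,5,4}, matching the data above.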
760
1,658
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2022-21
latest
en
0.695725
https://eventthyme.net/how-long-does-it-take-to-smoke-a-turkey/
1,643,086,237,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320304760.30/warc/CC-MAIN-20220125035839-20220125065839-00118.warc.gz
310,626,388
18,988
# How Long Does It Take To Smoke A Turkey But I'm going to change all that with one word. We've all heard the old saying: slow and low. ### Place the turkey on a cooking rack and cook for 8 to 12 hours or until the inner thigh temperature reaches 180°F. How long does it take to smoke a turkey? The recipe says 3 1/2 hours for a 12 to 14 pound turkey; It depends on the size of your turkey and the smoker you're using. My secret, at least it was until now, was to cut a whole bird up and smoke the parts. At this point, it is easier to tell how long to smoke turkey in a masterbuilt electric smoker. Make sure that you cook to an internal temperature of 165°F. You should always determine doneness by internal temperature and not by time. How long does it take to smoke a turkey? The answer to how long to smoke a turkey is all about the size of your bird. The smoker will take about 40 minutes to reach these temperatures. How long does it take to smoke turkey breast? How long does it take to smoke a turkey? About an hour before you expect the turkey to be ready, check its progress. Allow about 6 hours with a turkey of 12 pounds. And it'll take less time. Expect that the turkey will require at least 30 minutes per pound to smoke at 230 degrees. I prefer to smoke whole turkeys at the 250 to 275 degree range. How long does it take to smoke a 21 pound turkey? Sometimes it can take as long as 12 hours. So, how long will it take to smoke a turkey? How long you smoke a turkey depends on the variables. While it's heating up, oil the grate to help prevent your turkey sticking to it. Of course, you want to be sure that you've completely thawed it first, but that's a given. I smoke everything including the neck and organs. The temperature to smoke a turkey is 110 degrees but an exception of 116 degrees can be made. Smoke for 1 hour, maintaining the proper temperature in the smoker.
The turkey is safe to take off the smoker when it’s reached 165 degrees. Preheat your electric smoker to 225°f/110°c. Place the seasoned turkey on the middle rack of the smoker, close the door, and set a timer for approximately 6.5 hours. Once preheated, put your turkey in the smoker. Smoke turkey, maintaining temperature inside smoker between 225° and 250°, for 3 1/2 to 4 hours or until a meat thermometer inserted into thickest portion registers 165°. Smoke the turkey at a low temperature, between 225 and 250 degrees. I've been smoking turkeys on the holidays for the last fifteen years. The turkey should smoke for 30 to 40 minutes per pound, until the inside temperature reaches 165˚f. However, i'll have an 18 pound turkey. Place turkey in the smoker and smoke for about 30 minutes per lb. While it’s cooking, use a little bit of butter or vegetable oil to baste it every now and then. The basic hint here is that a whole turkey smoking at 240 degrees fahrenheit takes 30 to 40 minutes per pound. How long does it take to smoke turkey breast? My question is how long to smoke the bird. At 325 °f (163 °c), a 15 lb (6.8 kg) turkey will take between 3 and 3.5 hours to smoke. If you decide to cook the turkey at a higher temperature, the cooking time will diminish significantly, but the turkey won't retain as much smoky flavor. The rule of thumb is to smoke the turkey at 300 degrees, 15 minutes for each pound. To most people turkey is so once a year affair on thanksgiving to have roast turkey. If you’re like most people you really don’t care for turkey even with good smoke turkey recipes. You’re looking for about 30 to 40 minutes of smoking per pound. Using the digital meat thermometer, make sure that the sensor is in the thickest part of the breast meat. Typically, it takes at least 6 hours to smoke an average sized turkey at 250 degrees f. What my first book touched on, this second book takes it into much greater detail with lots of pictures. 
Occasionally, if you’re on a health kick, you will take roast turkey breast to work. Your turkey must pass through a critical range of 40°F to 140°F in 4 hours or less. Cook for 40 minutes for every pound of bird. For example, at 220°F (104°C), a 15 lb (6.8 kg) turkey will take between 8 and 9 hours to smoke. Use your digital thermometer in the thigh of the bird and the smoked turkey will be done when it reaches 165 degrees F internal temperature. Plan on having your turkey cook for about 30 minutes per pound. It’s really important to note that both bird size and the smoking temperatures you use will determine this. Dividing the turkey is the best way for smoking. It is recommended to apply the rub to the turkey 1 hour before putting it into the oven, so be sure to take that into account. Set the smoker to 225°F. How long to smoke a whole turkey? A turkey can take up to 35 minutes per pound when smoking at 220 degrees F, but a number of factors could impact the cook time. Luckily, for this recipe, it should only take 3.5 to 4 hours for the turkey breast to fully cook. How long to smoke a turkey? Smoking is all about “low and slow”—cooking your bird at a relatively low temperature for a long period of time. How long does it take to smoke a 3 pound boneless turkey breast? I smoke at 250° to 275°F for three to four hours. Ample time to reach the recommended cooking temperature. Test the turkey for doneness using a meat thermometer. The skin gets good color, the turkey gets a good dose of smoke flavor and doesn't take forever to finish cooking. Check the temperature of your turkey after 3½ hours. You can also remove the turkey from the smoker when the internal temperature reaches 160 degrees. We periodically basted the turkey while cooking with some of the pan drippings. Place the turkey, breast side down, directly on the smoker grate.
1,596
6,763
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2022-05
latest
en
0.948314
https://www.six-sigma-material.com/t-distribution.html
1,723,542,488,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641075627.80/warc/CC-MAIN-20240813075138-20240813105138-00377.warc.gz
746,163,567
16,934
# t-distribution (t-tests) The various t-tests are applied during the ANALYZE and CONTROL phases. You should be very familiar with these tests and able to explain the results. William Sealy Gosset is credited with first publishing the test statistic, which became known as the Student's t-distribution. This is used to estimate the parameters of a population when the sample size is small. The t-test is generally used when: • Sample sizes are less than 30 (n<30) • The population standard deviation is UNKNOWN The t-distribution bell curve gets flatter as the Degrees of Freedom (dF) decrease. Looking at it from the other perspective, as the dF increases, the number of samples (n) must be increasing, thus the sample is becoming more representative of the population and the sample statistics approach the population parameters. As these values come closer to one another, the "z" calculation and "t" calculation get closer and closer to the same value. The table below explains each test in more detail. # Hypothesis Testing T-tests are very commonly used in Six Sigma for evaluating means (μ) of one or two populations. You can use a t-test for determining if: • One mean from a random sample is different than a target value, a known mean, or a historical mean. • Two group means are different. • Paired means are different. The word different could mean greater than, less than, or a certain value different than a target value. You can usually run statistical tests in software by configuring the parameters to look for certain types of differences, as were just mentioned. For example, instead of just testing to see if one group mean is different than another, you can test to see if one group is greater than the other and by a certain amount. You can get more information by adding more specific criteria to your test. Before running a test, visualize your data to get a better understanding of the expected result. Using tools such as a Box Plot can provide a wealth of information. 
Also, if the confidence interval contains the value of zero, then insufficient evidence exists to suggest there is a statistical difference, so accept the null hypothesis. Get a .pdf t-test module and over 1000 training slides and a practice exam to help prepare you as a Green Belt or Black Belt. Hypothesis testing fluency is an important requirement for Green Belts and Black Belts to execute their projects and pass a certification exam. There are a variety of other Six Sigma topics also available, plus 180+ practice certification questions to help you pass your exam. ### One Sample t-test This test compares a sample to a known population mean, historical mean, or targeted mean. The population standard deviation is unknown and the data must satisfy normality assumptions. Given: n = sample size Degrees of freedom (dF) = n-1 Most statistical software will allow a variety of options to be examined, such as how large a sample must be to detect a certain size difference given a desired level of Power (= 1 - Beta Risk). You can also select various levels of Significance or Alpha Risk. For a given difference that you are interested in, the number of samples required increases if you want to reduce beta risk (which seems logical). However, gathering more samples has a cost, and it is the job of the GB/BB to balance getting the most information, more Power, and the highest Confidence Level without too much cost or tying up too many resources. ### Example of One Sample t-test The following example shows the step-by-step analysis of a One Sample t-test. The example uses a sample size of 51, so usually the z-test would be used, but the result will be very similar. The sample standard deviation would be the population standard deviation since the sample size is large enough (>30). Also, the degrees of freedom (dF) are not applicable. The points to take away are the steps applied and the interpretation of the final results. 
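The mechanics of the one-sample calculation can be sketched in a few lines of Python; the data and target value below are hypothetical, since the worked example's figures are shown as an image on the original page:

```python
import math
import statistics

def one_sample_t(sample, target_mean):
    """t-statistic for H0: population mean == target_mean.

    t = (x_bar - mu0) / (s / sqrt(n)), with dF = n - 1.
    """
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 in the denominator)
    t = (x_bar - target_mean) / (s / math.sqrt(n))
    return t, n - 1

# Hypothetical measurements compared against a target of 10.0
t, df = one_sample_t([10.2, 9.8, 10.5, 10.1, 9.9, 10.4], 10.0)
# t ≈ 1.34 with 5 degrees of freedom
```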
### Example of a Two Sample t-test This test is used when comparing the means of two populations, when: 1) Two random independent samples are drawn, n1 and n2 2) Each population exhibits a normal distribution 3) Equal standard deviations are assumed for each population The degrees of freedom (dF*) = n1 + n2 - 2 *There is another more complicated formula for dF if the two population standard deviations are NOT assumed to be equal. Example: The overall length of a sample of a part running on two different machines is being evaluated. The hypothesis test is to determine if there is a difference between the overall lengths of the parts made on the two machines, using a 95% level of confidence. It has already been confirmed that we need 20 samples (from each machine) to provide 80% Power. Machine 1 (call this the OLD machine): Sample Size: 25 parts Mean: 24.12 mm Sample standard deviation: 5.52 mm Machine 2 (call this the NEW machine): Sample Size: 20 parts Mean: 19.15 mm Sample standard deviation: 6.51 mm dF = n1 + n2 - 2 = 25+20-2 = 43 Alpha-risk = 1-CI = 1-0.95 = 0.05 (95% level of significance) Establish the hypothesis test: Null Hypothesis (HO): Mean1 = Mean2 Alternative Hypothesis (HA): Mean1 does not equal Mean2 This is a two-tailed example since the direction (shorter or longer) is not relevant. All that is relevant is whether there is a statistical difference or not. Now, determine the range for the t-statistic; any value outside this range will result in rejecting the null and inferring the alternative hypothesis. Using the t-table below, notice that: -t(0.975) to t(0.975) with 43 dF equals a range of about -2.02 to 2.02. If the calculated t-value from our example falls within this range, then accept the null hypothesis. NOTE: The table below is a one-tailed table, so use the column 0.025 that corresponds to 40 dF and include both the positive and negative value. 
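The t-statistic for this example can also be computed directly from the summary statistics; here is a stdlib-only Python sketch (an illustration, not the site's own software output):

```python
import math

def pooled_two_sample_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t-statistic assuming equal variances (pooled)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))           # standard error of the difference
    return (m1 - m2) / se, df

# Summary statistics from the machine example above
t, df = pooled_two_sample_t(24.12, 5.52, 25, 19.15, 6.51, 20)
# |t| ≈ 2.77, which lies outside the range -2.02 to 2.02, so reject the null
```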
43 dF isn't exactly shown in this table but you can figure out that the value will be near 2.02 since it is trending downward from 40 dF to 60 dF. You could do a mathematical interpolation, but it could be a waste of time since the t-statistic probably won't be that close to this t-table value of 2.02. ### Interpreting the results The display above is a common output of running a Two Sample t-test. In this example, both samples exhibit normal behavior, it was assumed that the variances are equal, and the dF = 20 + 25 - 2 = 43. The hypothesized difference is 0. Equal variances are assumed, so the pooled standard deviation is used. The estimate for the difference is the difference from the OLD to NEW mean. With an alpha-risk of 5% (or CL of 95%), the difference will be between 1.36 and 8.58. The p-value is 0.008, which is much less than 0.05; therefore reject the HO and infer the HA that there is a statistical difference in the means of the two machines. #### When to assume EQUAL VARIANCES or UNEQUAL VARIANCES? The answer isn't as straightforward as one might hope. For the sake of keeping it simple, and understanding there may be exceptions, generally you can assume equal variances unless: • you are comparing vastly different sample sizes • you knowingly have unequal variances • you used the F-test and determined that the variances of the samples are unequal. Some statisticians suggest taking the more careful and conservative approach and assuming unequal variances all the time. This method covers for the worst-case scenario that the variances are truly unequal and only forgoes a minute amount of statistical power. In other words, sacrifice a little power to protect for the worst case. ### Paired t test Comparing the difference between two paired sample means (each with the same number of samples) from two normal distributions. ASSUMPTIONS: • The values must be obtained from the same subject (machine, person) each time. 
• The values from the “differences” must be normal • Values must be independent (i.e. the samples don’t impact each other's measurements) Use this test when analyzing the samples of a BEFORE and AFTER situation; the number of samples must be the same. Also referred to as a "pre-post" test, it consists of two measurements taken on the same subjects such as machines, people, or processes. This option is selected to test the hypothesis of no difference between two variables. The data may consist of two measurements taken on the same machine (or subject) or one before and after measurement taken on a matched pair of subjects. For example, if the Six Sigma team has implemented improvements from the IMPROVE phase, they are expecting a favorable change to the outputs (Y). If the improvements had no effect, the average difference between the measurements is equal to 0 and the null hypothesis is inferred. If the team did a good job making improvements to address the critical inputs (X's) to the problem (Y's) that were causing the variation (and/or to shift the mean in an unfavorable direction), then there should be a statistical difference and the alternative hypothesis should be inferred. dF = n - 1 The "Sd" is the standard deviation of the differences in all of the samples. The data is recorded in pairs and each pair of data has a difference, d. Another application may be to measure the weight or cholesterol levels of a group of people that are given a certain diet over a period of time. The before data of each person (weight or cholesterol levels) are recorded to serve as the basis for the null hypothesis. With the other variables controlled and kept consistent for all people for the duration of the study, the after measurements are taken. The null hypothesis infers that there is not a significant difference in the weights (or cholesterol levels) of the patients. 
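A minimal Python sketch of the paired calculation, using made-up before/after data purely for illustration:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-statistic: t = d_bar * sqrt(n) / Sd, with dF = n - 1."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    d_bar = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # Sd: standard deviation of the differences
    return d_bar * math.sqrt(n) / sd, n - 1

# Hypothetical before/after measurements on the same four subjects
t, df = paired_t([10, 12, 11, 13], [9, 10, 10, 11])
```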
HO: Mean(before) = Mean(after) HA: Mean(before) ≠ Mean(after) Again, this test assumes the data sets are normally distributed and continuous. ## Practice Certification Questions 1) Find the Degrees of Freedom (dF) if running a paired t-test with samples of 15. A) 15 B) 13 C) 14 D) 29 The answer is dF = n-1 = 15-1 = 14 This value is used in calculating each sample standard deviation and the Standard Error of the Mean (in the denominator) for each sample. 2) If sample sizes are 13 (n=13) and the alpha-risk is chosen to be 0.05, what is the critical t-value for a two-tailed paired-t test? The dF = 12 and since it is two-tailed you look at the column below that is 0.025 (the alpha-risk divided by two). tcritical = 2.179 See the table below and see that a dF of 12 and alpha-risk of 0.025 = 2.179 3) If the sample size is 26 for each of two pairs of data, the average of the differences of paired values (dbar) is 0.77, and the standard deviation of the values of the differences is 3.43, what is the t-statistic? For a paired t-test (assuming the values of the differences have been confirmed to be normal): n = 26, so dF = n - 1 = 25 t = (dbar) * √n / Sd t = (0.77 * √26) / 3.43 t = 3.93 / 3.43 t ≈ 1.14 ## t test in Excel The formula returns the probability (p-value) associated with the Student's t-test. T.TEST(array1, array2, tails, type) Each array (or data set) must have the same number of data points. The "tails" represents the number of distribution tails to return: 1 = one-tailed distribution 2 = two-tailed distribution The "type" represents the type of t-test. 1 = paired 2 = two sample equal variance 3 = two sample unequal variance ## Z test The Z test uses a set of data and tests a point of interest. An example is shown below using Excel. This function returns the one-tailed probability. The sigma value is optional. Use the population standard deviation if it is known. If not, the test defaults to the sample standard deviation. 
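The same one-tailed probability can be reproduced outside Excel with Python's standard library. The data set and the `z_test` helper below are illustrative assumptions, not Excel's or Python's built-ins:

```python
import math
import statistics

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test(data, mu0, sigma=None):
    """One-tailed probability, mirroring Excel's Z.TEST(array, x, [sigma]).

    Defaults to the sample standard deviation when sigma is not supplied.
    """
    n = len(data)
    if sigma is None:
        sigma = statistics.stdev(data)
    z = (statistics.mean(data) - mu0) / (sigma / math.sqrt(n))
    return 1.0 - norm_cdf(z)

data = [98, 102, 101, 99, 103, 97, 100, 104]
# Running the test at the sample mean itself returns 0.5, as noted below
p_at_mean = z_test(data, statistics.mean(data))
```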
Running the Z test at the mean of the data set returns a value of 0.5, or 50%. EXAMPLE: Recall the sample sizes are generally >30 (the snapshot below uses fewer only to illustrate the data and formula within Excel within a reasonable amount of space) and there is a known population standard deviation. The data below uses a point of interest for the hypothesized population mean of 105. This corresponds to a Z test value of 0.272, indicating that there is a 27.2% chance that 105 is greater than the average of the actual data set, assuming the data set meets normality assumptions. The Z test value (as shown in the example below) represents the probability that the sample mean is greater than the actual value from the data set when the underlying population mean is μ0. The Z-test value IS NOT the same as a z-score. The z-score involves the Voice of the Customer and the USL, LSL specification limits. Six Sigma projects should have a baseline z-score after the Measure phase is completed and before moving into Analyze. The final Z-score is also calculated after the Improve phase, and the Control phase is about instituting actions to maintain the gains. Other metrics such as RTY, NY, DPMO, PPM, and RPN can be used in Six Sigma projects as the Big "Y", but usually they can be converted to a z-score (except RPN, which is used within the FMEA for risk analysis of failure modes). ## Example Certification Questions 1) A Green Belt wants to evaluate the output of a process before and after a set of changes were made to increase the productivity. The data acquired meets the assumption of normality. Which hypothesis test is best suited to determine if the changes actually improved the productivity? A) Paired-t test B) Two sample t test C) ANOVA D) F-test 2) If a Black Belt wants to test if a supplier can produce a batch of parts in less than 5 business days, which t test would be used? A) One sample t test B) Two sample t test C) Paired sample t test Answer: A, and a one-tailed test. 
3) If a Black Belt wants to test the productivity of two machines in terms of pieces/hour and determine whether there is a difference, which test should be used? A) One sample t test B) Two sample t test C) Paired sample t test Answer: B, and a two-tailed test. Want more? Test your knowledge with over 180 practice questions
3,333
14,668
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2024-33
latest
en
0.917013
https://www.superprof.co.in/blog/rules-of-logarithms/
1,680,366,934,000,000,000
text/html
crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00101.warc.gz
1,098,684,054
16,604
Today, we'll be taking a look at the function of a calculator button that may remain a mystery to all but a bold few who choose to study mathematics and calculus: The "log" button! This is going to be another algebra-heavy topic, so if you're not already comfortable working with algebra, perhaps it's best to brush up on that first, and then come back. So, what is this button? "log" is short for "logarithms", which we use - in combination with exponents - to express large numbers and calculations. However, there's a catch: there is a set of laws surrounding logarithms and exponents, known as the "laws of logarithms"; if we don't follow these laws, then things will go wrong when we try to solve any equations. Throughout the duration of your course, you'll find logarithms most useful when solving exponential equations, and you'll see why below! Without further ado, let's get underway looking at some of the logarithm laws, and how you might use them in practice to solve tricky questions. ## The Exponential Function Let's get a special case out of the way first. 
You've probably come across the exponential function already; it's simply written using notation like "e^x", and you may see it in an expression such as y = e^x. In case you aren't too familiar with exponents, they hold a special property that makes them useful... The derivative of e^x (the "e" is short for "exponential") is equal to itself; that is to say, if we were to differentiate (learn more about Differentiation/integration problems here) the expression above, we'd end up with e^x again. When we use logarithms (also known as logarithmic functions), we typically use the "ln" notation to represent what's known as the "natural log" - this is the inverse function of the exponent. What exactly is the exponent? It's what's known as a "constant" because it never changes. "e" represents a specific real number with a lot of decimal places, so it's easier to just represent it using a common notation like a single letter. Just like we do for Pi! Before we get started looking at the meaty part of logarithm rules, let's do a really quick sample question using natural logs. ### Natural Logarithm Example Here's an example question: For the exponential equation below, solve for x: This looks tricky on the face of it, but only because we have an "e" (exponential) thrown into the mix. I think we can all agree that once that's out of the picture, everything will get a lot easier. But how to go about that? Well, thankfully "ln" has another handy property: it's the inverse of "e"; this means that by adding it into our equation, we can cancel out the exponential - just like you might divide an equation by a number to get rid of a coefficient. Let's do that now: Remember, with equations, everything we do on one side of the equation has to be done on the other. Now, "ln" and "e" are going to cancel each other out, just like multiplying and then dividing by the same number would. Once that's out of the way, we'll have our answer: Our exponential equation has now been solved! 
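The same cancellation can be checked numerically with Python's `math` module. The equation in the original article is an image that didn't survive extraction, so assume for illustration that it was e^x = 5:

```python
import math

# Hypothetical equation: e**x = 5 (the article's actual equation was an image).
# Applying ln to both sides cancels the exponential: x = ln(5).
x = math.log(5)  # math.log is the natural log, ln

# Substituting back recovers the right-hand side
assert math.isclose(math.exp(x), 5)
```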
We don't need to do anything else with the ln(...), as this represents a real number - put it into any graphing or scientific calculator and see! Note: You can get reputable online math tutoring on Superprof. ## The Laws of Logarithms Down to business: let's get into the nitty-gritty of logs! There's something to bear in mind when looking at logarithm laws - in all the examples below, we just use "loga". The notation normally used with logarithms is either "log", when the logarithm function is using base 10, or "loga", where "a" is a number, when the logarithm is using a different base. Any law can be applied to any base system, provided there's no change of base in the expression. For example, all these laws apply to base 3 numbers as well, as long as you use base 3 for every number in your expression. ### Log Law No1 The first law states that adding the logarithms of two numbers (of the same base!) together is the same as taking the logarithm after we multiply the numbers together: loga(x) + loga(y) = loga(xy). ### Log Law No2 The second law states that subtracting the logarithms of two numbers (again, of the same base) is equivalent to dividing the two numbers and taking the logarithm of the result: loga(x) - loga(y) = loga(x/y). ### Log Law No3 The final logarithm law we'll look at may seem familiar to you - it's very similar to differentiating a term of an expression! The logarithm of a number raised to a power is the same as the whole logarithm being multiplied by that exponent, removing the exponent from the original expression: loga(x^k) = k·loga(x). ## Putting It All Together These all look pretty simple in isolation, but unfortunately, it's rarely so easy in an exam. Normally you'll be faced with the prospect of using any number of combinations of logarithmic functions in order to come up with an answer. Questions about the properties of logarithms quite often crop up in the format of simplifying an expression. 
That is - you'll be given an expression that includes several different logarithmic terms, and you will have to reduce this down as much as possible - normally to just one or two terms. While this may seem daunting at first, as always, the best way to approach these problems is to tackle them in bitesize chunks: splitting the question into smaller problems will make it seem much more manageable, and simplifying some of the terms may also make completing a trickier part of the question more obvious to you! With this in mind, let's take a look at an example of simplifying a logarithmic expression. Notice how because this is an expression and not an equation, there's no equals sign - this means that we can't really "solve" expressions, as they're not problems, just a mathematical statement of fact. ### The Example Here, all of the terms in our expression are in base 10, so we don't specify a base. The question is, how can we simplify this? If you're able to spot the first step, then everything else pretty much works itself out. All we've done here is rewrite the second logarithmic function, using the third one of the properties of logarithms that we looked at above. This lets us move the coefficient of the second term up into an exponent. The term we've ended up with and the second term in the original question both evaluate the same; they're just expressed in different ways. If you don't believe me, you can check on a calculator by entering just the second term of the expression. Do this once for the term we just created, and once for the original term, and you should see they're both the same value. Now, at the end of the day, all numbers are numbers: this means that we can take our two to the power of three, and turn it into 8. Let's do that: This is starting to look nicer. We're getting closer to just being able to perform simple arithmetic. 
If you're not sure what's next, take a glance up at the logarithm laws we outlined earlier, and see if you can work out where to go from here. The next step is to simplify even further, by removing another term. We can now use the first logarithm law we learnt: that adding two logarithms together is the same as multiplying their numerical terms. Once we've done this, we'll be left with two logarithms. We're nearly there! We've multiplied the first two numbers together to make 32, and simplified our expression into two logarithms. Take a look above; you'll see there's only one law left that we haven't used, and hopefully you'll also see that it fits the final stage of our problem perfectly! We can now subtract the two logarithms: We can't simplify that fraction anymore, so our final answer becomes: And we're done! ## Wrapping Up the Rules of Logs The good news is that the laws of logarithms are fairly simple in their own right - all you really need is a sound understanding of basic arithmetic and you're well on your way. Where it gets trickier is combining these laws in order to solve more complex problems. If you find yourself struggling with any of the laws of logarithms, focus on (or make up!) some questions that only require you to use the law you're struggling with, and keep plugging away at it until you feel more comfortable using that particular law in questions. Logarithms are likely to come up frequently when you're faced with calculus questions, so make sure you're comfortable using them. As always in math, practice makes perfect! Although these mathematical concepts don't seem like much on their own, they all add up to be essential cogs in a large machine! If you fancy some extra activities involving logarithmic equations and exponent functions, try graphing e^x and ln(x), and see if you can spot any patterns. 
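As another quick activity, the three laws can be verified numerically with a few lines of Python (base 10, matching the example above; the numbers are arbitrary):

```python
import math

a, b, k = 7.0, 3.0, 2.5
log10 = math.log10

# Law 1: log(a) + log(b) == log(a * b)
assert math.isclose(log10(a) + log10(b), log10(a * b))
# Law 2: log(a) - log(b) == log(a / b)
assert math.isclose(log10(a) - log10(b), log10(a / b))
# Law 3: log(a ** k) == k * log(a)
assert math.isclose(log10(a ** k), k * log10(a))
```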
If you do find yourself struggling, you might want to consider using SuperProf to find maths tutors who can give you a helping hand remembering these identities. Don't worry if you're unsure of something; lots of students struggle with all sorts of mathematics problems, from trigonometry to solving quadratic equations! As well as logarithms, you can find maths help on Calculus and Mechanic Forces for your maths GCSE and A level here. Check out the Vedic Maths tutorial here on Superprof.
2,356
10,038
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2023-14
latest
en
0.911638
https://oeis.org/A209901
1,600,945,726,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400217623.41/warc/CC-MAIN-20200924100829-20200924130829-00183.warc.gz
545,496,461
4,091
A209901 7^p - 6^p - 2 with p = prime(n). 11, 125, 9029, 543605, 1614529685, 83828316389, 215703854542469, 10789535445362645, 26579017117027313525, 3183060102526390833854309, 156448938516521406467644085, 18500229372226631089176131976869, 44487435359130133495783012898708549 OFFSET 1,1 COMMENTS After 11 and 9029, there are no prime values of a(n) through 7^109 - 6^109 - 2. LINKS Vincenzo Librandi, Table of n, a(n) for n = 1..100 FORMULA a(n) = A016169(A000040(n)) - 2 = A204768(n) - 1 = A000420(A000040(n)) - A000400(A000040(n)) - 2. EXAMPLE 543605 is in the sequence because 543605 = 7^7 - 6^7 - 2, and 7 is prime. MATHEMATICA Table[7^p - 6^p - 2, {p, Prime[Range[20]]}] (* T. D. Noe, Mar 15 2012 *) PROG (PARI) forprime(p=2, 100, print1(7^p-6^p-2", ")) \\ Charles R Greathouse IV, Mar 15 2012 CROSSREFS Cf. A000040, A000400, A000420, A016169, A204768. Sequence in context: A163310 A240335 A076483 * A285056 A015597 A296732 Adjacent sequences: A209898 A209899 A209900 * A209902 A209903 A209904 KEYWORD nonn,easy,less AUTHOR Jonathan Vos Post, Mar 14 2012 STATUS approved
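The same terms can be generated in Python as a quick cross-check of the Mathematica and PARI programs above (primes hardcoded for brevity):

```python
def a209901(primes):
    """a(n) = 7**p - 6**p - 2 with p = prime(n)."""
    return [7**p - 6**p - 2 for p in primes]

# The first four primes reproduce the first four terms listed above
terms = a209901([2, 3, 5, 7])
print(terms)  # [11, 125, 9029, 543605]
```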
588
1,657
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2020-40
latest
en
0.540598
https://geetcode.com/problems/xor-operation-in-an-array/
1,638,087,917,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00335.warc.gz
342,316,914
7,717
# GeetCode Hub Given an integer `n` and an integer `start`. Define an array `nums` where `nums[i] = start + 2*i` (0-indexed) and `n == nums.length`. Return the bitwise XOR of all elements of `nums`. Example 1: ```Input: n = 5, start = 0 Output: 8 Explanation: Array nums is equal to [0, 2, 4, 6, 8] where (0 ^ 2 ^ 4 ^ 6 ^ 8) = 8. Where "^" corresponds to bitwise XOR operator. ``` Example 2: ```Input: n = 4, start = 3 Output: 8 Explanation: Array nums is equal to [3, 5, 7, 9] where (3 ^ 5 ^ 7 ^ 9) = 8.``` Example 3: ```Input: n = 1, start = 7 Output: 7 ``` Example 4: ```Input: n = 10, start = 5 Output: 2 ``` Constraints: • `1 <= n <= 1000` • `0 <= start <= 1000` • `n == nums.length` class Solution { public int xorOperation(int n, int start) { } }
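A straightforward solution sketch in Python (the empty Java stub above is the site's submission template; this is one possible approach, not an official solution):

```python
def xor_operation(n, start):
    """Bitwise XOR of nums[i] = start + 2*i for i in range(n)."""
    result = 0
    for i in range(n):
        result ^= start + 2 * i
    return result

# The four examples from the problem statement:
assert xor_operation(5, 0) == 8
assert xor_operation(4, 3) == 8
assert xor_operation(1, 7) == 7
assert xor_operation(10, 5) == 2
```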
https://www.juliahomotopycontinuation.org/examples/reach-curve/
1,716,665,076,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971058834.56/warc/CC-MAIN-20240525192227-20240525222227-00849.warc.gz
740,914,337
6,923
# The reach of a plane curve

Computing the reach

The reach $\tau$ of an embedded manifold $M\subset \mathbb{R}^n$ is an important complexity measure for methods in computational topology, statistics and machine learning. Namely, estimating $M$, or functionals of $M$, requires regularity conditions, and a common regularity assumption is that the reach $\tau >0$. The definition of $\tau$ is as follows: $$\tau = \sup \{t \mid \text{all } x\in\mathbb{R}^n \text{ with } \mathrm{dist}(x,M)<t \text{ have a unique nearest point on } M\}$$ where the distance measure $\mathrm{dist}$ is the Euclidean distance.

In this example we want to compute the reach of an algebraic manifold; that is, an embedded manifold which is also an algebraic variety. The variety we consider is the plane curve $C$ defined by the equation $$f(x,y) = (x^3 - xy^2 + y + 1)^2(x^2+y^2 - 1)+y^2-5 = 0$$ As pointed out by Aamari et al., the reach is determined by the bottlenecks of $C$, which quantify how close $C$ is from being self-intersecting, and the curvature of $C$: $$\tau = \min\left\{\frac{\rho}{2}, \frac{1}{\sigma}\right\},$$ where $\sigma$ is the maximal curvature of a geodesic running through $C$ and $\rho$ is the width of the narrowest bottleneck of $C$. We compute both $\rho$ and $\sigma$. For this, we first define the equation of $C$ in Julia.

using HomotopyContinuation @polyvar x y f = (x^3 - x*y^2 + y + 1)^2 * (x^2 + y^2 - 1) + y^2 - 5

Our computation below finds $$\rho \approx 0.13835 \text{ and }\ \sigma \approx 2097.17$$ and therefore the reach of the curve $C$ is $$\tau \approx \min\left\{\frac{0.13835}{2}, \frac{1}{2097.17}\right\} \approx 0.000477.$$

## Bottlenecks

Bottlenecks of $C$ are pairs of points $(p,q)\in C\times C$ such that $p-q$ is perpendicular to the tangent space $\mathrm{T}_p C$ and perpendicular to the tangent space $\mathrm{T}_q C$. Eklund and di Rocco et al. discuss the algebraic equations of bottlenecks.
The equations are $$f(p) = 0, \quad \det\begin{bmatrix} \nabla_p f & p-q\end{bmatrix} = 0, \quad f(q) = 0 ,\quad \det\begin{bmatrix} \nabla_q f & p-q\end{bmatrix}=0,$$ where $\nabla_p f$ denotes the gradient of $f$ at $p$. The first equation defines $p\in C$ and the second equation defines $p-q \perp \mathrm{T}_p C$. The third equation defines $q\in C$ and the fourth equation defines $p-q \perp \mathrm{T}_q C$.

The width of a bottleneck is $\rho(p,q) = \Vert p-q\Vert_2$. The width of the narrowest bottleneck is the minimum over all $\rho(p,q)$ such that $(p,q)$ satisfies the above equations.

Let us define and solve the equations in Julia:

using LinearAlgebra: det @polyvar p[1:2] q[1:2] # define variables for the points p and q f_p = subs(f, [x;y] => p) f_q = subs(f, [x;y] => q) ∇_p = differentiate(f_p, p) ∇_q = differentiate(f_q, q) bn_eqs = [f_p; det([∇_p p-q]); f_q; det([∇_q p-q])] bn_result = solve(bn_eqs, start_system = :polyhedral)

Result{Array{Complex{Float64},1}} with 1858 solutions ===================================================== • 1726 non-singular solutions (104 real) • 132 singular solutions (0 real) • 3600 paths tracked • random seed: 577138 • multiplicity table of singular solutions: ┌───────┬───────┬────────┬────────────┐ │ mult. │ total │ # real │ # non-real │ ├───────┼───────┼────────┼────────────┤ │ 1 │ 60 │ 0 │ 60 │ │ 2 │ 72 │ 0 │ 72 │ └───────┴───────┴────────┴────────────┘

From bn_result we see that $C$ has $1726$ (complex) bottleneck pairs, of which $104$ are real. Note that in our formulation we have for each bottleneck pair $(p,q)$ also the pair $(q, p)$ as a solution. Therefore we find that the curve $C$ has $52$ distinct real bottlenecks. From the real solutions we compute the width of the narrowest bottleneck.
bn_pairs = real_solutions(nonsingular(bn_result)) ρ = map(s -> norm(s[1:2] - s[3:4]), bn_pairs) ρ_min, ρ_min_ind = findmin(ρ) (0.13835123592621543, 22)

We see that the narrowest bottleneck of $C$ is of width $\rho \approx 0.13835$. Finally, we want to plot all bottlenecks. The narrowest bottleneck is highlighted in red.

using Plots, ImplicitPlots # Show curve implicit_plot(f; dpi=200, axis=false, grid=false) # Draw all bottlenecks in gray with dashed lines for (p₁,p₂,q₁,q₂) in bn_pairs plot!([p₁, q₁], [p₂, q₂]; color = :slategray, grid=false, linestyle=:dot) end # Draw narrowest bottleneck in red narrowest_bn_pair = bn_pairs[ρ_min_ind] plot!(narrowest_bn_pair[[1,3]], narrowest_bn_pair[[2,4]]; color = :tomato, grid=false, linewidth = 3)

## Maximal curvature

The following formula gives the curvature $\sigma(p)$ at $p\in C = \{f(x,y)=0\}$. $$\sigma(p) = \frac{h(p)}{g(p)^\frac{3}{2}}$$ where $$g(p)= \nabla_p f^T\nabla_p f\quad \text{and}\quad h(p) = v(p)^T H(p) v(p),$$ and where $H(p)$ is the Hessian of $f$ at $p$ and $v(p) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\nabla_p f$. For computing the maximum of $\sigma(p)$ over $C$ we solve the critical equations of $\sigma(p)$. The critical equations are $$v(p)^T \nabla_p \sigma=0\quad \text{and}\quad f(p)=0.$$ We use the following code.

∇ = differentiate(f, [x;y]) # the gradient H = differentiate(∇, [x;y]) # the Hessian g = ∇ ⋅ ∇ v = [-∇[2]; ∇[1]] h = v' * H * v dg = differentiate(g, [x;y]) dh = differentiate(h, [x;y]) ∇σ = g .* dh - ((3/2) * h).* dg F₂ = [v ⋅ ∇σ; f] curv_result = solve(F₂, start_system = :polyhedral)

Result{Array{Complex{Float64},1}} with 176 solutions ==================================================== • 176 non-singular solutions (24 real) • 0 singular solutions (0 real) • 292 paths tracked • random seed: 140163

From curv_result we see that C has $176$ (complex) points of critical curvature, of which $24$ are real.
From the result we compute the corresponding curvatures and extract the maximum.

curv_pts = real_solutions(nonsingular(curv_result)) σ(s) = h(s) / g(s)^(3/2) σ_max, σ_max_ind = findmax(σ.(curv_pts)) (2097.165767782749, 23)

Therefore, the maximal curvature of a geodesic in $C$ is $\sigma \approx 2097.17$. Here is a plot of all critical points in green with the point of maximal curvature in red.

implicit_plot(f; dpi=200, axis=false, grid=false) scatter!(first.(curv_pts), last.(curv_pts); markerstrokewidth=0, markersize=4, color=:slategray, grid=false) # Draw point of maximal curvature max_curv_pt = curv_pts[σ_max_ind] scatter!(max_curv_pt[1:1], max_curv_pt[2:2]; markerstrokewidth=0, markersize=4, color=:tomato, grid=false)

Cite this example: @Misc{ reach-curve2023 , author = { Paul Breiding and Sascha Timme }, title = { The reach of a plane curve }, howpublished = { \url{ https://www.JuliaHomotopyContinuation.org/examples/reach-curve/ } }, note = { Accessed: March 10, 2023 } }
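The final step, combining $\rho$ and $\sigma$ into the reach, is plain arithmetic. A quick Python check using the two values computed above (variable names are my own):

```python
# Values reported by the bottleneck and curvature computations above.
rho = 0.13835123592621543    # width of the narrowest bottleneck
sigma = 2097.165767782749    # maximal curvature

# tau = min(rho / 2, 1 / sigma); here the curvature term is the binding one.
tau = min(rho / 2, 1.0 / sigma)
print(tau)  # ≈ 0.000477
```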
FHSST: Grade 10 Maths [CAPS]

# Finance

## Introduction

Should you ever find yourself stuck with a mathematics question on a television quiz show, you will probably wish you had remembered how many even prime numbers there are between 1 and 100 for the sake of R1 000 000. And who does not want to be a millionaire, right?
Welcome to the Grade 10 Finance Chapter, where we apply maths skills to everyday financial situations that you are likely to face both now and along your journey to purchasing your first private jet. If you master the techniques in this chapter, you will grasp the concept of compound interest, and how it can ruin your fortunes if you have credit card debt, or make you millions if you successfully invest your hard-earned money. You will also understand the effects of fluctuating exchange rates, and their impact on your spending power during your overseas holidays!

Before we begin this chapter it is worth noting that the vast majority of countries use a decimal currency system. This simply means that countries use a currency system that works with powers of ten, for example in South Africa we have 100 (10 squared) cents in a rand. In America there are 100 cents in a dollar. Another way of saying this is that the country has one basic unit of currency and a sub-unit which is a power of 10 of the major unit. This means that, if we ignore the effect of exchange rates, we can essentially substitute rands for dollars or rands for pounds.

## Being Interested in Interest

If you had R1 000, you could either keep it in your wallet, or deposit it in a bank account. If it stayed in your wallet, you could spend it any time you wanted. If the bank looked after it for you, then they could spend it, with the plan of making profit from it. The bank usually "pays" you to deposit it into an account, as a way of encouraging you to bank it with them. This payment is like a reward, which provides you with a reason to leave it with the bank for a while, rather than keeping the money in your wallet. We call this reward "interest".

If you deposit money into a bank account, you are effectively lending money to the bank - and you can expect to receive interest in return.
Similarly, if you borrow money from a bank (or from a department store, or a car dealership, for example) then you can expect to have to pay interest on the loan. That is the price of borrowing money.

The concept is simple, yet it is core to the world of finance. Accountants, actuaries and bankers, for example, could spend their entire working career dealing with the effects of interest on financial matters.

In this chapter you will be introduced to the concept of financial mathematics - and given the tools to cope with even advanced concepts and problems.

### Tip: Interest

The concepts in this chapter are simple - we are just looking at the same idea, but from many different angles. The best way to learn from this chapter is to do the examples yourself, as you work your way through. Do not just take our word for it!

## Simple Interest

Definition 1: Simple Interest
Simple interest is where you earn interest on the initial amount that you invested, but not interest on interest.

As an easy example of simple interest, consider how much you will get by investing R1 000 for 1 year with a bank that pays you 5% simple interest. At the end of the year, you will get an interest of:

Interest = R1 000 × 5% = R1 000 × 5/100 = R1 000 × 0,05 = R50 (1)

So, with an "opening balance" of R1 000 at the start of the year, your "closing balance" at the end of the year will therefore be:

Closing Balance = Opening Balance + Interest = R1 000 + R50 = R1 050 (2)

We sometimes call the opening balance in financial calculations the Principal, which is abbreviated as P (R1 000 in the example). The interest rate is usually labelled i (5% in the example), and the interest amount (in Rand terms) is labelled I (R50 in the example).
So we can see that:

I = P × i (3)

and

Closing Balance = Opening Balance + Interest = P + I = P + (P × i) = P(1 + i) (4)

This is how you calculate simple interest. It is not a complicated formula, which is just as well because you are going to see a lot of it!

### Not Just One

You might be wondering to yourself:

1. how much interest will you be paid if you only leave the money in the account for 3 months, or
2. what if you leave it there for 3 years?

It is actually quite simple - which is why they call it Simple Interest.

1. Three months is 1/4 of a year, so you would only get 1/4 of a full year's interest, which is: 1/4 × (P × i). The closing balance would therefore be:

   Closing Balance = P + 1/4 × (P × i) = P(1 + (1/4)i) (5)

2. For 3 years, you would get three years' worth of interest, being: 3 × (P × i). The closing balance at the end of the three year period would be:

   Closing Balance = P + 3 × (P × i) = P(1 + 3i) (6)

If you look carefully at the similarities between the two answers above, we can generalise the result. If you invest your money (P) in an account which pays a rate of interest (i) for a period of time (n years), then, using the symbol A for the Closing Balance:

A = P(1 + i·n) (7)

As we have seen, this works when n is a fraction of a year and also when n covers several years.

#### Important: Interest Calculation
Annual rates are yearly rates, and p.a. (per annum) = per year.

#### Exercise 1: Simple Interest

If I deposit R1 000 into a special bank account which pays a Simple Interest of 7% for 3 years, how much will I get back at the end of this term?

##### Solution

1. Step 1.
Determine what is given and what is required:
   • opening balance, P = R1 000
   • interest rate, i = 7%
   • period of time, n = 3 years
   We are required to find the closing balance (A).
2. Step 2. Determine how to approach the problem: We know from Equation 7 that:
   A = P(1 + i·n) (8)
3. Step 3. Solve the problem:
   A = P(1 + i·n) = R1 000(1 + 3 × 7%) = R1 210 (9)
4. Step 4. Write the final answer: The closing balance after 3 years of saving R1 000 at an interest rate of 7% is R1 210.

#### Exercise 2: Calculating n

If I deposit R30 000 into a special bank account which pays a Simple Interest of 7,5%, for how many years must I invest this amount to generate R45 000?

##### Solution

1. Step 1. Determine what is given and what is required:
   • opening balance, P = R30 000
   • interest rate, i = 7,5%
   • closing balance, A = R45 000
   We are required to find the number of years.
2. Step 2. Determine how to approach the problem: We know from Equation 7 that:
   A = P(1 + i·n) (10)
3. Step 3. Solve the problem:
   A = P(1 + i·n)
   R45 000 = R30 000(1 + n × 7,5%)
   (1 + 0,075 × n) = 45 000/30 000
   0,075 × n = 1,5 - 1
   n = 0,5/0,075
   n = 6,6666667
   n = 6 years 8 months (11)
4. Step 4. Write the final answer: The period is 6 years and 8 months for R30 000 to generate R45 000 at a simple interest rate of 7,5%. If we were asked for the nearest whole number of years, we would have to invest the money for 7 years.

### Other Applications of the Simple Interest Formula

#### Exercise 3: Hire-Purchase

Troy is keen to buy an additional hard drive for his laptop advertised for R 2 500 on the internet.
There is an option of paying a 10% deposit then making 24 monthly payments using a hire-purchase agreement where interest is calculated at 7,5% p.a. simple interest. Calculate what Troy's monthly payments will be.

##### Solution

1. Step 1. Determine what is given and what is required: A new opening balance is required, as the 10% deposit is paid in cash.
   • 10% of R 2 500 = R250
   • new opening balance, P = R2 500 - R250 = R2 250
   • interest rate, i = 7,5%
   • period of time, n = 2 years
   We are required to find the closing balance (A) and then the monthly payments.
2. Step 2. Determine how to approach the problem: We know from Equation 7 that:
   A = P(1 + i·n) (12)
3. Step 3. Solve the problem:
   A = P(1 + i·n) = R2 250(1 + (2 × 7,5%)) = R2 587,50
   Monthly payment = 2 587,50 ÷ 24 = R107,81 (13)
4. Step 4. Write the final answer: Troy's monthly payments = R 107,81

Many items become less valuable as they are used and age. For example, you pay less for a second hand car than a new car of the same model. The older a car is the less you pay for it. The reduction in value with time can be due purely to wear and tear from usage but also to the development of new technology that makes the item obsolete, for example, new computers that are released force down the value of older models. The term we use to describe the decrease in value of items with time is depreciation. Depreciation, like interest, can be calculated on an annual basis and is often done with a rate or percentage change per year. It is like "negative" interest. The simplest way to do depreciation is to assume a constant rate per year, which we will call simple depreciation. There are more complicated models for depreciation but we won't deal with them here.

#### Exercise 4: Depreciation

Seven years ago, Tjad's drum kit cost him R12 500.
It has now been valued at R2 300. What rate of simple depreciation does this represent?

##### Solution

1. Step 1. Determine what is given and what is required:
   • opening balance, P = R12 500
   • period of time, n = 7 years
   • closing balance, A = R2 300
   We are required to find the interest rate (i).
2. Step 2. Determine how to approach the problem: We know from Equation 7 that:
   A = P(1 + i·n) (14)
   Therefore, for depreciation the formula will change to:
   A = P(1 - i·n) (15)
3. Step 3. Solve the problem:
   A = P(1 - i·n)
   R2 300 = R12 500(1 - 7 × i)
   i = 0,11657... (16)
4. Step 4. Write the final answer: Therefore the rate of depreciation is 11,66%.

#### Simple Interest

1. An amount of R3 500 is invested in a savings account which pays simple interest at a rate of 7,5% per annum. Calculate the balance accumulated by the end of 2 years.
2. Calculate the simple interest for the following problems.
   1. A loan of R300 at a rate of 8% for 1 year.
   2. An investment of R225 at a rate of 12,5% for 6 years.
3. I made a deposit of R5 000 in the bank for my 5 year old son's 21st birthday. I have given him the amount of R 18 000 on his birthday. At what rate was the money invested, if simple interest was calculated?
4. Bongani buys a dining room table costing R 8 500 on Hire Purchase. He is charged simple interest at 17,5% per annum over 3 years.
   1. How much will Bongani pay in total?
   2. How much interest does he pay?
   3. What is his monthly installment?

## Compound Interest

To explain the concept of compound interest, the following example is discussed:

### Exercise 5: Using Simple Interest to lead to the concept Compound Interest

I deposit R1 000 into a special bank account which pays a Simple Interest of 7%.
What if I empty the bank account after a year, and then take the principal and the interest and invest it back into the same account again. Then I take it all out at the end of the second year, and then put it all back in again? And then I take it all out at the end of 3 years?

#### Solution

1. Step 1. Determine what is given and what is required:
   • opening balance, P = R1 000
   • interest rate, i = 7%
   • period of time, 1 year at a time, for 3 years
   We are required to find the closing balance at the end of three years.
2. Step 2. Determine how to approach the problem: We know that:
   A = P(1 + i·n) (17)
3. Step 3. Determine the closing balance at the end of the first year:
   A = P(1 + i·n) = R1 000(1 + 1 × 7%) = R1 070 (18)
4. Step 4. Determine the closing balance at the end of the second year: After the first year, we withdraw all the money and re-deposit it. The opening balance for the second year is therefore R1 070, because this is the balance after the first year.
   A = P(1 + i·n) = R1 070(1 + 1 × 7%) = R1 144,90 (19)
5. Step 5. Determine the closing balance at the end of the third year: After the second year, we withdraw all the money and re-deposit it. The opening balance for the third year is therefore R1 144,90, because this is the balance after the second year.
   A = P(1 + i·n) = R1 144,90(1 + 1 × 7%) = R1 225,04 (20)
6. Step 6. Write the final answer: The closing balance after withdrawing all the money and re-depositing each year for 3 years of saving R1 000 at an interest rate of 7% is R1 225,04.

In the two worked examples using simple interest (Exercise 1 and Exercise 5), we have basically the same problem because P = R1 000, i = 7% and n = 3 years for both problems.
Except in the second situation, we end up with R1 225,04, which is more than R1 210 from the first example. What has changed?

In the first example I earned R70 interest each year - the same in the first, second and third year. But in the second situation, when I took the money out and then re-invested it, I was actually earning interest in the second year on my interest (R70) from the first year. (And interest on the interest on my interest in the third year!)

This more realistically reflects what happens in the real world, and is known as Compound Interest. It is this concept which underlies just about everything we do - so we will look at it more closely next.

Definition 2: Compound Interest
Compound interest is the interest payable on the principal and its accumulated interest.

Compound interest is a double-edged sword, though - great if you are earning interest on cash you have invested, but more serious if you are stuck having to pay interest on money you have borrowed!

In the same way that we developed a formula for Simple Interest, let us find one for Compound Interest. If our opening balance is P and we have an interest rate of i, then the closing balance at the end of the first year is:

Closing Balance after 1 year = P(1 + i) (21)

This is the same as Simple Interest because it only covers a single year.
Then, if we take that out and re-invest it for another year - just as you saw us doing in the worked example above - then the balance after the second year will be:

Closing Balance after 2 years = [P(1 + i)] × (1 + i) = P(1 + i)^2 (22)

And if we take that money out, then invest it for another year, the balance becomes:

Closing Balance after 3 years = [P(1 + i)^2] × (1 + i) = P(1 + i)^3 (23)

We can see that the power of the term (1 + i) is the same as the number of years. Therefore,

Closing Balance after n years = P(1 + i)^n (24)

### Fractions add up to the Whole

It is easy to show that this formula works even when n is a fraction of a year. For example, let us invest the money for 1 month, then for 4 months, then for 7 months.

Closing Balance after 1 month = P(1 + i)^(1/12)
Closing Balance after 5 months = Closing Balance after 1 month invested for 4 months more
   = [P(1 + i)^(1/12)](1 + i)^(4/12)
   = P(1 + i)^(1/12 + 4/12)
   = P(1 + i)^(5/12)
Closing Balance after 12 months = Closing Balance after 5 months invested for 7 months more
   = [P(1 + i)^(5/12)](1 + i)^(7/12)
   = P(1 + i)^(5/12 + 7/12)
   = P(1 + i)^(12/12)
   = P(1 + i)^1 (25)

which is the same as investing the money for a year.

Look carefully at the long equation above. It is not as complicated as it looks!
All we are doing is taking the opening amount (P), then adding interest for just 1 month. Then we are taking that new balance and adding interest for a further 4 months, and then finally we are taking the new balance after a total of 5 months, and adding interest for 7 more months. Take a look again, and check how easy it really is.

Does the final formula look familiar? Correct - it is the same result as you would get for simply investing P for one full year. This is exactly what we would expect, because: 1 month + 4 months + 7 months = 12 months, which is a year. Can you see that? Do not move on until you have understood this point.

### The Power of Compound Interest

To see how important this "interest on interest" is, we shall compare the difference in closing balances for money earning simple interest and money earning compound interest. Consider an amount of R10 000 that you have to invest for 10 years, and assume we can earn interest of 9%. How much would that be worth after 10 years?

The closing balance for the money earning simple interest is:

A = P(1 + i·n) = R10 000(1 + 9% × 10) = R19 000 (26)

The closing balance for the money earning compound interest is:

A = P(1 + i)^n = R10 000(1 + 9%)^10 = R23 673,64 (27)

So next time someone talks about the "magic of compound interest", not only will you know what they mean - but you will be able to prove it mathematically yourself!

Again, keep in mind that this is good news and bad news. When you are earning interest on money you have invested, compound interest helps that amount to increase exponentially. But if you have borrowed money, the build up of the amount you owe will grow exponentially too.

#### Exercise 6: Taking out a Loan

Mr Lowe wants to take out a loan of R 350 000. He does not want to pay back more than R625 000 altogether on the loan.
If the interest rate he is offered is 13%, over what period should he take the loan?

##### Solution

1. Step 1. Determine what has been provided and what is required:
   • opening balance, P = R350 000
   • closing balance, A = R625 000
   • interest rate, i = 13% per year
   We are required to find the time period (n).
2. Step 2. Determine how to approach the problem: We know from Equation 24 that:
   A = P(1 + i)^n (28)
   We need to find n. Therefore we convert the formula to:
   A/P = (1 + i)^n (29)
   and then find n by trial and error.
3. Step 3. Solve the problem:
   A/P = (1 + i)^n
   625 000/350 000 = (1 + 0,13)^n
   1,785... = (1,13)^n
   Try n = 3: (1,13)^3 = 1,44...
   Try n = 4: (1,13)^4 = 1,63...
   Try n = 5: (1,13)^5 = 1,84... (30)
4. Step 4. Write the final answer: Mr Lowe should take the loan over four years. (If he took the loan over five years, he would end up paying more than he wants to.)

### Other Applications of Compound Growth

The following two examples show how we can take the formula for compound interest and apply it to real life problems involving compound growth or compound decrease.

#### Exercise 7: Population Growth

South Africa's population is increasing by 2,5% per year. If the current population is 43 million, how many more people will there be in South Africa in two years' time?

##### Solution

1. Step 1. Determine what has been provided and what is required:
   • initial value (opening balance), P = 43 000 000
   • period of time, n = 2 years
   • rate of increase, i = 2,5% per year
   We are required to find the final value (closing balance A).
2. Step 2.
Determine how to approach the problem: We know from Equation 24 that:
   A = P(1 + i)^n (31)
3. Step 3. Solve the problem:
   A = P(1 + i)^n = 43 000 000(1 + 0,025)^2 = 45 176 875 (32)
4. Step 4. Write the final answer: There will be 45 176 875 - 43 000 000 = 2 176 875 more people in 2 years' time.

#### Exercise 8: Compound Decrease

A swimming pool is being treated for a build-up of algae. Initially, 50 m² of the pool is covered by algae. With each day of treatment, the algae reduces by 5%. What area is covered by algae after 30 days of treatment?

##### Solution

1. Step 1. Determine what has been provided and what is required:
   • starting amount (opening balance), P = 50 m²
   • period of time, n = 30 days
   • rate of decrease, i = 5% per day
   We are required to find the final area covered by algae (closing balance A).
2. Step 2.
The next section on exchange rates is included for completeness. However, you should know about fluctuating exchange rates and the impact that this has on imports and exports. Fluctuating exchange rates lead to things like increases in the cost of petrol. You can read more about this in Fluctuating exchange rates.

## Foreign Exchange Rates - (Not in CAPS, included for completeness)

Is $500 ("500 US dollars") per person per night a good deal on a hotel in New York City? The first question you will ask is "How much is that worth in Rands?". A quick call to the local bank or a search on the Internet (for example on http://www.x-rates.com) for the Dollar/Rand exchange rate will give you a basis for assessing the price.

A foreign exchange rate is nothing more than the price of one currency in terms of another. For example, the exchange rate of 6,18 Rands/US Dollar means that $1 costs R6,18. In other words, if you have $1 you could sell it for R6,18 - or if you wanted $1 you would have to pay R6,18 for it. But what drives exchange rates, and what causes exchange rates to change? And how does this affect you anyway? This section looks at answering these questions.

### How much is R1 really worth?

We can quote the price of a currency in terms of any other currency, for example, we can quote the Japanese Yen in terms of the Indian Rupee. The US Dollar (USD), British Pound Sterling (GBP) and the Euro (EUR) are, however, the most commonly used market standards. You will notice that the financial news will report the South African Rand exchange rate in terms of these three major currencies.

| Currency | Abbreviation | Symbol |
| --- | --- | --- |
| South African Rand | ZAR | R |
| United States Dollar | USD | $ |
| British Pound Sterling | GBP | £ |

So the South African Rand, noted ZAR, could be quoted on a certain date as 6,0740 ZAR per USD (i.e. $1,00 costs R6,0740), or 12,2374 ZAR per GBP.
So if I wanted to spend $1 000 on a holiday in the United States of America, this would cost me R6 074,00; and if I wanted £1 000 for a weekend in London it would cost me R12 237,40.

This seems obvious, but let us see how we calculated those numbers: The rate is given as ZAR per USD, or ZAR/USD, such that $1,00 buys R6,0740. Therefore, we need to multiply by 1 000 to get the number of Rands per $1 000. Mathematically,

$1,00 = R6,0740
1 000 × $1,00 = 1 000 × R6,0740 = R6 074,00 (36)

as expected.

What if you have saved R10 000 for spending money for the same trip and you wanted to use this to buy USD? How many USD could you get for this? Our rate is in ZAR/USD but we want to know how many USD we can get for our ZAR. This is easy. We know how much $1,00 costs in terms of Rands.

$1,00 = R6,0740
$1,00 / 6,0740 = R6,0740 / 6,0740
$1,00 / 6,0740 = R1,00
R1,00 = $1,00 / 6,0740 = $0,164636 (37)

As we can see, the final answer is simply the reciprocal of the ZAR/USD rate. Therefore, R10 000 will get:

R1,00 = $1,00 / 6,0740
10 000 × R1,00 = 10 000 × $1,00 / 6,0740 = $1 646,36 (38)

We can check the answer as follows:

$1,00 = R6,0740
1 646,36 × $1,00 = 1 646,36 × R6,0740 = R10 000,00 (39)

#### Six of one and half a dozen of the other

So we have two different ways of expressing the same exchange rate: Rands per Dollar (ZAR/USD) and Dollars per Rand (USD/ZAR). Both exchange rates mean the same thing and express the value of one currency in terms of another. You can easily work out one from the other - they are just the reciprocals of each other.
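The conversions above are just multiplication by the direct rate, or division by it (equivalently, multiplication by its reciprocal). A minimal Python sketch of the same arithmetic (function names are my own, not from the text):

```python
ZAR_PER_USD = 6.0740  # direct rate: rands per one US dollar

def rand_cost_of_dollars(usd):
    """Rands needed to buy `usd` US dollars (multiply by the direct rate)."""
    return usd * ZAR_PER_USD

def dollars_for_rands(zar):
    """US dollars bought with `zar` rands (divide by the direct rate)."""
    return zar / ZAR_PER_USD

print(rand_cost_of_dollars(1_000))          # -> 6074.0
print(round(dollars_for_rands(10_000), 2))  # -> 1646.36
```

Dividing by 6,0740 is the same as multiplying by the indirect rate 1/6,0740 ≈ 0,164636 USD per ZAR.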
If the South African Rand is our domestic (or home) currency, we call the ZAR/USD rate a "direct" rate, and we call a USD/ZAR rate an "indirect" rate.

In general, a direct rate is an exchange rate that is expressed as units of home currency per unit of foreign currency, i.e. Domestic Currency / Foreign Currency.

The Rand exchange rates that we see on the news are usually expressed as direct rates, for example you might see:

| Currency Abbreviation | Exchange Rate |
| --- | --- |
| 1 USD | R6,9556 |
| 1 GBP | R13,6628 |
| 1 EUR | R9,1954 |

The exchange rate is just the price of each of the Foreign Currencies (USD, GBP and EUR) in terms of our domestic currency, Rands.

An indirect rate is an exchange rate expressed as units of foreign currency per unit of home currency, i.e. Foreign Currency / Domestic Currency.

Defining exchange rates as direct or indirect depends on which currency is defined as the domestic currency. The domestic currency for an American investor would be USD, which is the South African investor's foreign currency. So the direct rate, from the perspective of the American investor (USD/ZAR), would be the same as the indirect rate from the perspective of the South African investor.

#### Terminology

Since exchange rates are simply prices of currencies, movements in exchange rates mean that the price or value of the currency has changed. The price of petrol changes all the time, so does the price of gold, and currency prices also move up and down all the time.

If the Rand exchange rate moved from say R6,71 per USD to R6,50 per USD, what does this mean? Well, it means that $1 would now cost only R6,50 instead of R6,71. The Dollar is now cheaper to buy, and we say that the Dollar has depreciated (or weakened) against the Rand. Alternatively we could say that the Rand has appreciated (or strengthened) against the Dollar.
What if we were looking at indirect exchange rates, and the exchange rate moved from $0,149 per ZAR (= 1/6,71) to $0,1538 per ZAR (= 1/6,50)? Well now we can see that R1,00 cost $0,149 at the start, and then cost $0,1538 at the end. The Rand has become more expensive (in terms of Dollars), and again we can say that the Rand has appreciated.

Regardless of which exchange rate is used, we still come to the same conclusions. In general,

- for direct exchange rates, the home currency will appreciate (depreciate) if the exchange rate falls (rises)
- for indirect exchange rates, the home currency will appreciate (depreciate) if the exchange rate rises (falls)

As with just about everything in this chapter, do not get caught up in memorising these formulae - doing so is only going to get confusing. Think about what you have and what you want - and it should be quite clear how to get the correct answer.

##### Discussion : Foreign Exchange Rates

In groups of 5, discuss:

1. Why might we need to know exchange rates?
2. What happens if one country's currency falls drastically vs another country's currency?
3. When might you use exchange rates?

### Cross Currency Exchange Rates - (not in CAPS, included for completeness)

We know that exchange rates are the value of one currency expressed in terms of another currency, and we can quote exchange rates against any other currency. The Rand exchange rates we see on the news are usually expressed against the major currencies, USD, GBP and EUR.

So if, for example, the Rand exchange rates were given as 6,71 ZAR/USD and 12,71 ZAR/GBP, does this tell us anything about the exchange rate between USD and GBP?
Well I know that if $1 will buy me R6,71, and if £1,00 will buy me R12,71, then surely the GBP is stronger than the USD because you will get more Rands for one unit of the currency, and we can work out the USD/GBP exchange rate as follows:

Before we plug in any numbers, how can we get a USD/GBP exchange rate from the ZAR/USD and ZAR/GBP exchange rates? Well,

USD/GBP = USD/ZAR × ZAR/GBP (40)

Note that the ZAR in the numerator will cancel out with the ZAR in the denominator, and we are left with the USD/GBP exchange rate.

Although we do not have the USD/ZAR exchange rate, we know that this is just the reciprocal of the ZAR/USD exchange rate.

USD/ZAR = 1 / (ZAR/USD) (41)

Now plugging in the numbers, we get:

USD/GBP = USD/ZAR × ZAR/GBP = (1 / 6,71) × 12,71 = 1,894 (42)

#### Tip:

Sometimes you will see exchange rates in the real world that do not appear to work exactly like this. This is usually because some financial institutions add other costs to the exchange rates, which alter the results. However, if you could remove the effect of those extra costs, the numbers would balance again.

#### Investigation : Cross Exchange Rates - Alternative Method

If $1 = R6,40 and £1 = R11,58, what is the $/£ exchange rate (i.e. the number of US$ per £)?

Overview of problem: You need the $/£ exchange rate, in other words how many dollars you must pay for a pound. So you need £1. From the given information we know that it would cost you R11,58 to buy £1 and that $1 = R6,40.

Use this information to:

1. calculate how much R1 is worth in $.
2. calculate how much R11,58 is worth in $.

Do you get the same answer as in the worked example?
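The cross-rate calculation is a one-liner once you notice the cancellation. A short Python sketch using the values from the worked example and the investigation (variable names are mine):

```python
# Cross rate from two ZAR quotes (values from the worked example above)
zar_per_usd = 6.71   # R6,71 buys $1
zar_per_gbp = 12.71  # R12,71 buys £1

# USD/GBP = USD/ZAR x ZAR/GBP, and USD/ZAR = 1 / (ZAR/USD)
usd_per_gbp = (1 / zar_per_usd) * zar_per_gbp
print(round(usd_per_gbp, 3))  # -> 1.894

# Investigation values: $1 = R6,40 and £1 = R11,58
print(round((1 / 6.40) * 11.58, 4))
```

The second print is the investigation's alternative method: R1 is worth $1/6,40, so R11,58 is worth 11,58 times that.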
### Fluctuating exchange rates

If everyone wants to buy houses in a certain suburb, then house prices are going to go up - because the buyers will be competing to buy those houses. If there is a suburb where all residents want to move out, then there are lots of sellers and this will cause house prices in the area to fall - because the buyers would not have to struggle as much to find an eager seller. This is all about supply and demand, which is a very important section in the study of Economics.

You can think about this in many different contexts, like stamp-collecting for example. If there is a stamp that lots of people want (high demand) and few people own (low supply) then that stamp is going to be expensive. And if you are starting to wonder why this is relevant - think about currencies. If you are going to visit London, then you have Rands but you need to "buy" Pounds. The exchange rate is the price you have to pay to buy those Pounds.

Think about a time where lots of South Africans are visiting the United Kingdom, and other South Africans are importing goods from the United Kingdom. That means there are lots of Rands (high supply) trying to buy Pounds. Pounds will start to become more expensive (compare this to the house price example at the start of this section if you are not convinced), and the exchange rate will change. In other words, for R1 000 you will get fewer Pounds than you would have before the exchange rate moved.

Another context which might be useful for you to understand this: consider what would happen if people in other countries felt that South Africa was becoming a great place to live, and that more people were wanting to invest in South Africa - whether in properties, businesses - or just buying more goods from South Africa. There would be a greater demand for Rands - and the "price of the Rand" would go up. In other words, people would need to use more Dollars, or Pounds, or Euros ... to buy the same amount of Rands.
This is seen as a movement in exchange rates. Although it really does come down to supply and demand, it is interesting to think about what factors might affect the supply (people wanting to "sell" a particular currency) and the demand (people trying to "buy" another currency). This is covered in detail in the study of Economics, but let us look at some of the basic issues here.

There are various factors which affect exchange rates, some of which have more economic rationale than others:

- economic factors (such as inflation figures, interest rates, trade deficit information, monetary policy and fiscal policy)
- political factors (such as an uncertain political environment, or political unrest)
- market sentiments and market behaviour (for example, if foreign exchange markets perceived a currency to be overvalued and started selling the currency, this would cause the currency to fall in value - a self-fulfilling expectation)

The exchange rate also influences the price we pay for certain goods. All countries import certain goods and export other goods. For example, South Africa has a lot of minerals (gold, platinum, etc.) that the rest of the world wants. So South Africa exports these minerals to the world for a certain price. The exchange rate at the time of export influences how much we can get for the minerals. In the same way, any goods that are imported are also influenced by the exchange rate. The price of petrol is a good example of something that is affected by the exchange rate.

## Foreign Exchange

1. I want to buy an iPod that costs £100, with the exchange rate currently at £1 = R14. I believe the exchange rate will reach R12 in a month.
   1. How much will the iPod cost in Rands, if I buy it now?
   2. How much will I save if the exchange rate drops to R12?
   3. How much will I lose if the exchange rate moves to R15?
2.
Study the following exchange rate table:

   | Country | Currency | Exchange Rate |
   | --- | --- | --- |
   | United Kingdom (UK) | Pounds (£) | R14,13 |
   | United States (USA) | Dollars ($) | R7,04 |

   1. In South Africa the cost of a new Honda Civic is R173 400. In England the same vehicle costs £12 200 and in the USA $21 900. In which country is the car the cheapest when you compare the prices converted to South African Rand?
   2. Sollie and Arinda are waiters in a South African restaurant attracting many tourists from abroad. Sollie gets a £6 tip from a tourist and Arinda gets $12. How many South African Rand did each one get?

## Summary

- There are two types of interest: simple and compound.
- The following table summarises the key definitions that are used in both simple and compound interest.

| Symbol | Definition |
| --- | --- |
| P | Principal (the amount of money at the starting point of the calculation) |
| A | Closing balance (the amount of money at the ending point of the calculation) |
| i | Interest rate, normally the effective rate per annum |
| n | Period for which the investment is made |

- For simple interest we use: A = P(1 + i·n) (45)
- For compound interest we use: A = P(1 + i)^n (46)
- The formulae for simple and compound interest can be applied to many everyday problems.
- A foreign exchange rate is the price of one currency in terms of another.

### Tip:

Always keep the interest and the time period in the same units of time (e.g. both in years, or both in months etc.).

The following three videos provide a summary of how to calculate interest. Take note that although the examples are done using dollars, we can use the fact that dollars are a decimal currency and so are interchangeable (ignoring the exchange rate) with rands. This is what is done in the subtitles.

Figure 1: Khan Academy video on interest - 1

Figure 2: Khan Academy video on interest - 2

Note in this video that at the very end the rule of 72 is mentioned.
You will not be using this rule, but will rather be using trial and error to solve the problem posed.

Figure 3: Khan Academy video on interest - 3

## End of Chapter Exercises

1. You are going on holiday to Europe. Your hotel will cost 200 euros per night. How much will you need in Rands to cover your hotel bill, if the exchange rate is 1 euro = R9,20?
2. Calculate how much you will earn if you invested R500 for 1 year at the following interest rates:
   1. 6,85% simple interest.
   2. 4,00% compound interest.
3. Bianca has R1 450 to invest for 3 years. Bank A offers a savings account which pays simple interest at a rate of 11% per annum, whereas Bank B offers a savings account paying compound interest at a rate of 10,5% per annum. Which account would leave Bianca with the highest accumulated balance at the end of the 3 year period?
4. How much simple interest is payable on a loan of R2 000 for a year, if the interest rate is 10%?
5. How much compound interest is payable on a loan of R2 000 for a year, if the interest rate is 10%?
6. Discuss:
   1. Which type of interest would you like to use if you are the borrower?
   2. Which type of interest would you like to use if you were the banker?
7. Calculate the compound interest for the following problems.
   1. A R2 000 loan for 2 years at 5%.
   2. A R1 500 investment for 3 years at 6%.
   3. An R800 loan for 1 year at 16%.
8. If the exchange rate for 100 Yen = R6,2287 and 1 Australian Dollar (AUD) = R5,1094, determine the exchange rate between the Australian Dollar and the Japanese Yen.
9. Bonnie bought a stove for R3 750. After 3 years she had finished paying for it and the R956,25 interest that was charged for hire-purchase. Determine the rate of simple interest that was charged.
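The two summary formulas, A = P(1 + i·n) for simple interest and A = P(1 + i)^n for compound interest, can be compared side by side in code. A minimal Python sketch with illustrative numbers of my own (not taken from the exercises):

```python
def simple_interest_balance(P, i, n):
    """Closing balance under simple interest: A = P (1 + i n)."""
    return P * (1 + i * n)

def compound_interest_balance(P, i, n):
    """Closing balance under compound interest: A = P (1 + i)^n."""
    return P * (1 + i) ** n

# R1 000 invested at 10% per annum over 5 years
print(simple_interest_balance(1000, 0.10, 5))              # -> 1500.0
print(round(compound_interest_balance(1000, 0.10, 5), 2))  # -> 1610.51
```

Over a single year the two formulas agree; the gap opens up as n grows, because compound interest earns interest on interest.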
http://www.physicsforums.com/showpost.php?p=531699&postcount=1
Thread: Root sequence question

For the recursive sequence

$$R_n = x + \sqrt{x - \sqrt{R_{n-2}}}, \qquad R_0 = x = k^2 - k + 1, \qquad \forall k \in \mathbb{N},\; k > 1,$$

why does

$$\lim_{n \to \infty} R_n = k^2 \; ?$$
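(Not part of the original post.) One way to see the claim is that R = k² is a fixed point of the recursion: √(k²) = k, and x - k = k² - 2k + 1 = (k-1)², so x + √((k-1)²) = x + k - 1 = k². A quick numerical sketch in Python shows the iterates converging to that fixed point:

```python
from math import sqrt, isclose

def iterate_R(k, steps=60):
    """Iterate R_n = x + sqrt(x - sqrt(R_{n-2})) starting from R_0 = x,
    with x = k^2 - k + 1."""
    x = k * k - k + 1
    R = x
    for _ in range(steps):
        R = x + sqrt(x - sqrt(R))
    return R

for k in range(2, 6):
    print(k, iterate_R(k))  # approaches k**2
```

The iteration is a contraction near the fixed point (the derivative there has magnitude 1/(4k(k-1)) < 1), which is why the convergence is fast.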
https://thismatter.com/money/tax/tax-structure.amp.htm
# Tax Structure: Tax Base, Tax Rate, Proportional, Regressive, and Progressive Taxation

The tax structure of an economy depends on its tax base, tax rate, and how the tax rate varies. The tax base is the amount to which a tax rate is applied. The tax rate is the percentage of the tax base that must be paid in taxes. To calculate most taxes, it is necessary to know the tax base and the tax rate. So if the tax base equals \$100 and the tax rate is 9%, then the tax will be \$9 (= 100 × 0.09).

Proportional taxes (aka flat-rate taxes) apply the same tax rate to any income level, or for any size tax base. So if Bill earns \$50,000 and Jane earns \$100,000, and the tax rate is 10%, then Bill will owe \$5,000 in taxes while Jane will owe \$10,000. Many state income taxes and almost all sales taxes are proportional taxes. Social Security and Medicare taxes are also proportional, since the same tax rate is applied to any earned income up to the Social Security wage base limit, which, for 2019, is \$132,900. The Medicare tax is a proportional tax that applies to all earned income and is equal to 2.9%. Flat taxes are a fixed amount and do not depend on income or transaction values, such as a \$10 per capita tax.

A regressive tax takes a larger percentage of income at lower incomes. The most prominent regressive tax is the Social Security tax, because the tax drops to 0 when earned income exceeds the Social Security wage base limit, which, for 2019, is \$132,900. Regressive taxes especially hurt the poor. The inequitable effects of regressive or proportional taxes are often mitigated by payments to the poor and by exempting essential products and services, such as food, from regressive and proportional taxes.

A progressive tax applies a higher tax rate to higher incomes. So if the tax rate on \$50,000 is 10% and 20% for \$100,000, then, continuing the above example, Bill still owes \$5,000 in taxes while Jane will have to pay \$20,000 in taxes.
However, almost all progressive taxes are structured as a marginal tax, which means that the progressive tax rate is only applied to that part of the income which is greater than a certain amount. The portion of the tax base that is subject to a particular tax rate, known as a tax bracket, always has lower and upper limits, except for the top tax bracket, which has no upper limit.

The following tax brackets apply for 2013 to 2017: 10%, 15%, 25%, 28%, 33%, 35%, 39.6%. The 39.6% bracket was added in 2013. To see the current rates published by the IRS, scroll down to the bottom of the current tax table from the instructions for Form 1040.

The new Republican tax policy, passed at the end of 2017, known as the Tax Cuts and Jobs Act, has changed the tax brackets for 2018 and afterwards. Congruent to the Republicans' tax objective to benefit the wealthy, most of the benefits in the change to tax brackets go to those who earn more than \$200,000. The marriage penalty has also been eliminated for all tax brackets, except the top 2.

Upper Limits for Taxable Income Brackets

| Tax Brackets | 10% | 12% | 22% | 24% | 32% | 35% | 37% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **2019** Single | \$9,700 | \$39,475 | \$84,200 | \$160,725 | \$204,100 | \$510,300 | Excess amount over 35% bracket |
| HOH | \$13,850 | \$52,850 | \$84,200 | \$160,725 | \$204,100 | \$510,300 | Excess amount over 35% bracket |
| MFJ, QSS | \$19,400 | \$78,950 | \$168,400 | \$321,450 | \$408,200 | \$612,350 | Excess amount over 35% bracket |
| MP | 2 | 2 | 2 | 2 | 2 | 1.2 | 1 |
| **2018** Single | \$9,525 | \$38,700 | \$82,500 | \$157,500 | \$200,000 | \$500,000 | Excess amount over 35% bracket |
| HOH | \$13,600 | \$51,800 | \$82,500 | \$157,500 | \$200,000 | \$500,000 | Excess amount over 35% bracket |
| MFJ, QSS | \$19,050 | \$77,400 | \$165,000 | \$315,000 | \$400,000 | \$600,000 | Excess amount over 35% bracket |
| MP | 2 | 2 | 2 | 2 | 2 | 1.2 | 1 |

- Source: IRS.gov
- Note: Married Filing Separately = 1/2 of Joint Rate

Continuing the above example, if the 20% tax rate is only applied to that portion of the income between \$50,000 and \$100,000, then Jane would owe \$5,000 on the first \$50,000 of income and \$10,000 on the 2nd \$50,000 of income, resulting in a total tax liability of \$15,000.
Without marginal tax rates, a progressive tax would skew economic decisions and would be viewed as unfair. For instance, if the 20% tax rate was applied to all earned income and Jane only earned \$60,000, then she would have to pay \$12,000 in taxes, which is 2.4 times more than Bill's taxes, even though she only made 1.2 times more than Bill. To take a more extreme example, consider what happens if Jane makes \$50,001. She would have to pay slightly more than \$10,000, which is \$5,000 more than what Bill would have to pay, even though he earned only \$1 less. Hence, without marginal tax rates, a pay increase could actually result in a decrease in disposable income.

A person's tax bracket is the highest tax bracket applicable to her income level. A progressive, marginal tax rate also makes economic sense, since money, like everything else, has a declining marginal utility. In other words, \$1 is worth a lot more to someone who earns \$10,000 per year than to someone who makes \$10 million per year. Poor people need the money to buy essentials, whereas rich people spend their money on luxuries, so the wealthy can pay higher taxes without seriously lowering their standard of living.

Because of marginal tax rates, the tax rate that one actually pays is not knowable from the tax bracket alone, so another rate, called the effective tax rate (aka average tax rate), is calculated by dividing the actual taxes paid by the tax base. In other words, the total tax calculated by multiplying earned income times the effective tax rate will equal the same tax that is calculated by multiplying the amount of income in each tax bracket by the respective marginal tax rate and summing them all up. So in the example above, since Jane earned \$100,000 and paid \$15,000 in taxes, her effective tax rate is 15% (= \$15,000 ÷ \$100,000). The federal income tax and many state taxes are progressive.
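The marginal-bracket arithmetic described above can be sketched in a few lines. This is a simplified illustration using the hypothetical two-bracket schedule from the article's example (10% up to \$50,000, 20% above), not the real IRS schedule, and the function name is my own:

```python
def marginal_tax(income, brackets):
    """Tax under marginal rates.

    brackets: list of (upper_limit, rate) in ascending order;
    the top bracket has upper_limit=None (no cap).
    Each rate applies only to income falling inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        top = income if upper is None else min(income, upper)
        if top > lower:
            tax += (top - lower) * rate
        if upper is None or income <= upper:
            break
        lower = upper
    return tax

two_bracket = [(50_000, 0.10), (None, 0.20)]
print(marginal_tax(50_000, two_bracket))             # Bill:  5000.0
print(marginal_tax(100_000, two_bracket))            # Jane: 15000.0
print(marginal_tax(100_000, two_bracket) / 100_000)  # Jane's effective rate: 0.15
```

The last line reproduces the effective-tax-rate calculation: actual tax paid divided by the tax base.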
Although the federal income tax itself is progressive, the effective tax rate that is based on all the taxes collected by the federal government is progressive only until the Social Security limit is reached. Thereafter, the effective tax rate either declines or levels off with increasing income, since people who make more than the Social Security limit do not have to pay the 12.4% Social Security rate on any income earned above the limit, as can be seen from the following table for a single person who is not a head of the household.

(Note: For a self-employed person, the tax code allows the deduction of the employer's half of the payroll tax, which results in a net self-employment tax of 14.13%. The tax code also allows the deduction of the employer's portion of the tax, the value of which depends on the taxpayer's marginal tax bracket, but since this does not change the effective tax rate very much, it is ignored in the table below. The following table assumes that a single person with no dependents pays the entire payroll tax, which is true for the self-employed, but also applies to employees. Even though employees technically only pay half of the payroll tax, most economists agree that most employees pay the other half through lower wages or through higher unemployment.
For more info, see Tax Incidence: How The Tax Burden Is Shared Between Buyers And Sellers.)

2011 Effective Tax Rate on Earned Income

| Earned Income | Income Taxes | Payroll Taxes | Total Taxes Paid | Effective Tax Rate |
| --- | --- | --- | --- | --- |
| \$10,000.00 | \$0.00 | \$1,412.96 | \$1,412.96 | 14.13% |
| \$20,000.00 | \$840.00 | \$2,825.91 | \$3,665.91 | 18.33% |
| \$30,000.00 | \$2,335.00 | \$4,238.87 | \$6,573.87 | 21.91% |
| \$40,000.00 | \$3,835.00 | \$5,651.82 | \$9,486.82 | 23.72% |
| \$50,000.00 | \$5,725.00 | \$7,064.78 | \$12,789.78 | 25.58% |
| \$100,000.00 | \$18,369.00 | \$14,129.55 | \$32,498.55 | 32.50% |
| \$150,000.00 | \$32,369.00 | \$17,260.43 | \$49,629.43 | 33.09% |
| \$200,000.00 | \$47,069.00 | \$18,599.50 | \$65,668.50 | 32.83% |
| \$250,000.00 | \$63,569.00 | \$19,938.58 | \$83,507.58 | 33.40% |
| \$300,000.00 | \$80,069.00 | \$21,277.65 | \$101,346.65 | 33.78% |
| \$350,000.00 | \$96,569.00 | \$22,616.73 | \$119,185.73 | 34.05% |
| \$400,000.00 | \$113,254.00 | \$23,955.80 | \$137,209.80 | 34.30% |
| \$450,000.00 | \$130,754.00 | \$25,294.88 | \$156,048.88 | 34.68% |
| \$500,000.00 | \$148,254.00 | \$26,633.95 | \$174,887.95 | 34.98% |
| \$550,000.00 | \$165,754.00 | \$27,973.03 | \$193,727.03 | 35.22% |
| \$600,000.00 | \$183,254.00 | \$29,312.10 | \$212,566.10 | 35.43% |
| \$650,000.00 | \$200,754.00 | \$30,651.18 | \$231,405.18 | 35.60% |
| \$700,000.00 | \$218,254.00 | \$31,990.25 | \$250,244.25 | 35.75% |
| \$750,000.00 | \$235,754.00 | \$33,329.33 | \$269,083.33 | 35.88% |
| \$800,000.00 | \$253,254.00 | \$34,668.40 | \$287,922.40 | 35.99% |
| \$850,000.00 | \$270,754.00 | \$36,007.48 | \$306,761.48 | 36.09% |
| \$900,000.00 | \$288,254.00 | \$37,346.55 | \$325,600.55 | 36.18% |
| \$950,000.00 | \$305,754.00 | \$38,685.63 | \$344,439.62 | 36.26% |
| \$1,000,000.00 | \$323,254.00 | \$40,024.70 | \$363,278.70 | 36.33% |

Although the above table is based on 2011 tax rates, it still shows the effective tax rate on earned income, which would be little changed with the new tax brackets that the Republicans introduced at the end of 2017. The 2011 standard deduction of \$5,800 and the personal exemption of \$3,700 for a single person were deducted from the earned income to calculate the income tax in the above table. However, payroll taxes apply to all earned income.
As you can see from the table above, the federal tax on earned income is not nearly as progressive as it might seem by just looking at marginal tax rates. For instance, note that someone who makes \$1 million has an effective tax rate of 36.33%, and someone who earns \$100,000 has an effective tax rate of 32.5%, so the millionaire pays taxes at a rate that is only 3.83% higher. Although these figures are now several years old, this basic tax structure, as of 2017, is still the same - the current numbers are only a little bit higher.

## The Wealthy Really Do Have It Better

The above table is misleading because it shows only the taxes assessed on working income, which is the most highly taxed form of income. It suggests that the wealthy pay a higher effective tax rate on their income than poorer people. However, because of favorable tax treatment for investment income and, especially, for capital gains, and because large amounts of wealth can be transferred through gifts and inheritance (collectively, gratuitous transfers) tax-free, the wealthy actually pay a far lower effective tax rate when the taxes they paid are divided by all their income, including investment income and inherited wealth. For instance, according to IRS statistics, in 2007, the top 400 taxpayers of the United States received an average of \$344.8 million and paid only 17.2% of that income in taxes, including payroll taxes that they may have paid. If you look at the above table again, you will note that someone who makes a mere \$20,000 per year pays an effective tax rate of 18.33% - even after subtracting the standard deduction and personal exemption! Furthermore, hedge fund managers, some of whom make more than \$1 billion per year, are exempted from paying any payroll taxes on their performance fee, which is usually most of their compensation if they are profitable, thanks to their Republican friends in Congress.
However, the largest single factor that has created this inequity in taxation is the fact that earned income is the most highly taxed income, even though, for maximum economic growth, earned income should be the least taxed: the higher price of wages due to these income taxes decreases the demand for labor, while the lower amount received by the suppliers of this labor reduces supply - in other words, it lowers the incentive to work. In economics, this is referred to as the deadweight loss of taxation. Indeed, it is only work that increases the economic wealth of any society. Even investments cannot create true economic wealth unless they are used to put people to work, and transferred wealth actually reduces economic wealth because the people who receive it have a reduced incentive to actually work. Hence, the prudent economic policy of any government should be to tax work the least and gratuitous transfers the most.
https://www.effortlessmath.com/math-puzzles/algebra-puzzle-challenge-31/
# Algebra Puzzle – Challenge 31

Let's look at another great math puzzle to help improve your critical thinking and creative thinking!

## Challenge:

Sophia has a guitar lesson three times a week and Lukas has a Math lesson every other week. In a given term, Sophia has 40 more lessons than Lukas. How many weeks long is their term?

A- 8
B- 12
C- 16
D- 20
E- 24

## Solution:

Sophia has 6 lessons in two weeks and Lukas has one lesson in two weeks. Therefore, the difference is 5 lessons per two weeks, or 2.5 lessons per week. The total difference is 40 lessons. So, 40 ÷ 2.5 = 16. Their term is 16 weeks (answer C).
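The counting argument above can be double-checked with a short script (added here as an illustration; the function name is mine):

```python
# Sophia: 3 guitar lessons per week; Lukas: 1 math lesson every other week.
def lesson_difference(weeks):
    sophia = 3 * weeks
    lukas = weeks // 2          # one lesson per completed two-week span
    return sophia - lukas

# Find the term length at which Sophia has exactly 40 more lessons.
term = next(w for w in range(1, 30) if lesson_difference(w) == 40)
print(term)  # 16
```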
https://schoollearningcommons.info/question/a-write-a-pair-of-negative-integers-whose-different-gives-8-19781040-35/
## (a) Write a pair of negative integers whose difference gives 8

Question: Write a pair of negative integers whose difference gives 8.

Answer (step-by-step explanation): (−1, −9) and (−3, −11) are two such pairs, since −1 − (−9) = 8 and −3 − (−11) = 8.
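Any pair of the form (a, a − 8) with both entries negative works; a quick check of the two pairs given above (illustrative only):

```python
# Each pair should have a difference of exactly 8.
pairs = [(-1, -9), (-3, -11)]
for a, b in pairs:
    assert a - b == 8
print([a - b for a, b in pairs])  # [8, 8]
```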
https://www.wikiod.com/openmp/openmp-reductions/
# OpenMP reductions

## Approximation of PI using #pragma omp reduction clause #

``````h = 1.0 / n;
#pragma omp parallel for private(x) shared(n, h) reduction(+:area)
for (i = 1; i <= n; i++)
{
    x = h * (i - 0.5);
    area += (4.0 / (1.0 + x*x));
}
pi = h * area;
``````

In this example, each thread executes a subset of the iteration count. Each thread has its own local, private copy of `area`, and at the end of the parallel region they all apply the addition operation (`+`) to generate the final value for `area`.

## Approximation of PI using reductions based on #pragma omp critical #

``````h = 1.0 / n;
#pragma omp parallel for private(x) shared(n, h, area)
for (i = 1; i <= n; i++)
{
    x = h * (i - 0.5);
    #pragma omp critical
    {
        area += (4.0 / (1.0 + x*x));
    }
}
pi = h * area;
``````

In this example, each thread executes a subset of the iteration count, and the threads accumulate into the shared variable `area` under mutual exclusion, which ensures that there are no lost updates.

## Approximation of PI using reductions based on #pragma omp atomic #

``````h = 1.0 / n;
#pragma omp parallel for private(x) shared(n, h, area)
for (i = 1; i <= n; i++)
{
    x = h * (i - 0.5);
    #pragma omp atomic
    area += (4.0 / (1.0 + x*x));
}
pi = h * area;
``````

In this example, each thread executes a subset of the iteration count, and the threads accumulate atomically into the shared variable `area`, which ensures that there are no lost updates. We can use `#pragma omp atomic` here because the given operation (`+=`) can be done atomically, which improves readability compared to using `#pragma omp critical`.

## Approximation of PI hand-crafting the #pragma omp reduction #

``````h = 1.0 / n;
#pragma omp parallel private(x) shared(n, h)
{
    double thread_area = 0;   // Private / local variable

    #pragma omp for
    for (i = 1; i <= n; i++)
    {
        x = h * (i - 0.5);
        thread_area += (4.0 / (1.0 + x*x));
    }

    #pragma omp atomic        // Applies the reduction manually
    area += thread_area;
}
pi = h * area;
``````

The threads are spawned in the `#pragma omp parallel`. Each thread has an independent, private `thread_area` that stores its partial sum. The following loop is distributed among threads using `#pragma omp for`. In this loop, each thread calculates its own `thread_area`, and after the loop the code aggregates each partial area into the shared total atomically through `#pragma omp atomic`.
http://lauracandler.com/books/sales/dmptestimonials.php
## Daily Math Puzzler Testimonials

### User-Friendly Resource

I started using the Daily Math Puzzlers in my class this year and my 4th graders love it! It is exactly what they need to develop good problem solving strategies. My kids enjoy using these puzzlers to play partner and group "games". We start math when we get back from lunch, and today I was reminded by the kids to do the puzzlers. They let me know today that they enjoy working on the problems daily. I have been giving them a few minutes to work on the problem, then we discuss the problem and the answer. The students are able to share their different ways of solving the problem. I think I have been spending no more than 7 minutes a day on the Daily Math Puzzler Program. Even though some of the puzzlers are challenging, they actually look forward to the day's puzzler! I'm so glad to have a user-friendly resource to help my students become better problem solvers. Thanks!

Sheryl Easterling
Waynesville, North Carolina

### Daily Math Puzzler Calculator Lessons

I have recently started using Daily Math Puzzlers in my classroom. I started by teaching the calculator lesson, and I was surprised by the results. I assumed that all of my 5th graders knew how to use a calculator correctly, but the lessons and assessments showed me otherwise. The calculator lessons will hopefully prevent some input mistakes on future tests. I also liked the way the program teaches the basic problem solving skills in a quick lesson that I can incorporate into my daily math lesson.

Kimberly Smith
Middle Childhood Generalist NBCT

### Math Money a Big Hit!

I started using the Daily Math Puzzlers with my 4th graders a few weeks ago. My students enjoy the problems and are now begging me for more problems so they can earn Math Money. I reward the students who present clear, well organized solutions with a Math Money certificate.
They are eager to review different solutions for the problems. I let students use Math Money to play simple math games with one another, or on the computers later in the day. I'm happy to have a format for doing regular problem solving in my room each day, since our math program is hard to jump in and out of and doesn't provide much in the way of problem solving.

Debbie Davisreid
Olympia, Washington

### Daily Math Puzzlers Set the Tone for the Day

I love the Math Puzzler Power Pack! The Puzzlers set the tone for our day by helping to focus and organize our minds. I honestly think that my students would be lost without a daily math puzzler. Problem solving is not as "scary" to my sixth graders as it once was. I am grateful to you for prompting me to tell my students to think of word problems as brain teasers or puzzles. I don't know why I hadn't thought of that before! Thanks so much for knowing and providing what students need to be successful.

Betsy Clark
Evant, Texas

### Aligned with State Testing

The Daily Math Puzzler program is definitely aligned with our state test. Washington State's test requires students to demonstrate their mathematical skills using problem solving for grade levels 3-12. The DMP program stretches their thinking skills because it offers a variety of the types of problems being solved. This is a great way to start our day. The kids know what to expect and the problem solving program has a nice routine to follow. We have only started the program, but I feel the confidence of my lower level learners is growing. The program is very easy to follow and implement in my classroom. It is great to start with calculator activities because no matter what we are focusing on, if calculators are involved, students are engaged. I love "Math Talk!" I have been using cooperative groups for a few years now and this is a great way to structure the group work. I particularly like how the problems are leveled by grade.
Jennifer Bruce
Franklin Pierce School District
Washington

Would you like to send in your own testimonial? If you are using the Daily Math Puzzler Program, I'd love to hear from you (lauracandler@att.net). Let me know what you think of the program.
http://openstudy.com/updates/55c8c785e4b0016bb0158938
• anonymous

Complete the following:

(a) Use the Leading Coefficient Test to determine the graph's end behavior.
(b) Find the x-intercepts. State whether the graph crosses the x-axis or touches the x-axis and turns around at each intercept. Show your work.
(c) Find the y-intercept. Show your work.

f(x) = x^2(x + 2)

(a). inf
(b). touches at 0 and goes through -2
(c). 2
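Two of the posted answers deserve a correction: for f(x) = x²(x + 2), the y-intercept is f(0) = 0 (not 2), and the Leading Coefficient Test for this positive-leading-coefficient cubic gives "falls left, rises right" rather than just "inf". The intercept behavior, touching at x = 0 and crossing at x = −2, checks out. A small numeric verification (added here; it is not part of the original thread):

```python
def f(x):
    return x**2 * (x + 2)

# (c) y-intercept is f(0) = 0
assert f(0) == 0

# (b) x-intercepts at x = 0 (even multiplicity: touches) and x = -2 (odd: crosses)
assert f(-2) == 0
assert f(-0.1) > 0 and f(0.1) > 0      # same sign on both sides: touches at 0
assert f(-2.1) < 0 and f(-1.9) > 0     # sign change: crosses at -2

# (a) end behavior of a positive-leading-coefficient cubic
assert f(-100) < 0 and f(100) > 0
print("all checks pass")
```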
https://www.teacherspayteachers.com/Product/Fraction-Task-Cards-Parts-of-a-Set-Thanksgiving-Math-2805274
Fraction Task Cards Parts of a Set | Thanksgiving Math

Format: Zip (10 MB | 13 pages)

Description

This colorful set of 24 task cards with fraction questions and Thanksgiving-themed pictures representing parts of a set is a wonderful addition to your lessons! I've included a recording sheet and answer key, too!

***Note: This set is also available as part of a BUNDLE! Here's the link: Fraction Bundle- Parts of a Set BUNDLE

These activities would work for grades 1-3!

Here are some possible uses for these in your classroom:
✿ early finishers
✿ tutoring
✿ sub tubs
✿ math stations/centers
✿ holiday work
✿ small group
✿ end of unit quick assessments
✿ homework
✿ reinforcement
✿ enrichment

✎ "Awesome activities! Very colorful! Thanks" Brenda G.
✎ "These are great. Thanks!"

Here are some other math resources you might want to check out:

Customer Tips:

How to get TPT credit to use on FUTURE purchases:
❀ Please go to your My Purchases page (you may need to login). Beside each purchase you'll see a Provide Feedback button. Simply click it and you will be taken to a page where you can give a quick rating and leave a short comment for the product.
❀ Each time you give feedback, TPT gives you feedback credits that you use to lower the cost of your future purchases.
❀ I VALUE your feedback greatly as it helps me determine which products are most valuable for your classroom so I can create more for you.

Be the first to know about my new discounts, freebies and product launches. All new resources are 50% off the FIRST 48 hours!

❀️ My TPT store: Believe to Achieve Store by Anne Rozell
❀️ Follow my Pinterest boards: My Pinterest Boards
❀️ Follow me on Instagram: My Instagram
❀️ Follow me on my blog: My Blog

Total Pages: 13 pages
Teaching Duration: N/A

Standards

Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.
https://jeeneetqna.in/647/line-parallel-the-straight-line-tangent-hyperbola-the-point
# A line parallel to the straight line 2x − y = 0 is tangent to the hyperbola x²/4 − y²/2 = 1 at the point (x1, y1).

A line parallel to the straight line $2x-y=0$ is tangent to the hyperbola ${x^2\over4}-{y^2\over2}=1$ at the point $(x_1,\ y_1)$. Then $x_1^2+5y_1^2$ is equal to:

(1) 10 (2) 5 (3) 8 (4) 6

Ans. (4) 6

Sol. The tangent at $(x_1, y_1)$ is $xx_1-2yy_1-4 = 0$.

This is parallel to $2x-y = 0$:

$\implies{x_1\over2y_1}=2\implies x_1=4y_1\quad$.......(1)

The point $(x_1, y_1)$ lies on the hyperbola:

${x_1^2\over4}-{y_1^2\over2}-1=0\quad$.......(2)

On solving eq. (1) and (2) we get $x_1^2+5y_1^2=6$.
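The result can be confirmed with exact rational arithmetic: substituting $x_1 = 4y_1$ into the hyperbola equation gives $4y_1^2 - y_1^2/2 = 1$, so $y_1^2 = 2/7$ and $x_1^2 = 32/7$. A verification sketch added here:

```python
from fractions import Fraction

# From x1 = 4*y1 and x1^2/4 - y1^2/2 = 1:
#   4*y1^2 - y1^2/2 = 1  =>  y1^2 = 2/7,  x1^2 = 16 * y1^2 = 32/7
y1_sq = Fraction(2, 7)
x1_sq = 16 * y1_sq

assert x1_sq / 4 - y1_sq / 2 == 1   # the point lies on the hyperbola
print(x1_sq + 5 * y1_sq)            # 6
```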
http://jsxgraph.org/wiki/index.php?title=Riemann_sum_III&diff=cur&oldid=3590
# Riemann sum III

Approximate the integral of $\displaystyle{ f: R\to R, x\mapsto x^2 }$

### The underlying JavaScript code

<form>Riemann sum type:
<select id="sumtype" onChange="brd.update()">
<option value='left' selected> left
<option value='right'> right
<option value='middle'> middle
<option value='trapezoidal'> trapezoidal
<option value='simpson'> simpson
<option value='lower'> lower
<option value='upper'> upper
</select></form>

var brd = JXG.JSXGraph.initBoard('box', {axis:true, boundingbox:[-2,40,8,-5]});
var s = brd.create('slider',[[-1,30],[2,30],[3,50,500]],{name:'n',snapWidth:1});
var a = brd.create('slider',[[-1,20],[2,20],[-10,0,0]],{name:'start'});
var b = brd.create('slider',[[-1,10],[2,10],[0,6,10]],{name:'end'});
var f = function(x){ return x*x; };
var plot = brd.create('functiongraph',[f,function(){return a.Value();}, function(){return b.Value();}]);
var os = brd.create('riemannsum',[f,
        function(){ return s.Value();},
        function(){ return document.getElementById('sumtype').value;},
        function(){return a.Value();},
        function(){return b.Value();}
    ], {fillColor:'#ffff00', fillOpacity:0.3});
brd.create('text', [1,35,function(){
    return 'Sum='+(JXG.Math.Numerics.riemannsum(f,s.Value(),document.getElementById('sumtype').value,a.Value(),b.Value())).toFixed(4);
}]);
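The numeric core of the demo can be mimicked outside the browser. Here is a minimal Python sketch of the "middle" (midpoint) rule for the same f(x) = x²; it is an illustration of the idea, not a port of the JSXGraph implementation:

```python
def riemann_midpoint(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# With n = 500 this is very close to the exact integral of x^2 on [0, 6],
# which is 6**3 / 3 = 72.
approx = riemann_midpoint(lambda x: x * x, 0.0, 6.0, 500)
print(round(approx, 4))
```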
https://simplywall.st/stocks/in/materials/nse-prolife/prolife-industries-shares/news/should-you-be-tempted-to-buy-prolife-industries-limited-nseprolife-at-its-current-pe-ratio/
# Should You Be Tempted To Buy Prolife Industries Limited (NSE:PROLIFE) At Its Current PE Ratio?

This analysis is intended to introduce important early concepts to people who are starting to invest and want to learn how to value a company based on its current earnings, and what the drawbacks of this method are. Prolife Industries Limited (NSE:PROLIFE) is currently trading at a trailing P/E of 7.4, which is lower than the industry average of 16.6. Although some investors might think this is a real positive, that might change once you understand the assumptions behind the P/E. Today, I will explain what the P/E ratio is as well as what you should look out for when using it.

### Demystifying the P/E ratio

P/E is often used for relative valuation since earnings power is a chief driver of investment value. By comparing a stock's price per share to its earnings per share, we are able to see how much investors are paying for each dollar of the company's earnings.

Formula:

Price-Earnings Ratio = Price per share ÷ Earnings per share

P/E Calculation for PROLIFE:

Price per share = ₹27.2
Earnings per share = ₹3.676
∴ Price-Earnings Ratio = ₹27.2 ÷ ₹3.676 = 7.4x

On its own, the P/E ratio doesn't tell you much; however, it becomes extremely useful when you compare it with other similar companies. Ideally, we want to compare the stock's P/E ratio to the average of companies that have similar characteristics to PROLIFE, such as size and country of operation. A quick method of creating a peer group is to use companies in the same industry, which is what I will do. Since similar companies should technically have similar P/E ratios, we can very quickly come to some conclusions about the stock if the ratios differ. Since PROLIFE's P/E of 7.4 is lower than that of its industry peers (16.6), investors are paying less for each dollar of PROLIFE's earnings.
This multiple is a median of 25 profitable Chemicals companies in IN, including Mysore Petro Chemicals, Hindcon Chemicals and Vikas Proppant & Granite. One could put it like this: the market is pricing PROLIFE as if it is a weaker company than the average company in its industry.

### A few caveats

Before you jump to conclusions, it is important to realise that our reasoning rests on two important assumptions. The first is that our "similar companies" are actually similar to PROLIFE. If the companies aren't similar, the difference in P/E might be a result of other factors. For example, if you inadvertently compared lower risk firms with PROLIFE, then investors would naturally value PROLIFE at a lower price since it is a riskier investment. Similarly, if you accidentally compared higher growth firms with PROLIFE, investors would also value PROLIFE at a lower price since it is a lower growth investment. Both scenarios would explain why PROLIFE has a lower P/E ratio than its peers. The second assumption that must hold true is that the stocks we are comparing PROLIFE to are fairly valued by the market. If this does not hold, there is a possibility that PROLIFE's P/E is lower because firms in our peer group are being overvalued by the market.

### What this means for you:

Since you may have already conducted your due diligence on PROLIFE, the undervaluation of the stock may mean it is a good time to top up on your current holdings. But at the end of the day, keep in mind that relative valuation relies heavily on the critical assumptions I've outlined above. Remember that basing your investment decision on one metric alone is certainly not sufficient. There are many things I have not taken into account in this article and the PE ratio is very one-dimensional. If you have not done so already, I urge you to complete your research by taking a look at the following:

1. Future Outlook: What are well-informed industry analysts predicting for PROLIFE's future growth?
Take a look at our free research report of analyst consensus for PROLIFE’s outlook. 2. Past Track Record: Has PROLIFE been consistently performing well irrespective of the ups and downs in the market? Go into more detail in the past performance analysis and take a look at the free visual representations of PROLIFE’s historicals for more clarity. 3. Other High-Performing Stocks: Are there other stocks that provide better prospects with proven track records? Explore our free list of these great stocks here. To help readers see past the short term volatility of the financial market, we aim to bring you a long-term focused research analysis purely driven by fundamental data. Note that our analysis does not factor in the latest price-sensitive company announcements. The author is an independent contributor and at the time of publication had no position in the stocks mentioned. For errors that warrant correction please contact the editor at editorial-team@simplywallst.com.
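The P/E arithmetic described in the article is easy to reproduce. A minimal sketch using the figures quoted above (the function name is mine):

```python
def pe_ratio(price_per_share, earnings_per_share):
    """Trailing P/E: price paid per unit of earnings."""
    return price_per_share / earnings_per_share

# Figures quoted in the article (in rupees)
pe = pe_ratio(27.2, 3.676)
print(round(pe, 1))  # 7.4, below the quoted industry average of 16.6
```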
https://brainmass.com/economics/preferences-choice/game-theory-129189
# Game Theory

1) Explain your reasoning. Consider a vote being taken by a group of friends A, B and C. They are trying to decide which of three elective courses to take together this term. (Each one has a different concentration and is taking required courses in her field for the rest of her courses.) Their choices are Economics, Accounting and Finance, and their preferences for the three courses are as shown:

Player A     Player B     Player C
Economics    Finance      Accounting
Accounting   Economics    Finance
Finance      Accounting   Economics

They have decided to have a two-round vote: alternative X is first paired with alternative Y; the alternative which gets a majority is then paired against alternative Z, and the majority winner in this round is the final choice. Player A won the right to choose the order. How should she proceed? How would your answer change if you considered 2n+1 friends where n of them are of type B and n are of type C?

2) A professor of game theory recognizes that every hour students devote to his class is an hour lost doing other, perhaps more entertaining things. Further, psychologists have demonstrated that studying for uninteresting subjects takes a heavy emotional toll on a student. Each student foregoes $200 in outside consulting fees for every hour he or she studies for the class. Further, students who dislike game theory suffer additional costs (of $800 per hour) in emotional distress while studying game theory. Employers use the grade in game theory courses to screen applicants. An applicant with an MBA degree and a High Pass in game theory signals her quality as a capable worker. Specifically, employers pay a premium for a High Pass in game theory because they have found that the kind of people who enjoy the subject are valuable employees.
The lifetime earnings (net present value) of a worker with an MBA are $1.4M; for a student with an MBA and a High Pass in game theory, this increases by 5% (by $70K). This is summarized below:

Lifetime earnings with an MBA degree: $1,400,000
Lifetime earnings with MBA and a High Pass in game theory: $1,470,000
Cost per hour of study for a student who enjoys game theory: $200
Cost per hour of study for a student who doesn't enjoy game theory: $1,000

a. Over a fourteen-week course, what is the minimum number of study hours per week that a game theory class should require in order to earn a High Pass and effectively award a High Pass only to the students who enjoy it? Demonstrate or explain.

b. What is the maximum number of study hours per week that would still serve as an effective screen? Demonstrate or explain.

3) (I'm interested only in the numbers, no graphs needed.) You're running ABC Communications, a cell-phone provider. Your competitor is XYZ Wireless. Currently XYZ is offering the following plan to all its customers: a $65 per month subscription fee, 1000 free minutes, and 20 cents for all additional minutes.

Market research reveals that there are two potential types of customers in the market:

Type 1 consumers' demand per month is Q = 300 - 5P
Type 2 consumers' demand per month is Q = 300 - 6P

(P is expressed in cents and Q is expressed in minutes per month.) Currently your cost of providing airtime is 20 cents/minute.

a) A management consultant tells you that your area contains only Type 1 consumers. What plan should you design to attract all new Type 1 consumers while maintaining the highest possible profit level?

b) Suppose the consultant tells you that only Type 2 consumers reside in your area. What plan should you design to attract all new Type 2 consumers while maintaining the highest possible profit level?

c) Suppose your market is composed of both types of consumers, but due to a hike in interest rates your per-minute cost went up to 30 cents.
What type of plan should you offer?

https://brainmass.com/economics/preferences-choice/game-theory-129189

#### Solution Preview

1) Player A will choose the order in a manner that would help her win her first preference. She will select Accounting (X) and Finance (Y) for the first round of voting. Since Player A prefers Accounting to Finance, she will vote in favor of Accounting. Player B prefers Finance to Accounting and will vote for Finance. Player C prefers Accounting to Finance and will vote for Accounting. Thus, Accounting will be the majority winner in the first round, 2-1. In the second round, we have Accounting (X) and Economics (Z). Since Player A prefers Economics to Accounting, she will vote in ...

#### Solution Summary

A scenario to attract customers is presented.
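For problem 2, the screening bounds come from comparing each student type's total study cost over the fourteen-week course with the $70K wage premium. A quick sketch of that arithmetic (my own illustration, using only the figures stated above, not part of the posted solution):

```python
PREMIUM = 70_000        # lifetime earnings gain from a High Pass
WEEKS = 14
COST_ENJOYS = 200       # study cost per hour for students who enjoy game theory
COST_DISLIKES = 1_000   # $200 forgone fees + $800 emotional distress per hour

# (a) Minimum weekly hours so disliking students are screened out:
#     COST_DISLIKES * WEEKS * h >= PREMIUM
min_hours = PREMIUM / (COST_DISLIKES * WEEKS)

# (b) Maximum weekly hours so enjoying students still find it worthwhile:
#     COST_ENJOYS * WEEKS * h <= PREMIUM
max_hours = PREMIUM / (COST_ENJOYS * WEEKS)

print(min_hours, max_hours)  # 5.0 25.0
```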
Mathematical Problems in Engineering, Volume 2014, Article ID 657170, 16 pages
http://dx.doi.org/10.1155/2014/657170

Research Article

## A Quasiphysical and Dynamic Adjustment Approach for Packing the Orthogonal Unequal Rectangles in a Circle with a Mass Balance: Satellite Payload Packing

1School of Information and Engineering, Xiangtan University, Xiangtan, Hunan 411105, China
2School of Mathematics and Computer Science, Xiangtan University, Xiangtan, Hunan 411105, China
3Department of Aeronautics, Xiamen University, Xiamen, Fujian 361005, China

Received 2 June 2014; Revised 28 August 2014; Accepted 22 September 2014; Published 2 December 2014

Copyright © 2014 Ziqiang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Packing orthogonal unequal rectangles in a circle with a mass balance (BCOURP) is a typical combinatorial optimization problem of an NP-hard nature. This paper proposes an effective quasiphysical and dynamic adjustment approach (QPDAA). Two embedded degree functions, between two orthogonal rectangles and between an orthogonal rectangle and the container, are defined, and the extruded potential energy function and extruded resultant force formula are constructed from them. By eliminating the extruded resultant force, dynamically adjusting rectangles, and iterating the translation, the potential energy and static imbalance of the system can be quickly decreased to minima. The continuity and monotonicity of the two embedded degree functions are proved to ensure the compactness of the optimal solution. Numerical experiments show that the proposed QPDAA is superior to existing approaches in performance.

#### 1. Introduction

2D rectangle packing problems arise in industrial and aerospace applications [1–3]. They occur in logistics packing, plate cutting, very-large-scale integration (VLSI) layout design, and satellite modules. They can be divided into unconstrained rectangle packing problems [1] and constrained ones [3]. Both are NP-hard and difficult to solve. Nevertheless they have attracted much attention, and packing approaches for different containers have been reported in the literature. For the 2D rectangular container, the packing approaches mainly include graph theory [4–7], branch-and-bound methods [8–10], dynamic programming [11], heuristics [12–15], artificial intelligence [16], evolutionary approaches [17], and hybrid approaches [18–20]. For the strip container, the main packing approaches are branch-and-bound methods [21], heuristics [22, 23], and evolutionary approaches [24, 25]. For the 2D polygonal or 3D polyhedral container, the existing packing approaches include heuristics [26], evolutionary approaches [27–29], and integer programming [30]. Some scholars are interested in the packing problem of a convex region and have proposed heuristics [26] and branch-and-bound approaches [31]. The layout design problem of the satellite module described in [32] is an important packing problem, which can be transformed into the problem of packing 2D orthogonal unequal rectangles within a circular container with a mass balance (BCOURP). In 1999, Feng et al. [33] built a mathematical model of this problem, analyzed the isomorphism and equivalence properties among its layout schemes using graph theory and group theory, and proposed a theoretical global optimization algorithm. In 2007, Xu et al. [34] defined embedded degree functions between two rectangles and between the rectangle and the circular container and presented a compaction algorithm with particle swarm local search (CA-PSLS).
Their idea is that a feasible solution with a smaller envelope radius, obtained through the gradient method, is taken as an elite individual, and the optimal solution is obtained by the PSO iteration. In 2010, Xu et al. [35] suggested a heuristic algorithm ordered by GA (GA-HA; see its steps in Appendix A and Figure 1). Its key technique is the positioning strategy used to construct a feasible solution. By combining it with GA, both the computational efficiency and the solution quality are improved.

Figure 1: The sketch map for available positions of the rectangle.

Generally, each type of approach has strong points and deficiencies.

(i) For approaches based on graph theory, a combinatorial explosion occurs when the adjacency topology is transformed into the layout digraph without a size limit for large-scale layout problems, because only two limited relations, vicinity and distance, can be used in pruning branches.

(ii) A heuristic method can quickly construct a feasible solution, but it is generally not easy to devise a good heuristic strategy unless the designer invests long and painstaking effort and has good luck.

(iii) Stochastic algorithms have a global search ability, but they suffer from the bottleneck of time-consuming overlap-area calculation [33].

By combining a heuristic method with a stochastic algorithm, their respective advantages can be exploited to the utmost. Based on this mechanism, CA-PSLS and GA-HA were successively proposed for this problem. According to the No Free Lunch theorem [36], extracting knowledge from the problem itself and its domain and fusing it into the heuristic and stochastic search mechanism is a way of designing a high-performance approach for this problem. Huang et al. [37–40] presented a quasiphysical and quasihuman heuristic algorithm and its variants for the circle packing problem and obtained excellent results. For BCOURP, Xu et al.
[34] proposed CA-PSLS based on embedded degrees between the circumcircles of two rectangles and between the container and a rectangle's circumcircle. But due to the discontinuity of the two embedded degree functions, it is difficult to obtain a high-quality solution with CA-PSLS. That is, constructing continuous rectangular embedded degree functions and exploring a better optimization mechanism are necessary for this problem. Therefore, in this paper, we define two monotonous and continuous embedded degrees, between two orthogonal rectangles and between an orthogonal rectangle and the container, and suggest a dynamic adjustment strategy. We merge them into the proposed QPDAA to improve the solution quality for BCOURP. Numerical experiments test the effectiveness of the proposed QPDAA.

The remainder of this paper is organized as follows. The problem statement and mathematical model are given in Section 2. The compact and feasible solution strategy and the dynamic adjustment strategy are given in Sections 3 and 4, respectively. The algorithm is presented in Section 5. Section 6 contains the experiments and analysis. The conclusion is given in Section 7, followed by the acknowledgment.

#### 2. Problem Statement and Mathematical Model

Consider the following two related definitions, where and is the number of rectangles.

Definition 1. As shown in Figure 2, let the origin of the Cartesian coordinate system be the center of the container. Let denote the th () rectangle and let (), , , , and be its center, length, width, mass, and the direction angle between its long side and the positive direction of the -axis, respectively. Then a layout scheme of rectangles () () can be denoted by ().

Figure 2: The definition of a rectangle.

Definition 2. For a layout scheme , if or 90°, then is an orthogonal rectangle packing scheme, () is an orthogonal rectangle, and the packing is orthogonal rectangle packing (see Figure 3).

Figure 3: The orthogonal rectangle packing scheme.
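Definitions 1 and 2 can be made concrete with a small sketch (the class layout and names here are mine, not the paper's): it stores an orthogonal rectangle's parameters and computes its vertices and the envelope radius of a scheme, that is, the radius of the smallest origin-centered circle enclosing every rectangle.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Rect:
    """An orthogonal rectangle: center (x, y), length L (long side),
    width W, mass m, and direction angle theta in {0, 90} degrees."""
    x: float
    y: float
    L: float
    W: float
    m: float
    theta: int = 0  # 0: long side parallel to the x-axis; 90: vertical

    def half_extents(self):
        # Orientation only swaps the roles of length and width.
        return (self.L / 2, self.W / 2) if self.theta == 0 else (self.W / 2, self.L / 2)

    def vertices(self):
        hx, hy = self.half_extents()
        return [(self.x + sx * hx, self.y + sy * hy)
                for sx in (-1, 1) for sy in (-1, 1)]

def envelope_radius(rects):
    """Radius of the smallest origin-centered circle containing every vertex."""
    return max(hypot(vx, vy) for r in rects for vx, vy in r.vertices())
```

For instance, a 4 by 2 rectangle centered at the origin has envelope radius hypot(2, 1) = sqrt(5), regardless of its orientation.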
Hereinafter this paper considers only orthogonal rectangle packing schemes. Suppose that the center of each rectangle coincides with its mass center. Let be the rectangle set ; then the mathematical model of this problem can be described as follows: find a solution that satisfies Formulas (1)–(4).

In Formula (1), denotes the radius of the enveloping circle of the scheme , whose center is at . Formula (2) indicates that there is no overlap region between two rectangles and . Formula (3) indicates that all rectangles are contained in the container. In Formulas (2) and (3), denotes the interior region of the rectangle . Formula (4) means that the static imbalance of the solution is less than its threshold , where .

#### 3. Compact and Feasible Solution Strategy

Based on the potential energy function of the embedded degree between two circles, Huang et al. [38–40] proposed the quasiphysical strategy and its variants for the circle packing problem. Inspired by the quasiphysical idea, we suggest a compact and feasible solution strategy for BCOURP.

##### 3.1. Embedded Degree Function and Related Properties

Xu et al. [34] defined the embedded degrees between two rectangles and between the rectangle and the container by Definitions 3 and 4, respectively.

Definition 3. Let and ( and ) denote the radii of the circumscribed circles of two rectangles and , respectively, and (see Figure 4(a)) the embedded degree between them; then can be calculated by Formula (5).

Definition 4. Let and () denote the radii of the container and of the circumscribed circle of the rectangle , respectively, and (see Figure 4(b)) the embedded degree between the rectangle and the container; then can be calculated by Formula (6).

Figure 4: Two embedded degree definitions in [34].

Both of these embedded degree functions are discontinuous at the critical state between overlapping and separating, as discussed by Stoyan and Yaskov [41].
For example, as shown in Figure 5, by moving one rectangle along a direction, two rectangles with an overlap area of (state 1) are changed into and (state 2). By Formula (5), we know that for state 1 but for state 2. So in Definition 3 is discontinuous where ( and ). Similarly, () in Definition 4 is also discontinuous (see Figure 6). Owing to this discontinuity, it is difficult to select an appropriate step length for the gradient iteration of CA-PSLS to obtain a feasible and compact layout scheme. Inspired by [41], Definitions 5 and 6 are given for the proposed QPDAA.

Figure 5: The embedded degree between two rectangles in the critical state.

Figure 6: The embedded degree between the rectangle and container in the critical state.

Definition 5. For two rectangles and ( or , and ) (shown in Figure 7), let and be the radii of their circumscribed circles, respectively, and let denote their embedded degree; then can be calculated by Formula (7). In Formula (7), , . Here, if , , , , and (see Figure 7(a)); otherwise, , , , and (see Figure 7(b)).

Figure 7: Two cases of the embedded degree definition between two rectangles.

In Formula (7), the embedded degree between two rectangles is the distance the rectangle must move, along the direction from the center to the center , from an overlapping state with the stationary rectangle to the separated state. If and are two squares and the center of lies on the diagonal line of and close enough to its center (i.e., and ), the moving distance of from the initial state (shown in Figure 8(a)) to the separated state (shown in Figure 8(b)) along the direction of their diagonal lines is about . Thus, in the initial state, their embedded degree is close to the maximal value . In addition, when and/or , and/or ; that is, .

Figure 8: The geometric interpretation of the definition of the embedded degree between two rectangles.

Definition 6. For the rectangle ( or , ) and the circular container as shown in Figure 7, let denote the embedded degree between the rectangle and the container; then can be calculated by the corresponding formula.

The geometric interpretation of Definition 6 is that when the vertex of the rectangle () farthest from the coordinate origin is within the container, their embedded degree is ; otherwise it is the length of the straight line segment indicated by in Figure 9.

Figure 9: The schematic diagram of overlap between the orthogonal rectangle and the container.

For the embedded degree functions in Definitions 3 and 5, the geometric figures are two curved semi-cone surfaces, shown in Figures 10(a) and 10(b), respectively; there is obviously a gap between the semi-cone surface and the plane in Figure 10(a) but no gap between them in Figure 10(b). The difference between the geometric figures of the two embedded degree functions in Definitions 4 and 6 is the same. After stating Lemma 7, we give properties of the two embedded degree functions in Definitions 5 and 6, respectively.

Figure 10: The curved semi-cone surface of the embedded degree of two orthogonal rectangles.

Lemma 7. If a binary function is continuous in each variable in a domain, respectively, and is monotonous in the variable or , then the function is continuous in the domain.

Property 1. , let and ( and or ) be two rectangles, and the domain . Then in Definition 5 is a continuous binary function in the domain .

Proof. and , set . From Definition 5, we know that the function is continuous in both and . Here, we prove that it is continuous on the domain . For , Simultaneously, This is because According to Lemma 7, the binary function is continuous on the domain . Therefore, is continuous on the domain .

Property 2. and , for the container with the radius and the rectangle ( or , ), set and ; then the binary function in Definition 6 is continuous on the domain .

##### 3.2. Extruded Force and Energy Function

In order to quickly decrease the overlapping area of the rectangle packing system, we define the extruded forces between two rectangles and between the rectangle and the container.

Definition 8. Let and be two rectangles with the embedded degree ( and ). Then the extruded force between and is calculated by the corresponding formula.

Definition 9. Let () and denote the rectangle and the container. Then the extruded force between and can be calculated by Formula (13), whose direction is from the center of the container to the farthest rectangular vertex (see Figure 6).

By experiments, we can know that . So the extruded resultant force of () in the rectangle packing scheme can be calculated accordingly.

Definition 10. Let and ( and ) denote the extruded potential energies of with respect to and to the container, respectively. Then can be calculated by Formula (15), where and denote the two embedded degrees between the two rectangles and and between the rectangle and the container.

Definition 11. The total extruded potential energy of can be calculated from these terms. Let be the area of the rectangle ; then and () are its absolute and relative extruded potential energies, respectively.

##### 3.3. Compact and Feasible Solution Strategy

By predetermining the envelope radius of this problem, the extruded force and its direction can be calculated by Formula (13). The extruded force of the envelope circle pushes each rectangle toward the center of the container along that direction. For each rectangle, the extruded forces of all the others drive it away to relieve the pressure in the respective directions, and their extruded resultant force is calculated by Formula (12). In this paper, these two steps are used to decrease the overlapping area of the packing scheme and make it compact.
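The extraction above omits the formulas themselves, so the sketch below is only a hedged reconstruction of Section 3's quantities from their stated geometric interpretations: the pairwise embedded degree as the translation distance, along the center-to-center direction, needed to separate two overlapping axis-aligned rectangles; the container embedded degree as the protrusion of the farthest vertex beyond the container; and a total potential energy that vanishes exactly on feasible schemes (the quadratic form is my assumption, not the paper's).

```python
from math import hypot

# A rectangle is a tuple (cx, cy, hx, hy): center and half-extents
# (axis-aligned, i.e., an orthogonal placement).

def pair_embed(a, b):
    """Embedded degree in the spirit of Definition 5: the distance b must be
    moved along the direction from a's center to b's center before the
    overlap with a vanishes; 0 if they are already separated."""
    ax, ay, ahx, ahy = a
    bx, by, bhx, bhy = b
    ox = (ahx + bhx) - abs(bx - ax)  # overlap depth along x
    oy = (ahy + bhy) - abs(by - ay)  # overlap depth along y
    if ox <= 0 or oy <= 0:
        return 0.0, (0.0, 0.0)
    dx, dy = bx - ax, by - ay
    n = hypot(dx, dy)
    if n == 0.0:                     # coincident centers: push along x
        return ox, (1.0, 0.0)
    ux, uy = dx / n, dy / n
    # Moving b by t along (ux, uy) shrinks the x-overlap at rate |ux| and
    # the y-overlap at rate |uy|; separation occurs at the smaller t.
    t = min(ox / abs(ux) if ux else float("inf"),
            oy / abs(uy) if uy else float("inf"))
    return t, (ux, uy)

def container_embed(rect, R):
    """Reading of Definition 6: protrusion of the farthest vertex beyond the
    container of radius R; 0 when the rectangle lies inside."""
    cx, cy, hx, hy = rect
    far = max(hypot(cx + sx * hx, cy + sy * hy)
              for sx in (-1, 1) for sy in (-1, 1))
    return max(0.0, far - R)

def potential_energy(rects, R):
    """Quadratic total energy (the square is an assumption); it is zero
    exactly when the scheme is overlap-free and inside the container."""
    u = sum(pair_embed(a, b)[0] ** 2
            for i, a in enumerate(rects) for b in rects[i + 1:])
    return u + sum(container_embed(r, R) ** 2 for r in rects)
```

As a sanity check against the paper's diagonal example, two nearly coincident squares of side a pushed along their diagonal separate after a move of about a times the square root of 2, which is what `pair_embed` returns in that configuration.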
#### 4. Dynamic Adjustment Strategy

Considering the low efficiency and the possible local optima (e.g., a large static imbalance) of iterating the two steps in Section 3.3, we propose Property 3 and a dynamic adjustment strategy for QPDAA.

##### 4.1. Related Property

For optimizing the static imbalance of the layout scheme, we introduce Property 3.

Property 3. Assume is the mass center of an orthogonal packing scheme of this problem and is another scheme obtained by interchanging the centers of two rectangles with a mass and ( and ) with a mass in . If , , and the centers and satisfy Formula (17), then , where (): In Formula (17), .

Proof. Consider

Property 3 indicates that, for a selected rectangle , we can find a rectangle in the sector area with an angle (shown in Figure 11) and interchange them to obtain , whose static imbalance is less than . Here, the angle satisfies

Figure 11: The geometric area of Formula (17).

##### 4.2. Dynamic Adjustment Strategy

Let ; we consider the following dynamic adjustment strategy.

###### 4.2.1. Rectangle-Interchanging

According to Property 3, the static imbalance of the packing scheme can be decreased by interchanging the positions of two rectangles. Two rectangles, with a smaller mass and with a larger mass , are selected from . If and satisfy Formula (17), then we update by , , .

###### 4.2.2. Rotation and Off-Trap

(i) A rectangle with a larger pain degree is found in and is rotated 90° around its center in the counterclockwise direction to relieve its pain.

(ii) A rectangle with a larger pain degree is found in and is moved to a place in the container where its pain degree is smaller. The role of (ii) is similar to the construction of a nonisomorphic layout pattern [42, 43].

###### 4.2.3. Center Translation

If its mass center , then .

#### 5. The Proposed Algorithm

Through an organic combination of the compact and feasible solution strategy and the dynamic adjustment strategy, we present QPDAA for BCOURP.
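The rectangle-interchanging step of Section 4.2.1 can be sketched as follows. Since Formula (17) is not recoverable from this extraction, the sketch simply accepts a center swap whenever it reduces the measured static imbalance; the paper's analytic condition avoids this re-evaluation, and a real implementation must also recheck non-overlap after a swap unless the two rectangles have equal dimensions.

```python
from math import hypot

# Each item is (mass, (cx, cy)): the mass and center of one rectangle.

def static_imbalance(items):
    """Norm of the weighted center sum: how far sum_i m_i * c_i lies
    from the container center (the origin)."""
    sx = sum(m * c[0] for m, c in items)
    sy = sum(m * c[1] for m, c in items)
    return hypot(sx, sy)

def interchange_pass(items):
    """One greedy pass of center interchanges; a swap is kept only when it
    lowers the imbalance, so the pass never makes the scheme worse."""
    items = list(items)
    best = static_imbalance(items)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (mi, ci), (mj, cj) = items[i], items[j]
            trial = list(items)
            trial[i], trial[j] = (mi, cj), (mj, ci)  # swap centers only
            val = static_imbalance(trial)
            if val < best:
                items, best = trial, val
    return items, best
```

Moving a heavy rectangle inward and a light one outward is exactly the kind of move Property 3 licenses; for example, masses 1 and 2 placed at x = 1 and x = 2 give imbalance 5, and swapping their centers lowers it to 4.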
Let and be the predetermined value and an allowable maximum of the envelope radius of the solution, respectively; is the number of rectangles. , , and denote the mass center, envelope radius, and extruded potential energy of the packing scheme , respectively. () is the extruded potential energy of the rectangle ; is the step length; is the maximum number of translations. The steps of the proposed QPDAA are shown in Algorithm 1.

Algorithm 1

#### 6. Experiments and Analysis

##### 6.1. Experiments

The proposed QPDAA is coded in VC++ 6.0 and run on a 3 GHz Pentium PC with 512 MB of memory. CA-PSLS [34] and GA-HA [35] are coded in VC++ 6.0 and run on a 1.83 GHz Pentium with 512 MB of memory; IGA [44] is run on an IBM 586 at 166 MHz.

Experiment 1. Five examples are taken from [33, 35] and are used to test the performance of the proposed QPDAA. The data of all examples are shown in Table 1. For the proposed algorithm, we take , , , , , and , respectively. For Examples , we take = 11.4, 14.3, 17.5, 22.3, and 115.5 and take , , , , and , respectively. Running the proposed QPDAA 30 times for each example (the success rate is 100%), we report its average running time, average envelope radius, standard deviation of the radius, and the maximum and minimum envelope radii in Table 2; for each example, the layout scheme diagraph is shown in Figure 12; the other data in Table 2 are taken from [33–35]. The optimal layout schemes of the proposed QPDAA for the five examples are shown in Tables 3, 4, 5, and 6.

Table 1: Parameters of rectangles for five layout examples.
Table 2: Performance comparisons of four algorithms.
Table 3: The layout schemes of the proposed QPDAA for Examples 1 and 2.
Table 4: The layout schemes of the proposed QPDAA for Example 3.
Table 5: The layout scheme of the proposed QPDAA for Example 4.
Table 6: The layout scheme of the proposed QPDAA for Example 5.
Figure 12: The layout diagraphs of five examples for the proposed QPDAA.

Experiment 2.
To test the effects of and on the minimal radius and running time of the proposed QPDAA, we take another set of and (the five examples and other parameters are the same as those of Experiment 1) and run the QPDAA procedure 30 times for each example. The minimal radii and running times are given in Table 7. The layout schemes and layout diagraphs are shown in Tables 8, 9, 10, and 11 and Figure 13. Table 7 shows that, by changing the values of and , we can obtain a packing scheme with a smaller envelope radius, but it costs more time for each example. So the values of and in Experiment 1 can be applied to make a tradeoff between computational efficiency and solution quality.

Table 7: The effect of parameters and on the optimal radius and running time for the proposed QPDAA.
Table 8: The optimal layout schemes of Examples 1 and 2 for the proposed QPDAA.
Table 9: The optimal layout schemes of Example 3 for the proposed QPDAA.
Table 10: The optimal layout scheme of Example 4 for the proposed QPDAA.
Table 11: The optimal layout scheme of Example 5 for the proposed QPDAA.
Figure 13: The optimal layout scheme diagraphs of five examples for the proposed QPDAA.

In order to further test the effectiveness of the proposed QPDAA, we consider Experiment 3.

Experiment 3. The numbers of rectangles (generated randomly) in three examples are 50, 55, and 60, and their lengths and widths are between 20 and 40. For the proposed QPDAA, we take , , , , , , = 125.9, 133.8, 137.2, and = 126.8, 134.6, 138.8 for Examples , respectively. For GA + HA, the population size, mutation probability, and maximum number of iterations are 30, 0.125, and 50, respectively. Running GA + HA and the proposed QPDAA 30 times for each of the three examples, the optimal envelope radii and average times are shown in Table 12. The optimal packing scheme diagrams of GA + HA and the proposed QPDAA are shown in Figures 14(a)–14(c) and Figures 14(d)–14(f).
Table 12 shows that both the solution quality and the computational efficiency of the proposed QPDAA are obviously higher than those of GA + HA.

Table 12: The effect of and on the optimal radius and running time for the proposed QPDAA.
Figure 14: Packing scheme diagrams of GA + HA and the proposed QPDAA for Experiment 3.

Note that, in Experiment 3, the procedure of GA + HA is coded by the authors and run on a 3 GHz Pentium with 512 MB of memory.

##### 6.2. Analysis

From the data of Tables 2, 7, and 12, we know that the solution quality of the proposed QPDAA is higher than those of CA-PSLS and GA-HA. Compared with those of CA-PSLS, the embedded degree functions of the proposed QPDAA can make the layout scheme more compact. Due to the fixed candidate positions of GA + HA, it is difficult to find the best position for some rectangles close to the marginal region of the container. The lack of a mechanism for decreasing the static imbalance in the process of packing rectangles also limits the solution quality of GA + HA. The experimental results illustrate the effectiveness of the proposed QPDAA.

The computational efficiency of the proposed QPDAA is one order of magnitude higher than that of CA-PSLS, for two reasons. (i) Owing to orthogonal packing, searching for the optimal solution in the 2D solution space of the proposed QPDAA is easier than in the 3D solution space of CA-PSLS. (ii) To improve the solution quality of CA-PSLS, PSO is used to optimize the feasible solution obtained through the gradient method based on two discontinuous embedded degree functions, whereas the proposed QPDAA needs no PSO optimization and does not lose solution quality. Except for Example 1, the computational efficiency of the proposed QPDAA is higher than that of GA + HA, and as the size of the packing problem increases, the advantage of the proposed QPDAA becomes more obvious.
This is because the computational complexities of the extruded resultant force and the potential energy are in this paper, but the computational complexity of the noninterference judgment is . In addition, for GA + HA, as the number of rectangles increases, the number of candidate positions of each rectangle increases dramatically. These reasons make the computational efficiency of the proposed QPDAA higher than that of GA + HA for BCOURP instances of a large size.

#### 7. Conclusions

Taking the layout design of a satellite module as the application background, we have proposed QPDAA for the BCOURP problem. Two continuous embedded degree functions, between orthogonal rectangles and between the rectangle and the container, are constructed to overcome the weakness of the embedded functions in [34]. The suggested extruded resultant force formula and potential energy function of the rectangle packing system, based on the proposed embedded functions, make solving the BCOURP problem as simple and effective as solving the circle packing problem [37–40]. The proposed dynamic adjustment strategy can quickly decrease the static imbalance of the packing scheme and make the iteration escape local optima. The experimental results show that the proposed QPDAA is superior to existing algorithms in performance for the BCOURP problem, especially for instances of a large size. Future work is to extend the above algorithm to the 3D satellite module payload packing problem.

#### A. HA + GA [35]

Input the length, width, and mass of the rectangle () in turn, initialize the maximal number of iterations, and generate the placing sequence set .

Step 1. Set .

Step 2. Set the center of at with its long side parallel to the -axis, , .

Step 3. For , calculate the centers and direction angles of the 16 candidate positions (see Figure 1) of the rectangle with respect to the rectangle .
From its candidate positions, eliminate the infeasible ones and determine the optimal one (i.e., compared with the other feasible candidate positions, it gives the packing scheme of the first rectangles the smallest envelope radius).

Step 4. If , then update the current packing scheme, , and go to Step 3; otherwise, go to Step 5.

Step 5. If , then update the optimal packing scheme, use GA to generate a new placing sequence set, and go to Step 2; otherwise, go to Step 6.

Step 6. Output the optimal packing scheme and envelope radius; the algorithm ends.

#### B. Results of Experiment 2

See Tables 8, 9, 10, and 11 and Figure 13.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant no. 61272294), the Research Foundation of the Education Bureau of Hunan Province, China (Grant no. 11A120), and the Construct Program of the Key Discipline in Hunan Province. The authors are also grateful to the anonymous referees for their advice and reviews of this paper.

#### References

1. K. He, W. Huang, and Y. Jin, “An efficient deterministic heuristic for two-dimensional rectangular packing,” Computers & Operations Research, vol. 39, no. 7, pp. 1355–1363, 2012. 2. N. Lesh, J. Marks, A. McMahon, and M. Mitzenmacher, “Exhaustive approaches to 2D rectangular perfect packings,” Information Processing Letters, vol. 90, no. 1, pp. 7–14, 2004. 3. E.-M. Feng and X.-L. Wang, “An optimization model for the layout of a group of cuboids in a satellite module and its global optimization algorithm,” Operation Research Transaction, vol. 5, no. 3, pp. 71–77, 2001 (Chinese). 4. J. Roth and R. Hasimshony, “Comparison of existing three-room apartment plans with computer-generated layouts,” Planning and Design, vol. 14, no. 2, pp. 149–161, 1987. 5. J. Leung, “A new graph-theoretic heuristic for facility layout,” Management Science, vol. 38, no. 4, pp.
594–606, 1992. 6. J. M. V. de Carvalho, “Exact solution of bin-packing problems using column generation and branch-and-bound,” Annals of Operations Research, vol. 86, pp. 629–659, 1999. 7. R. Macedo, C. Alves, and J. M. V. de Carvalho, “Arc-flow model for the two-dimensional guillotine cutting stock problem,” Computers and Operations Research, vol. 37, no. 6, pp. 991–1001, 2010. 8. Y.-D. Cui, C. L. Zhang, and Y. Zhao, “A continued fractions and branch-and-bound algorithm for generating cutting patterns with equal rectangles,” Journal of Computer-Aided Design & Computer Graphics, vol. 16, no. 2, pp. 252–256, 2004. 9. F. Clautiaux, A. Jouglet, J. Carlier, and A. Moukrim, “A new constraint programming approach for the orthogonal packing problem,” Computers & Operations Research, vol. 35, no. 3, pp. 944–959, 2008. 10. K. Yoon, S. Ahn, and M. Kang, “An improved best-first branch-and-bound algorithm for constrained two-dimensional guillotine cutting problems,” International Journal of Production Research, vol. 51, no. 6, pp. 1680–1693, 2013. 11. E. G. Birgin, R. D. Lobato, and R. Morabito, “Generating unconstrained two-dimensional non-guillotine cutting patterns by a recursive partitioning algorithm,” Journal of the Operational Research Society, vol. 63, no. 2, pp. 183–200, 2012. 12. Y.-L. Wu, W. Huang, S.-C. Lau, C. Wong, and G. H. Young, “An effective quasi-human based heuristic for solving the rectangle packing problem,” European Journal of Operational Research, vol. 141, no. 2, pp. 341–358, 2002. 13. D.-F. Zhang, S.-H. Han, and W.-G. Ye, “Bricklaying heuristic algorithm for the orthogonal rectangular packing problem,” Chinese Journal of Computers, vol. 31, no. 3, pp. 509–514, 2008. 14. R. E. Korf, M. D. Moffitt, and M. E. Pollack, “Optimal rectangle packing,” Annals of Operations Research, vol. 179, no. 1, pp. 261–295, 2010. 15. C. Charalambous and K. 
Fleszar, “A constructive bin-oriented heuristic for the two-dimensional bin packing problem with guillotine cuts,” Computers and Operations Research, vol. 38, no. 10, pp. 1443–1451, 2011. 16. S. Polyakovsky and R. M'Hallah, “An agent-based approach to the two-dimensional guillotine bin packing problem,” European Journal of Operational Research, vol. 192, no. 3, pp. 767–781, 2009. 17. S. Khebbache, C. Prins, and A. Yalaoui, “Iterated local search algorithm for the constrained two-dimensional non-guillotine cutting problem,” Journal of Industrial and Systems Engineering, vol. 2, no. 3, pp. 164–179, 2008. 18. J. F. Gonçalves and M. G. C. Resende, “A parallel multi-population genetic algorithm for a constrained two-dimensional orthogonal packing problem,” Journal of Combinatorial Optimization, vol. 22, no. 2, pp. 180–201, 2011. 19. E. G. Birgin, R. D. Lobato, and R. Morabito, “An effective recursive partitioning approach for the packing of identical rectangles in a rectangle,” Journal of the Operational Research Society, vol. 61, no. 2, pp. 306–320, 2010. 20. M. Dolatabadi, A. Lodi, and M. Monaci, “Exact algorithms for the two-dimensional guillotine knapsack,” Computers and Operations Research, vol. 39, no. 1, pp. 48–53, 2012. 21. S. Martello, M. Monaci, and D. Vigo, “An exact approach to the strip-packing problem,” INFORMS Journal on Computing, vol. 15, no. 3, pp. 310–319, 2003. 22. B. S. Baker, E. G. Coffman Jr., and R. L. Rivest, “Orthogonal packings in two dimensions,” SIAM Journal on Computing, vol. 9, no. 4, pp. 846–855, 1980. 23. D. Liu and H. Teng, “An improved BL-algorithm for genetic algorithm of the orthogonal packing of rectangles,” European Journal of Operational Research, vol. 112, no. 2, pp. 413–420, 1999. 24. L. H. W. Yeung and W. K. S. Tang, “Strip-packing using hybrid genetic approach,” Engineering Applications of Artificial Intelligence, vol. 17, no. 2, pp. 169–177, 2004. 25. D. Ye, X. Han, and G.
Zhang, “A note on online strip packing,” Journal of Combinatorial Optimization, vol. 17, no. 4, pp. 417–423, 2009. 26. A. Cassioli and M. Locatelli, “A heuristic approach for packing identical rectangles in convex regions,” Computers and Operations Research, vol. 38, no. 9, pp. 1342–1350, 2011. 27. J. Błazewicz, P. Hawryluk, and R. Walkowiak, “Using a tabu search approach for solving the two-dimensional irregular cutting problem,” Annals of Operations Research, vol. 41, no. 4, pp. 313–325, 1993. 28. V. Petridis, S. Kazarlis, and A. Bakirtzis, “Varying fitness functions in genetic algorithm constrained optimization: the cutting stock and unit commitment problems,” IEEE Transactions on Systems, Man, and Cybernetics. Part B: Cybernetics, vol. 28, no. 5, pp. 629–640, 1998. 29. Y. Chen, M. Tang, R. Tong, and J. Dong, “Packing of polygons using genetic simulated annealing algorithm,” Journal of Computer-Aided Design and Computer Graphics, vol. 15, no. 5, pp. 598–609, 2003 (Chinese). 30. R. Andrade and E. G. Birgin, “Symmetry-breaking constraints for packing identical rectangles within polyhedra,” Optimization Letters, vol. 7, no. 2, pp. 375–405, 2013. 31. E. G. Birgin and R. D. Lobato, “Orthogonal packing of identical rectangles within isotropic convex regions,” Computers and Industrial Engineering, vol. 59, no. 4, pp. 595–602, 2010. 32. H.-F. Teng, S. L. Sun, W. H. Ge, and W. X. Zhong, “Layout optimization for the dishes installed on a rotating table—the packing problem with equilibrium behavioural constraints,” Science in China A: Mathematics, Physics, Astronomy, vol. 37, no. 10, pp. 1272–1280, 1994. 33. E. Feng, X. Wang, X. Wang, and H. Teng, “An algorithm of global optimization for solving layout problems,” European Journal of Operational Research, vol. 114, no. 2, pp. 430–436, 1999. 34. Y.-C. Xu, R.-B. Xiao, and M. 
Amos, “Particle swarm algorithm for weighted rectangle placement,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), China IEEE Press, Haikou, China, August 2007. 35. Y.-C. Xu, F.-M. Dong, Y. Liu, and R.-B. Xiao, “Genetic algorithm for rectangle layout optimization with equilibrium constraints,” Pattern Recognition and Artificial Intelligence, vol. 23, no. 6, pp. 794–801, 2010 (Chinese). 36. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997. 37. W.-Q. Huang and R.-C. Xu, “Two Quasi-human strategies for the circle packing problem,” Science China (Series E), vol. 29, no. 4, pp. 347–353, 1999. 38. W.-Q. Huang and M. Chen, “Note on: an improved algorithm for the packing of unequal circles within a larger containing circle,” Computers and Industrial Engineering, vol. 50, no. 3, pp. 338–344, 2006. 39. W.-Q. Huang and T. Ye, “Quasi-physical algorithm for the equal circle packing problem,” Journal System Science & Math, vol. 50, no. 3, pp. 993–1001, 2008. 40. W.-Q. Huang and T. Ye, “Quasi-physical global optimization method for solving the equal circle packing problem,” Science China: Information Sciences, vol. 54, no. 7, pp. 1333–1339, 2011. 41. Y. G. Stoyan and G. N. Yaskov, “Mathematical model and solution method of optimization problem of placement of rectangles and circles taking into account special constraints,” International Transactions in Operational Research, vol. 5, no. 1, pp. 45–57, 1998. 42. H.-F. Teng, Z.-Q. Li, Y.-J. Shi, and Y.-S. Wang, “An approach to constructing isomorphic or non-isomorphic layout pattern,” Chinese Journal of Computers, vol. 29, no. 6, pp. 985–991, 2006. 43. H.-F. Teng, Y. Chen, W. Zeng, Y.-J. Shi, and Q.-H. Hu, “A dual-system variable-grain cooperative coevolutionary algorithm: satellite-module layout design,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 438–455, 2010. 44. 
E.-M. Feng, Z.-H. Gong, C.-Y. Liu, and Z. Xu, “Improved GA for satellite module layout problem with performance constraints,” Journal of Dalian University of Technology, vol. 45, no. 3, pp. 457–463, 2005.
8,443
35,064
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2019-04
latest
en
0.856484
https://mailman.ntg.nl/pipermail/ntg-context/2018/090599.html
1,638,460,293,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00138.warc.gz
466,085,448
2,668
# [NTG-context] Problem with \definemathmatrix

Thu Jan 18 04:53:57 CET 2018

On Wed, 17 Jan 2018, Fabrice Couvreur wrote:

> Hello,
> This macro that is not mine worked very well. I just tested the following
> file and there is a problem.
> \math{M^3=\startpmatrix
> \NC 2 \NC 3 \NC 6 \NC 4 \NC 2 \NC 7 \NC 3 \NC 1 \NR
> \NC 3 \NC 0 \NC 1 \NC 1 \NC 2 \NC 3 \NC 6 \NC 4 \NR
> \NC 6 \NC 1 \NC 4 \NC 4 \NC 4 \NC 9 \NC 10 \NC 6 \NR
> \NC 4 \NC 1 \NC 4 \NC 4 \NC 5 \NC 8 \NC 8 \NC 3 \NR
> \NC 2 \NC 2 \NC 4 \NC 5 \NC 2 \NC 7 \NC 3 \NC 1 \NR
> \NC 7 \NC 3 \NC 9 \NC 8 \NC 7 \NC 8 \NC 10 \NC 3 \NR
> \NC 3 \NC 6 \NC 10 \NC 8 \NC 3 \NC 10 \NC 4 \NC 1 \NR
> \NC 1 \NC 4 \NC 6 \NC 3 \NC 1 \NC 3 \NC 1 \NC 0 \NR
> \stoppmatrix}

If you type a lot of such matrices, you might find the attached module
interesting. Using it you can use matlab-like syntax for writing matrices:

\usemodule[simplematrix]
\definesimplematrix[MATRIX][fence=parenthesis, align=middle]
\starttext
$\MATRIX{1,2,3,4;5,6,7,8;9,10,11,12}$
\stoptext

-------------- next part --------------

%D \module
%D   [ file=t-simplematrix,
%D     version=2014.02.18,
%D     title=\CONTEXT\ User Module,
%D     subtitle=Simple matrix,
%D     date=\currentdate,
%D     email=adityam <at> ieee <dot> org,

\startmodule[simplematrix]

\unprotect

\definenamespace
  [simplematrix]
  [
    \c!type=module,
    \c!name=simplematrix,
    \c!command=\v!yes,
    setup=\v!list,
    \s!parent=simplematrix,
  ]

\setupsimplematrix
  [
    \c!distance=\emwidth,
    \c!mathstyle=,
    fence=bracket,
    \c!align=
  ]

\appendtoks
  \setevalue{\currentsimplematrix}{\usesimplematrix[\currentsimplematrix]}
\to \everydefinesimplematrix

\newtoks\simplematrixtoks

\define[1]\simplematrix_row
  {\processcommalist[#1]\simplematrix_col
   \appendtoks \NR \to \simplematrixtoks}

\define[1]\simplematrix_col
  {\appendtoks \NC #1 \to \simplematrixtoks}

\unexpanded\def\usesimplematrix
  {\dodoubleargument\usesimplematrix_indeed}

\def\simplematrix_left
  {\edef\p_left{\namedmathfenceparameter{\simplematrixparameter{fence}}\c!left}%
   \normalleft\ifx\p_left\empty.\else\Udelimiter\plusfour\fam\p_left\relax\fi
   \,}

\def\simplematrix_right
  {\edef\p_right{\namedmathfenceparameter{\simplematrixparameter{fence}}\c!right}%
   \,
   \normalright\ifx\p_right\empty.\else\Udelimiter\plusfive\fam\p_right\relax\fi}

\def\usesimplematrix_indeed[#name][#options]#matrix%
  {\begingroup
   \edef\currentsimplematrix{#name}%
   \setupsimplematrix[#name][#options]%
   \simplematrixtoks\emptytoks
   \startusemathstyleparameter\simplematrixparameter
   \appendtoks
     \bgroup
     \startmathmatrix
       [
         \c!distance=\simplematrixparameter\c!distance,
         \c!left=\simplematrix_left,
         \c!right=\simplematrix_right,
         \c!align=\simplematrixparameter\c!align,
       ]
   \to \simplematrixtoks
   \processlist[];\simplematrix_row[#matrix]%
   \appendtoks
     \stopmathmatrix
     \egroup
   \to \simplematrixtoks
   \the\simplematrixtoks
   \stopusemathstyleparameter
   \endgroup}

\protect

\stopmodule
1,112
2,981
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2021-49
latest
en
0.380048
http://passporttoknowledge.com/solarsystem/researchers/journals/hst/sherbert_jnl1.html
1,516,129,821,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084886639.11/warc/CC-MAIN-20180116184540-20180116204540-00754.warc.gz
265,020,081
2,440
"Trying to Understand the Changing Wavelengths" Lisa Sherbert - March 13, 1996 Today I am zeroing in on the answer to a GO (General Observer) question about why wavelengths from data taken in March 94 are off by 2 angstroms. After looking at his data and seeing how little signal there was, I wondered how he could tell anything about the wavelengths associated with the individual spectra. I found that the signal improves (i.e. you could start to see features as opposed to noise) when the individual spectra are combined or co-added, i.e. summed together. Still I was confused because I thought he was trying to compare the same feature (an absorption (usually) or emission line) in wavelength space, but the data weren't taken at the same central wavelengths at all. There was hardly any overlap. It turned out he was not comparing the same feature but different features in redshift space, not wavelength space. Redshift is the amount a feature moves in wavelength or velocity, etc. due to the fact that it is moving away from us. This relates to something you may have heard of called the Doppler effect. It is expressed in terms of "delta lambda over lambda" which is equal to "v over c". Lambda (the Greek letter lambda) stands for the rest wavelength, where you expect to find the feature if it weren't moving away from you. Delta lambda is the difference in wavelength between where you found the feature and the rest wavelength. "v" is velocity, the speed at which it is moving away. And "c" is, of course, the speed of light. The General Observer was plotting his lines of different wavelength on a redshift (or velocity) scale, then trying to fit them with theoretical profiles. What he found was that the absorption lines in the March 94 data line up at the same redshift, but the line in the October 95 data appears to be at a slightly different redshift. And that is the shift he didn't understand.
But I figured out that the wavelength calibration has changed since the first data were taken. It is quite possible, probable, and hopeful that if he recalibrates his data the unexpected shift will go away. Of course, while I am working on this problem, I also have several more that I am trying to keep on top of...but my ability or lack of ability to do more than one chore at a time is the topic for another journal.
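The "delta lambda over lambda equals v over c" relation described in the journal can be sketched numerically. This example is illustrative and not from the original entry; the 1216-angstrom rest wavelength is just an assumed spectral line.

```python
# Non-relativistic Doppler shift: z = delta_lambda / lambda_rest = v / c.
C = 299_792.458  # speed of light in km/s

def redshift(observed_wavelength, rest_wavelength):
    """Fractional shift of a spectral feature from its rest wavelength."""
    return (observed_wavelength - rest_wavelength) / rest_wavelength

def recession_velocity(observed_wavelength, rest_wavelength):
    """Velocity (km/s) implied by the shift, valid for v much less than c."""
    return redshift(observed_wavelength, rest_wavelength) * C

# A line with rest wavelength 1216 angstroms observed at 1218 angstroms:
z = redshift(1218.0, 1216.0)
v = recession_velocity(1218.0, 1216.0)
```

A 2-angstrom miscalibration, as in the journal's puzzle, thus masquerades as a velocity shift of several hundred km/s for a line in this wavelength range.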
523
2,419
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2018-05
latest
en
0.979479
https://stats.stackexchange.com/questions/92522/linearly-dependent-features/141770
1,579,807,676,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00112.warc.gz
653,550,492
33,353
# Linearly dependent features I have a matrix A of 1000 observations (rows) and 100 features (cols). I would like to find: 1. Linearly dependent features so that I can remove them and simplify the problem. rank(A) gives me 88, which I assume means that 12 of the features are linearly dependent. Am I right? 2. After the above step, how do I determine which 12 out of the 100 columns are linearly dependent? I know there is no unique answer. But does that mean I can choose any 12 columns? 3. Let's say I choose to remove the last 12 columns. But before removing them, I want to find the 12 linear combinations that compute to the last 12 columns. How do I get these? So far I have tried using Matlab's PCA, QR and SVD, but each of them gives different matrices and I don't know how to use these matrices to get what I want. A little late but... There's a measure called Pearson correlation that can be used to find linear correlation (dependence) between two variables X and Y. In short, it is the covariance of the two variables divided by the product of their standard deviations: The result is a value between +1 and −1 inclusive, where 1 is total positive correlation, 0 is no correlation, and −1 is total negative correlation. Using it you can find which columns correlate and ignore (some of) them. • This is incorrect. Linear dependency has nothing to do with correlation. Linear dependency occurs when one variable is a linear function of one or more variables in your data. What you are talking about here is called multicollinearity. – Indrajit Nov 23 '17 at 12:34 • Agree with Indrajit that, mathematically, linear independence and correlation are not the same. However, in practice, high-correlated feature removal does a decent job on dimension reduction. – Diansheng Mar 21 '19 at 9:37 One approach would be to use an incomplete Cholesky factorisation, I have some MATLAB code here, see the paper by Fine and Scheinberg mentioned on that page for details.
• I get the following error with your code: >> [L, p] = cholincsp(mydata, 10^-10); Index exceeds matrix dimensions. Error in cholincsp (line 78) L(diagonal(i:m)) = X(diagonal(p(i:m))) - sum(L(i:m,1:i-1).^2,2)'; – Prometheus Apr 4 '14 at 10:41 • It is a while since I last used it, I think you need to perform the Cholesky decomposition of A^TA rather than A. – Dikran Marsupial Apr 4 '14 at 11:14 1. Yes, rank roughly tells you how many column-vectors (features) are independent. 2. No, you can't remove arbitrary columns. You can try removing random columns and calculating the rank of the result — you'll see different numbers. You need to remove only those features that are dependent. Or, suppose that you generated data in a way that only the last 12 columns depend on the first ones. Can you still delete any 12 columns? What you can do, though, is to perform PCA with the number of components equal to the rank. This will not only reduce the dimension of your space, but will also decorrelate your features. The drawback of such a transformation is a (possible) loss of interpretability, but it's not a mathematical thing; if you want to preserve it, you should delete features based on your insight. 3. Suppose your data matrix is of form $(X; Y)$ where $X$ and $Y$ are submatrices of form n_rows × (n_cols - 12) and n_rows × 12, respectively. That means that $X$ and $Y$ are the submatrices formed by slicing out the last 12 columns. If you want to find 12 linear combinations of $X$ to get $Y$, you actually want to solve 12 systems of linear equations of the form $$X b_i = Y_i$$ where $Y_i$ is the $i$th column of $Y$. Now unite all these equations into one by gluing together all the $b_i$ into a matrix $B$ of shape (n_cols - 12) × 12. The equations then become $X B = Y$. Note that $X$ is not square, thus is not invertible.
Fortunately, the solution still exists, and is given by the pseudoinverse: $$B = (X^T X)^{-1} X^T Y$$ Note: now somebody thoughtful enough might notice that the way I provided the above formula looks like we can express any set of columns of the original data matrix as a linear combination of the rest. How can it be? Does it mean that any set of columns is linearly dependent on all others? This sounds crazy! Of course, there is an important note about $X$ here. Notice $X^T X$ that is inverted. In order to invert a matrix it needs to be of full rank. That means that $X$ has to have full column rank, that is, have rank equal to the number of columns. Another interesting thing is the case when some of $Y_i$ are actually independent of $X$. Clearly, there's no place for the formula to break, but what's the meaning of the result, what have we got? In that case the result $b_i$ is the coefficient vector of an orthogonal projection of $Y_i$ onto the space spanned by the columns of $X$. In a sense it's the closest linear combination of $X$.
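The rank check and the least-squares recovery of $B$ discussed in this thread can be sketched with NumPy. This is an illustrative construction (synthetic data with the dependency planted in the last 12 columns), not code from the original answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 observations over 88 independent features ...
X = rng.normal(size=(1000, 88))
# ... plus 12 columns built as linear combinations of the first 88.
B_true = rng.normal(size=(88, 12))
Y = X @ B_true
A = np.hstack([X, Y])

# The 100-column matrix has rank 88: twelve columns are dependent.
rank = np.linalg.matrix_rank(A)

# Recover the combinations by least squares. For full-column-rank X this
# equals the normal-equations solution B = (X^T X)^{-1} X^T Y.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Note that `lstsq` is preferred in practice over forming `(X^T X)^{-1}` explicitly, since it avoids squaring the condition number of `X`.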
1,200
4,833
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.46875
3
CC-MAIN-2020-05
latest
en
0.946542
https://www.dataunitconverter.com/nibble-to-megabit/3
1,679,905,194,000,000,000
text/html
crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00213.warc.gz
839,941,948
11,443
# Nibble to Megabit - 3 Nibble to Mbit Conversion

RESULT (Nibble → Megabit): 3 Nibble = 0.000012 Mbit, calculated as 3 x 4 / 1000^2.

## Nibble to Mbit - Conversion Formula and Steps

Nibble and Megabit are units of digital information used to measure storage capacity and data transfer rate. A Nibble is one of the very basic digital units, whereas a Megabit is a decimal unit. One Nibble is equal to 4 bits. One Megabit is equal to 1000^2 bits. There are 250,000 Nibbles in one Megabit.

Source Data Unit: Nibble, equal to 4 bits (Basic Unit). Target Data Unit: Megabit (Mbit), equal to 1000^2 bits (Decimal Unit).

The formula for converting Nibble to Megabit is represented as follows:

Mbit = Nibble x 4 / 1000^2

Now let us apply the above formula and write down the steps to convert from Nibble to Megabit (Mbit). This way, we can try to simplify and reduce it to an easy-to-apply formula.

FORMULA: Megabit = Nibble x 4 / 1000^2
STEP 1: Megabit = Nibble x 4 / (1000x1000)
STEP 2: Megabit = Nibble x 4 / 1000000
STEP 3: Megabit = Nibble x 0.000004

If we apply the above formula and steps, conversion from 3 Nibble to Mbit will be processed as below.

1. = 3 x 4 / 1000^2
2. = 3 x 4 / (1000x1000)
3. = 3 x 4 / 1000000
4. = 3 x 0.000004
5. = 0.000012
6. i.e. 3 Nibble is equal to 0.000012 Mbit.

#### Definition : Nibble

A Nibble is a unit of digital information that consists of 4 bits. It is half of a byte and can represent a single hexadecimal digit. It is used in computer memory and data storage and is sometimes used as a basic unit of data transfer in certain computer architectures.

#### Definition : Megabit

A Megabit (Mb or Mbit) is a unit of digital information that is equal to 1,000,000 bits, and it is commonly used to express data transfer speeds, such as the speed of an internet connection, and to measure the size of a file.
In the context of data storage and memory, the binary-based unit of mebibit (Mibit) is used instead.

### Excel Formula to convert from Nibble to Mbit

Apply the formula as shown below to convert from 3 Nibble to Megabit.

| | A | B |
| --- | --- | --- |
| 1 | Nibble | Megabit (Mbit) |
| 2 | 3 | =A2 * 0.000004 |
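The same conversion can be scripted outside a spreadsheet. A minimal Python equivalent of the formula above:

```python
def nibble_to_megabit(nibbles):
    """Convert nibbles (4 bits each) to decimal megabits (10^6 bits)."""
    return nibbles * 4 / 1000**2

# The worked example above: 3 nibbles.
result = nibble_to_megabit(3)
```

As a sanity check, 250,000 nibbles should come out as exactly one megabit, matching the ratio stated in the text.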
659
2,254
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.890625
4
CC-MAIN-2023-14
latest
en
0.857282
https://docs.classiq.io/latest/user-guide/function-library/builtin-functions/hamiltonian-evolution/exponentiation/
1,701,508,262,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00687.warc.gz
251,000,958
16,958
# Exponentiation¶ The exponentiation function produces a quantum gate that implements the exponentiation, $$\exp(-iHt)$$, of any input Hermitian operator, $$H$$. ## Example¶ This example demonstrates synthesis of an exponentiation function in the Classiq engine. All options are specified below. { "functions": [ { "name": "main", "body": [ { "function": "Exponentiation", "function_params": { "pauli_operator": { "pauli_list": [ ["IIZXXXII", 0.1], ["IIXXYYII", 0.2], ["IIIIZZYX", 0.3], ["XZIIIIIX", 0.4], ["IIIIIZXI", 0.5], ["IIIIIIZY", 0.6], ["IIIIIIXY", 0.7], ["IIYXYZII", 0.8], ["IIIIIIXZ", 0.9], ["IIYZYIII", 1.0] ] }, "evolution_coefficient": 0.05, "constraints": { "max_depth": 100, "max_error": 0.2 }, "optimization": "MINIMIZE_DEPTH" } } ] } ] } from classiq.builtin_functions import Exponentiation from classiq.builtin_functions.exponentiation import ( ExponentiationConstraints, ExponentiationOptimization, PauliOperator, ) from classiq import Model, synthesize, show pauli_operator = PauliOperator( pauli_list=[ ("IIZXXXII", 0.1), ("IIXXYYII", 0.2), ("IIIIZZYX", 0.3), ("XZIIIIIX", 0.4), ("IIIIIZXI", 0.5), ("IIIIIIZY", 0.6), ("IIIIIIXY", 0.7), ("IIYXYZII", 0.8), ("IIIIIIXZ", 0.9), ("IIYZYIII", 1.0), ] ) exponentiation_params = Exponentiation( pauli_operator=pauli_operator, evolution_coefficient=0.05, constraints=ExponentiationConstraints( max_depth=100, max_error=0.2, ), optimization=ExponentiationOptimization.MINIMIZE_DEPTH, ) model = Model() model.Exponentiation(exponentiation_params) quantum_program = synthesize(model.get_model()) show(quantum_program) ## Options¶ ### Input Operator¶ You can input any $$n$$-qubit operator in its Pauli basis [1] $H=\sum_i c_i\left[\sigma_{j_{1,i}}\otimes\sigma_{j_{2,i}}\otimes\cdots\otimes\sigma_{j_{n,i}}\right]$ where $$\sigma_{0,1,2,3}=I,X,Y,Z$$ are the single-qubit Pauli operators, and $$j\in\{0,1,2,3\}$$. Implement it using the pauli_list field of the PauliOperator class. 
For example, the operator $$H=0.1\cdot I\otimes Z+0.2\cdot X\otimes Y$$ is input as follows. { "pauli_list": [ ["IZ",0.1], ["XY",0.2] ] } from classiq.builtin_functions.exponentiation import PauliOperator operator = PauliOperator(pauli_list=[("IZ", 0.1), ("XY", 0.2)]) print(operator.show()) +0.100 * IZ +0.200 * XY Provide the operator to exponentiate using the pauli_operator field of the Exponentiation. Provide a global evolution coefficient using the evolution_coefficient field of the Exponentiation; defaults to 1.0. ### Constraints¶ Provide local constraints for the exponentiation of either max_depth or max_error using the fields of the ExponentiationConstraints class; both default to no constraint. The max_error bounds the algorithmic error of the quantum program as measured by the operator norm [2] and evaluated according to Ref. [3] . Provide the constraints using the constraints field of the Exponentiation; defaults to no constraints. { "max_depth": 100, "max_error": 0.2 } from classiq.builtin_functions.exponentiation import ExponentiationConstraints ExponentiationConstraints( max_depth=100, max_error=0.2, ) ### Optimization¶ Set the optimization target for the exponentiation to MINIMIZE_DEPTH or MINIMIZE_ERROR. The Classiq engine automatically generates an efficient higher-order Trotter-Suzuki quantum program [4] that satisfies the optimization target within the provided constraints. Provide the optimization using the optimization field of the Exponentiation; defaults to MINIMIZE_DEPTH. Use a naive evolution for the operator by setting the field use_naive_evolution to True. ## References¶ [3] A. M. Childs et al, Toward the first quantum simulation with quantum speedup, https://arxiv.org/abs/1711.10980 (2017). [4] N. Hatano and M. Suzuki, Finding Exponential Product Formulas of Higher Orders, https://arxiv.org/abs/math-ph/0506007 (2005).
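As an illustrative aside (not part of the Classiq documentation), the Trotter-Suzuki approximation and operator-norm error mentioned above can be checked directly with NumPy for the small two-qubit operator $$H=0.1\cdot I\otimes Z+0.2\cdot X\otimes Y$$ used in the example. This sketch uses a plain first-order product formula, not the engine's higher-order construction.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(theta, P):
    """exp(-i*theta*P) for a Pauli string P, using P @ P = identity."""
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

# H = 0.1 * I (x) Z + 0.2 * X (x) Y, the two-qubit operator from the text.
IZ = np.kron(I2, Z)
XY = np.kron(X, Y)
H = 0.1 * IZ + 0.2 * XY

t, steps = 0.05, 64
# First-order Trotter step, repeated `steps` times.
step = pauli_exp(0.1 * t / steps, IZ) @ pauli_exp(0.2 * t / steps, XY)
U_trotter = np.linalg.matrix_power(step, steps)

# Exact evolution exp(-iHt) via eigendecomposition of the Hermitian H.
evals, evecs = np.linalg.eigh(H)
U_exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Algorithmic error measured in the operator (spectral) norm.
error = np.linalg.norm(U_trotter - U_exact, ord=2)
```

For these small coefficients and step count the operator-norm error is far below the `max_error` of 0.2 used in the example above.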
1,142
3,847
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2023-50
latest
en
0.469303
https://opess.ethz.ch/course/section-15-7/15-7-1-exercise-load-oriented-order-release-loor/
1,721,904,636,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763857355.79/warc/CC-MAIN-20240725084035-20240725114035-00660.warc.gz
377,594,435
33,054
# 15.7.1 Exercise: Load-Oriented Order Release (Loor) ### Intended learning outcomes: Explain the Loor algorithm with given data for a scheduling problem, considering anticipation, loading percentage, and conversion factor. The first table in Figure 15.7.1.1 shows five orders with their sequence of operations. The data for each operation include the work center, the standard load (e.g., setup plus run time), and a blank column for entering the converted load. The second table in Figure 15.7.1.1 shows parameters for load-oriented order release, as introduced in Section 15.1.2, as well as their values given for this exercise. The third table holds data for each work center, namely, the weekly capacity, the existing (pre-)load before loading the five orders, a blank column for entering the capacity upgraded by the loading percentage, and blank columns for the summarized load after releasing orders 1 to 5 (that is in the sequence given by the Loor algorithm). Fig. 15.7.1.1       Given data for a Loor problem. a.    Load the five orders according to the Loor algorithm. b.    What would have happened if for operation 3 of order 2 the standard load had been 200 units of time instead of 120? c.    Discuss whether in your solution the treatment of order 3 was efficient. d.    What would have happened if order 3 had been loaded before order 2? Solutions: a.    The time filter eliminates order 5. This order is declared as not urgent. For the other orders, the conversion factor is applied to their operations. In the third table, the loading percentage multiplies the weekly capacity. Then, order 1 is loaded, followed by order 2. Order 2 is accepted, but it overloads work center B (220 units of time against 200 units resulting from the loading percentage). Hence, order 3 cannot be loaded, because its last operation is at work center B. However, order 4 can be loaded, since it has no operation at work center B. b.    Order 2 would have overloaded work center A. 
Hence, order 4 would not have been loaded. c.    The converted load of order 3 on work center B had only 5 units of time. This would have changed the total load only very slightly. As there was no overloading of other work centers by orders 1, 2, and 4, it might have been wise to release order 3 as well. d.    Order 3 would have overloaded work center A (405 units of time against 400 units resulting from the loading percentage). Therefore, the algorithm would formally reject both orders 2 and 4. This would result in a low utilization of the other work centers B, C, and D. ## Course section 15.7: Subsections and their intended learning outcomes • ##### 15.7.1 Exercise: Load-Oriented Order Release (Loor) Intended learning outcomes: Explain the Loor algorithm with given data for a scheduling problem, considering anticipation, loading percentage, and conversion factor. • ##### 15.7.2 Exercise: Corma — Capacity-Oriented Materials Management Intended learning outcomes: Describe results of applying the capacity-oriented materials management (Corma) principle in order release. • ##### 15.7.3 Scenario: Finite Forward Scheduling Intended learning outcomes: Perform finite forward scheduling for eight products manufactured on three machines by using a Gantt-type chart. • ##### 15.7.4 Scenario: Order Picking Intended learning outcomes: Differentiate between the main characteristics of several picking strategies, by listing the advantages and disadvantages of each, and deriving possible fields of application. • ##### 15.7 Scenarios and Exercises Intended learning outcomes: Calculate examples for load-oriented order release (Loor) and for finite forward scheduling. Assess characteristics of capacity-oriented materials management (Corma) and of order picking.
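The acceptance logic walked through in solution (a) can be sketched in code: an order is rejected if any work center it needs is already at or beyond its upgraded capacity, and an accepted order may push a center over that limit (as order 2 does to center B). Since Figure 15.7.1.1 is not reproduced in the text, the orders, loads, and capacities below are illustrative assumptions, not the exercise data.

```python
def loor_release(orders, capacity, loading_percentage):
    """Load-oriented order release: release each order in sequence unless one
    of its work centers has already reached the upgraded capacity limit."""
    limit = {wc: cap * loading_percentage for wc, cap in capacity.items()}
    load = {wc: 0.0 for wc in capacity}
    released = []
    for name, operations in orders:
        # Reject the order if any required work center is already full.
        if any(load[wc] >= limit[wc] for wc, _ in operations):
            continue
        # Otherwise load all its (converted) operation loads.
        for wc, converted_load in operations:
            load[wc] += converted_load
        released.append(name)
    return released, load

# Hypothetical data mirroring the structure of the exercise:
orders = [
    ("order 1", [("A", 100.0), ("B", 120.0)]),
    ("order 2", [("A", 90.0), ("B", 100.0)]),   # accepted, overloads B
    ("order 3", [("B", 5.0)]),                   # rejected: B already full
]
released, load = loor_release(orders, {"A": 500.0, "B": 200.0}, 1.0)
```

As in the exercise, the small order at the already-overloaded center is rejected even though its own load would barely change the total.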
841
3,777
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2024-30
latest
en
0.924471
https://amader-alo.com/qa/quick-answer-does-voltage-or-current-cause-heat.html
1,628,022,135,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046154471.78/warc/CC-MAIN-20210803191307-20210803221307-00705.warc.gz
115,367,611
8,167
# Quick Answer: Does Voltage Or Current Cause Heat? ## Why does voltage decrease with temperature? Since we know that heat will increase the resistivity of the wires, the voltage drop increases, as does the power loss on the wires. The increase of load will increase the current and thus the temperature of the wires. With the increase of temperature, so do the resistance and the voltage drop. ## Do electrons move faster when heated? We keep adding heat, it translates faster and faster. There is more kinetic energy, so the temperature is higher. Eventually we add so much energy that now it can go into the electronic modes. Electrons start to move to higher orbits. ## Does higher resistance mean higher voltage? Voltage, Current and Resistance Summary This means that if the voltage is high the current is high, and if the voltage is low the current is low. Likewise, if we increase the resistance, the current goes down for a given voltage and if we decrease the resistance the current goes up. ## Does resistance increase voltage? This equation, i = v/r, tells us that the current, i, flowing through a circuit is directly proportional to the voltage, v, and inversely proportional to the resistance, r. In other words, if we increase the voltage, then the current will increase. But, if we increase the resistance, then the current will decrease. ## Why does voltage increase? The difference in electric potential energy (per charge) between two points is what we have given the name voltage. Thus, the voltage directly tells us which way charges want to move – and if they can, then they will speed up in that direction, so the current will increase. ## Does increasing CPU voltage increase temperature? Increasing the voltage will not only increase the heat much faster in most cases, but it also shortens the life of the CPU. ## What is the relation between heat and current?
Joule’s law states that the amount of heat produced in a conductor is: directly proportional to the square of the electric current flowing through it; directly proportional to the resistance of the conductor; and directly proportional to the time for which electric current flows through the conductor. ## Why does high current cause heat? In metal conductors, electrical current flows due to the exchange of electrons between atoms. As electrons move through a metal conductor, some collide with atoms, other electrons or impurities. These collisions cause resistance and generate heat. ## At what temperature does electricity stop flowing? In other words, they slowed things down enough to study individual electrons as they flow through a conductor. To do this, the team cooled a scanning tunnelling microscope down to a fifteen-thousandth of a degree above absolute zero, which is roughly –273.135 degrees Celsius (–459.65 degrees Fahrenheit). ## Does voltage affect heat? Why does voltage increase (for a constant current) if temperature increases? Voltage is directly proportional to resistance (V=IR) and resistance increases with temperature due to increased vibrations of the molecules inside the conductor. Therefore voltage increases as temperature increases. ## Does higher voltage mean more heat? The more current flowing, the more the wires are heated. So the purpose of using high voltage is not because the high voltage itself heats the wires less than a high current, but because using a higher voltage allows a lower current to be used (given equal power, as the question states). ## What is Joule’s heating effect of current? Joule heating is the physical effect by which the passage of current through an electrical conductor produces thermal energy. This thermal energy is then evidenced through a rise in the conductor material temperature, thus the term “heating”. ## Does heat stop electricity?
Temperature affects how electricity flows through an electrical circuit by changing the speed at which the electrons travel. This is due to an increase in resistance of the circuit that results from an increase in temperature. Likewise, resistance is decreased with decreasing temperatures. ## Is higher CPU voltage better? When you increase the voltage into a computer’s CPU it increases the clock speed of the processor and can lead to better performance. ## Why do electrical conductors get hot? Due to the wires having electrical resistance, which means that they resist the motion of electrons, the electrons bump into atoms on the outside of the wire, and some of their kinetic energy is given to the atoms as thermal energy. This thermal energy causes the wire to heat up. ## Does increasing current increase voltage? It’s just Ohm’s law: current and voltage share a direct relationship, meaning an increase in current means an increase in voltage and vice versa. According to V=IR, if the value of resistance remains constant then V is directly proportional to I, so the value of current increases with an increase in voltage. ## Is heat directly proportional to resistance? This is exactly as you have stated: the heat is directly proportional to the resistance and the square of the current. Because the current term is squared in the power equation, the heat given off by the circuit is more highly dependent on the current flowing through it than the resistance. ## What happens to current when temperature increases? Originally Answered: Why is it that when the temperature increases, current also increases? An electric current is made of electrons flowing between positive ions. When you heat something up, the kinetic energy of its particles increases, making them more difficult to get past, hence the increase in resistance.
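The proportionalities stated above (Joule's law, Q = I²Rt) can be checked with a short sketch; the numbers here are illustrative.

```python
def joule_heat(current_amps, resistance_ohms, seconds):
    """Heat (joules) dissipated in a resistor: Q = I^2 * R * t (Joule's law)."""
    return current_amps**2 * resistance_ohms * seconds

# Doubling the current quadruples the heat for the same resistor and time,
# which is why transmission favors high voltage and low current.
q1 = joule_heat(1.0, 10.0, 60.0)
q2 = joule_heat(2.0, 10.0, 60.0)
```

The squared current term is exactly the point made in the "Is heat directly proportional to resistance?" answer: heat is linear in R but quadratic in I.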
1,076
5,673
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.9375
4
CC-MAIN-2021-31
latest
en
0.912768
https://communities.sas.com/t5/Base-SAS-Programming/Repeated-measure-to-be-sorted-by-three-variable-and-select/td-p/453321?nobounce
1,529,576,859,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267864139.22/warc/CC-MAIN-20180621094633-20180621114633-00493.warc.gz
589,984,223
26,575
## Repeated measure to be sorted by three variable and select minimum by a variable

Solved

Super Contributor
Posts: 324

# Repeated measure to be sorted by three variable and select minimum by a variable

Hello, I have this data that I want to sort by SID, New_dt and NPiR. For each subject I want just the minimum NPiR for each day. I also want to delete all observations with a missing NPiR. For example, this data:

Obs SID New_dt NPiR
1 10155 . .
2 101004 10/06/15 .
3 101004 10/14/15 4.2
4 101004 10/16/15 4.5
5 101004 10/19/15 4.1
6 101004 10/19/15 4.3
7 101004 10/19/15 4.6
8 101023 11/03/15 .
9 101023 11/05/15 4.2
10 101023 11/06/15 1.3
11 101023 11/07/15 3.8
12 101023 11/07/15 2.4

should be reduced to Data A:

3 101004 10/14/15 4.2
4 101004 10/16/15 4.5
5 101004 10/19/15 4.1
9 101023 11/05/15 4.2
10 101023 11/06/15 1.3
12 101023 11/07/15 2.4

Accepted Solutions

Solution ‎04-11-2018 04:45 PM
Super User
Posts: 6,632

## Re: Repeated measure to be sorted by three variable and select minimum by a variable [ Edited ]

A one-step way:

proc summary data=have nway;
  where npir > .;
  class sid new_dt;
  var npir;
  output out=want (drop=_type_ _freq_) min=;
run;

If your actual data set contains additional variables, it becomes more problematic and we'd have to examine some of the details within the data.

☑ This topic is solved.
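For readers who want the same reduction outside SAS, here is a pandas sketch of the accepted approach (drop rows with missing NPiR, then take the minimum per subject per day); the data frame holds the sample posted above:

```python
import pandas as pd

# The sample data posted in the question
df = pd.DataFrame({
    "SID":    [10155, 101004, 101004, 101004, 101004, 101004, 101004,
               101023, 101023, 101023, 101023, 101023],
    "New_dt": [None, "10/06/15", "10/14/15", "10/16/15", "10/19/15",
               "10/19/15", "10/19/15", "11/03/15", "11/05/15", "11/06/15",
               "11/07/15", "11/07/15"],
    "NPiR":   [None, None, 4.2, 4.5, 4.1, 4.3, 4.6,
               None, 4.2, 1.3, 3.8, 2.4],
})

# Drop observations with missing NPiR, then keep the per-SID per-day minimum
want = (df.dropna(subset=["NPiR"])
          .groupby(["SID", "New_dt"], as_index=False)["NPiR"]
          .min())
print(want)   # 6 rows, matching Data A above
```

As with `proc summary`, if the real data set carries additional columns you would have to decide which row's values to keep alongside each minimum.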
636
1,841
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.0625
3
CC-MAIN-2018-26
latest
en
0.79317
https://econowaugh.blogspot.com/2016/05/2006-ap-micro-frq-question-2.html
1,555,708,468,000,000,000
text/html
crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419223105-00086.warc.gz
391,357,582
19,521
## Wednesday, May 11, 2016

### 2006 AP Micro FRQ (question 2)

(a) What is the dollar value of the firm's total fixed cost?

Understand that Total Cost = Variable Cost + Fixed Cost. Variable costs (think labor) only exist when there is actual production. If costs are \$20 when there is no production, then that \$20 must be a fixed cost.

(b) Calculate the marginal cost of producing the first unit of output.

If Fixed Costs are \$20 and Total Costs are \$27, then the Variable Cost / Marginal Cost of the first unit must be \$7.

(c) If the price the firm receives for its product is \$20, indicate the firm's profit-maximising quantity of output and explain how you determined the answer.

If \$20 is the product's price, and it is a perfectly competitive firm, then Marginal Revenue = Price, as the perfectly competitive firm's demand (MR. DARP: MR = D = AR = P) is perfectly elastic. For every unit sold the revenue increases by \$20 = MR = P. The profit-maximising quantity is where MR = MC, which is closest at the fourth unit. Understand that just saying "the 4th unit" does not explain; to explain you must clearly state WHY.

EXPLAIN - Using marginal analysis we compare where a firm's MR = MC; this is where profit is maximised. At the 4th unit of production, MR (\$20) > MC (\$19). We are as close to profit max as possible; if we produce the 5th unit, MR (\$20) < MC (\$23), a loss on that unit. The firm's profit max is to produce a quantity of 4.

(d) Given your results in part (c), explain what will happen to the number of firms in the industry in the long run.

Understand that the firm is making a profit of \$8 with a production of 4 units. This is positive/abnormal/super economic profit in the short run. Profits attract firms, and therefore firms will enter the market hunting for profits. The number of firms will increase.

(e) Assume that this firm operates in a constant cost industry (clue), and has reached long-run equilibrium.
If the government imposes a per-unit tax of \$2, indicate what will happen to the firm's profit-maximising output in the long run.

If in the long run firms are making zero economic profit, and the government imposes a \$2 per-unit tax, it is fair to assume that the marginal cost curve will shift upward (to the left) as input costs increase.
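The marginal analysis in part (c) can be checked numerically. The FRQ excerpt gives MC only for units 1, 4 and 5, so the values for units 2 and 3 below (\$12 and \$14) are hypothetical, chosen to be consistent with the stated \$8 profit at a quantity of 4:

```python
price = 20          # = MR for a perfectly competitive firm (MR. DARP)
fixed_cost = 20
mc = [7, 12, 14, 19, 23]   # marginal cost per unit; units 2-3 are hypothetical

# Marginal analysis: produce every unit whose MR >= MC
q = sum(1 for c in mc if price >= c)
total_cost = fixed_cost + sum(mc[:q])
profit = price * q - total_cost

print(q, profit)   # -> 4 8
```

The 5th unit is refused because its MC (\$23) exceeds MR (\$20), exactly as argued above.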
539
2,270
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.546875
4
CC-MAIN-2019-18
latest
en
0.897993
https://www.maa.org/press/periodicals/convergence/an-investigation-of-historical-geometric-constructions-using-the-quadratrix-to-trisect-an-angle
1,618,125,684,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00079.warc.gz
988,124,453
22,146
# An Investigation of Historical Geometric Constructions - Using the Quadratrix to Trisect an Angle Author(s): Suzanne Harper and Shannon Driskell Once the quadratrix has been formed, move point E anywhere along arc BED. Using Hippias’ method, we will find an angle one-third the measure of angle DAE. Move points C' and E such that point F is the intersection of the quadratrix and AE (see Figure 10). Figure 10: Using the Quadratrix To Trisect Angle DAE Construct a perpendicular line to AD through the point F. Construct the intersection point of this perpendicular line and segment AD and label it H. Next construct segment FH and hide line FH. The trisection of an angle has now been reduced to trisecting the line segment FH. Formally construct a point (K) along FH such that HK is one-third the length of FH.  Click here  to see the animation of this construction. Construct a parallel line to AD through point K. Identify the intersection of the parallel line with the quadratrix, and name this point J. Construct ray AJ.  The measure of angle DAJ is one-third the measure of angle DAE (see Figure 11). Figure 11:  Using the Quadratrix to Trisect Angle DAE As one can see, the quadratrix can be used to easily reduce the problem of trisecting an angle to that of trisecting a line segment; furthermore Hippias generalized this method to subdivide any given angle into any number of equal subangles. Although many scholars attempted to trisect an angle, the more popular construction of the three famous geometric construction problems was to find a square with the same area as a given circle. Hippocrates of Chios made major contributions toward mathematics while trying to solve this problem. Suzanne Harper and Shannon Driskell, "An Investigation of Historical Geometric Constructions - Using the Quadratrix to Trisect an Angle," Convergence (August 2010)
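The construction works because of the quadratrix's defining property: the height of a point on the curve above AD is proportional to the angle it subtends at A. A small numerical sketch (not part of the original article; it assumes a quadrant of unit radius and uses our own function name) verifies that trisecting the segment FH trisects the angle:

```python
import math

def quadratrix_height(theta, a=1.0):
    """Height above AD of the quadratrix point at angle theta
    (quadrant of radius a, 0 < theta <= pi/2): y is proportional to theta."""
    return a * theta / (math.pi / 2)

theta = math.radians(75)                 # angle DAE to trisect
fh = quadratrix_height(theta)            # length FH from the construction
hk = fh / 3                              # trisect FH to locate K
theta_j = (math.pi / 2) * hk             # angle DAJ read back off the curve
print(math.degrees(theta_j))             # -> 25.0 (up to floating point)
```

Because the height-angle map is linear, dividing FH into n equal parts subdivides the angle into n equal parts, which is exactly Hippias' generalization mentioned above.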
430
1,875
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.96875
4
CC-MAIN-2021-17
latest
en
0.86489
http://datascience.sharerecipe.net/2019/04/05/a-short-tutorial-on-fuzzy-time-series%E2%80%8A-%E2%80%8Apart-iii/
1,582,267,080,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875145443.63/warc/CC-MAIN-20200221045555-20200221075555-00079.warc.gz
40,401,694
12,789
A short tutorial on Fuzzy Time Series — Part III

A prediction interval is a pair of bounds, [u, l]. Some models (such as EnsembleFTS and PWFTS) allow the specification of the method of interval.

ifts.IntervalFTS (IFTS): The most basic method for generating prediction intervals; it is an extension of HighOrderFTS. The generated prediction intervals do not have a probabilistic meaning: they just measure the upper and lower bounds of the fuzzy sets that were involved in the forecasting process, i.e., the fuzzy uncertainty. The method is described here.

ensemble.AllMethodEnsembleFTS: The EnsembleFTS is a meta-model composed of several base models. The AllMethodEnsembleFTS creates one instance of each monovariate FTS method implemented in pyFTS and sets them as its base models. The forecast is computed from the forecasts of the base models. A brief description of the method can be found here. There are basically two ways to compute prediction intervals in EnsembleFTS: extremum and quantile (the default). In the extremum method, the maximum and minimum values among the forecasts of the base models are chosen. In the quantile method, the alpha parameter must be informed; the forecasts of the base models are then ordered and the quantile interval is extracted.

from pyFTS.models.ensemble import ensemble

part = Grid.GridPartitioner(data=train, npart=11)
model = ensemble.AllMethodEnsembleFTS(partitioner=part)
forecasts = model.predict(test, type='interval', mode='extremum')
forecasts2 = model.predict(test, type='interval', mode='quantile', alpha=.1)

pwfts.ProbabilisticWeightedFTS (PWFTS): As its name says, this is the most complex method, and it is still under review (on its way to be published). There are basically two ways to produce prediction intervals with PWFTS: heuristic (the default) and quantile.
In the heuristic method the interval bounds are calculated as the expected value of the fuzzy set bounds and their empirical probabilities, while the quantile method generates a full probability distribution and then extracts the quantiles (using the alpha parameter).

forecasts1 = model.predict(test, type='interval', method='heuristic')
forecasts2 = model.predict(test, type='interval', method='quantile', alpha=.05)

multivariate.mvfts.MVFTS: This multivariate method uses the same approach as IFTS to produce prediction intervals.

multivariate.wmvfts.WeightedMVFTS: This weighted multivariate method uses the same approach as IFTS to produce prediction intervals.

In the module pyFTS.common.Util we can find the function plot_interval, which allows us to easily draw the intervals:

Intervals generated by the monovariate methods (source)
Intervals generated by the multivariate methods (source)

The generated intervals try to demonstrate the range of possible variations that the model takes into account. You can see that some models generate wider intervals than others, and sometimes (especially with the weighted models, which have the thinnest intervals) the original values fall outside the interval. The best intervals have balanced widths: neither too wide, showing high uncertainty, nor too thin, failing to cover the real values.

In contrast to interval forecasting, probabilistic forecasting has its own class to represent a probability distribution: pyFTS.probability.ProbabilityDistribution. There are several ways to represent this distribution, which is, by definition, a discrete probability distribution. Some methods of this class are of special interest for us now: density (returns the probability of the input value(s)), cumulative (returns the cumulative probability of the input value(s)), quantile (returns the quantile value of the input value(s)) and plot (plots the probability distribution on the input matplotlib axis).
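To make that interface concrete, here is a toy discrete distribution with the same three query operations; this sketch only illustrates the idea and is not the pyFTS class itself:

```python
import bisect

class ToyDiscreteDistribution:
    """Minimal discrete distribution over sorted support values (illustration only)."""
    def __init__(self, values, probs):
        self.values = list(values)
        self.probs = list(probs)
        self.cum = []                      # running cumulative probabilities
        total = 0.0
        for p in self.probs:
            total += p
            self.cum.append(total)

    def density(self, v):
        """Probability mass at v (0 if v is not a support point)."""
        return self.probs[self.values.index(v)] if v in self.values else 0.0

    def cumulative(self, v):
        """P(X <= v)."""
        i = bisect.bisect_right(self.values, v)
        return self.cum[i - 1] if i else 0.0

    def quantile(self, alpha):
        """Smallest support point whose cumulative probability reaches alpha."""
        return self.values[bisect.bisect_left(self.cum, alpha)]

d = ToyDiscreteDistribution([10, 20, 30, 40], [0.1, 0.4, 0.4, 0.1])
```

For example, `d.density(20)` is 0.4, `d.cumulative(30)` is 0.9 and `d.quantile(0.6)` is 30.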
Like the intervals, probabilistic forecasting has its own boolean flag to indicate which models are enabled to perform it:

if model.has_probabilistic_forecasting:
    distributions = model.predict(test, type='distribution')

Now let's take a look at some probabilistic-forecasting-enabled methods in pyFTS:

pwfts.ProbabilisticWeightedFTS (PWFTS): This method was entirely designed for probabilistic forecasting and is the best method for this in pyFTS. Its rules contain empirical probabilities associated with the fuzzy sets, and it also has a specific defuzzification rule that transforms an input (crisp) value into a probability distribution for the future value.

ensemble.EnsembleFTS: The before-mentioned ensemble creates probabilistic forecasts using Kernel Density Estimators (KDE) over the point forecasts of the base models. The KDE also requires the specification of the kernel function and the width parameter.

Let's see what the probabilistic forecasting looks like:

The probabilistic forecasting of 4 days using the Util.plot_density function (source)
The probabilistic forecasting of 24 hours using the ProbabilityDistribution.plot function (source)

In the above pictures the probabilistic forecasting is shown from two different perspectives. The first picture is generated with the method plot_density in the module common.Util, where each probability distribution is plotted as a shade of blue whose intensity corresponds to the probability. This method allows plotting the original time series with the forecast probability distributions on top of it. The second picture shows each probability distribution individually, in relation to the universe of discourse, using the method plot of the class ProbabilityDistribution.

Of course that is not everything! We also have to consider interval and probabilistic forecasting for many steps ahead, which we expect to tell us how the uncertainty evolves as the prediction horizon increases.
Yes! It is fascinating, but I still have many things to show, so I will leave it as an exercise for you, ok? Let's walk now on a trickier road...

The land of the Non-Stationarities

You may remember that old and universally known quote: "the only certainty is that nothing is certain". Yes. Forecasting may be unfair because things change all the time and we have to deal with it. Stationarity means, in layman's terms, that the statistical properties of a stochastic process (like its expected value and variance) do not change over time; its ultimate meaning is stability. This is awesome for forecasting models: the test data set will behave exactly like the train data set! On the other hand, non-stationarity means that the statistical properties change. But not all non-stationarities are created equal. Some of them are predictable, like trend and seasonality. Dealing with seasonality is not tricky: you can use High Order, Seasonal or Multivariate methods (you may remember our last tutorial). Dealing with trends is not too complicated either: you can de-trend the data using a difference transformation.

The original time series — NASDAQ — and the differentiated time series (source)

Suppose that we split the above time series in half and call these subsets the training and testing data. You can see that the test subset (after instance number 2000) has values that did not appear before, in the train subset. This is a drawback for most FTS methods: what happens when the input data fall outside the known Universe of Discourse? The model never saw that region before and doesn't know how to proceed, so it fails tragically. You can also see in the above image that the differentiated time series is much better behaved and, indeed, it is stationary.

How can we use the Difference transformation in pyFTS? Just import the Transformations module from pyFTS.common and instantiate it.
Don’t forget to inform the transformation to the partitioning method and also to add it to the model (with the method append_transformation).

from pyFTS.data import NASDAQ
from pyFTS.models import chen
from pyFTS.partitioners import Grid
from pyFTS.common import Transformations

diff = Transformations.Differential(1)

train = data[:2000]
test = data[2000:]

part = Grid.GridPartitioner(data=train, npart=15, transformation=diff)
model = chen.ConventionalFTS(partitioner=part)
model.append_transformation(diff)
model.fit(train)
forecasts = model.predict(test)

Look at the behavior of the classical Chen's model with and without the Differential transformation for the NASDAQ dataset:

The degradation effect of the FTS when the test data falls out of the known Universe of Discourse (source)

While the time series was still fluctuating inside the known Universe of Discourse, both models performed well. But when the time series jumped outside the Universe of Discourse of the training data, the model without the Differential transformation started to deteriorate because it does not know what to do with that unknown data. So the transformations help us not only with trending patterns, but also with unknown ranges of the universe of discourse. But some non-stationarities are unpredictable, and sometimes they are painful to deal with.

The nightmare of the Concept Drift

Concept drifts are unforeseen changes (in mean, variance or both) which can happen gradually or suddenly. Sometimes these drifts occur in cycles (with irregular periods) and, in other scenarios, the drift is temporary. There are some questions to answer when a concept drift happens: Is it temporary? Is the change finished (established) or will it keep changing? We also have to make the distinction between a concept drift and an outlier (or a blip). Outliers are not change; they belong to the known signal but are rare events.
Concept drifts are nightmares — not only for FTS methods; other computational intelligence and statistical techniques fear them too — but we need to learn how to live with them. Despite the complexity of the problem there are some simple (though somewhat expensive, unfortunately) techniques to tackle them.

Time Variant Methods

All FTS methods we saw before are time invariant methods, which means they assume that future fluctuations of the time series will behave according to patterns that have already happened before. In other words: the behavior of the time series, which was described by the fuzzy temporal rules of the model, will not change in the future. This works fine for many time series (for example the environmental seasonal time series we studied before) but fails terribly for others (for instance stock exchange asset prices). In those cases we need to apply time variant models.

incremental.TimeVariant.Retrainer: The Time Variant model is the simplest (but efficient) approach to tackle concept drifts and non-stationarity. This class implements a metamodel, which means that you can choose any FTS method to be its base method; then, every batch_size inputs, the metamodel retrains its internal model with the last window_length inputs. These are the main parameters of the model: the window_length and the batch_size. As a meta model you can also specify which FTS method to use (the fts_method parameter) and which partitioner to use inside it (the partitioner_method and partitioner_params parameters).

from pyFTS.models.incremental import TimeVariant

model = TimeVariant.Retrainer(
    partitioner_method=Grid.GridPartitioner,
    partitioner_params={'npart': 35},
    fts_method=hofts.WeightedHighOrderFTS, fts_params={},
    order=2, batch_size=50, window_length=200)

incremental.IncrementalEnsembleFTS: Works similarly to TimeVariant but with an EnsembleFTS approach.
In TimeVariant there is only one internal model, which is recreated after n inputs (which means that the batch_size is the only memory it has). In IncrementalEnsemble we also have the window_length and batch_size parameters, plus num_models, which says how many internal models to hold. As new models are created (with the incoming data) the older ones are dropped from the ensemble.

from pyFTS.models.incremental import IncrementalEnsemble

model = IncrementalEnsemble.IncrementalEnsembleFTS(
    partitioner_method=Grid.GridPartitioner,
    partitioner_params={'npart': 35},
    fts_method=hofts.WeightedHighOrderFTS, fts_params={},
    order=2, batch_size=50, window_length=200,
    num_models=3)

nonstationary.nsfts.NonStationaryFTS (NSFTS): Non-stationary fuzzy sets are fuzzy sets that can be modified over time, allowing them to adapt to changes in the data by translating and/or scaling their parameters. The NSFTS method is very similar to the time invariant FTS methods, with the exception that its fuzzy sets are not static: for each forecast performed by an NSFTS model the error is calculated and stored, and the fuzzy sets are changed to fix that error. For this method, the error is a measure of how much the test data differ from the train data. This method is on its way to be published.

from pyFTS.models.nonstationary import partitioners as nspart
from pyFTS.models.nonstationary import nsfts

part = nspart.simplenonstationary_gridpartitioner_builder(
    data=train, npart=35, transformation=None)
model3 = nsfts.NonStationaryFTS(partitioner=part)

The pyFTS.data module contains a lot of non-stationary and concept-drifted time series, such as NASDAQ, TAIEX, S&P 500, Bitcoin, Ethereum, etc. You can also use the class data.artificial.SignalEmulator to create synthetic and complex patterns.
The SignalEmulator is designed to work as a method chain / fluent interface, so you can simulate complex signals by chaining methods that produce specific signals which are added to the previous one or replace it. The method stationary_signal creates a simple stationary signal with constant mean and variance; the method incremental_gaussian creates a signal where the mean and/or variance is incremented at each step; the method periodic_gaussian fluctuates the mean and/or variance over constant periods; and the blip method adds an outlier at a random location. Every time one of these methods is called its effects are added to the previous signal, except if you inform the start parameter — indicating when (at which iteration) the method starts to work — or set the boolean parameter additive to False, stopping the previous signal and starting this new one. To render the whole signal you just need to call the function run.

from pyFTS.data import artificial

signal = artificial.SignalEmulator() \
    .stationary_gaussian(mu=2.5, sigma=.1, length=100, it=10) \
    .incremental_gaussian(mu=0.02, sigma=0.01, length=500, start=500) \
    .run()

Now let's put it all together: create 3 non-stationary time series with concept drifts and employ the methods presented above to forecast them:

Performance of the time variant methods for artificial time series with concept drifts (source)

Time Variant methods have to balance some kind of exploitation and exploration when dealing with non-stationarities and concept drifts: to exploit what the model already knows — its memory, the last patterns learned from the data — or to explore new data and learn new patterns. Each method has its own mechanisms: the Retrainer is controlled by the window_length and batch_size, the Incremental Ensemble by both plus num_models, and the NSFTS uses the magnitude of its own errors to adjust the fuzzy sets. After all, the time spent adapting to concept drifts is one of the most important aspects of the time variant methods.
The same principle we saw in previous tutorials applies in this one: each FTS method has its own features and parameters, and the best method will depend on the context. Well, guys, that's enough for today, ok? In these tutorials we have covered — even if superficially — a good portion of the time series forecasting field, with its problems and solutions using FTS methods. We are not finished yet! We will always have problems to solve, new improved methods and optimizations. In the next tutorials I will cover some new approaches, like hyperparameter optimization and how to tackle big time series with distributed computing. See you there, guys!
3,362
15,989
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.9375
3
CC-MAIN-2020-10
latest
en
0.879119
https://milesquarefabricstudio.com/19ke1z05p/
1,620,851,793,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00078.warc.gz
361,132,720
21,519
# Subtraction Sheets For 2nd Grade

Published at Wednesday, September 16th 2020. by in Math Grade 5.

Clear your doubts thoroughly and memorize formulas for their right implementation. Understanding math formulas is not enough to score well in exams. Students should know their right implementation, and hence they can achieve their learning goal. Take learning help from online tutors at your convenient time. Online tutoring is a proven method to get requisite learning help whenever required. This innovative tutoring process does not have any time or geographical restriction. Students from any part of the world can access this learning session, especially for math, by using their computer and internet connection. Most importantly, the beneficial tools like the whiteboard and attached chat box which are used in this process make the entire session interactive and similar to live sessions. Hence, it enhances students' confidence and meets their overall educational demands in the best possible manner. Students can get help on steps to improve their grades in Maths, and they can also work on different grades like 7th grade math; with online Math help students can work on different math related topics.

###### 2nd And 3rd Grade Math Worksheets

The answer to the above question is hidden in a simple example. I always give the example of stairs to my students, and I am giving the same example in this article. I compare the steps of a staircase to the concepts in mathematics. Just as it is very hard to reach the higher floors of a building without stairs (or elevators these days), it is very hard to learn higher concepts in mathematics without learning the basic concepts. People have to start from the ground, then the first step, second, third and so on to reach their destination floor. Exactly the same way, students have to start from Kindergarten, then grade one, grade two and three and so on to reach their math destination.
Also, if some of the steps are broken in the staircase, it is still hard to reach the desired floor using those steps. In the same way, if you are missing some of the basic concepts from elementary grades, math is still hard for you. With adaptive learning programs, your child will not just play one level and complete the program. The games offer a comprehensive learning tool that works with kids from kindergarten through third grade. With hundreds of levels, different ways to play and constant interaction, the online games never lose their meaning. The same children can play the games but in different ways, since the programs are tailored toward the learning styles of each child. This is what makes adaptive learning an essential tool in classrooms as well. For 3rd grade math, you can expect a balance of fractions, graphs, money and multiplication that challenge the mind with each lesson. If you are unsure about investing in a particular program, try a program with a free trial. By implementing these valuable learning aids, you can help your child make the most of third grade. Children who struggle in a traditional learning environment can also get great benefit from digital learning games. Interactive platforms provide a fun way to learn without fear of failure and give rewards that are in line with what is being learned. Through games, your child can gain the confidence he or she needs to approach math concepts that once seemed impossible. This confidence helps improve school performance and can lead to more positive participation in a classroom environment. Unlike basic school curriculum, digital learning games can be designed to move at your child's pace. Many games feature levels that build upon each other, so your child does not have to sit through lessons that he or she has already mastered. Instead, each level of the game increases in difficulty depending on how well certain ideas have been grasped.
This creates a custom learning environment catered to the pace your child feels comfortable with. Without the stress of worrying about being left behind or the boredom that can result from having to wait to move on, kids can work at the speed they prefer and learn in a way that is just right for them. Quality may be a little more expensive, but good worksheets will motivate your child to produce neat work that they can be proud of. If you want to start preparing your child for preschool, kindergarten or even junior school, you need to find preschool worksheets that provide a variety of activities. Literacy, numeracy, reading, writing, drawing, social and natural sciences are some of the areas that children between the ages of 3 and 7 can and should start learning about. Look for variety in the worksheets, as repeating the same exercise over and over will bore your child. Lots of pictures, fun activities and clearly laid out worksheets are what you are looking for. If you are just looking for a few fun pages to keep the kids busy while you cook dinner, then many of the free printable worksheets available will be suitable.
956
5,016
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2021-21
latest
en
0.949174
https://cr4.globalspec.com/blogentry/30238/CR4-Challenge-Question-Big-clock-in-a-New-Year-Jan-2022
1,675,278,061,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00635.warc.gz
205,436,706
752,040
Challenge Questions Blog

### Challenge Questions

So do you have a Challenge Question that could stump the community? Then submit the question with the "correct" answer and we'll post it. If it's really good, we may even roll it up to Specs & Techs. You'll be famous! Answers to Challenge Questions appear by the last Tuesday of the month.

Previous in Blog: CR4 Challenge Question: Remember, no googleing (Dec. 2021)
Next in Blog: Challenge Question: Buckets and Quarts (Jan. 2022)

# CR4 Challenge Question: Big clock in a New Year (Jan. 2022)

Posted December 31, 2021 12:00 AM
Pathfinder Tags: challenge question

The starting time is 00:00:01 on New Year's Day. On what day and time will the clock's hour and minute hands overlap for the 1,000th different time? (And keep in mind whether it's a Leap Year or not!)

Author's note: the correct answer is Feb. 15 at 10:54 a.m. A previous reply mistakenly identified the day as Feb. 14.

Guru
Join Date: Mar 2007
Location: by the beach in Florida
Posts: 31850

#1

### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022)

12/31/2021 2:36 AM

1:00 am feb 12...

__________________
All living things seek to control their own destiny....this is the purpose of life

Guru
Join Date: Apr 2010
Location: About 4000 miles from the center of the earth (+/-100 mi)
Posts: 9252

#2

### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022)

12/31/2021 8:42 AM

Well, the hands overlap 22 times a day. 1000/22 = 45 remainder 10. That puts it on Feb 14, the 10th overlap after 12:00:01. 12 hours = 43200 seconds. Overlaps occur 11 times in 12 hours, or every 3927.272727... seconds. The 10th overlap would occur at second 39272.727272..., or 10:54:32.72727272... in the morning of Feb 14.
Guru Join Date: Apr 2010 Location: About 4000 miles from the center of the earth (+/-100 mi) Posts: 9252 #3 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 12/31/2021 2:04 PM Times hour and minute hands align each day: 1 01:05:27 2 02:10:54 3 03:16:21 4 04:21:49 5 05:27:16 6 06:32:43 7 07:38:10 8 08:43:38 9 09:49:05 10 10:54:32 11 12:00:00 12 13:05:27 13 14:10:54 14 15:16:21 15 16:21:49 16 17:27:16 17 18:32:43 18 19:38:10 19 20:43:38 20 21:49:05 21 22:54:32 22 00:00:00 Guru Join Date: Mar 2007 Location: by the beach in Florida Posts: 31850 #9 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/01/2022 3:17 PM The clock hands must cross every hour, that's why a broken clock is right at least 2 times a day... __________________ All living things seek to control their own destiny....this is the purpose of life Score 1 for Off Topic Guru Join Date: May 2006 Location: Placerville, CA (38° 45N, 120° 47'W) Posts: 6043 #10 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/01/2022 3:32 PM NO! Since the hour hand has advanced 1 hour (roughly 30°) The minute hand must rotate a little over 390° before it reaches the hour hand position the next time. The "little over" is because the hour hand rotated further while the minute hand was rotating that 30+°. So it is a little over 5 minutes more than an hour between crossings. __________________ Teaching is a great experience, but there is no better teacher than experience. Guru Join Date: Mar 2007 Location: by the beach in Florida Posts: 31850 #11 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/01/2022 5:48 PM Not if it's a digital clock... __________________ All living things seek to control their own destiny....this is the purpose of life Guru Join Date: May 2006 Location: Placerville, CA (38° 45N, 120° 47'W) Posts: 6043 #12 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 
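The counting in reply #2 above can be checked with a few lines of Python: the hands of a 12-hour clock overlap every 12/11 hours, and since the overlap at exactly 00:00:00 falls before the 00:00:01 start, the 1,000th different overlap is the 1,000th multiple of that period. The year is taken as 2022 (not a leap year), matching the post date.

```python
from datetime import datetime, timedelta

start = datetime(2022, 1, 1)       # New Year's Day, non-leap year
period = timedelta(hours=12) / 11  # one overlap every ~3927.27 s

t = start + 1000 * period
print(t)  # lands on Feb. 15 at 10:54:32, matching the author's note
```

The per-day arithmetic in reply #2 was right; only the calendar mapping slipped — 45 whole days after Jan. 1 is Feb. 15, not Feb. 14.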
2022) 01/01/2022 9:50 PM Now you've opened a whole new can of worms! All answers so far (including mine) have been assuming a standard mechanical 12-hour clock with physical hands. The image you just showed is either a very non-standard clock, or it is defective. Why is it missing some minute marks? Perhaps they appear one at a time to show seconds... The hour hand appears to point at 10, or possibly a bit before 10, yet the minute hand is clearly pointing at 9 minutes past the hour. This made me go back and look at the original post. It says a large clock, and the illustration does indeed appear to show a large, complex clock. The numbering on the clock is clearly not Arabic numerals, but it is pretty clear that it is a 24 hour clock. The hour hand must be the one with a sun on it, and I presume the black circle represents the moon. There are other pointers and their shadows that make it difficult to tell which is which. I haven't yet fully analyzed it, but I suspect a 24-hour clock would have 23 occurrences of coincidence per day, rather than the 22 of a 12-hour clock. __________________ Teaching is a great experience, but there is no better teacher than experience. Guru Join Date: Apr 2010 Location: About 4000 miles from the center of the earth (+/-100 mi) Posts: 9252 #16 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/02/2022 10:49 AM If it's a 24 hour clock with a minute hand, you are correct, it should have 23 overlaps per 24 hour period. 1000/23 gives 43 days + remainder 11. Overlap 1000 should be on day 44, or Feb 13 The daily overlaps should occur at: 1 01:02:36 2 02:05:13 3 03:07:49 4 04:10:26 5 05:13:02 6 06:15:39 7 07:18:15 8 08:20:52 9 09:23:28 10 10:26:05 11 11:28:41 12 12:31:18 13 13:33:54 14 14:36:31 15 15:39:07 16 16:41:44 17 17:44:20 18 18:46:57 19 19:49:33 20 20:52:10 21 21:54:46 22 22:57:23 23 00:00:00 Overlap 1000 should be on day 44, or Feb 13, overlap 11 (11:28:41).
Guru Join Date: May 2009 Location: Richland, WA, USA Posts: 21011 #20 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/02/2022 9:50 PM An interesting point, but are there actually any 24-hour clocks that divide the circle into 24 parts? All the ones I have seen have 1-12 around the dial, with 13-24 shown in parallel for the second sweep. This keeps the minute positions normal; e.g., 15 minutes always "east." __________________ In vino veritas; in cervisia carmen; in aqua E. coli. Guru Join Date: May 2006 Location: Placerville, CA (38° 45N, 120° 47'W) Posts: 6043 #21 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/02/2022 10:36 PM Did you look at the image of a clock in the original Post, enlarged in my #12? A simple search brought up several, including this one: __________________ Teaching is a great experience, but there is no better teacher than experience. Guru Join Date: May 2006 Location: Placerville, CA (38° 45N, 120° 47'W) Posts: 6043 #17 ### Re: CR4 Challenge Question: Big clock in a New Year (Jan. 2022) 01/02/2022 1:03 PM I can't help wondering whether the clock illustrated in the OP is intended to be a clue, a distraction, or was simply a convenient illustration of a large clock. Also, why is the word "Large" included at all? Does a large clock differ from a small or medium clock in the timing of hand crossings? The symbols on the frontmost dial of the illustrated clock are clearly the signs of the zodiac. The less prominent Roman numerals agree with our current 12-hour time numbers, with one of the 12s at the top. There are (in black) a few Arabic numerals, with 1 at the left and 12 at the right. The more prominent symbols/numbers on the outside are a mystery to me. Why are they offset by 8 hours, and what is the origin of those symbols? The Pacific time zone is offset by 8 hours from GMT, but I'd be quite surprised if that photo represents a clock made for use in the Pacific Time Zone!
__________________ Teaching is a great experience, but there is no better teacher than experience. Guru Join Date: Aug 2005
2,443
8,024
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2023-06
latest
en
0.928714
http://slideplayer.com/slide/4317590/
1,516,333,660,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084887729.45/warc/CC-MAIN-20180119030106-20180119050106-00193.warc.gz
310,569,898
18,163
# 8.1 Monomials and Factoring Objective Students will be able to: 1. find the prime factorization of a monomial. 2. find the greatest common factor (GCF) ## Presentation on theme: "8.1 Monomials and Factoring Objective Students will be able to: 1. find the prime factorization of a monomial. 2. find the greatest common factor (GCF)"— Presentation transcript: 8.1 Monomials and Factoring Objective Students will be able to: 1. find the prime factorization of a monomial. 2. find the greatest common factor (GCF) for a set of monomials. A prime number is a number greater than one that can only be divided by one and itself. A composite number is a number greater than one that is not prime. Prime or composite? 37 prime 51 composite Ex 1: Prime or Composite? 89 1. Prime 2. Composite 3. Both 4. Neither Ex 2) Find the prime factorization of 84. 84 = 4 × 21 = 2 × 2 × 3 × 7 Ex 3) Find the prime factorization of -210. -210 = -1 × 210 = -1 × 30 × 7 = -1 × 6 × 5 × 7 = -1 × 2 × 3 × 5 × 7 Ex 4) Find the prime factorization of 45a²b³. 45a²b³ = 9 × 5 × a × a × b × b × b = 3 × 3 × 5 × a × a × b × b × b = 3² × 5 × a × a × b × b × b Write the variables without exponents. The Greatest Common Factor (GCF) of 2 or more numbers is the largest number that can divide into all of the numbers. Ex 5) Find the GCF of 42 and 60. Write the prime factorization of each number: 42 = 2 × 3 × 7, 60 = 2 × 2 × 3 × 5. What prime factors do the numbers have in common? Multiply those numbers. The GCF is 2 × 3 = 6. 6 is the largest number that can go into 42 and 60! Ex 6) Find the GCF of 40a²b and 48ab⁴. 40a²b = 2 × 2 × 2 × 5 × a × a × b 48ab⁴ = 2 × 2 × 2 × 2 × 3 × a × b × b × b × b What do they have in common? Multiply the factors together. GCF = 8ab Ex 7) What is the GCF of 48 and 64? 1. 2 2. 4 3. 8 4. 16 8.1 HW PG. 472 #10 – 21 ALL, 43 – 47 ODD (15 PROBLEMS) What is the prime factorization of 48? 1. 3 × 16 2. 3 × 4 × 4 3. 2 × 2 × 3 × 4 4. 2 × 2 × 2 × 2 × 3
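The GCF recipe in the slides (factor into primes, multiply the shared factors) can be sketched in Python; `prime_factors` is a hypothetical helper written for this example, and `math.gcd` confirms the results.

```python
from math import gcd
from collections import Counter

def prime_factors(n):
    """Prime factorization by trial division, e.g. 84 -> [2, 2, 3, 7]."""
    n, out, p = abs(n), [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

# Ex 5: GCF of 42 and 60 -- the shared prime factors are 2 and 3, so GCF = 6.
shared = Counter(prime_factors(42)) & Counter(prime_factors(60))
print(sorted(shared.elements()), gcd(42, 60))  # [2, 3] 6

# Ex 6: the coefficient part of GCF(40a^2b, 48ab^4) is gcd(40, 48) = 8;
# with the lowest power of each shared variable this gives 8ab.
print(gcd(40, 48))  # 8
```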
742
2,043
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.8125
5
CC-MAIN-2018-05
longest
en
0.854749
https://askfilo.com/user-question-answers-mathematics/so-litres-so-the-capacity-of-the-milk-container-is-6-litres-33333135343430
1,723,550,460,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641076695.81/warc/CC-MAIN-20240813110333-20240813140333-00681.warc.gz
85,367,608
30,633
Question # So, litres So, the capacity of the milk container is 6 litres. Example 2. A water tanker is filled with 40,000 litres of water. The length of the water tanker is , breadth is , find its height. Solution: Exercise 1. A copper cube with edge is melted and made into smaller cubes each having edge of 9 cm. How many such small copper cubes do we get? 2. The edge of a cubical water tank is . How many kilo litres of water can it store? 3. The volume of a cube is . Find the edge of the cube. Updated On Dec 12, 2022 Topic All topics Subject Mathematics Class Class 9 Answer Type Video solution: 1 Avg. Video Duration 8 min
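Exercise 1 above is a volume-ratio count: the number of small cubes is (big edge)³ / (small edge)³. The big cube's edge is missing from this extract, so the sketch below assumes a hypothetical 18 cm edge purely for illustration.

```python
big_edge = 18    # cm -- assumed value; the original figure is missing above
small_edge = 9   # cm, as stated in the exercise

count = big_edge ** 3 // small_edge ** 3
print(count)  # 8: two small cubes fit along each edge, and 2**3 = 8
```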
465
1,838
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.734375
4
CC-MAIN-2024-33
latest
en
0.867306
http://codeforces.com/topic/78135/en2
1,653,272,959,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00150.warc.gz
14,085,249
13,952
Connecting Circles Revision en2, by Newtech66, 2020-05-18 16:22:18 ### The problem: Assume there are $n$ circles on the plane. The $i^{th}$ circle has an initial radius $r_i$ $(r_i \geq 0)$. We are allowed to increase or decrease the radius of the $i^{th}$ circle by $1$ unit at a cost $c_i$ $(c_i > 0)$. Let us make a graph such that each circle is a node, and there is an undirected edge between two circles $C_i$ and $C_j$ if their intersection is not empty (just to be clear, the cases are: they touch internally/externally, they intersect at two points, one lies inside the other). Find the minimum cost to make the graph connected. #### Source: Trying to think of new and interesting problems and then creating this problem which I can't solve at all. The inspiration here was from radio stations. Every radio station has a coverage radius, and if we make the network connected, a message can travel between any two radio stations. I have given up on this problem. I would appreciate it if someone can enlighten me on how to solve this problem or with any restrictions on it (eg. "$r_i=0$", "All $c_i$ are equal", etc). #### Time complexity required: Anything works, I haven't even been able to figure out an approach. #### History Revisions Rev. Lang. By When Δ Comment en2 Newtech66 2020-05-18 16:22:18 0 (published) en1 Newtech66 2020-05-18 16:21:09 1192 Initial revision (saved to drafts)
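Since the post asks for any approach at all: one simple baseline (my own sketch, not a known solution) prices each pair of circles independently — pay to close the gap, or the containment deficit, at the cheaper circle's rate — and runs Prim's MST over those edge costs. As the post hints, this only yields an upper bound on the true optimum, because a single radius change can serve several edges at once.

```python
import math

def direct_cost(p1, r1, c1, p2, r2, c2):
    """Cost to make two circles intersect, pricing this pair in isolation."""
    d = math.dist(p1, p2)
    if abs(r1 - r2) <= d <= r1 + r2:
        return 0.0                       # already intersecting
    if d > r1 + r2:                      # separated: grow the cheaper circle
        return (d - r1 - r2) * min(c1, c2)
    # one circle strictly inside the other: shrink/grow at the cheaper rate
    return (abs(r1 - r2) - d) * min(c1, c2)

def mst_upper_bound(circles):
    """Prim's algorithm over pairwise direct costs (an upper bound only)."""
    n = len(circles)
    in_tree, total = {0}, 0.0
    while len(in_tree) < n:
        cost, j = min((direct_cost(*circles[i], *circles[j]), j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        total += cost
        in_tree.add(j)
    return total

# (center, radius, unit cost) triples -- sample data for illustration
circles = [((0, 0), 1, 1), ((10, 0), 1, 2), ((20, 0), 1, 1)]
print(mst_upper_bound(circles))  # 16.0
```

On the sample data the tree closes the two gaps of 8 (each at unit rate 1) instead of the long 18-unit edge, giving 16.0.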
377
1,410
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.25
3
CC-MAIN-2022-21
latest
en
0.923716
http://www.themathpage.com/arith/ar_pr/div1.htm
1,394,471,722,000,000,000
text/html
crawl-data/CC-MAIN-2014-10/segments/1394010914773/warc/CC-MAIN-20140305091514-00077-ip-10-183-142-35.ec2.internal.warc.gz
532,619,054
4,819
THE MEANING OF DIVISION PROBLEMS 1. a) What number times 100 will be 287? Answer using the division sign ÷. Do the problem yourself first! 287 ÷ 100 = 2.87 "287 divided by 100 equals (or is) 2.87." c) What is 287 called? The dividend. d) What is 100 called? The divisor. e) What is 2.87 called? The quotient. f) Prove your answer to part a). 2.87 × 100 = 287 2. Practice the following. 3 × 6 = 18. 4 × 8 = 32. 5 × 9 = 45. 6 × 9 = 54. 7 × 6 = 42. 8 × 6 = 48. 9 × 3 = 27. 3 × 8 = 24. 4 × 6 = 24. 5 × 6 = 30. 6 × 5 = 30. 7 × 4 = 28. 8 × 9 = 72. 9 × 6 = 54. 3 × 9 = 27. 4 × 9 = 36. 5 × 7 = 35. 6 × 7 = 42. 7 × 3 = 21. 8 × 5 = 40. 3 × 4 = 12. 4 × 4 = 16. 6 × 3 = 18. 4 × 5 = 20. 9 × 6 = 54. 3. a) Let this straight line represent 24, and illustrate 24 ÷ 4. b) Prove 24 ÷ 4 = 6. 6 × 4 = 24 c) Illustrate 24 ÷ 6. d) Prove 24 ÷ 6 = 4. 4 × 6 = 24 4. a) How many times could you subtract 8 from 40? 5 times. b) Write that problem using the division sign ÷. 40 ÷ 8 = 5 c) Write that problem using the division bar. 40/8 = 5 d) In that form, what is 40 called? The dividend. e) What is 8 called? The divisor. f) What is 5 called? The quotient. 5. a) From a bottle that contains 48 oz, how many times could you fill a 6 oz glass? 8 times. 48 ÷ 6 = 8. b) From a bottle that contains 2 quarts, how many times could you fill an 8 oz glass? (1 quart = 32 oz) 8 times. 64 oz ÷ 8 oz = 8. To divide, the units must be of the same kind. 6. A farmer has a field that is 100 yards long, and he wants to put a fence post every 4 feet. How many fence posts must he put? How many times is 4 feet contained in 100 yards? To answer, we have to change yards to feet because, again, the units must be the same. Since 1 yard = 3 feet, then 100 yards = 300 feet. How many 4's are there in 300? Now, one hundred is made up of twenty-five 4's: 100 = 25 × 4. Therefore, three hundred = (3 × 25) × 4 = 75 × 4. The farmer must put 75 fence posts. (Alternatively, to divide a number by 4 is equivalent to taking its fourth part. And to take the fourth part, we can take half of half. Lesson 16, Question 9. Half of 300 is 150. Half of 150 is 75.) But what about the fence post at the beginning of the field -- at the "zero" point? In reality, then, he would put 76 fence posts! 7. If you divide a number by 8, what are the possible remainders? 1, 2, 3, 4, 5, 6, 7. 8. Divide each number by 8 mentally. Write the whole number quotient and the remainder. 18: 2 R 2. 29: 3 R 5. 39: 4 R 7. 54: 6 R 6. 76: 9 R 4. 9. Divide each number by 7 mentally. Write the whole number quotient and the remainder. 17: 2 R 3. 25: 3 R 4. 31: 4 R 3. 41: 5 R 6. 61: 8 R 5. 10. Divide each number by 9 mentally. Write the whole number quotient and the remainder. 17: 1 R 8. 25: 2 R 7. 31: 3 R 4. 41: 4 R 5. 62: 6 R 8. 11. Prove: 157 ÷ 14 = 11 R 3. 11 × 14 = 140 + 14 = 154, plus 3 is 157. 12. How will 436 change if you multiply it by 10 and then divide the product by 10? It will not change! 13. Divide. a) 40/5 = 8 b) 35/7 = 5 c) 28/4 = 7 d) 63/9 = 7 e) 54/6 = 9 f) 24/4 = 6 g) 72/9 = 8 h) 27/3 = 9 i) 35/7 = 5 j) 45/9 = 5 k) 32/8 = 4 l) 48/6 = 8 m) 240/6 = 40 n) 420/6 = 70 o) 720/8 = 90 p) 120/4 = 30 q) 320/4 = 80 r) 450/5 = 90 s) 480/8 = 60 t) 400/5 = 80 u) 540/9 = 60 v) 300/6 = 50 w) 630/7 = 90 x) 360/9 = 40 y) 2100/3 = 700 z) 6400/8 = 800 a') 4200/7 = 600 b') 1000/5 = 200 c') 2000/5 = 400 d') 3000/5 = 600 e') 2000/4 = 500 f') 4000/5 = 800 14. a) 245/10 = 24.5 b) 245/100 = 2.45 c) 245/1000 = .245 d) 9/10 = .9 e) 9/100 = .09 f) 9/1000 = .009 Continue on to the next section.
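The quotient-and-remainder drills in this lesson map directly onto Python's `divmod`, and the "prove" step is the identity quotient × divisor + remainder = dividend:

```python
q, r = divmod(157, 14)
print(q, r)               # 11 3, as in the 157 ÷ 14 problem
assert q * 14 + r == 157  # the proof: 11 × 14 + 3 = 157

# Dividing each number by 8, keeping quotient and remainder:
for n in (18, 29, 39, 54, 76):
    print(n, "->", divmod(n, 8))
```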
1,767
3,893
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.40625
4
CC-MAIN-2014-10
longest
en
0.69119
https://questions.examside.com/past-years/jee/question/let-c1-be-the-circle-of-radius-1-with-center-at-the-origi-jee-advanced-mathematics-hcmryzpauqaypmlo
1,721,334,774,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514859.56/warc/CC-MAIN-20240718191743-20240718221743-00147.warc.gz
417,858,087
42,855
1 JEE Advanced 2023 Paper 2 Online Numerical +4 -0 Let $C_1$ be the circle of radius 1 with center at the origin. Let $C_2$ be the circle of radius $r$ with center at the point $A=(4,1)$, where $1 < r < 3$. Two distinct common tangents $P Q$ and $S T$ of $C_1$ and $C_2$ are drawn. The tangent $P Q$ touches $C_1$ at $P$ and $C_2$ at $Q$. The tangent $S T$ touches $C_1$ at $S$ and $C_2$ at $T$. Mid points of the line segments $P Q$ and $S T$ are joined to form a line which meets the $x$-axis at a point $B$. If $A B=\sqrt{5}$, then the value of $r^2$ is : 2 JEE Advanced 2022 Paper 1 Online Numerical +3 -0 Let $$A B C$$ be the triangle with $$A B=1, A C=3$$ and $$\angle B A C=\frac{\pi}{2}$$. If a circle of radius $$r>0$$ touches the sides $$A B, A C$$ and also touches internally the circumcircle of the triangle $$A B C$$, then the value of $$r$$ is __________ . 3 JEE Advanced 2021 Paper 2 Online Numerical +2 -0 Consider the region R = {(x, y) $$\in$$ R $$\times$$ R : x $$\ge$$ 0 and y2 $$\le$$ 4 $$-$$ x}. Let F be the family of all circles that are contained in R and have centers on the x-axis. Let C be the circle that has largest radius among the circles in F. Let ($$\alpha$$, $$\beta$$) be a point where the circle C meets the curve y2 = 4 $$-$$ x. The radius of the circle C is ___________. 4 JEE Advanced 2021 Paper 2 Online Numerical +2 -0 Consider the region R = {(x, y) $$\in$$ R $$\times$$ R : x $$\ge$$ 0 and y2 $$\le$$ 4 $$-$$ x}. Let F be the family of all circles that are contained in R and have centers on the x-axis. Let C be the circle that has largest radius among the circles in F. Let ($$\alpha$$, $$\beta$$) be a point where the circle C meets the curve y2 = 4 $$-$$ x. The value of $$\alpha$$ is ___________. © ExamGOAL 2024
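For problem 2, placing A at the origin with B = (1, 0) and C = (0, 3) makes BC a diameter of the circumcircle, so its center is (1/2, 3/2) with radius √10/2, while a circle of radius r touching both legs has center (r, r). Internal tangency then reads |(1/2, 3/2) − (r, r)| = √10/2 − r, which solves to r = 4 − √10. The check below is my own working, not an official solution.

```python
import math

R = math.sqrt(10) / 2  # circumradius: hypotenuse BC = sqrt(10) is a diameter
r = 4 - math.sqrt(10)  # candidate closed form, about 0.8377

# Internal tangency: distance between the two centers equals R - r.
lhs = math.hypot(0.5 - r, 1.5 - r)
print(round(r, 4), math.isclose(lhs, R - r))  # 0.8377 True
```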
688
1,999
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2024-30
latest
en
0.747876
https://www.authorstream.com/Presentation/studioussim-1465194-maths-square-roots/
1,627,707,223,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046154053.17/warc/CC-MAIN-20210731043043-20210731073043-00420.warc.gz
659,234,093
26,372
# maths square n square roots Category: Education ## Presentation Transcript ### Squares & Square Roots: Squares & Square Roots Lesson 1 By Ms. Simran Chandna Roll No: 10 Std: VIII C ### Square Numbers: 1 x 1 = 1 2 x 2 = 4 3 x 3 = 9 4 x 4 = 16 5 x 5 = 25 6 x 6 = 36 7 x 7 = 49 8 x 8 = 64 9 x 9 = 81 10 x 10 = 100 11 x 11 = 121 12 x 12 = 144 13 x 13 = 169 14 x 14 = 196 15 x 15 = 225 ### Square Numbers: One property of a perfect square is that it can be represented by a square array. Each small square in the array shown has a side length of 1 cm. The large square has a side length of 4 cm. ### Square Numbers: The large square has an area of 4 cm x 4 cm = 16 cm². The number 4 is called the square root of 16. We write: 4 = √16. ### Finding Square Roots: Find the square root of 256: √256 = √(4 x 64) = 2 x 8 = 16 ### Estimating Square Roots: Not all numbers are perfect squares. Not every number has an integer for a square root. We have to estimate square roots for numbers between perfect squares. ### Perfect and non-perfect squares: Of the above numbers, 4, 25, 64, 81, 100 are squares of integers. A number like this, which is the square of an integer, is called a perfect square. The numbers 2, 3, 5, 20, 27, 93 are not the squares of any integer; in other words, these numbers are not perfect squares. ### PowerPoint Presentation: Thanks for watching my PowerPoint presentation. Hope you all liked it. ### PowerPoint Presentation: By Ms. Simran Chandna
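The bracketing idea in the "estimating square roots" slides — a non-perfect square sits between two consecutive perfect squares — looks like this with Python's integer square root:

```python
from math import isqrt

def is_perfect_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

print(is_perfect_square(25), is_perfect_square(27))  # True False

# Estimate sqrt(27): bracket it between consecutive perfect squares.
n = 27
lo = isqrt(n)   # 5, since 5*5 = 25 <= 27
hi = lo + 1     # 6, since 6*6 = 36 > 27
print(f"{lo} < sqrt({n}) < {hi}")  # 5 < sqrt(27) < 6
```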
512
1,740
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.8125
5
CC-MAIN-2021-31
latest
en
0.822007
https://physics.stackexchange.com/tags/navier-stokes/info
1,653,715,130,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00774.warc.gz
500,907,477
19,930
Tag Info The Navier-Stokes equations describe fluid flows in continuum mechanics. ## When to Use the Tag and aims of this description Use the tag when asking questions about fluid flows as modeled by the Navier-Stokes equations. Hopefully, this description may give a basis for unified notation in discussions. Perhaps it will even help people to formulate questions in a clearer way. ## Introduction The Navier-Stokes equations model fluid flows based on the hypothesis: the molecular nature of matter is ignored. This motivates the use of differential equations to express basic mechanical principles. When reasoning in terms of "particles" in this context, one should understand "a small amount" of matter (much larger than the size of molecules, but still small enough for them to be "infinitesimal" with respect to the differential mathematics involved), not molecules. In what follows, bold symbols denote vectors (2D or 3D) and the usual differential operators are used without explanation. The following quantities are used throughout: • $\boldsymbol{u}$: the velocity field, • $p$: the pressure field, • $\rho$: the density of the fluid, • $E$: the total energy of the flow, • $\boldsymbol{q}$: the heat flux, • $\sigma$: the Cauchy stress tensor, expressing the internal forces that neighbouring particles of fluid exert upon each other, • $\mu$: the dynamic viscosity (assumed constant throughout), • $\boldsymbol{f}$: external (volumic) forcing term (for example, the acceleration of gravity $\boldsymbol{g}$). ## General formulation We need to respect three principles: • Mass conservation: no matter is created nor destroyed, • The rate of change of momentum of a fluid particle is equal to the force applied to it (Newton's second law) • Energy conservation: it is neither created nor destroyed. 
In all generality, these may be expressed as: Mass conservation: $\partial_t \rho + \nabla \cdot (\rho \boldsymbol{u}) = 0$ Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla) \boldsymbol{u}) = \nabla \cdot \sigma + \rho \boldsymbol{f}$ Energy conservation: $\partial_t E + \nabla \cdot (\boldsymbol{u} E + \sigma \boldsymbol{u} - \boldsymbol{q}) = 0$ ## Incompressible fluids In a context where we ignore heat phenomena (isothermal fluid), assume constant density of the fluid as well as a linear relation between stress and strain, $\sigma = -p \mathbb{I} + \mu ( \nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^T)$ (Newtonian fluid), the important conservation laws are those for mass and momentum (i.e., Newton's second law), which are given by Mass conservation: $\nabla \cdot \boldsymbol{u} = 0$ Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u}) = -\nabla p + \mu \Delta \boldsymbol{u} + \rho \boldsymbol{f}$ The pressure acts as a means to enforce the incompressibility (divergence-free) condition represented by the mass conservation equation; it does not have the same meaning as in the compressible case. Energy is not conserved in this context: it is dissipated by the viscous nature of the fluid (internal friction) and lost as heat (which we don't "track" in this context). This is perhaps the most commonly encountered form of the Navier-Stokes equations. They are famously the subject of a Clay Mathematics Institute Millennium Prize. ## Compressible fluids When the fluid is compressible, the density becomes a field to be solved for, and we need an additional equation for the system, provided by the energy conservation principle. The exact form depends on the nature of the fluid, more precisely its thermodynamic behaviour.
Mass conservation: $\partial_t \rho + \nabla \cdot (\rho \boldsymbol{u}) = 0$ Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u}) = -\nabla p + \mu \Delta \boldsymbol{u} + (\mu/3 + \mu^v)\nabla(\nabla\cdot \boldsymbol{u})+ \rho \boldsymbol{f}$ where $\mu^v$ is the bulk viscosity coefficient. Conservation of energy may be expressed in various ways depending on the fluid; a general discussion would require delving into thermodynamic considerations, which are outside the scope of this article. [Simple example?] ## Remarks As in all physical problems, to obtain a unique and physically reasonable solution one must know the initial conditions and the conditions at all boundaries. An example of boundary conditions is the no-slip condition, which requires the fluid to adhere to the boundary: $\boldsymbol{u}\vert_{\text{boundary}} = \boldsymbol{v}_{\text{boundary}}$ The equations above are expressed in physical dimensions. It is possible to rescale time and space and normalize the velocity and pressure fields in a number of ways to get rid of the physical constants or to make a specific new one appear. The best known formulation involves the Reynolds number: $$\boldsymbol{u}(\boldsymbol{x},t), p(\boldsymbol{x},t) \mapsto U~\boldsymbol{u}(\boldsymbol{x}/L,~t~U/L),\rho U^2~p (\boldsymbol{x}/L,~t~U/L)$$ with $L, U$ a reference length and velocity (of a moving body, for example). This leads to writing the balance of momentum equation for incompressible flow (neglecting the forcing term) as: $$\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u} = -\nabla p + \frac{1}{\mathrm{Re}} \Delta \boldsymbol{u}$$ where $$\mathrm{Re} = \frac{\rho L U}{\mu}$$ is the Reynolds number, expressing the ratio of inertial to viscous effects. The higher the number, the bigger the influence of inertial effects. In the infinite Reynolds number limit, we recover the Euler equations.
High Reynolds number flows tend to exhibit turbulence. ## Prerequisites to Navier-Stokes Phys: Newtonian Mechanics; Classical Mechanics; Continuum Mechanics; ... Math: Partial Differential Equations (PDE); ... ## Recommended books Batchelor, G.K., An introduction to fluid dynamics, Cambridge University Press (1967) Chorin, A.J. and Marsden, J.E., A mathematical introduction to fluid mechanics, Springer (1993)
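As a quick numerical illustration of the Reynolds number defined above (the water properties used are rough textbook values, chosen only for the example):

```python
def reynolds(rho, L, U, mu):
    """Re = rho * L * U / mu, the ratio of inertial to viscous effects."""
    return rho * L * U / mu

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa s) past a 1 m body at 1 m/s:
Re = reynolds(rho=1000.0, L=1.0, U=1.0, mu=1.0e-3)
print(Re)  # 1000000.0 -- strongly inertia-dominated
```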
1,485
6,031
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.609375
4
CC-MAIN-2022-21
latest
en
0.845959
https://windowssecrets.com/forums/showthread.php/74562-Sum-2-IIF-Txt-Boxes-%282003-SP1%29
1,508,687,921,000,000,000
text/html
crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00700.warc.gz
892,049,755
15,465
# Thread: Sum 2 IIF Txt Boxes (2003 SP1) 1. ## Sum 2 IIF Txt Boxes (2003 SP1) I have a Report that calculates the Total amount due based on certain criteria, ie, =IIf([FeeBand]='U',([Sum Of budget_amount]/2),"") =IIf([FeeBand]='T',([Sum Of budget_amount]*10/100),"") These 2 Controls are called txtUnsecuredAchieve & txtTargetAchieve I tried to create another Text Box that would sum these Totals =Sum([txtTargetAchieve])+([txtUnsecuredAchieve]), but I just get an error message, and I don't understand what I'm doing wrong. Is it not possible to Sum two IIF Fields? 2. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) I would use : =IIf([FeeBand]='U',([Sum Of budget_amount]/2),0) =IIf([FeeBand]='T',([Sum Of budget_amount]*10/100),0) and to total those two, use =[txtTargetAchieve]+[txtUnsecuredAchieve] as the sum (I suppose) is already done in the textboxes. Or do I misunderstand and you want =Sum([txtTargetAchieve])+Sum([txtUnsecuredAchieve]), 3. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) You're correct in that putting a 0 instead of "" gets the calculation to work (kind of), but there are 2 problems as I have the 2 text boxes overlayed to ensure that I only get a result if the condition is true (ie, I don't want to see 4. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Try =IIf([FeeBand]='U',([Sum Of budget_amount]/2),Null) =IIf([FeeBand]='T',([Sum Of budget_amount]*10/100),Null) =Sum(IIf([FeeBand]='U',([Sum Of budget_amount]/2),Null))+Sum(IIf([FeeBand]='T',([Sum Of budget_amount]*10/100),Null)) 5. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Thank you both for your replies. The Null Element works, meaning I can overlay the text boxes and only get one displaying (as should be), but with the second argument supplied, I get prompted to enter the Value of Sum Of budget_amount, meaning that I get a blank result? Thanks again for your help. 6.
## Re: Sum 2 IIF Txt Boxes (2003 SP1) What is "Sum Of budget_amount"? Is it the name of a field in the record source of the report, or is it the name of a control on the report? 7. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) It's the Name of the Text Box on the Fee Band Footer---its Control Source is =Sum([budget_amount]). Does that help locate the problem? 8. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Yep, it does. You cannot sum a control, only a field in the record source (or an expression based on a field in the record source). One possible solution goes as follows: Duplicate the text boxes txtUnsecuredAchieve & txtTargetAchieve within the Fee Band footer. Set the Visible property of the duplicates to No. Set the Running Sum property of the duplicates to Over All. Name them for example txtUnsecuredRunning and txtTargetRunning. Put a text box in the report footer with control source =[txtUnsecuredRunning]+[txtTargetRunning] 9. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Hans You are SUCH a genius--I wish I had your brains and knowledge. Thanks so much. All I had to change was 'Over All' to 'Over Group', as that's actually what I wanted, and it worked. You're a star! Thanks again (to both of you) for your replies. God Bless 10. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) OK, this is embarrassing, but I can't get a Grand Total at the End of the Report. I've tried all the Running Sum Options, and I tried to duplicate the solution outlined above, but that didn't work either. Sorry for being so stupid, but . . . Help? 11. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Grand total of what? Please provide specific information. 12. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Sorry, the Grand Total of the Running Totals, ie, =[txtTargetRunning]+[txtUnsecuredRunning] (called txtTotalACTUAL). Thanks.
13. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) If you put a text box with control source =[txtTargetRunning]+[txtUnsecuredRunning] in the report footer, it should display the grand total. (I thought you had already done that) 14. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) No, believe it or not, that just gives me the same figure as that which appears as the final Amount on the Last Detail Page of the Report, eg, If the Final Unsecured Amount is 15. ## Re: Sum 2 IIF Txt Boxes (2003 SP1) Have you tried yet another set of duplicates with Running Sum set to Over All, and a text box in the report footer that refers to these? If that doesn't do what you want, I don't understand the situation.
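The fix that emerges in replies 4 and 8 hinges on aggregates skipping Null, whereas a text value like "" breaks a numeric sum. A rough Python analogue of that conditional-sum idea, with made-up fee data (not the poster's):

```python
# Hypothetical (FeeBand, budget_amount) rows, echoing the report's logic.
rows = [("U", 100.0), ("T", 200.0), ("U", 50.0), ("X", 30.0)]

def fee(band, amount):
    if band == "U":
        return amount / 2            # IIf([FeeBand]='U', ..., Null)
    if band == "T":
        return amount * 10 / 100     # IIf([FeeBand]='T', ..., Null)
    return None                      # Null: skipped by the sum, unlike ""

# Sum only the non-Null results, as Access's Sum() does over Null values.
total = sum(v for v in (fee(b, a) for b, a in rows) if v is not None)
print(total)  # 95.0
```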
1,387
4,916
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2017-43
latest
en
0.771668
http://math.stackexchange.com/questions/249483/proving-equation-with-hilbert-symbol
1,469,357,721,000,000,000
text/html
crawl-data/CC-MAIN-2016-30/segments/1469257823996.40/warc/CC-MAIN-20160723071023-00043-ip-10-185-27-174.ec2.internal.warc.gz
164,987,002
15,880
# Proving equation (with Hilbert symbol) Let $K$ be a field with non-zero elements $a,b,c \in K$ and let $(. , .)$ be the Hilbert symbol. Let $(a,-c)=(-1,ac)$ and $(b,-c)=(-1,bc)$. How to show that $(-ab,-c)=(-1,-abc)$ ? - I've computed: $(-ab,-c)=(-c,a)(-c,-b)=(-1,ac)(-c,-b)=(-1,ac)(-c,b)(-c,-1)=(-1,ac)(-1,bc)(-1,-c)=(-1,-acbcc)$. But where is a mistake? – David75 Dec 2 '12 at 21:28 There is no mistake. Recall the definition of Hilbert symbol: $(-1, c^2) = 1$. – user27126 Dec 2 '12 at 21:29 Thanks! It's clear now :-) – David75 Dec 2 '12 at 21:33
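A cleaned-up sketch of the comment's computation, using bimultiplicativity of the Hilbert symbol and the hint that $(-1,c^2)=1$:

```latex
\begin{aligned}
(-ab,-c) &= (a,-c)\,(-b,-c) \\
         &= (a,-c)\,(-1,-c)\,(b,-c) \\
         &= (-1,ac)\,(-1,-c)\,(-1,bc) && \text{(by the two hypotheses)} \\
         &= (-1,\; ac\cdot(-c)\cdot bc) \\
         &= (-1,-abc\,c^{2}) = (-1,-abc)\,(-1,c^{2}) = (-1,-abc).
\end{aligned}
```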
233
558
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2016-30
latest
en
0.62863
http://www.aboutmech.com/2016/08/venturimeter.html
1,606,803,596,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141652107.52/warc/CC-MAIN-20201201043603-20201201073603-00258.warc.gz
101,149,424
49,424
### Venturimeter Venturimeter is an instrument used to measure the discharge of liquid flowing in a pipe. It consists of three parts, i.e. the converging cone, the throat and the diverging cone. The length of the divergent cone is made about three to four times that of the convergent cone in order to avoid the tendency of the stream of liquid to break away and to minimise frictional losses. It may be noted that (a) The velocity of liquid at the throat is higher than that at the inlet. (b) The pressure of liquid at the throat is lower than that at the inlet. (c) The velocity and pressure of liquid flowing through the divergent portion decrease. The discharge through a venturimeter is given by where Cd = Coefficient of discharge, a1 = Area at inlet, a2 = Area at throat, and 1. In a venturimeter, does flow take place at atm or gauge or absolute pressure? 1. Atmospheric pressure 2. Atmospheric pressure
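The discharge formula itself was lost with the page's image; the standard venturimeter relation is Q = Cd·a1·a2·√(2gh) / √(a1² − a2²), with h the venturi head. A small Python sketch (the numbers below are illustrative, not from the article):

```python
import math

def venturi_discharge(cd, a1, a2, h, g=9.81):
    """Discharge Q = Cd * a1 * a2 * sqrt(2*g*h) / sqrt(a1**2 - a2**2).

    cd: coefficient of discharge, a1/a2: inlet/throat areas (m^2),
    h: venturi head (m), g: gravitational acceleration (m/s^2).
    """
    return cd * a1 * a2 * math.sqrt(2 * g * h) / math.sqrt(a1**2 - a2**2)

# Illustrative values: Cd = 0.98, inlet 0.01 m^2, throat 0.005 m^2, head 0.5 m.
q = venturi_discharge(0.98, 0.01, 0.005, 0.5)
print(round(q, 4))  # 0.0177 m^3/s
```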
387
1,922
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.0625
3
CC-MAIN-2020-50
latest
en
0.960955
https://fiveable.me/ap-stats/unit-2/correlation/study-guide/LlS81pC6QricXgIKNuFM
1,632,419,646,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00194.warc.gz
309,402,287
59,968
# 2.5 Correlation Peter Cao ## What is Correlation? Correlation is when two variables are related to each other, and this is numerically represented with the correlation coefficient, which in stats we denote as r. The correlation coefficient shows the degree to which there is a linear correlation between the two variables, that is, how close the points are to forming a line. It can be positive or negative and this is the same as the direction of the scatterplot. The coefficient takes a value between -1 and 1, where r=-1 means that the points fall exactly on a decreasing line while r=1 means that the points fall exactly on an increasing line. A correlation coefficient of 0 means that there is no correlation between the data points. ### Examples Here are some scatterplots and their values of r: image courtesy of: math.nayland.school.nz Also, there are a few things to keep in mind about correlation. • Even if r has a high magnitude, the relationship may not be linear, but instead it may be curved. We will discuss this more in later sections. • A high magnitude of correlation does not imply causation. • The correlation coefficient is not resistant to outliers, which makes sense, given that the formula that we shall learn uses the mean and standard deviation, which by themselves are not resistant. ## Calculating the Correlation Coefficient To find the value of r, we have this formula that is found on the formula sheet: r = (1/(n-1)) Σ [(xᵢ − x̄)/sₓ][(yᵢ − ȳ)/s_y] Although this may seem like a complicated formula, it’s not that bad to understand (but harder to compute). To find r, first find the mean and standard deviations of both the x and y variables. Then, for each data point, multiply the x and y z-scores for that point. Finally, add all the individual products up and divide by the number of data points minus 1. You will seldom need to do this by hand, and most graphing calculators can easily find this.
On the most common graphing calculator used in AP Stats (TI-84), you will enter your data into L1 and L2, go to Stats>Calc>LinReg like below: To be sure that you get the r-value, verify that "Stats Diagnostics" is on via MODE. 🎥Watch: AP Stats - Scatterplots and Association
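The z-score recipe described above translates directly to code; a short Python sketch with illustrative data (not from the guide):

```python
import math

def correlation(xs, ys):
    """Pearson r via the formula-sheet recipe:
    r = (1 / (n - 1)) * sum of (z-score of x) * (z-score of y),
    using sample standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum(((x - mx) / sx) * ((y - my) / sy)
               for x, y in zip(xs, ys)) / (n - 1)

print(correlation([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # ≈ 1.0 (perfect line)
print(correlation([1, 2, 3], [3, 2, 1]))               # ≈ -1.0
```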
749
3,031
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.5
4
CC-MAIN-2021-39
latest
en
0.935198
https://www2.clarku.edu/faculty/djoyce/ma120/background.html
1,638,178,709,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00069.warc.gz
1,183,091,089
2,921
## Mathematics background needed for calculus ### Clark University You need to know a fair amount of mathematics before embarking on a study of calculus. Listed below are topics in mathematics that are used in calculus. Some are essential for the development of the subject. They're marked with the symbol . Others are used incidentally in applications of calculus. Most of them we assume that you know and we won't review them at all, but we'll remind you a bit about a few of them as we use them. Most of the topics are used in the first semester of calculus, but a few aren't used until later. • Topics from arithmetic. We assume you know these: • Kinds of numbers. Fractions and decimals. We'll refer to integers (whole numbers, either positive, negative, or zero), rational and irrational numbers. The number line • Conventions for arithmetic notation including order of operations (precedence), proper use of parentheses • Expression manipulation. Distributive laws, law of signs • Exponents and laws for exponents • Roots, laws for roots, rational exponents, rationalizing denominators • Absolute value, order (less than, etc.), and their properties • Factorials (e.g., 5! is the product of the integers from 1 through 5) • Topics from geometry • Pythagorean theorem • Similar triangles • Areas of triangles, circles, and other simple plane figures • Perimeters of simple plane figures, circumference of circles • Volumes of spheres, cones, cylinders, pyramids • Surface areas of spheres and other simple solid figures • Topics from algebra. We use algebra constantly. You've got to know algebra well. Topics: • Translating word problems into algebra • Expression manipulation.
Addition, subtraction, and multiplication of polynomials • Rational functions and their domains, least common denominators • Techniques for simplifying algebraic expressions • Factoring quadratic polynomials and other simple polynomials • Techniques for solving linear equations in one unknown • Solving quadratic equations in one unknown, completing the square, quadratic formula • Solving linear equations in two or more unknowns • Techniques for solving inequalities and both equations and inequalities involving absolute value • The concept of function, functional notation and substitution, domain and range of a function • Composition of functions • Uniform motion in a straight line. When objects move with constant velocity, the relation among distance, time, and velocity • Notation and concepts from set theory. We only use a bit of the notation from set theory and only the most basic concepts • Sets, membership in sets, subsets, unions, intersections, empty set • Open and closed intervals and their notations • Topics from analytic geometry. Mainly the basics, straight lines, circles, a little on quadratic functions • Coordinates of points in the plane • Linear equations. Slope-intercept form especially, but also other forms • Distance between two points • Equations of circles, especially the unit circle • Slopes of straight lines, parallel lines • Graphs of functions. Vertical line test • Symmetries of functions, even and odd functions. Transformation of functions • Graph of a quadratic function is a parabola • Graph of y = 1/x is a rectangular hyperbola • Topics from trigonometry. For a review of trigonometry, see "Dave's Short Course in Trig" at http://www.clarku.edu/~djoyce/trig/ • Angle measurement, both degrees and radians, but radians are more important in calculus. Negative angles. • Length of an arc of a circle • Understanding of trig functions of angles, especially sine, cosine, tangent, and secant. 
Trig functions and the unit circle • Right triangles, trig functions sine, cosine, and tangent of acute angles. Values of these trig functions for standard angles of 0, π/6, π/4, π/3, π/2 • Solving right triangles • Obtuse triangles. Law of sines, law of cosines. Solving obtuse triangles • Basic trig identities. Pythagorean identities, trig functions in terms of sines and cosines • Other trig identities. Double angle formulas for sine and cosine, addition formulas for sine and cosine • Exponential functions and logarithms. Although these are topics in algebra, they deserve to be separated for emphasis • Exponential functions. Growth of exponential functions • Laws for exponents. Manipulation of algebraic expressions involving exponents, solving equations involving exponents • Logarithms and their relation to exponential functions • Laws for logs. Manipulation of algebraic expressions involving logs, solving equations involving logs • An understanding of mathematical proof. We'll develop more in calculus. You should be able to follow proofs like the ones you've already seen in geometry, algebra, and your other mathematics courses Back to course page
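As a one-function illustration of a listed algebra topic (solving quadratics via the quadratic formula), a minimal Python sketch, not part of the original page:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

print(solve_quadratic(1, -3, 2))  # (1.0, 2.0)
```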
1,026
4,792
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.671875
4
CC-MAIN-2021-49
latest
en
0.90633
https://physics.stackexchange.com/questions/584132/the-relation-between-chemical-potential-and-gibbs-free-energy-n-mu-g-is-gl
1,713,590,025,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817474.31/warc/CC-MAIN-20240420025340-20240420055340-00328.warc.gz
424,464,006
37,679
# The relation between Chemical potential and Gibbs free energy ($n\mu = G$) is global? The change of Gibbs free energy (for a single-phase, single-constituent system) is $$dG = -SdT + VdP +{\mu}dn$$ By using the fact that $$T$$ and $$P$$ are intensive properties and $$n$$(mole) is an extensive property, $$G = n\mu$$ is derived in my lecture notes. But I am not sure about that, because, with the definition of $$dG$$, $$G$$ is a function of $$(T,P,n)$$. Where is the dependence on $$T$$ and $$P$$? Is this interpretation only valid in constant temperature and pressure situations, like $$G = G_{0}(T,P) + n\mu$$? $$G$$ depends on $$T$$ and $$P$$ through the chemical potential: $$G(T,P,n) = \mu(T,P,n) \cdot n$$ Recall that $$\mu = \left(\frac{\partial G}{\partial n}\right)_{T,P}$$. If $$G$$ is a function of some set of variables, then its derivatives (and in particular, $$\mu$$) are functions of the same set of variables. Note also in this case that $$\left(\frac{\partial \mu}{\partial n}\right)_{T,P} = 0$$, in accordance with the assumptions used to derive $$G = \mu n$$ (though these assumptions do not universally hold for all thermodynamical systems).
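The answer's remark that $G$ depends on $T$ and $P$ through $\mu$ can be fleshed out with the standard Euler-homogeneity sketch (a supplement, not part of the original exchange): since $G$ is extensive in $n$ at fixed intensive $T$ and $P$,

```latex
% Extensivity at fixed T, P:
G(T,P,\lambda n) = \lambda\, G(T,P,n) .
% Differentiate with respect to \lambda and set \lambda = 1:
n \left(\frac{\partial G}{\partial n}\right)_{T,P} = G(T,P,n)
\quad\Longrightarrow\quad
G(T,P,n) = n\,\mu(T,P,n) .
```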
331
1,172
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2024-18
latest
en
0.854313
https://encyclopediaofmath.org/wiki/Euler%E2%80%93Lagrange_equation
1,695,667,793,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233510085.26/warc/CC-MAIN-20230925183615-20230925213615-00048.warc.gz
265,285,036
6,753
# Euler-Lagrange equation (Redirected from Euler–Lagrange equation) for a minimal surface $z=z(x,y)$ The equation $$\left(1+\left(\frac{\partial z}{\partial x}\right)^2\right)\frac{\partial^2z}{\partial y^2}-2\frac{\partial z}{\partial x}\frac{\partial z}{\partial y}\frac{\partial^2z}{\partial x\partial y}+\left(1+\left(\frac{\partial z}{\partial y}\right)^2\right)\frac{\partial^2z}{\partial x^2}=0.$$ It was derived by J.L. Lagrange (1760) and interpreted by J. Meusnier as signifying that the mean curvature of the surface $z=z(x,y)$ is zero. Particular integrals for it were obtained by G. Monge. The Euler–Lagrange equation was systematically investigated by S.N. Bernshtein, who showed that it is a quasi-linear elliptic equation of order $p=2$ and that, consequently, its solutions have a number of properties that distinguish them sharply from those of linear equations. Such properties include, for example, the removability of isolated singularities of a solution without the a priori assumption that the solution is bounded in a neighbourhood of the singular point, the maximum principle, which holds under the same conditions, the impossibility of obtaining a uniform a priori estimate for $z(x,y)$ in an arbitrary compact subdomain of a disc in terms of the value of $z$ at the centre of the disc (that is, the absence of an exact analogue of Harnack's inequality), facts relating to the Dirichlet problem, the non-existence of a non-linear solution defined in the entire plane (the Bernstein theorem), etc. 
The Euler–Lagrange equation can be generalized with respect to the dimension: The equation corresponding to a minimal hypersurface $z=z(x_1,\dots,x_n)$ in $\mathbf R^{n+1}$ has the form $$\sum_{i=1}^n\frac{\partial}{\partial x_i}\left(\frac{\partial z/\partial x_i}{\sqrt{1+|\nabla z|^2}}\right)=0,\quad\nabla z=\left(\frac{\partial z}{\partial x_1},\dots,\frac{\partial z}{\partial x_n}\right).$$ For this equation $(n\geq3)$ the solvability of the Dirichlet problem has been studied, the removability of the singularities of a solution, provided that they are concentrated inside the domain on a set of zero $(n-1)$-dimensional Hausdorff measure, has been proved, and the validity of Bernstein's theorem for $n\leq7$ and the existence of counter-examples for $n\geq8$ has been proved.
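As an illustrative check (not from the article), Scherk's surface z = ln(cos y) − ln(cos x) is a classical solution of the minimal-surface equation above; a pure-Python finite-difference sanity check:

```python
import math

# Scherk's surface z = ln(cos y) - ln(cos x); evaluate the PDE residual
# (1 + z_x^2) z_yy - 2 z_x z_y z_xy + (1 + z_y^2) z_xx
# with central differences -- it should vanish up to discretization error.
def z(x, y):
    return math.log(math.cos(y)) - math.log(math.cos(x))

def residual(x, y, h=1e-4):
    zx  = (z(x + h, y) - z(x - h, y)) / (2 * h)
    zy  = (z(x, y + h) - z(x, y - h)) / (2 * h)
    zxx = (z(x + h, y) - 2 * z(x, y) + z(x - h, y)) / h**2
    zyy = (z(x, y + h) - 2 * z(x, y) + z(x, y - h)) / h**2
    zxy = (z(x + h, y + h) - z(x + h, y - h)
           - z(x - h, y + h) + z(x - h, y - h)) / (4 * h**2)
    return (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx

print(abs(residual(0.3, -0.2)))  # small (well below 1e-5)
```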
625
2,317
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2023-40
longest
en
0.881677
http://www.hindawi.com/journals/afs/2012/635043/
1,472,150,784,000,000,000
application/xhtml+xml
crawl-data/CC-MAIN-2016-36/segments/1471982293922.13/warc/CC-MAIN-20160823195813-00156-ip-10-153-172-175.ec2.internal.warc.gz
495,570,954
47,184
`Advances in Fuzzy SystemsVolume 2012 (2012), Article ID 635043, 5 pageshttp://dx.doi.org/10.1155/2012/635043` Research Article ## Belief Merging and Judgment Aggregation in Fuzzy Setting 1Department of Mathematics and Statistics, Faculty of Management Studies, University of Central Punjab, Lahore, Pakistan 2University of Western Ontario, London, ON, Canada N6A 3K7 Received 12 April 2012; Accepted 28 May 2012 Copyright © 2012 Ismat Beg and Nabeel Butt. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. #### Abstract We explore how judgment aggregation and belief merging in the framework of fuzzy logic can help resolve the “Doctrinal Paradox.” We also illustrate the use of fuzzy aggregation functions in social choice theory. #### 1. Introduction Social choice theory defines “preference aggregation” as forming collective preferences over a given set of alternatives. Likewise, “judgment aggregation” pertains to forming collective judgments on a given set of logically interrelated propositions. This paper extends beyond classical propositional logic into the realm of general multivalued logic, so that we can handle realistic collective decision problems (see Dietrich and List [1, 2], List [3], Beg and Butt [4], and Manzini and Mariotti [5]). List and Pettit [6, 7] were the first to give an axiomatic treatment to the problem associated with judgment aggregation. In their classic example, a set of propositions is expressed in propositional calculus as . The set consists of all assignments of 0 or 1 to the propositions in that are logically consistent. A procedure for judges to decide on the truthfulness of each proposition in amounts to an aggregator that maps . The “Doctrinal Paradox” illustrates that proposition-wise majority rule leads to inconsistent collective decisions. 
This paradox has made the literature on “judgment aggregation” grow appreciably. Most of the discussions on this paradox have been in the domain of social choice theory, and a number of “(im)possibility theorems,” similar to those of Arrow [8] and Sen [9] have been proved. In fact, these theorems show that there cannot exist any judgment aggregation procedure that simultaneously satisfies certain minimal consistency requirements (see Dietrich [10]). List and Pettit [6] have shown that the majority rule is but one member of a class of aggregation procedures that fails to ensure consistency in the set of collective judgments. Van Hees [11] has further generalized the paradox by showing that there is even a larger class of aggregation procedures for which this is true. The aim of this paper is to resolve the paradox and also to illustrate optimal judgment aggregation. We abandon the assumption that individual and collective beliefs necessarily have a binary nature (true or false) and so our analysis is in a fuzzy logic framework. Pigozzi [12] discusses a possibility result in binary logic in which the paradox is avoided at the price of “indecision.” Distance-based aggregation procedures like that of Pigozzi [12] often result in dictatorship. Accordingly, aggregation procedures in fuzzy logic can help us make the collective judgment set more “democratic” in nature. In this paper we try to give the present literature in this area a more realistic touch by using fuzzy logic. The structure of the paper is as follows: Section 2 illustrates an example of the “Doctrinal Paradox” and its reformulation in a fuzzy setting. Section 3 illustrates how the paradox is resolved in a fuzzy framework to find optimal fuzzy aggregation functions. Section 4 further elaborates on the results in Section 3 to present a democratic fuzzy aggregation function. Section 5 presents the entire previous discussion in utility maximization framework. 
The doctrinal paradox can emerge when the members of a group have to make a judgment (in the form of yes or no) on several logically interconnected propositions, and the individually logically consistent judgments need to be combined into a collective decision. For example, consider a set of propositions, where some (the “premises”) are taken to be equivalent to another proposition (the “conclusion”). When majority voting is applied to premises, it may give a different outcome than majority voting applied to conclusion. Suppose that three customers have to decide what response a product launch by a multinational will receive. According to the company, if the price is good (proposition ) and product is attractive (proposition ), then the customer likes the product (proposition ). Now assume that each customer makes a consistent judgment over these propositions , , and as in Table 1. Table 1 Each customer assigns a binary truth value to the propositions , , and which gives rise to the doctrinal paradox. The paradox lies precisely in the fact that the two procedures may lead to contradictory results depending on whether the majority is taken on the individual judgments of and , or whether the majority is calculated on the individual judgments of . Arguably, in some decision problems, propositions are “vague” and hence can have truth values between “true” and “false.” This might be so for “the economy is in a good shape,” as “in a good shape” is not precisely defined. To account for vagueness, one might use a fuzzy logic framework. Let us reformulate the entire problem in a fuzzy logic framework, so that individual judgments can take values on the interval . In this context “” is replaced by the fuzzy Lukasiewicz  t-norm given by . The ordinary implication “” is replaced by the fuzzy Lukasiewicz implication which is defined as follows: One important property that our fuzzy aggregation operator satisfies in this problem is , ,  and in Table 2. 
Table 2 At the same time, let denote the degree of truth of the proposition , and the fuzzy integrity constraint is . Assuming that the customers are “rational,” they never violate the fuzzy integrity constraints (see List [13]). Now by using the above given Lukasiewicz t-norm and fuzzy implication , the constraint can be translated as . Here is a particular rule of inference for individual (see Claussen and Roisland [14]). #### 3. The Doctrinal Paradox and Belief Merging in Fuzzy Framework Given a finite set of individuals, a finite set of propositions over which individuals have to make their judgments is called an agenda. A judgment set for an individual is an -tuple containing degree of truth for each proposition. Let , denote the cardinality of . A profile is an -tuple of individual judgment sets. An aggregation is a function that assigns to each profile a collective judgment set . Here for . Let us denote as the truth value of some proposition for the collective judgment set . Similarly, is the truth value of some proposition for the judgment set . Define as a “dictatorship” if for some and every . We define as “manipulable” if and only if there exists some voter , proposition , and profile such that but for some alternate judgment set . We define as “independent” if and only if for all propositions there is a function such that for all we have Belief merging formally investigates how to aggregate a finite number of belief bases into a collective one. This formal framework consists of a propositional language which is built up from a finite set of propositional letters standing for atomic propositions and the usual connectives (. These are the connectives in fuzzy logic, namely, fuzzy negation , t-norm , t-conorm , and fuzzy implication (see Nguyen and Walker [15]). Let the belief base for the agent be the following set , denotes the cardinality of . Here represents the truth function that maps elements in set to . A belief set is the set . 
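The Łukasiewicz connectives named above lost their formulas during text extraction; assuming the paper uses the standard definitions, they are T(x, y) = max(0, x + y − 1) and I(x, y) = min(1, 1 − x + y), e.g. in Python:

```python
def t_norm(x, y):
    """Lukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def implication(x, y):
    """Lukasiewicz implication: min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)

# The constraint (p AND q) -> r is fully true (value 1) exactly when
# r >= p + q - 1:
p, q, r = 0.8, 0.7, 0.5
print(implication(t_norm(p, q), r))  # 1.0, since 0.5 >= 0.8 + 0.7 - 1
```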
Given a set of integrity constraints IC in a fuzzy framework, maps and IC into a new (collective) belief base . We call this process fuzzy aggregation. An interpretation is a function from to . Let W denote the set of all interpretations. A distance between interpretations is a real-valued function such that for all as follows: (1), (2) if and only if , (3), (4). One possible choice for distance function is the following Euclidean metric: for some real number . Now let us define a belief merging operator in the fuzzy framework which helps us avoid the “Doctrinal Paradox.” For any interpretation and any profile of belief bases , the distance between an interpretation and a profile can now be defined as Our objective is to choose which minimizes this distance and also does not violate any integrity constraint in the fuzzy framework. The distance minimization procedure seeks to minimize a measure of “disagreement” in the society by bringing the collective judgment set as close as possible to the individual judgment sets. We believe that individual “disagreement” brings about individual disutility. Accordingly, we seek to minimize the societal disutility, which is assumed to be the sum of individual disutilities. For this purpose any deviation of the collective judgment from the individual judgment has a penalty in our objective function. We concede that a wide variety of distance and dissimilarity measures exist, like the Manhattan distance, Chebyshev distance, Jaccard dissimilarity, Yule dissimilarity, and so forth. We have chosen Euclidean distance only for the sake of illustration in this paper. Choosing any particular distance or dissimilarity measure is solely at our discretion provided that it satisfies certain normative principles. See Section 5 to view our distance minimization procedure in a utility maximization framework. Let be any arbitrary interpretation. In this case, where we have and denotes the cardinality of .
We can now show that if is Hamming distance then the doctrinal paradox can be avoided in a binary logic framework (see Pigozzi [12]). In this case, is a generalization of Hamming distance, and we can use it in a fuzzy framework to help us avoid the doctrinal paradox. By conforming to democratic values, we can formulate the fuzzy aggregation as an optimization problem which can be stated as follows. Minimize subject to the fuzzy integrity constraints IC. Here, consider that where denotes the element of the belief base . The above optimization problem helps us avoid the doctrinal paradox, and we can also find an optimal fuzzy aggregation function. We say that an aggregation function is optimal if the collective judgment set is as close as possible to the individual judgments. Finding the collective social choice function in Table 2 now becomes an optimization problem which can have multiple optimal solutions. The problem in Table 2 is framed in Mathematica language assuming that in . The optimal fuzzy aggregation function gives the solution for Table 2 as . The fact that there is at least one solution to the problem shows that the doctrinal paradox cannot occur in this case. In fact, the paradox is resolved, not at the price of “indecision” or “dictatorship” (see Pigozzi [12]). Ideally we would like our aggregation procedure to be strategy proof. Dietrich and List [16] prove impossibility theorems similar to the Gibbard-Satterthwaite theorem on strategy-proof aggregation rules. Given these theorems we do not claim that our liberal distance-based aggregation procedure is strategy proof. In fact, Dietrich [10] has proved that independence and monotonicity are two properties of an aggregator that result in strategy proofness. Since we do not claim that our distance-based aggregator is independent and monotone simultaneously, the strategy proofness of our aggregator is unclear.
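The distance-minimizing aggregation described above can be sketched by brute force; the three-judge profile, the 0.1 truth-value grid, and the use of squared Euclidean distance are illustrative assumptions, not the paper's Mathematica setup:

```python
import itertools
import math

# One truth-value triple (p, q, r) per judge, echoing the Table 2 pattern.
profiles = [(1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def consistent(p, q, r, tol=1e-9):
    # Lukasiewicz integrity constraint: (p AND q) -> r is fully true
    # exactly when r >= p + q - 1.
    return r >= p + q - 1 - tol

def disagreement(candidate):
    # Sum of squared Euclidean distances to the individual judgment sets.
    return sum(math.dist(candidate, judge) ** 2 for judge in profiles)

grid = [i / 10 for i in range(11)]
best = min(
    (c for c in itertools.product(grid, repeat=3) if consistent(*c)),
    key=disagreement,
)
print(best, round(disagreement(best), 4))
```

Ties can occur, echoing the nonuniqueness the paper itself discusses; `min` simply returns the first optimum in grid order.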
However, the nature of our objective function in the optimization problem is such that if an individual were to submit an insincere judgment (in an attempt to manipulate the collective judgment), any deviation of the collective judgment set from this insincere judgment has a penalty in the objective function. In this sense there appears to be a "partial" corrective mechanism whereby our aggregation procedure is not easily prone to manipulation.

Now consider Table 3. For simplicity, assume that there is a "small" economy with three individuals and a set of three goods. The individual binary relations over this set, namely the three individual preference relations, are linear orders. Any optimal fuzzy social preference aggregation function must map the individual preferences into a social preference set that must be a linear order. This accordingly becomes an optimization problem in which we minimize the sum of the distances of the social preference from the individual preferences, using the distance defined earlier, subject to the fuzzy integrity constraint of a linear order. Here, preference aggregation is modeled as a case of judgment aggregation by representing preference orderings as truth values in fuzzy logic. Suppose that the individuals have the following preference structure.

Table 3

The problem in Table 3 is framed in the Mathematica language assuming that in . The optimal fuzzy preference function gives the solution for Table 3 as . Note, however, that the optimal solution is not necessarily unique. We admit that our aggregation procedure suffers from a nonuniqueness problem (in other words, "indecision") whereby the aggregation function could become set-valued. Yet it is an improvement upon Pigozzi's [12] aggregation procedure (defined in the context of binary logic) because cases of dictatorship are highly unlikely in our procedure. In fact, we could devise an appropriate tie-breaking procedure in case our optimal solution is not unique. Suppose we have a judgment set .
Such a set might violate the fuzzy integrity constraints. A tie-breaking procedure would narrow down the solutions by picking those which are at a minimal distance from this set. Another useful procedure would be to select an aggregation function from the optimal solutions which is at minimal disagreement with the other aggregation functions, using appropriate dissimilarity and distance measures. However, we must confess that such a remedy does not ensure uniqueness of our final solution. An appropriate social welfare function can be useful in such cases.

The important question is how well behaved our aggregation operator is. On the one hand, we want the collective judgment to be responsive to the judgments of individuals. On the other hand, we want the collective judgment to obey rationality constraints. We note that our fuzzy aggregation procedure satisfies social axioms like unanimity (Pareto conditions), compensativeness, anonymity, nondictatorship, universal domain, and collective rationality. Other properties like monotonicity and citizen sovereignty are, however, unclear. It is worthwhile to compare our aggregation operator to other operators in the fuzzy aggregation literature. For example, the fuzzy LAMA operator (see Peláez and Doña [17]) has unrestricted domain, anonymity, monotonicity, unanimity, and citizen sovereignty. Ironically, it does not ensure collective rationality. Yager [18] has introduced order weighted averaging (OWA) operators which are idempotent, monotone, neutral, and compensative, and yet again do not ensure collective rationality.

#### 4. Democratic Fuzzy Aggregation Function

The task of aggregating judgments arises in many situations like promotion committees, corporations complying with shareholders, governments bound to a party's principles, and so forth. In each case we can think of a collective judgment set which is the weighted average of the individual judgment sets.
The closer our collective judgment set is to this average, we believe, the more legitimate and democratic is our final judgment. In other words, the decision-making group is displaying a degree of "integration" (see List and Pettit [19]). There can be different possible interpretations of the term "closeness." The interpretation used in our paper is solely for the purpose of illustration and is by no means exhaustive. A democratic fuzzy aggregation imparts "anonymity" to ensure that all individuals have equal weight in determining the collective sets of judgments. Define a particular fuzzy averaging operator as follows: where is the weight assigned to the truthfulness of . Similarly, we can apply the same averaging operator to individual profiles. Now we assert that an optimal fuzzy aggregation function is "democratic" if the solution is as close as possible to the average of the individual judgments. Such a view is held only for the purposes of illustration in our paper, and there could be different possible ways of "democratization." Consider Table 1. Assuming that each customer gives an equally truthful judgment, we can calculate the average of the individual judgments as follows: A decision-making group is exposed to a "rationality challenge" and a "knowledge challenge" whenever it is appropriate to "personify" it (see List and Pettit [19]). The point (0.500, 0.4667, 0.267) is itself not used as a solution because it might violate the fuzzy integrity constraints. A fuzzy aggregation function is democratic as well as optimal if we make the solution of Table 1 as close as possible to the point (0.500, 0.4667, 0.267). This could easily be achieved by adding the following penalty to the objective function, where the penalty coefficient is the degree of democracy. The problem in Table 2 can be "democratized" in the Mathematica language assuming that in . In this case, the optimal fuzzy aggregation function gives the solution as .
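The "democratized" objective can be sketched as the original summed distance plus a penalty pulling the solution toward the profile average. The penalty form, the coefficient name lam, and the sample profile below are our own illustration of the idea:

```python
def democratized_cost(w, profile, lam=1.0):
    """Summed distance from w to the individual judgment sets, plus
    lam (the 'degree of democracy') times the distance from w to the
    plain average of the profile."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    n = len(profile)
    avg = [sum(col) / n for col in zip(*profile)]
    return sum(dist(w, j) for j in profile) + lam * dist(w, avg)

# A hypothetical three-customer profile whose column averages are
# roughly the point (0.500, 0.4667, 0.267) discussed above:
profile = [(1.0, 0.8, 0.4), (0.5, 0.6, 0.2), (0.0, 0.0, 0.2)]
avg = [sum(col) / len(profile) for col in zip(*profile)]
print(democratized_cost(avg, profile, lam=2.0))
```

At the average point itself the penalty term vanishes, so raising lam only matters for candidate solutions away from the average.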
It remains a good exercise to "democratize" the optimal social preference aggregation function for Table 3.

#### 5. Optimal Judgment Aggregation Viewed as a Fuzzy Utility Maximization

We have assumed that agents have only "epistemic" preferences; that is, they only care about the "distance" between the collective judgment set which is collectively adopted and the individual judgment set they personally favor (see Van Hees [11]). This distance was originally measured as for any real number , where is a finite set of propositional letters standing for atomic propositions. We rescale this distance to form a new distance measure as follows. This rescaling makes the distance lie in [0, 1]. Let be the profile of individual . Individual receives a fuzzy "utility," if the collective judgment set is collectively accepted, given by the formula , where n is the strong fuzzy negation (see Nguyen and Walker [15]) that satisfies the following: (i) n(0) = 1 and n(1) = 0, (ii) n is nonincreasing, and (iii) n(n(x)) = x for all x. For simplicity's sake, assume that n(x) = 1 − x. It now follows that choosing a collective judgment set which minimizes the sum of the (rescaled) distances is equivalent to choosing one which maximizes the sum of the individual utilities. Therefore, the optimization problems in Table 3 can be viewed as social utility maximization problems.

#### 6. Conclusion

Fuzzy aggregation procedures are useful in constructing optimal fuzzy social preference aggregation functions, as illustrated in the preceding discussion. Finding such optimal fuzzy preference structures could have great applications in social choice theory by bringing it closer to reality. The real challenge is to construct aggregation procedures that satisfy desirable social properties and at the same time do not violate collective rationality. The authors believe that modeling impossibility theorems in a fuzzy setting will have tremendous applications in the field of belief merging and judgment aggregation.
#### Acknowledgment

The present version of the paper owes much to the precise and kind remarks of the learned referees.

#### References

1. F. Dietrich, "A generalised model of judgment aggregation," Social Choice and Welfare, vol. 28, no. 4, pp. 529–565, 2007.
2. F. Dietrich and C. List, "Where do preferences come from?" International Journal of Game Theory. In press.
3. C. List, "Free will, determinism and the possibility of doing otherwise," working paper, London School of Economics, 2011, http://personal.lse.ac.uk/list/PDF-files/Freewill.pdf.
4. I. Beg and N. Butt, (Im)Possibility Theorems in Fuzzy Framework, Critical Review, vol. 4, Society for Mathematics of Uncertainty, 2010.
5. P. Manzini and M. Mariotti, "Moody choice," working paper, University of St. Andrews, 2012.
6. C. List and P. Pettit, "Aggregating sets of judgments: an impossibility result," Economics and Philosophy, vol. 18, pp. 89–110, 2002.
7. C. List and P. Pettit, Group Agency: The Possibility, Design and Status of Corporate Agents, Oxford University Press, Oxford, UK, 2011.
8. K. J. Arrow, Social Choice and Individual Values, John Wiley & Sons, New York, NY, USA, 2nd edition, 1963.
9. A. Sen, Collective Choice and Social Welfare, Holden Day, San Francisco, Calif, USA, 1970.
10. F. Dietrich, "Judgment aggregation: (im)possibility theorems," Journal of Economic Theory, vol. 126, no. 1, pp. 286–298, 2006.
11. M. Van Hees, "The limits of epistemic democracy," Social Choice and Welfare, vol. 28, no. 4, pp. 649–666, 2007.
12. G. Pigozzi, "Belief merging and the discursive dilemma: an argument-based account to paradoxes of judgment aggregation," Synthese, vol. 152, no. 2, pp. 285–298, 2006.
13. C. List, "Group knowledge and group rationality: a judgment aggregation perspective," Episteme, vol. 2, no. 1, pp. 25–38, 2005.
14. C. A. Claussen and O. Roisland, "Collective economic decisions and the discursive paradox," Norges Bank working paper, 2005.
15. H. T. Nguyen and E. Walker, Fuzzy Logic, Chapman & Hall/CRC Press, 3rd edition, 2006.
16. F. Dietrich and C. List, "Strategy-proof judgment aggregation," Economics and Philosophy, vol. 23, no. 3, pp. 269–300, 2007.
17. J. I. Peláez and J. M. Doña, "LAMA: a linguistic aggregation of majority additive operator," International Journal of Intelligent Systems, vol. 18, no. 7, pp. 809–820, 2003.
18. R. R. Yager, "On ordered weighted averaging operators in multi-criterion decision making," IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 183–190, 1988.
19. C. List and P. Pettit, "On the many as one: a reply to Kornhauser and Sager," Philosophy and Public Affairs, vol. 33, no. 4, pp. 377–390, 2005.
https://www.excelforum.com/excel-formulas-and-functions/1325634-formula-to-look-at-a-different-cell-for-criteria-if-a-cell-is-blank.html
# Formula to look at a different cell for criteria if a cell is blank

1. ## Formula to look at a different cell for criteria if a cell is blank

Hi Everyone! I am trying to automate a template so that when a company and labor category are selected, the rate will populate, limiting a person from entering the wrong rate. Since there are 2 different criteria, I decided to use a SUMIFS formula:

=SUMIFS('Contract Rates'!D3:D119,'Contract Rates'!A3:A119,A10,'Contract Rates'!B3:B119,B10)

which works, but my problem is with blank cells. The format we send to our client does not have the company name listed next to every labor category, and I would really like to keep it that way. Is there a way to tell cell D11: if cell A11 is blank, look at cell A10 for the company's name? I tried nesting an IF formula with it, but it just put what was in cell A10 instead of looking for the rate, so I removed it. Attached is a workbook example. Any insight would be greatly appreciated!

2. ## Re: Formula to look at a different cell for criteria if a cell is blank

In D10:

``Please Login or Register to view this content.``

With LOOKUP(2,1/($A$7:A10<>""),$A$7:A10) instead of A10, to refer to the last cell that is not blank.

3. ## Re: Formula to look at a different cell for criteria if a cell is blank

Thank you very much bebo!
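For readers who want the same logic outside Excel: LOOKUP(2,1/(range<>""),range) simply returns the last non-blank cell in the range, i.e. a forward fill of the company column followed by an ordinary lookup. A rough Python sketch of that idea (the column data and rate table below are made up for illustration):

```python
def fill_down(column):
    """Replace blanks with the most recent non-blank value above,
    mirroring LOOKUP(2, 1/(range<>""), range) from the thread."""
    last, out = None, []
    for cell in column:
        if cell not in (None, ""):
            last = cell
        out.append(last)
    return out

def rate_for(rates, company_col, labor_col, row):
    """Look up the rate once blanks in the company column are filled down.
    rates maps (company, labor category) -> rate; an illustrative stand-in
    for the SUMIFS over the 'Contract Rates' sheet."""
    company = fill_down(company_col)[row]
    return rates.get((company, labor_col[row]), 0)

rates = {("Acme", "Engineer"): 120, ("Acme", "Analyst"): 95}
companies = ["Acme", "", ""]            # company listed only on the first row
labor = ["Engineer", "Analyst", "Engineer"]
print([rate_for(rates, companies, labor, r) for r in range(3)])  # [120, 95, 120]
```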
https://jtnb.centre-mersenne.org/item/JTNB_1997__9_1_97_0/
Linear forms in the logarithms of three positive rational numbers

Journal de théorie des nombres de Bordeaux, Volume 9 (1997) no. 1, pp. 97-136.

In this paper we prove a lower bound for the linear dependence of three positive rational numbers under certain weak linear independence conditions on the coefficients of the linear forms. Let $\Lambda = b_2 \log \alpha_2 - b_1 \log \alpha_1 - b_3 \log \alpha_3 \ne 0$ with $b_1, b_2, b_3$ positive integers and $\alpha_1, \alpha_2, \alpha_3$ positive multiplicatively independent rational numbers greater than $1$. Let $\alpha_j = \alpha_{j1}/\alpha_{j2}$ with $\alpha_{j1}, \alpha_{j2}$ coprime positive integers $(j = 1, 2, 3)$. Let $a_j \ge \max\{\alpha_{j1}, e\}$ and assume that $\gcd(b_1, b_2, b_3) = 1$. Let $b' = \left(\frac{b_2}{\log \alpha_1} + \frac{b_1}{\log \alpha_2}\right)\left(\frac{b_2}{\log \alpha_3} + \frac{b_3}{\log \alpha_2}\right)$ and assume that $B \ge \max\{10, \log b'\}$. We prove that either $\{b_1, b_2, b_3\}$ is $(c_4, B)$-linearly dependent over $\mathbb{Z}$ (with respect to $(a_1, a_2, a_3)$) or $\Lambda > \exp\bigl\{-C B^2 \prod_{j=1}^{3} \log a_j\bigr\}$, where $c_4$ and $C = c_1 c_2 \log \rho + \delta$ are given in the tables of Section 6.
Here $b_1, b_2, b_3$ are said to be $(c, B)$-linearly dependent over $\mathbb{Z}$ if $d_1 b_1 + d_2 b_2 + d_3 b_3 = 0$ for some $d_1, d_2, d_3 \in \mathbb{Z}$, not all $0$, with either (i) $0 < |d_2| \le cB \log a_2 \min\{\log a_1, \log a_3\}$ and $|d_1|, |d_3| \le cB \log a_1 \log a_3$, or (ii) $d_2 = 0$ and $|d_1| \le cB \log a_1 \log a_2$ and $|d_3| \le cB \log a_2 \log a_3$. In particular, we obtain $c_4 < 9146$ and $C < 422{,}321$ for all values of $B \ge 10$, and for $B \ge 100$ we have $c_4 \le 5572$ and $C \le 260{,}690$. More complete information is given in the tables in Section 6. We prove this theorem by modifying the methods of P. Philippon, M. Waldschmidt, G. Wüstholz, et al. In particular, using a combinatorial argument, we prove that either a certain algebraic variety has dimension $0$ or $\{b_1, b_2, b_3\}$ are linearly dependent over $\mathbb{Z}$ where the dependence has small coefficients. This allows us to improve Philippon's zero estimate, leading to the interpolation determinant being non-zero under weaker conditions.
@article{JTNB_1997__9_1_97_0,
author = {Curtis D. Bennett and Josef Blass and A. M. W. Glass and David B. Meronk and Ray P. Steiner},
title = {Linear forms in the logarithms of three positive rational numbers},
journal = {Journal de th\'eorie des nombres de Bordeaux},
pages = {97--136},
publisher = {Universit\'e Bordeaux I},
volume = {9},
number = {1},
year = {1997},
zbl = {0905.11032},
mrnumber = {1469664},
language = {en},
url = {https://jtnb.centre-mersenne.org/item/JTNB_1997__9_1_97_0/}
}

[1] A.
Baker, The theory of linear forms in logarithms, in Transcendence Theory: Advances and Applications" Academic Press, London (1977), 1-27. | MR | Zbl [2] A. Baker and G. Wüstholz, Logarithmic forms and group varieties, J. reine angew. Math. 442 (1993),19-62. | MR | Zbl [3] J. Blass, A.M.W. Glass, D.K. Manski, D.B. Meronk, and R.P. Steiner, Constants for lower bounds for linear forms in the logarithms of algebraic numbers I, II, Acta Arith. 55 (1990), 1-22, corrigendum, ibid 65 (1993). | MR | Zbl [4] D.W. Brownawell and D.W. Masser, Multiplicity estimates for analytic functions II, Duke Math. Journal 47 (1980), 273-295. | MR | Zbl [5] L. Denis, Lemmes des zéros et intersections., Approximations diophantiennes et nombres transcendants Luminy (1990), éd. P. Philippon, de Gruyter (1992), 99-104. | MR | Zbl [6] Dong Ping Ping, Minorations de combinaisons linéaires de logarithmes de nombres algébriques p-adiques, C. R. Acad. Sci. Paris, Sér.1315 (1992), 103-106. | Zbl [7] A.O. Gel'Fond, Transcendental and algebraic numbers, (Russian). English trans.: Dover, New York (1960). | MR | Zbl [8] A.M.W. Glass, D.B. Meronk, T. Okada, and R. Steiner, A small contribution to Catalan's equation, J. Number Theory 47 (1994), 131-137. | MR | Zbl [9] R. Hartshorne, Algebraic Geometry, Graduate Texts in Math. 52, Springer Verlag, Heidelberg, 1977. | MR | Zbl [10] E. Kunz, Introduction to Commutative Algebra and Algebraic Geometry (German), English trans.: Birkhaüser, Boston (1985). | MR | Zbl [11] M. Laurent, Sur quelques résultats récents de transcendance, Astérisque 198-200 (1991), 209-230. | MR | Zbl [12] M. Laurent, Hauteurs de matrices d'interpolation, Approximations diophantiennes et nombres transcendants, Luminy (1990), éd. P. Philippon, de Gruyter (1992), 215-238. | MR | Zbl [13] M. Laurent, Linear forms in two logarithms and interpolation determinants, Acta Arith. LXVI (1994), 181-199, or Appendix to [33]. | MR | Zbl [14] M. Laurent, M. Mignotte, and Y.V. 
Nesterenko, Formes linéaires en deux logarithmes et determinants d'interpolation, J. Number Theory 55 (1995), 285-321. | MR | Zbl [15] D.W. Masser, On polynomials and exponential polynomials in several variables, Inv. Math. 63 (1981), 81-95. | MR | Zbl [16] D.W. Masser and G. Wüstholz, Zero estimates on group varieties I, Inv. Math. 64 (1981), 489-516. | MR | Zbl [17] D.W. Masser and G. Wüstholz, Zero estimates on group varieties II, Inv. Math. 80 (1985), 233-267. | MR | Zbl [18] D.W. Masser and G. Wüstholz, Fields of large transcendence degree, Inv. Math. 72 (1983), 407-464. | MR | Zbl [19] M. Mignotte and M. Waldschmidt, Linear forms in two logarithms and Schneider's method III, Ann. Fac. Sci. Toulouse 97 (1989), 43-75. | Numdam | MR | Zbl [20] Y.V. Nesterenko, Estimates for the orders of zeros of functions of a certain class and their applications in the theory of transcendental numbers, Math. USSR Izv. 11 (1977), 239-270. | Zbl [21] P. Philippon, Lemmes de zéros dans les groupes algébriques commutatifs, Bull. Soc. Math. France 114 (1986), 355-383, et 115 (1987), 397-398. | Numdam | MR | Zbl [22] P. Philippon and M. Waldschmidt, Lower bounds for linear forms in logarithms, Chapter 18 of New Advances in Transcendence Theory, Proc. Conf. Durham (1986), ed. A. Baker, Cambridge Univ. Press, Cambridge (1988), 280-312. | MR | Zbl [23] P. Philippon and M. Waldschmidt, Formes linéaires de logarithmes elliptiques et mesures de transcendance, Théorie des nombres, Proc. Conf. Québec City 1987, de Gruyter, Berlin, (1989), 798-805. | MR | Zbl [24] A.J. Van Der Poorten, Linear forms in logarithms in the p-adic case, Transcendence Theory: Advances and Applications, Academic Press, London (1977), 29-57. | MR | Zbl [25] E. Reyssat, Approximation algébrique de nombres liés aux fonctions elliptiques et exponentielle, Bull. Soc. Math. France 108 (1980), 47-79. | Numdam | MR | Zbl [26] T. Shorey and R. Tijdeman, Exponential Diophantine Equations, Cambridge Tracts in Mathematics, No. 
87, Cambridge Univ. Press, Cambridge (1986). | MR | Zbl [27] D. Sinnou, Minorations de formes linéaires de logarithmes elliptiques., Publ. Math. de l'Univ. Pierre et Marie Curie, No. 106, Problèmes diophantiennes 1991-1992, exposé 3. [28] N. Tzanakis and B.M.M. De Weger, On the practical solution of the Thue equation, J. Number Theory 31 (1989), 99-132. | MR | Zbl [29] M. Waldschmidt, A lower bound for linear forms in logarithms, Acta Arith. 37 (1980), 257-283. | MR | Zbl [30] M. Waldschmidt, Nouvelles méthodes pour minorer des combinaisons linéaires de logarithmes de nombres algébriques, Sém. Th. Nombres Bordeaux 3 (1991), 129-185. | Numdam | MR | Zbl [31] M. Waldschmidt, Nouvelles méthodes pour minorer des combinaisons linéaires de logarithmes de nombres algébriques II, Problèmes Diophantiens 1989-1990, Publ. Univ. Pierre et Marie Curie (Paris VI) 93 (1991), 1-36. [32] M. Waldschmidt, Minorations de combinaisons linéaires de logarithmes de nombres algébriques, Canadian J. Math 45 (1993), 176-224. | MR | Zbl [33] M. Waldschmidt, Linear independence of logarithms of algebraic numbers, Matscience Lecture Notes, Madras (1992). | Zbl [34] G. Wüstholz, Recent progress in transcendence theory, Springer Lecture Notes in Math., Springer Verlag, Heidelberg, 1068 (1984), 280-296. | MR | Zbl [35] G. Wüstholz, A new approach to Baker's theorem on linear forms in logarithms I, Lecture Notes in Math., Springer Verlag, Heidelberg 1290 (1987), 189-202. | MR | Zbl [36] G. Wüstholz, A new approach to Baker's theorem on linear forms in logarithms II, Lecture Notes in Math., Springer Verlag, Heidelberg 1290 (1987), 203-211. | MR | Zbl [37] G. Wüstholz, A new approach to Baker's theorem on linear forms in logarithms, III, Chapter 25 of New Advances in Transcendence Theory, Proc. Conf. Durham (1986), ed. A. Baker, Cambridge Univ. Press, Cambridge (1988), 399-410. | MR | Zbl [38] Kunrui Yu., Linear forms in p-adic logarithms I., Acta Arith. 53 (1989), 107-186. 
| MR | Zbl [39] Kunrui Yu, Linear forms in p-adic logarithms II, Compositio Math. 74 (1990), 15-113. | Numdam | MR | Zbl
https://tutorial.eyehunts.com/python/python-set-operations-basics-with-example-code/
# Python set operations | Basics with example code

Python sets have mathematical set operations like union, intersection, difference, and symmetric difference. You can do these operations using operators or inbuilt methods. See the operators for set operations below:

• | for union
• & for intersection
• - for difference
• ^ for symmetric difference

## Python set operations examples

Simple example code.

### Set Union, S1|S2 operation

Union is performed using the | operator or using the union() method.

``````
fib = {1, 1, 2, 3, 5, 8}
prime = {2, 3, 5, 7, 11}

print(fib | prime)

# or using method
res = fib.union(prime)
print(res)
``````

Output: {1, 2, 3, 5, 7, 8, 11}

### Set Intersection, S1&S2 operation

The intersection is performed using the & operator or using the intersection() method.

``````
fib = {1, 1, 2, 3, 5, 8}
prime = {2, 3, 5, 7, 11}

print(fib & prime)

# or using method
res = fib.intersection(prime)
print(res)
``````

Output: {2, 3, 5}

### Set Difference, S1-S2 operation

The difference is performed using the - operator or using the difference() method.

``````
fib = {1, 1, 2, 3, 5, 8}
prime = {2, 3, 5, 7, 11}

print(fib - prime)

# or using method
res = fib.difference(prime)
print(res)
``````

Output: {8, 1}

### Set Symmetric Difference, S1^S2 operation

The symmetric difference is performed using the ^ operator or using the symmetric_difference() method.
``````
fib = {1, 1, 2, 3, 5, 8}
prime = {2, 3, 5, 7, 11}

print(fib ^ prime)

# or using method
res = fib.symmetric_difference(prime)
print(res)
``````

Output: {1, 7, 8, 11}

Sets and frozen sets support the following operators:

``````
key in s       # containment check
key not in s   # non-containment check
s1 == s2       # s1 is equivalent to s2
s1 != s2       # s1 is not equivalent to s2
s1 <= s2       # s1 is a subset of s2
s1 < s2        # s1 is a proper subset of s2
s1 >= s2       # s1 is a superset of s2
s1 > s2        # s1 is a proper superset of s2
s1 | s2        # the union of s1 and s2
s1 & s2        # the intersection of s1 and s2
s1 - s2        # the set of elements in s1 but not s2
s1 ^ s2        # the set of elements in precisely one of s1 or s2
``````

Do comment if you have any doubts or suggestions on this Python set basic tutorial.

Note: IDE: PyCharm 2021.3.3 (Community Edition), Windows 10, Python 3.10.1

All Python examples are in Python 3, so they may differ from Python 2 or upgraded versions.
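The comparison operators listed above can be checked directly in an interpreter; a quick self-contained example:

```python
s1 = {1, 2}
s2 = {1, 2, 3}

print(2 in s1)    # True  -> containment check
print(s1 <= s2)   # True  -> s1 is a subset of s2
print(s1 < s2)    # True  -> proper subset, since s1 != s2
print(s1 <= s1)   # True  -> every set is a subset of itself...
print(s1 < s1)    # False -> ...but not a proper subset of itself
print(s1 ^ s2)    # {3}   -> elements in exactly one of the two sets
```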
n-pax.com
## 8D Problem Solving Model - Auto NCR

Posted on 2021-11-20

It is normal for organizations to stumble into challenges as they grow. The challenge starts with correctly identifying the problem, and finding effective solutions is not easy either. Trying out several techniques that can guide an organization toward better and more effective solutions is one way to get the best possible results. Nowadays, there are established tools that can help you become a better problem-solver.

• Creative thinking
• Rational thinking
• Decision thinking
• Risk analysis
• Check-sheets and Work Instructions
• Pareto Diagrams and Trend Charts
• Process Flow Diagrams, FMEA, and Control Plans
• Cause and Effect Diagrams
• Dot Plots and Histograms
• Scatter Plots and Analysis of Variation
• Control Charts
• Advanced Statistical and Data Analysis Tools
• Simulation
• Regression Analysis
• Designed Experiments

The above are just some of the available tools, but when they are used within a solid procedure or technique, problem solving becomes even more effective. One such technique that I am going to introduce to you is the "8 Disciplines of Problem Solving." Now let's look at what this technique is all about.

### History

Eight disciplines problem solving (8D) is a method developed at Ford Motor Company used to approach and resolve problems, typically employed by engineers or other professionals. Focused on product and process improvement, its purpose is to identify, correct, and eliminate recurring problems. It establishes a permanent corrective action based on statistical analysis of the problem and on the origin of the problem by determining the root causes.
Source: https://en.wikipedia.org/wiki/Eight_disciplines_problem_solving Usage • Customer complaints on the product • Internal defects, failures, and scrap are present • Regulation or safety issues have been discovered Benefits • Improved team-oriented problem-solving skills • Expanded knowledge of a structured framework for problem solving • Better understanding of how to use the basic statistical tools needed for problem solving • Improved effectiveness and efficiency at problem solving • A practical understanding of Root Cause Analysis (RCA) • The problem-solving effort may be adopted into the processes and procedures of the organization • Improved skills for implementing corrective action • Better ability to identify necessary systemic changes and the subsequent inputs for change • More candid and open communication in problem-solving discussions, increasing effectiveness • An improvement in management's understanding of problems and problem resolution The eight disciplines (8D) model is a problem-solving approach typically employed by quality engineers or other professionals, and is most commonly used by the automotive industry but has also been successfully applied in healthcare, retail, finance, government, and manufacturing. The purpose of the 8D methodology is to identify, correct, and eliminate recurring problems, making it useful in product and process improvement. The 8D problem solving model establishes a permanent corrective action based on statistical analysis of the problem and focuses on the origin of the problem by determining its root causes. Although it originally comprised eight stages, or disciplines, the eight disciplines system was later augmented by an initial planning stage. How to Use the 8D approach D0: Plan - Plan for solving the problem and determine the prerequisites. D1: Use a team - Select and establish a team of people with product/process knowledge. 
D2: Define and describe the problem - Specify the problem by identifying in quantifiable terms the who, what, where, when, why, and how for the problem. D3: Develop interim containment plan; implement and verify interim actions - Define and implement containment actions to isolate the problem from any customer. D4: Determine, identify, and verify root causes and escape points - Identify all applicable causes that could explain why the problem occurred. Also identify why the problem was not noticed at the time it occurred. All causes shall be verified or proved, not determined by fuzzy brainstorming. One can use 5 Whys and cause and effect diagrams to map causes against the effect or problem identified. D5: Choose and verify permanent corrections (PCs) for problem/nonconformity - Through preproduction programs, quantitatively confirm that the selected correction will resolve the problem for the customer. D6: Implement and validate corrective actions - Define and implement the best corrective actions (CA). D7: Take preventive measures - Modify the management systems, operation systems, practices, and procedures to prevent recurrence of this and all similar problems. D8: Congratulate your team - Recognize the collective efforts of the team. The team needs to be formally thanked by the organization. Source: https://asq.org/quality-resources/eight-disciplines-8d NXPERT ONE - Auto NCR NXPERT One Auto NCR is a systematic way of managing the 8D procedures of your entire organization. Here are some features that will help you conduct this method in a very efficient way. Investigate and report NXPERT ONE assigns new issues to both internal employees and external suppliers instantly. NXPERT ONE guides the user through the 8D investigation workflow: investigation request, containment, root cause analysis, and permanent solution deployment. Full 8D investigation reports can be generated instantly by a single mouse click. 
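The D0-D8 sequence above is strictly ordered, which makes it natural to model an investigation as a small checklist-style state machine. The sketch below is purely illustrative: the class name and behaviour are invented for this article and are not part of NXPERT ONE or any real tool.

```python
# Illustrative sketch only: models the ordered D0-D8 workflow
# described above. Not NXPERT ONE's actual API.
DISCIPLINES = [
    "D0: Plan", "D1: Use a team", "D2: Define the problem",
    "D3: Interim containment", "D4: Root cause analysis",
    "D5: Permanent corrections", "D6: Corrective actions",
    "D7: Preventive measures", "D8: Congratulate the team",
]

class EightDReport:
    """Tracks one investigation; disciplines are completed in order."""

    def __init__(self, issue):
        self.issue = issue
        self.completed = []  # list of (stage, findings) pairs

    def complete(self, findings):
        # Record findings for the next pending discipline.
        if len(self.completed) == len(DISCIPLINES):
            raise ValueError("investigation already closed")
        stage = DISCIPLINES[len(self.completed)]
        self.completed.append((stage, findings))
        return stage

    @property
    def status(self):
        done = len(self.completed)
        return "closed" if done == len(DISCIPLINES) else "at " + DISCIPLINES[done]
```

A report starts "at D0: Plan" and only reaches "closed" after all nine stages have recorded findings, mirroring the sequential workflow the method prescribes.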
Visibility of status, ownership, and actions Manual or paper-based issue investigation processes can be inefficient, slow, and difficult to measure. NXPERT ONE provides instant issue management for your customers and suppliers. Investigation status is instantly available for review to make tracking problem resolution easy. NXPERT ONE also provides total visibility for you and your improvement teams. It captures issues, conducts investigations, implements changes, and tracks status instantly. Users get to see who is responsible and what has been done. It also provides a simple user interface that helps track each investigation through to closure.
1,224
6,261
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.859375
3
CC-MAIN-2024-10
latest
en
0.904737
https://codegolf.stackexchange.com/questions/13073/find-the-optimal-set-of-weights-to-add-to-a-certain-set-of-weights
1,723,728,303,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641291968.96/warc/CC-MAIN-20240815110654-20240815140654-00308.warc.gz
126,413,121
46,957
# Find the optimal set of weights to add to a certain set of weights In this challenge, you will receive a comma-separated list of weights as input, such as 1,3,4,7,8,11 And you must output the smallest number of weights that can add up to that set. For example, the output for this set would be 1,3,7 Because you could represent all of those weights with just those three: 1 = 1 3 = 3 1+3 = 4 7 = 7 1+7 = 8 1+3+7 = 11 There may be more than one solution. For example, your solution for the input 1,2 could be 1,1 or 1,2. As long as it finds the minimum number of weights that can represent the input set, it is a valid solution. Weights may not be used more than once. If you need to use one twice, you must output it twice. For example, 2,3 is not a valid solution for 2,3,5,7 because you can't use the 2 twice for 2+2+3=7. Input is guaranteed not to have duplicated numbers. This is code-golf, so shortest code by character count wins. Network access is forbidden (so none of your "clever" wget solutions @JohannesKuhn cough cough) ;) Simplest cases: 1,5,6,9,10,14,15 => 1,5,9 7,14,15,21,22,29 => 7,14,15 4,5,6,7,9,10,11,12,13,15,16,18 => 4,5,6,7 2,3,5,7 => 2,2,3 or 2,3,7 And some trickier ones: 10,16,19,23,26,27,30,37,41,43,44,46,50,53,57,60,61,64,68,71,77,80,84,87 => 3,7,16,27,34 20,30,36,50,56,63,66,73,79,86 => 7,13,23,43 27,35,44,46,51,53,55,60,63,64,68,69,72,77,79,81,86,88,90,95,97,105,106,114,123,132 => 9,18,26,37,42 • very similar to codegolf.stackexchange.com/questions/12399/… Commented Nov 4, 2013 at 5:42 • @Jan, one significant difference is that the challenge you cite called for a set, whereas this one permits duplicates (e.g., 7,7,7,8 above), which increases complexity manyfold. Commented Nov 4, 2013 at 18:14 • Can we assume the input weights are unique (so we don't have to remove dups, simple as that would be)? 
Also, you may consider requiring that solutions be able to solve a given test case; otherwise the shortest solution may be a brute-force enumerator that can only deal with tiny problems (e.g., if there are n inputs weights and m is the largest, enumerate all subsequences of (1..m) and for each subsequence, enumerate every combination of between 1 and n instances of each element of the sequence.) Commented Nov 4, 2013 at 19:09 • @CarySwoveland Edited for the "unique" part. I already have test cases. Commented Nov 4, 2013 at 23:45 • How can {7,7,7,8} be a solution? 8 is not in the input set. Commented Nov 5, 2013 at 2:37 # Mathematica 80 75 Update: See at bottom an update on Doorknob's challenging last test, added on Nov.5 This passes all but the last test. However, it does not attempt to use a digit more than once. And it only searches from solutions that are subsets of the larger data set. The function generates all of the subsets of the input data set and then tests which subsets can be used to construct the complete set. After the viable subsets are found, it chooses the smallest sets. s=Subsets f@i_:=GatherBy[Select[s@i,Complement[i, Total /@ s@#]=={}&],Length]〚1〛 Tests f[{1, 3, 4, 7, 8, 11}] {{1, 3, 7}} f[{1, 5, 6, 9, 10, 14, 15}] {{1, 5, 9}} f[{7, 14, 15, 21, 22, 29}] {{7, 14, 15}} f[{4, 5, 6, 7, 9, 10, 11, 12, 13, 15, 16, 18}] {{4, 5, 6, 7}} f[{2, 3, 5, 7}] {{2, 3, 5}, {2, 3, 7}} ## Update Below I'll provide an initial analysis that may help get started toward a solution. The data: data = {10, 16, 19, 23, 26, 27, 30, 37, 41, 43, 44, 46, 50, 53, 57, 60, 61, 64, 68, 71, 77, 80, 84, 87}; Differently from the earlier approach, we want to consider, in the solution set, numbers that do NOT appear in the data set. The approach makes use of absolute differences between pairs of numbers in the data set. g[d_] := DeleteCases[Reverse@SortBy[Tally[Union[Sort /@ Tuples[d, {2}]] /. 
{a_, b_} :> Abs[a - b]], Last], {0, _}] Let's look at the number of times each difference appears; we'll only grab the first 8 cases (starting from the most common difference). g[data][[1;;8]] {{7, 14}, {27, 13}, {34, 12}, {3, 11}, {20, 10}, {16, 10}, {4, 10}, {11, 9}} 14 pairs differed by 7; 13 pairs differed by 27, and so on. Now let's test subsets starting with {difference1},{difference1, difference2}, and so on, until we can hopefully account for all the original elements in the data set. h reveals those numbers from the original set that cannot be constructed by composing sums from the subset. h[t_] := Complement[data, Total /@ Subsets@t] By the fifth try, there are still 10 elements that cannot be formed from {7, 27, 34, 3, 20}: h[{7, 27, 34, 3, 20}] {16, 19, 26, 43, 46, 53, 60, 77, 80, 87} But on the next try, all numbers of the data set are accounted for: h[{7, 27, 34, 3, 20, 16}] {} This is still not as economical as {3,7,16,27,34}, but it's close. There are still some additional things to take into account. 1. If 1 is in the data set, it will be required in the solution set. 2. There may well be some "loners" in the original set that cannot be composed from the most common differences. These would need to be included apart from the difference tests. These are more issues than I can handle at the moment. But I hope it sheds some light on this very interesting challenge. • hmm... currently devising testcase that requires duplicates :P Commented Nov 5, 2013 at 2:32 • I'll leave my solution posted for now and see if I can add a condition to test duplicates. Commented Nov 5, 2013 at 2:35 • If a solution exists where a weight w is repeated, then the same solution with one of the ws changed to 2 * w also works, because you can use the 2 * w everywhere you used w + w before. This can be repeated until the solution has no repeats. Therefore, you need not attempt to use repeats. Commented Nov 5, 2013 at 3:35 • You don't really need the parenthesis. 
Get the s=Subsets; out of the function Commented Nov 5, 2013 at 18:39 • Right about the parentheses. Commented Nov 5, 2013 at 19:13 Ruby 289 This is a straight enumeration, so it will obtain minimal solutions, but it may take years--possibly light years--to solve some problems. All the "simplest cases" solve in at most a few seconds (though I got 7,8,14 and 1,2,4 for the 3rd and 5th cases, respectively). Tricky #2 solved in about 3 hours, but the other two are just too big for enumeration, at least for the way I've gone about it. An array of size n that generates the given array by summing subsets of its elements is of minimal size if it can be shown that there is no array of size < n that does that. I can see no other way to prove optimality, so I start the enumeration with subsets of size m, where m is a known lower bound, and then increase the size to m+1 after having enumerated subsets of size m and shown that none of those "span" the given array, and so on, until I find an optimum. Of course, if I have enumerated all subsets up to size n, I could use a heuristic for size n+1, so that if I found a spanning array of that size, I would know it is optimal. Can anyone suggest an alternative way to prove a solution is optimal in the general case? I've included a few optional checks to eliminate some combinations early on. Removing those checks would save 87 characters. They are as follows (a is the given array): • an array of size n can generate at most 2^n-1 distinct positive numbers; hence, 2^n-1 >= a.size, or n >= log2(a.size).ceil (the "lower bound" I referred to above). • a candidate generating array b of size n can be ruled out if: • b.min > a.min • sum of elements of b < a.max or • b.max < v, where v = a.max.to_f/n+(n-1).to_f/2.ceil (to_f being conversion to float). The last of these, which is checked first, implements sum of elements of b <= sum(b.max-n+1..b.max) < a.max Note v is constant for all generator arrays of size n. 
I've also made use of @cardboard_box's very helpful observation that there is no need to consider duplicates in the generating array. In my code, (1..a.max).to_a.combination(n) generates all combinations of the numbers 1 to a.max, taken n at a time (where a.max = a.last = a[-1]). For each combination b: (1...2**n).each{|j|h[b.zip(j.to_s(2).rjust(n,?0).split('')).reduce(0){|t,(u,v)|t+(v==?1?u:0)}]=0} fills a hash h with all numbers that are sums over non-empty subsets of b. The hash keys are those numbers; the values are arbitrary. (I chose to set the latter to zero.) a.all?{|e|h[e]}} checks whether every element of the given array a is a key in the hash (h[e] != nil, or just h[e]). Suppose n = 3 and b=[2,5,7]. Then we iterate over the range: (1...2**3) = (1...8) # 1,2,..,7 The binary representation of each number in this range is used to stab out the elements of b to sum. For j = 3 (j being the range index), 3.to_s(2) # => "11" "11".rjust(3,?0) # => "011" "011".split('') # => ["0","1","1"] [2,5,7].zip(["0","1","1"]) # => [[2,"0"],[5,"1"],[7,"1"]] [[2,"0"],[5,"1"],[7,"1"]].reduce(0){|t,(u,v)|t+(v==?1?u:0)} # => t = 0+5+7 = 12 The code: x=a[-1] n=Math.log2(a.size).ceil loop do v=(1.0*x/n+(n-1)/2.0).ceil (1..x).to_a.combination(n).each{|b| next if b[-1]<v||b[0]>a[0]||b.reduce(&:+)<x h={} (1...2**n).each{|j|h[b.zip(j.to_s(2).rjust(n,?0).split('')).reduce(0){|t,(u,v)|t+(v==?1?u:0)}]=0} (p b;exit)if a.all?{|e|h[e]}} n+=1 end
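For the small test cases, the search both answers describe can be written as a very plain brute force. The Python sketch below (an illustration, not a golfed entry) enumerates candidate sets of increasing size, restricted to distinct weights per @cardboard_box's observation, and returns the first set whose subset sums cover every input:

```python
from itertools import combinations

def subset_sums(weights):
    # All sums over subsets of `weights`, each weight used at most once.
    sums = {0}
    for w in weights:
        sums |= {s + w for s in sums}
    return sums

def min_weights(targets):
    # Enumerate candidate sets of size 1, 2, ... drawn from 1..max(targets);
    # the first spanning set found is minimal by construction.
    n = 1
    while True:
        for cand in combinations(range(1, max(targets) + 1), n):
            if set(targets) <= subset_sums(cand):
                return list(cand)
        n += 1
```

`min_weights([1, 3, 4, 7, 8, 11])` returns `[1, 3, 7]`; like the Ruby entry, this is only practical for the "simplest cases".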
3,015
9,338
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2024-33
latest
en
0.871458
https://docs.microsoft.com/de-de/dotnet/api/system.windows.vector.op_division?view=netframework-4.8
1,568,726,268,000,000,000
text/html
crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00248.warc.gz
442,779,624
9,730
# Vector.Division(Vector, Double) Operator ## Definition Divides the specified vector by the specified scalar and returns the resulting vector. ``````public: static System::Windows::Vector operator /(System::Windows::Vector vector, double scalar);`````` ``public static System.Windows.Vector operator / (System.Windows.Vector vector, double scalar);`` ``static member ( / ) : System.Windows.Vector * double -> System.Windows.Vector`` #### Parameters vector Vector The vector to divide. scalar Double The scalar by which `vector` will be divided. #### Returns The result of dividing `vector` by `scalar`. ## Examples The following example shows how to use this operator (/) to divide a Vector structure by a scalar. ``````private Vector overloadedDivisionOperatorExample() { Vector vector1 = new Vector(20, 30); Vector vectorResult = new Vector(); Double scalar1 = 75; // Divide vector by scalar. // vectorResult is approximately equal to (0.26667,0.4) vectorResult = vector1 / scalar1; return vectorResult; } `````` ``````Private Function overloadedDivisionOperatorExample() As Vector Dim vector1 As New Vector(20, 30) Dim vectorResult As New Vector() Dim scalar1 As Double = 75 ' Divide vector by scalar. ' vectorResult is approximately equal to (0.26667,0.4) vectorResult = vector1 / scalar1 Return vectorResult End Function ``````
424
1,809
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.515625
4
CC-MAIN-2019-39
latest
en
0.364969
https://www.cymath.com/reference/calculus-differentiation/product-rule
1,721,311,957,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514831.13/warc/CC-MAIN-20240718130417-20240718160417-00416.warc.gz
630,598,856
9,014
# Product Rule ## Reference > Calculus: Differentiation Description: $(fg)'=f'g+fg'$ Example: $\frac{d}{dx} \sin{x}\,{x}^{2}$ 1 Regroup terms: $\frac{d}{dx} {x}^{2}\sin{x}$ 2 Use the Product Rule to find the derivative of $${x}^{2}\sin{x}$$. The product rule states that $$(fg)'=f'g+fg'$$: $(\frac{d}{dx} {x}^{2})\sin{x}+{x}^{2}(\frac{d}{dx} \sin{x})$ 3 Use the Power Rule $$\frac{d}{dx} {x}^{n}=n{x}^{n-1}$$: $2x\sin{x}+{x}^{2}(\frac{d}{dx} \sin{x})$ 4 Use Trigonometric Differentiation: the derivative of $$\sin{x}$$ is $$\cos{x}$$: $2x\sin{x}+{x}^{2}\cos{x}$ Done: 2*x*sin(x)+x^2*cos(x)
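The closed form can be sanity-checked numerically: a central finite difference of x² sin(x) should agree with 2x sin(x) + x² cos(x) at any sample point. The step size and sample points below are arbitrary choices:

```python
import math

def f(x):
    # The function being differentiated in the example above.
    return x ** 2 * math.sin(x)

def f_prime(x):
    # The product-rule result: 2x sin(x) + x^2 cos(x).
    return 2 * x * math.sin(x) + x ** 2 * math.cos(x)

h = 1e-6  # finite-difference step (arbitrary small value)
for x in (0.5, 1.0, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-5
```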
245
570
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2024-30
latest
en
0.393949
https://theintactone.com/2019/02/10/qt-u4-topic-2-different-types-methods-for-finding-initial-solution-by-north-west-corner-rule-least-cost-method-and-vogal-approximation-method/
1,675,767,504,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00484.warc.gz
573,864,981
68,078
# Different Types of Methods for Finding an Initial Solution: North-West Corner Rule, Least Cost Method and Vogel Approximation Method ### NORTH-WEST CORNER RULE The North-West Corner Rule is a method adopted to compute the initial feasible solution of the transportation problem. The name North-West Corner is given to this method because the basic variables are selected from the extreme left corner. The concept of the North-West Corner Rule can be well understood through the transportation problem given below: In the table, three sources A, B and C with production capacities of 50 units, 40 units and 60 units of product respectively are given. Every day the demand of three retailers D, E, F is to be furnished with at least 20 units, 95 units and 35 units of product respectively. The transportation costs are also given in the matrix. The prerequisite condition for solving the transportation problem is that demand should be equal to supply. In case the demand is more than the supply, then a dummy origin is added to the table. The supply of the dummy origin will be equal to the difference between the total supply and total demand. The cost associated with the dummy origin will be zero. Similarly, in case the supply is more than the demand, then a dummy destination is created whose demand will be equivalent to the difference between supply and demand. Again, the cost associated with the dummy destination will be zero. Once the demand and supply are equal, the following procedure is followed: 1. Select the north-west or extreme left corner of the matrix, and assign as many units as possible to cell AD, within the supply and demand constraints. Thus 20 units are assigned to the first cell; this satisfies the demand of destination D while the supply is in surplus. 2. Now move horizontally and assign 30 units to the cell AE. Since 30 units are available with the source A, the supply gets fully saturated. 3. Now move vertically in the matrix and assign 40 units to Cell BE. 
The supply of source B also gets fully saturated. 4. Again move vertically, and assign 25 units to cell CE; the demand of destination E is fulfilled. 5. Move horizontally in the matrix and assign 35 units to cell CF; both the demand and supply of origin and destination get saturated. Now the total cost can be computed by multiplying the units assigned to each cell with the corresponding transportation cost. Therefore, Total Cost = 20*5 + 30*8 + 40*6 + 25*9 + 35*6 = Rs 1015 ### LEAST COST METHOD The Least Cost Method is another method used to obtain the initial feasible solution for the transportation problem. Here, the allocation begins with the cell which has the minimum cost. The lower-cost cells are chosen over the higher-cost cells with the objective of having the least cost of transportation. The Least Cost Method is considered to produce more optimal results than the North-West Corner Rule because it considers the shipping cost while making the allocation, whereas the North-West Corner method only considers availability and supply requirements, and allocation begins with the extreme left corner irrespective of the shipping cost. Let's understand the concept of the Least Cost Method through the problem given below: In the given matrix, the supply of each source A, B, C is given, viz. 50 units, 40 units, and 60 units respectively. The weekly demand of three retailers D, E, F, i.e. 20 units, 95 units and 35 units, is given respectively. The shipping cost is given for all the routes. The minimum transportation cost can be obtained by following the steps given below: 1. The minimum cost in the matrix is Rs 3, but there is a tie between the cells BF and CD; now the question arises in which cell we shall allocate. Generally, the cell where the maximum quantity can be assigned should be chosen to obtain the better initial solution. Therefore, 35 units shall be assigned to the cell BF. 
With this, the demand for retailer F gets fulfilled, and only 5 units are left with the source B. 2. Again, the minimum cost in the matrix is Rs 3. Therefore, 20 units shall be assigned to the cell CD. With this, the demand of retailer D gets fulfilled. Only 40 units are left with the source C. 3. The next minimum cost is Rs 4; however, since the demand of F is already fulfilled, we move to the next minimum cost, which is 5. Again, the demand of D is already fulfilled. The next minimum cost is 6, and there is a tie between three cells. However, no units can be assigned to the cells BD and CF, as the demands of both the retailers D and F are saturated. So, we shall assign 5 units to cell BE. With this, the supply of source B gets saturated. 4. The next minimum cost is 8; assign 50 units to the cell AE. The supply of source A gets saturated. 5. The next minimum cost is Rs 9; we shall assign 40 units to the cell CE. With this, both the demand and supply of all the sources and origins get saturated. The total cost can be calculated by multiplying the assigned quantity with the corresponding cost of the cell. Therefore, Total Cost = 50*8 + 5*6 + 35*3 + 20*3 + 40*9 = Rs 955. Note: The supply and demand should be equal. In case supply is more, a dummy destination is added to the table with demand equal to the difference between supply and demand, and the cost remains zero. Similarly, in case the demand is more than the supply, a dummy origin is added to the table with supply equal to the difference between the quantities demanded and supplied, and the cost being zero. VOGEL'S APPROXIMATION METHOD Definition: The Vogel's Approximation Method or VAM is an iterative procedure calculated to find out the initial feasible solution of the transportation problem. Like the Least Cost Method, here also the shipping cost is taken into consideration, but in a relative sense. 
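The north-west corner walk-through above depends only on the supplies, the demands, and the costs of the five cells it actually fills (AD, AE, BE, CE, CF). A minimal sketch of the procedure follows; since the article's full cost matrix was shown as an image, the costs for the unused cells AF and BD below are placeholders, not the article's values (they do not affect the Rs 1015 total):

```python
def north_west_corner(supply, demand, cost):
    # Walk from the top-left cell, allocating min(supply, demand)
    # and moving down/right as rows and columns are exhausted.
    supply, demand = supply[:], demand[:]
    i = j = 0
    allocations, total = [], 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        allocations.append((i, j, q))
        total += q * cost[i][j]
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:   # row exhausted: move down
            i += 1
        else:                # column exhausted: move right
            j += 1
    return allocations, total

supply = [50, 40, 60]        # sources A, B, C
demand = [20, 95, 35]        # retailers D, E, F
cost = [[5, 8, 4],           # A->D, A->E, A->F (A->F is a placeholder)
        [6, 6, 3],           # B->D (placeholder), B->E, B->F
        [3, 9, 6]]           # C->D, C->E, C->F
allocations, total = north_west_corner(supply, demand, cost)
print(total)  # 1015, matching the worked example
```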
The following is the flow chart showing the steps involved in solving the transportation problem using Vogel's Approximation Method: The concept of Vogel's Approximation Method can be well understood through the illustration given below: • First of all, the difference between the two least-cost cells is calculated for each row and column, which can be seen in the iteration given for each row and column. Then the largest difference is selected, which is 4 in this case. So, allocate 20 units to cell BD, since the minimum cost is to be chosen for the allocation. Now, only 20 units are left with the source B. • Column D is deleted; again the difference between the least-cost cells is calculated for each row and column, as seen in the iteration below. The largest difference value comes to be 3, so allocate 35 units to cell AF and 15 units to the cell AE. With this, the supply and demand of source A and destination F get saturated, so delete both the row A and the column F. • Now, a single column E is left; since no difference can be found out, allocate 60 units to the cell CE and 20 units to cell BE, as only 20 units are left with source B. Hence the demand and supply are completely met. Now the total cost can be computed by multiplying the units assigned to each cell with the cost concerned. Therefore, Total Cost = 20*3 + 35*1 + 15*4 + 60*4 + 20*8 = Rs 555 Note: Vogel's Approximation Method is also called the Penalty Method because the difference costs chosen are nothing but the penalties of not choosing the least-cost routes.
1,647
7,515
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.78125
4
CC-MAIN-2023-06
longest
en
0.928979
http://www.mywordsolution.com/homework-help/statistics/skewness-and-kurtosis/329
1,723,608,726,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641095791.86/warc/CC-MAIN-20240814030405-20240814060405-00237.warc.gz
46,582,756
10,367
### Skewness and Kurtosis Assignment Help - Skewness and Kurtosis Homework Help From Statistics Tutors Introduction to Skewness and Kurtosis At the higher education level, research is an important subject in the curriculum. Research is of different types. Even students who pursue humanities subjects like political science, education, sociology and psychology are required to do research which may be experimental in nature. These students mostly lack mathematical skills. They find subjects like Statistics very difficult. However, experimental research involves a great deal of statistical operations. Statistical concepts are very complex, and two of the most difficult ones are skewness and kurtosis. Skewness and kurtosis describe the distribution of a real-valued random variable in an experiment. While skewness refers to the lack of symmetry of a distribution, kurtosis refers to the degree to which the distribution is peaked. These two are basic statistical concepts but very difficult to understand for those who have a literary background, or for students of humanities disciplines. The research results are seriously affected if there are errors in calculating skewness and kurtosis. The measures have to be depicted accurately on the graph and represented systematically. The success of data analysis depends upon the calculation of skewness and kurtosis. Difficulties Encountered In Skewness and Kurtosis Problems • Students fail to understand that skewness and kurtosis are two of the four basic characteristics of a probability model. (The other two are the mean and variance.) They confuse them with statistics or arithmetical operations. • Skewness and kurtosis are shape parameters for the probability model. Students are unable to interpret the implications reflected in the shapes. • Skewness and kurtosis are related only to the tails of a distribution; hence they provide only a first-order approximation. • If skewness and kurtosis are not clearly understood, there is a possibility of fallacies that will finally affect the results. 
• Students who do not understand how to calculate skewness and kurtosis get stuck with a research project at a crucial point and cannot proceed further. • Every research problem is unique. Applying statistical operations to data is an independent activity for each research project. One cannot copy from some other source. It has to be original work. Our Portfolio of Services in Skewness and Kurtosis - Statistics • Statistics Textbook Solution • Stats Coursework Writing Help • Skewness and Kurtosis Homework Help • Skewness and Kurtosis Assignment Writing Services • Solutions to Problems in Skewness and Kurtosis - Statistics • Statistics Experts Support 24x7 Some of the Qualities Which Make Us Different From Others Are • 100% plagiarism-free original work • Reasonable prices for students • 24x7 live support • Re-editing if students find it inadequate • Privacy of the work • Timely delivery • Customization as per the student's need • Tutors are highly experienced, with master's and doctorate academic backgrounds We offer Skewness and Kurtosis homework help, assignment writing services, assignment help, Skewness and Kurtosis homework solutions, coursework help, writing assignments service and live Skewness and Kurtosis tutor support service. Our Skewness and Kurtosis tutors are helping students across the world and they offer excellent statistics assignment help service in each discipline and course of statistics studies. Help With Skewness And Kurtosis Assignments Students who study research as a separate subject or those who conduct research are both faced with the difficulties related to skewness and kurtosis. Some students can conduct the research on their own, yet they need help with the statistical part of the experimental project. Some students have to complete special assignments on various statistical operations and concepts like skewness and kurtosis. If only students get the right kind of help at the right moment, they can get through the course smoothly. 
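For concreteness, the two measures discussed above can be computed from central moments: skewness is m3 / m2^1.5 and (Pearson's) kurtosis is m4 / m2^2, where m_k is the k-th central moment of the sample. The sketch below uses these plain moment-based definitions; textbooks and software also use bias-corrected and "excess" variants, so conventions differ.

```python
def central_moment(data, k):
    # k-th central moment: average of (x - mean)^k over the sample.
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** k for x in data) / n

def skewness(data):
    # Moment-based skewness: 0 for a perfectly symmetric sample.
    return central_moment(data, 3) / central_moment(data, 2) ** 1.5

def kurtosis(data):
    # Pearson's kurtosis: 3 for a normal distribution;
    # subtract 3 to get "excess" kurtosis.
    return central_moment(data, 4) / central_moment(data, 2) ** 2

print(skewness([1, 2, 3, 4, 5]))   # 0.0 (symmetric sample)
print(kurtosis([1, 2, 3, 4, 5]))   # 1.7 (flatter than a normal curve)
```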
Our academic help platform is specially created for providing just the kind of help that students welcome all the time. At our end, we have highly qualified personnel who are experts and experienced in their specialized domains including statistics. They are continuously engaged in solving skewness and kurtosis problems and sketching accurate graphs. Our faculty do not stop at creating graphical representations but also provide interpretations, results and tips for further analysis. Of course, it is only possible if students provide complete details about their assignments, the topics, aims and objectives. The more you cooperate the better will be our services. Skewness and Kurtosis Solutions Online- Live Statistics Tutor's Support 24x7 This is how we work, to make things simple for you. • First, we get you registered as service seekers by asking you to fill an online form provided on our website. • Whenever you need any kind of help, you are expected to log in and put forth a request. You must upload all the details of your assignments including any specific instructions from your guide or teachers. • You must provide all necessary data and raw scores for calculating skewness and kurtosis. • Give extra details about the topic of your research, aims etc. which will help our faculty to produce near accurate results. • Depending on the nature and length of your assignments, a price will be fixed. You have to make advance online payment. • The calculated statistics complete with graphical representation and interpretation/conclusion will be prepared and sent to you in time. Thus, you can proceed smoothly with the research without wasting valuable time and efforts in dealing with complicated formulas and calculations, all the time doubtful whether they might be correct. Whether it is skewness and kurtosis or any other calculation, you can be sure that you will get accurate results and flawless, error free calculations of any degree of complexity within the time limit. 
So next time you have to deal with skewness and kurtosis, remember to log in and avail yourself of our services.

### Let us Explain How Our Assignment Service Works

Follow just three simple steps to get your classroom assignment/assessment done online by the best qualified and experienced tutors! Let's see how it works. Find customized step-by-step solutions with guaranteed satisfaction!
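For readers who want to check a result themselves, population skewness and excess kurtosis follow directly from the central moments of the raw scores. This is a minimal sketch (the sample data are made up for illustration):

```python
def skewness_kurtosis(xs):
    """Population skewness g1 = m3 / m2**1.5 and excess kurtosis
    g2 = m4 / m2**2 - 3, from the central moments of raw scores xs."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# A symmetric sample has zero skewness.
g1, g2 = skewness_kurtosis([2, 4, 6, 8, 10])
```

Note these are the biased (population) formulas; many textbooks and software packages apply small-sample corrections on top of them.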
http://www.bigopendata.eu/financial-modeling/
# Financial modeling

Financial modeling is the task of building an abstract representation (a model) of a real world financial situation. [1] This is a mathematical model designed to represent (a simplified version of) the performance of a financial asset or portfolio of a business, project, or any other investment. Financial modeling is a general term that means different things to different users; the reference usually relates either to accounting and corporate finance applications, or to quantitative finance applications. While there is some debate in the industry as to the nature of financial modeling (whether it is a tradecraft, such as welding, or a science), the task of financial modeling has been gaining acceptance and rigor over the years. [2] Typically, financial modeling is understood to mean an exercise in either asset pricing or corporate finance, of a quantitative nature. In other words, financial modeling is about translating a set of hypotheses about the behavior of markets or agents into numerical predictions; for example, a firm's decisions about investments (the firm will invest 20% of assets), or investment returns [3] (returns on "stock A" will, on average, be 10% higher than the market's returns).

## Accounting

In corporate finance and the accounting profession, financial modeling typically entails financial statement forecasting; usually the preparation of detailed company-specific models used for decision making purposes [1] and financial analysis.
Applications include:

• Business valuation, especially discounted cash flow, but including other valuation problems
• Scenario planning and management decision making ("what is"; "what if"; "what has to be done" [4])
• Capital budgeting
• Cost of capital (i.e. WACC) calculations
• Financial statement analysis (including operating and finance leases, and R&D)
• Project finance

To generalize [citation needed] as to the nature of these models: firstly, as they are built around financial statements, calculations and outputs are monthly, quarterly or annual; secondly, the inputs take the form of "assumptions", where the analyst specifies the values that will apply in each period for external / global variables (exchange rates, tax percentage, etc.; these may be thought of as the model parameters) and for internal / company specific variables (wages, unit costs, etc.). Correspondingly, both characteristics are reflected (at least implicitly) in the mathematical form of these models: firstly, the models are in discrete time; secondly, they are deterministic. For discussion of the issues that may arise, see below; for discussion of more sophisticated approaches sometimes employed, see Corporate finance § Quantifying uncertainty and Financial economics § Corporate finance theory.

Modelers are sometimes referred to (tongue in cheek) as "number crunchers", and are often designated "financial analyst". Typically, the modeler will have completed an MBA or MSF with (optional) coursework in "financial modeling". Accounting and finance certifications such as the CIIA and CFA generally do not provide direct or explicit training in modeling. [citation needed] At the same time, many commercial training courses are offered, both through universities and privately. Although purpose-built software does exist, the vast proportion of the market is spreadsheet-based; this is largely since the models are almost always company-specific, and also since analysts will each have their own criteria and methods for financial modeling.
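As a toy illustration of the discounted cash flow valuations listed above: the present value of a forecast cash flow series at a constant discount rate (in practice often the WACC) can be sketched in a few lines. All figures here are hypothetical:

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of a series of year-end cash flows,
    discounted at a constant per-period rate (a simplified DCF)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical forecast: 100 per year for three years, discounted at 10%.
pv = dcf_value([100, 100, 100], 0.10)
```

A real model would forecast the cash flows themselves from the financial statements and add a terminal value; this sketch only shows the discounting step.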
[5] Microsoft Excel now has the dominant position, having overtaken Lotus 1-2-3 in the 1990s. Spreadsheet-based modeling can have its own problems, [6] and several standardizations and "best practices" have been proposed. [7] "Spreadsheet risk" is increasingly studied and managed. [7]

One criticism here is that model outputs, i.e. line items, often include "unrealistic implicit assumptions" and "internal inconsistencies". [8] (For example, a forecast for growth in revenue but without corresponding increases in working capital, fixed assets and the associated financing, may imbed unrealistic assumptions about asset turnover, leverage and/or equity financing.) What is required, but often lacking, is that all key elements are explicitly and consistently forecasted. Related to this is that modellers often additionally "fail to identify crucial assumptions" relating to inputs, "and to explore what can go wrong". [9] Here, in general, modellers use point values and simple arithmetic instead of probability distributions and statistical measures [10] (i.e., as mentioned, the problems are treated as deterministic in nature), and thus calculate a single value for the asset or project, without providing information on the range, variance and sensitivity of outcomes. [11] Other critiques discuss the lack of basic computer programming concepts amongst modellers. [12] More serious criticism, in fact, relates to the nature of budgeting itself, and its impact on the organization. [13] [14]

The Financial Modeling World Championships, known as ModelOff, have been held since 2012. ModelOff is a global online financial modeling competition which culminates in Live Event Finals for top competitors. From 2012-2014 the Live Finals were held in New York City, and in 2015 in London. [15]

## Quantitative finance

In quantitative finance, financial modeling entails the development of a sophisticated mathematical model. [citation needed] Models here deal with market values, portfolio returns and the like.
A general distinction [citation needed] is between: "quantitative financial management", models of the financial situation of a large, complex firm; "quantitative asset pricing", models of the returns of different stocks; "financial engineering", models of the price or returns of derivative securities; and "quantitative corporate finance", models of the firm's financial decisions.

Relatedly, applications include:

• Option pricing and calculation of their "Greeks"
• Other derivatives, especially interest rate derivatives, credit derivatives and exotic derivatives
• Modeling the term structure of interest rates (bootstrapping, short rate modeling) and credit spreads
• Credit scoring and provisioning
• Corporate financing activity prediction problems
• Portfolio optimization [16]
• Real options
• Risk modeling (financial risk modeling) and value at risk [17]
• Dynamic financial analysis (DFA)
• Credit valuation adjustment (CVA), as well as the other XVA

These problems are generally stochastic and continuous in nature, and models here thus require complex algorithms, entailing computer simulation, advanced numerical methods (such as numerical differential equations, numerical linear algebra, dynamic programming) and/or the development of optimization models. The general nature of these problems is discussed under Mathematical finance, while specific techniques are listed under Outline of finance § Mathematical tools. For further discussion here see also: Financial models with long-tailed distributions and volatility clustering; Brownian model of financial markets; Martingale pricing; Extreme value theory; Historical simulation (finance).

Modellers are referred to as quants (quantitative analysts), and typically have advanced (Ph.D. level) backgrounds in quantitative disciplines such as physics, engineering, computer science, mathematics or operations research.
Alternatively, or in addition to their quantitative background, they complete a finance masters with a quantitative orientation, [19] such as the Master of Quantitative Finance, or the more specialized Master of Computational Finance or Master of Financial Engineering; the CQF is increasingly common.

Although spreadsheets are widely used here also (almost always requiring extensive VBA), custom C++, Fortran or Python, or numerical analysis software such as MATLAB, are often preferred, [19] particularly where stability or speed is a concern. MATLAB is often the tool of choice for research [citation needed] because of its intuitive programming and graphical debugging tools, but C++/Fortran are preferred for conceptually simple but computationally costly applications where MATLAB is too slow; Python is increasingly used due to its simplicity and broad standard library. Additionally, for many (of the standard) derivative and portfolio applications, commercial software is available, and the choice as to whether the model is to be developed in-house, or whether existing products are to be used, will depend on the problem in question. [19]

The complexity of these models may result in incorrect pricing or hedging or both. This model risk is the subject of ongoing research by finance academics, and is a topic of great, and growing, interest in the risk management arena. [20]

Criticism of the discipline (often preceding the financial crisis of 2007-08 by several years) emphasizes the differences between the mathematical and physical sciences and finance, and the resulting caution to be applied by modelers, and by traders and risk managers using their models. Notable here are Emanuel Derman and Paul Wilmott, authors of the Financial Modelers' Manifesto. Some go further and question whether mathematical and statistical modeling may be applied to finance at all, at least with the assumptions usually made (for options; for portfolios). In fact, these may go so far as to question the empirical and scientific validity of modern financial theory. [21] Notable here are Nassim Taleb and Benoit Mandelbrot.
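Option pricing, the first quantitative application listed earlier, has a canonical closed-form example: the Black-Scholes price of a European call. A minimal sketch (all parameters hypothetical):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    maturity T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

The "Greeks" mentioned above are simply the partial derivatives of this price with respect to its inputs (delta, vega, theta, and so on).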
[22] See also Mathematical finance § Criticism and Financial economics § Challenges and criticism.

• Economic model
• Financial engineering
• Financial forecast
• Financial Modelers' Manifesto
• Financial models with long-tailed distributions and volatility clustering
• Financial planning
• LBO valuation model, valuation of the current value of a business based on the business's forecast financial performance
• Model audit
• Modeling and analysis of financial markets
• Profit model
• Real options valuation

## References

1. http://www.investopedia.com/terms/f/financialmodeling.asp
2. Nick Crawley (2010). Which industry sector would be the most popular for financial modeling?, fimodo.com.
3. Low, R.K.Y.; Tan, E. (2016). "The Role of Analysts' Forecasts in the Momentum Effect". International Review of Financial Analysis. doi:10.1016/j.irfa.2016.09.007.
4. Joel G. Siegel; Jae K. Shim; Stephen Hartman (November 1, 1997). Schaum's quick guide to business formulas: 201 decision-making tools for business, finance, and accounting students. McGraw-Hill Professional. ISBN 978-0-07-058031-2. Retrieved 12 November 2011. §39 "Corporate Planning Models". See also §294 "Simulation Model".
5. See for example, "Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories", Pablo Fernandez: University of Navarra - IESE Business School.
6. Danielle Stein Fairhurst (2009). Six Reasons your spreadsheet is NOT a financial model. Archived 2010-04-07 at the Wayback Machine, fimodo.com.
7. Best Practice, European Spreadsheet Risks Interest Group.
8. Krishna G. Palepu; Paul M. Healy; Erik Peek; Victor Lewis Bernard (2007). Business analysis and valuation: text and cases. Cengage Learning EMEA. pp. 261-. ISBN 978-1-84480-492-4. Retrieved 12 November 2011.
9. Richard A. Brealey; Stewart C. Myers; Brattle Group (2003). Capital investment and valuation. McGraw-Hill Professional. pp. 223-. ISBN 978-0-07-138377-6. Retrieved 12 November 2011.
10. Peter Coffee (2004). Spreadsheets: 25 Years in a Cell, eWeek.
https://bitcoin.stackexchange.com/questions/11036/how-much-do-nodes-get-from-blocks-including-fees/11071
# How much do nodes get from blocks, including fees?

I am trying to evaluate how much one could earn as a pool owner/solo miner if bitcoin really appreciates in price. To get a number, I try to work out how much one mined block generates in, for example, a month, week or year.

1. How much in fees does one mined block actually generate in a given timeframe?
2. What percentage of that is from the block reward, and what percentage is from fees?

Let's look at an example. BTC Guild (www.btcguild.com) has about 33k GH/s. https://www.btcguild.com/index.php?page=pool_stats That's about one third of the whole Bitcoin network combined. So let's imagine on average they mine one third of all the blocks. That's 6*24 / 3 = 48 blocks in a day, 336 blocks in a week, and 1440 blocks in a month. Now assume each block has a transaction fee of 0.5 BTC on average. Counting only transaction fees, that's 24 BTC in a day, 168 BTC in a week and 720 BTC in a month.

1) Fees are very block-dependent but recently have normally been in the 0.25 - 0.5 BTC range.
2) A mined block currently gives 25 BTC + fees. From the above, fees are coming in at around 1-2% of the total block reward.

• Thanks for your contribution to my question! On which timeframes are these estimations built? I am trying to understand how much a pool owner/solo miner can earn from fees in a week, month, year. May 19, 2013 at 17:32
• Well that's a different question to the one that you asked. My answers are per-block, but you also want to know blocks per week/month/year. The answer to that is totally dependent on the number of hashes you can get through per second. You also need to know your electricity usage and cost of electricity so that you can subtract that from your income. And then pick an exchange rate. Bottom line: it isn't quite so simple, and there are a number of similar questions and answers which you should read to get an overview of this area.
– jgm May 19, 2013 at 19:50
• I am still trying to figure out how much one makes on transaction fees. To be more precise, let's say transaction fees at the current bitcoin price for one mined block in a given timeframe, i.e. 1 week, month, year. In point 1) you already gave a figure, but what is it based on? May 19, 2013 at 21:24
• If you're talking about how much you get in your local currency then translate the BTC to whatever currency you're thinking about. As for where the figures come from, the 25 BTC figure is baked in to the Bitcoin system (it halves every four years or so) and the transaction fees come directly from the details of mined blocks within the blockchain (looking at blocks over the past 5 months as representative). The whole 'per week/month/year' thing is totally dependent on your hashing power, i.e. how many blocks you can mine in a given timeframe. – jgm May 20, 2013 at 6:56
• If I solo mine I earn 0.25 - 0.5 BTC [in fees] per block mined, as per my original answer. If I run a pool it's up to me as to how much I keep, and up to others as to whether they sign up to it. Pools vary in the way that they work but most keep mining fees plus 3% of the block reward, give or take. – jgm May 20, 2013 at 12:57

http://blockchain.info/block-index/383911 On blockchain.info you can see the transaction fees paid to the miner. In the sample link, transaction fees of $41.75 were paid across 480 transactions.
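The thread's back-of-the-envelope arithmetic can be wrapped in a short sketch. The 25 BTC subsidy and 0.5 BTC average fee are the 2013-era figures quoted above; the function and parameter names are my own:

```python
def pool_btc_per_day(hash_share, subsidy_btc=25.0, avg_fee_btc=0.5):
    """Expected blocks and BTC per day for a pool that finds a given
    share of the network's ~144 daily blocks (one per ~10 minutes)."""
    blocks = hash_share * 6 * 24          # 6 blocks/hour * 24 hours
    return blocks, blocks * (subsidy_btc + avg_fee_btc)

# One third of the network, as in the BTC Guild example above.
blocks, revenue = pool_btc_per_day(1 / 3)
```

This ignores electricity cost, pool fees, variance and exchange rate, exactly the caveats raised in the comments.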
https://minuteshours.com/84-25-minutes-in-hours-and-minutes
84.25 minutes in hours and minutes

Result: 84.25 minutes equals 1 hour and 24.25 minutes.

You can also convert 84.25 minutes to hours.

Converter

Eighty-four point two five minutes is equal to one hour and twenty-four point two five minutes.
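The conversion behind this result is a single division by 60: the whole hours are the integer quotient, and whatever is left over stays in minutes. A minimal sketch:

```python
def minutes_to_hours_minutes(total_minutes):
    """Split a (possibly fractional) minute count into whole hours
    plus leftover minutes."""
    hours = int(total_minutes // 60)
    minutes = total_minutes - 60 * hours
    return hours, minutes

h, m = minutes_to_hours_minutes(84.25)
```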
http://eseykota.com/rm/RM_approach/index.htm
Radial Momentum is the arithmetic sum of the individual momenta of all the particles of a system, taken from the center of mass. Like linear momentum, Radial Momentum obeys conservation laws. So the fragments of an explosion, or the molecules of a radially expanding fluid, once set in radial motion, tend to stay in motion and maintain the total Radial Momentum, until some force acts on the individual elements. Once fluid molecules start expanding radially, Radial Momentum keeps these molecules expanding into larger and larger volumes, and the result is that the density and the pressure both fall and induce lift. I propose to demonstrate, with simple experiments and models, that Radial Momentum, and not fluid velocity, accounts for lift or pressure drop in many systems.

Say a stationary 1 kg bomb explodes into 6 pieces. The fragments speed out, radially, in all directions. Its center of mass does not move and so it still has no net linear momentum. Its fragments, however, all have momentum relative to the center of mass. Each of the fragments travels at 1 m/sec relative to the center of mass and has momentum, relative to that center of mass, of 1/6 kg-m/sec. The total Radial Momentum of the system, then, is 1 kg-m/sec.

The basic math is straightforward and proceeds from Newtonian physics. Two derivations follow, one using a cube for a control volume and the other using a sphere. Both lead to the result P = (1/3) M_R² / (mV): the pressure of a fluid expanding into a vacuum equals one third of the square of the radial momentum, over the product of the mass and the volume.

Basic Derivations for Particles Expanding in a Cube

1. Impulse from a particle hitting an inelastic wall and reversing direction: Y = 2 * (m/6) * v  [kg-m/sec]
2. Frequency of impact on the walls of a cube of side s: f = v / s  [1/sec]
3. Force on one wall: F = Y * f  [kg-m/sec²]
4. Force on one wall, from (1, 2, 3): F = (1/3) m v² / s  [kg-m/sec²]
5. Pressure on a wall of area A: P = F / A = F s / V  [kg/m-sec²]
6. P, from (4, 5), for volume V: P = (1/3) m v² / V  [kg/m-sec²]
7. Radial Momentum: M_R = Σ(m v)  [kg-m/sec]
8. Squaring: M_R² = (m v)²
9. Rearranging: (m v)² = M_R²
10. Dividing by mass: m v² = M_R² / m
11. Pressure, from (6, 10): P = (1/3) M_R² / (m V)
12. Pressure, from (11): P = (1/3) [M_R² / (m V)]  [kg/m²-sec²]
13. Rearranging: M_R² = 3 P m V
14. Taking the square root: M_R = sqrt(3 P m V)
15. Basic physics, energy: E = (1/2) m v²
16. Energy, from (10, 15): E = (1/2) M_R² / m
17. Rearranging: M_R = sqrt(2 E m)
18. Pressure, from (12, 16): P = (2/3) E / V
19. Energy: E = (3/2) P V

Basic Derivations for Particles Expanding in a Sphere

20. Frequency of impact, sphere of radius r: f = v / 2r  [1/sec]
21. Force on the wall, from (1, 3, 20): F = m v² / r  [kg-m/sec²]
22. Area of a sphere of radius r: A = 4 π r²  [m²]
23. Volume of a sphere of radius r: V = (4/3) π r³  [m³]
24. Pressure = F / A, from (21, 22): P = (1/4) m v² / (π r³)  [kg/m²-sec²]
25. Pressure, from (23, 24): P = (1/3) m v² / V
26. Definition: M_R = m v  [kg-m/sec]
27. From (25, 26), same as (12) above: P = (1/3) [M_R² / (m V)]

Exercise 1: Expanding Ring

A 1-kg device rests between two plates that are 10 cm apart and which have a vacuum between them. At t = 0, the device explodes and releases energy of 2 joules, all of which carries tiny fragmented particles of the device out from the center of mass, in an expanding ring 10 cm high. After one second, one minute, and one hour, what is the radius of the ring and what is the particle density of the ring?

First, find the particle velocity, from E = (1/2) m v².
v = sqrt(2E/m) = sqrt(2 * 2 J / 1 kg) = sqrt(4) m/sec = 2 m/sec

In the above diagram, height h = 10 cm.

Radius: r = t * v = t * 2 m/sec
Volume: V = 2 π r h = 0.6283 * r
Density: D = mass / Volume = 1 kg / V

Time            | 1 Second | 1 Minute | 1 Hour
Radius (m)      | 2        | 120      | 7200
Volume (m³)     | 1.257    | 75.40    | 4523.9
Density (kg/m³) | 0.796    | 0.0133   | 0.000221

Exercise 2: Levitator

Air entering a spool of thread, and exiting against a card below the spool, lifts the card up against the spool. This curious device is an example of a levitator. Compute and display the pressure, density, velocity, mass and mass flux between the plates of a levitator. Explain how the levitator works.

Simple Levitator
Cross Section of Levitator
Computer Simulation of the Levitator

The graph shows the values of key variables versus distance in a computer simulation. The center line is at the far left of the graph and the far right is at a distance of 20 millimeters, representing the outer edge of the levitator disk. The simulation begins at 1.59 mm, the radius of the hole at the base of the fluid delivery tube. Since the pump is the flow motivator, the pressure is highest at the center of the levitator, far left on the graph. The initial velocity inherits from this motivation. As soon as the fluid (air) begins to traverse the gap between the top and the ceiling, it continues to move, by its own momentum, through a series of larger and larger rings. As it enters larger rings, its density and therefore its pressure falls, as in Exercise 1, above. The decreasing pressure ahead further contributes toward accelerating the fluid, so the velocity rises, in a positive feedback cycle. As the fluid continues to expand, its density and pressure continue to fall, eventually falling well below the ambient pressure beneath the card. The low pressure in this Active Region accounts for all the lift.
Indeed, in experiments with water as the fluid, using clear plastic plates, the Active Region appears as a white cavitation ring, just outside the intake port. The low pressure draws out tiny air bubbles. For radial expansion situations, the density and pressure both reach a minimum in the Active Region, and the pressure drop in that region accounts for almost all of the lift. Meanwhile, at the far outer edge of the disk, the pressure is ambient, so the pressure between the plates must also be ambient at the junction. Indeed, to promote the flow from the cavitation ring to the edge of the card, after the Active Effect plays out, there must be a pressure gradient to motivate the flow. This appears as a slightly downward-sloping pressure line from just past the cavitation ring to the edge of the card.

Opposing the radial expansion are two forces. First, friction converts some energy to heat. This shows up as a bifurcation of the trajectories of the pressure and density lines. Second, the back pressure from the ambient air at the edge of the disk provides a net positive pressure slope against which the fluid must climb.

Just as the initially emergent air experiences positive feedback and an attendant increase in velocity, another self-reinforcing phenomenon occurs at the end of the Active Region. As pressure starts to rise against ambient pressure, and low-momentum air ahead, the fluid experiences additional deceleration. This, in turn, further reduces velocity and momentum. This positive feedback induces a rapid rise of pressure, or hydraulic jump. After the jump, the pressure decreases toward the circumference, per normal gradient flow. The low pressure within the Active Region contributes all the lift.

One way to validate the model is to notice the existence of a cavitation ring in the Active Region. Another is to notice the existence of a hydraulic jump in a similar configuration in which water impacts the back of a smooth dinner plate.
Again, the Active Effect is evident between the central column of fluid and the hydraulic jump at a radius of about an inch beyond there.

Water Stream on Plate shows Active Effect and Hydraulic Jump

A third way is quantitative. While direct measurement of pressure and density between the plates is difficult, in cases where the levitator plate adheres to the table, application of an additional downward force to the plate results in (1) an increase in the gap size, (2) an increase in airflow and (3) the induction of additional upward force to balance the increased weight. By measuring the gap size, airflow and disk weight, the author found a reasonably good fit with the model.

Full formal presentation of this model, including justification of each equation and the specifics of the numerical simulation of a set of integral difference equations, is out of scope for this paper. It appears here to provide additional insight into the importance of density effects in the induction of lift, and to indicate that the theory of Radial Momentum may lead to numerical simulations that produce reasonably accurate results.
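The expanding-ring numbers from Exercise 1 can be reproduced in a few lines, straight from E = (1/2) m v² and V = 2 π r h (a sketch of the article's own arithmetic, nothing more):

```python
import math

def ring_state(t, energy_j=2.0, mass_kg=1.0, height_m=0.10):
    """Radius, volume and density of Exercise 1's expanding ring at time t."""
    v = math.sqrt(2.0 * energy_j / mass_kg)   # particle speed, 2 m/s here
    r = v * t                                  # ring radius grows linearly
    vol = 2.0 * math.pi * r * height_m         # thin cylindrical shell volume
    return r, vol, mass_kg / vol               # density falls as 1/r

r1, vol1, d1 = ring_state(1.0)   # state after one second
```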
https://root.cern/doc/master/PdfFuncMathMore_8h_source.html
ROOT Reference Guide
PdfFuncMathMore.h
Go to the documentation of this file.

// @(#)root/mathmore:$Id$
// Authors: L. Moneta, A. Zsenei 08/2005

/**********************************************************************
 *                                                                    *
 * Copyright (c) 2004 ROOT Foundation, CERN/PH-SFT                    *
 *                                                                    *
 * This library is free software; you can redistribute it and/or      *
 * modify it under the terms of the GNU General Public License        *
 * as published by the Free Software Foundation; either version 2     *
 * of the License, or (at your option) any later version.             *
 *                                                                    *
 * This library is distributed in the hope that it will be useful,    *
 * but WITHOUT ANY WARRANTY; without even the implied warranty of     *
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU   *
 * General Public License for more details.                           *
 *                                                                    *
 * You should have received a copy of the GNU General Public License  *
 * along with this library (see file COPYING); if not, write          *
 * to the Free Software Foundation, Inc., 59 Temple Place, Suite      *
 * 330, Boston, MA 02111-1307 USA, or contact the author.             *
 *                                                                    *
 **********************************************************************/

#ifndef ROOT_Math_PdfFuncMathMore
#define ROOT_Math_PdfFuncMathMore

namespace ROOT {
namespace Math {

   /**
      Probability density function of the non-central \f$\chi^2\f$ distribution
      with \f$r\f$ degrees of freedom and non-central parameter \f$\lambda\f$

      \f[ p_r(x) = \frac{1}{\Gamma(r/2) 2^{r/2}} x^{r/2-1} e^{-x/2} \f]

      for \f$x \geq 0\f$.
      For a detailed description see
      <A HREF="http://mathworld.wolfram.com/NoncentralChi-SquaredDistribution.html">
      Mathworld</A>.

      @ingroup PdfFunc
   */

   double noncentral_chisquared_pdf(double x, double r, double lambda);

} // end namespace Math
} // end namespace ROOT

// make a fake class to auto-load functions from MathMore

namespace ROOT {
namespace Math {

   class MathMoreLibrary {
   public:
   };

} // end namespace Math
} // end namespace ROOT

#endif // ROOT_Math_PdfFuncMathMore
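The header only declares `noncentral_chisquared_pdf`. As a cross-check of what such a function computes, the non-central χ² density can be written as the standard Poisson-weighted mixture of central χ² densities. This is a pure-Python illustration, not ROOT's MathMore implementation:

```python
import math

def chi2_pdf(x, k):
    """Central chi-squared density with k degrees of freedom."""
    return x ** (k / 2 - 1) * math.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))

def noncentral_chisquared_pdf(x, r, lam, terms=80):
    """Non-central chi-squared density: a Poisson(lam/2)-weighted mixture
    of central densities, sum_i e^(-lam/2) (lam/2)^i / i! * f_{r+2i}(x)."""
    return sum(math.exp(-lam / 2) * (lam / 2) ** i / math.factorial(i)
               * chi2_pdf(x, r + 2 * i)
               for i in range(terms))
```

With `lam = 0` the series collapses to the central density, which gives a quick sanity check; truncating the series at `terms` is adequate for moderate `lam`.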
wellwoven.co.uk
# How to Count the Point or Knot Density of a Machine Made Rug

Rug Density and What it Means to You

Rug weaving starts with a weft, or net of yarn. Once this net is set on the loom, another yarn, called the warp, is woven through that net. The weft and warp are knotted at each point they intersect, forming the rug. Rug density is a measure of the number of knots used to make a rug.

One way to think about area rug point count is like resolution on a computer screen. The more pixels a screen has, the more detail can be displayed. If we think of points of yarn like pixels, higher point density means more detailed designs. As density rises, the weight of the rug increases too, because more material is used to make it.

This count is measured in points per square meter, with each point being a single knot. The number is calculated by multiplying the number of knots running horizontally on the weft by the number running vertically on the warp.

You can even calculate this yourself. It's easy! All you'll need is:

• A metric ruler with centimeter markings
• A pencil, pen, or other object with a fine tip to count the individual points.

How to do it: Take your ruler and count the points running 10 centimeters across and 10 down on the back of your rug. Make sure you start from the same point for both measurements. Take these two numbers and multiply them. Then multiply the result by 100 to get the total number of points in one square meter.

Imagine a rug has 32 points on the weft and 47 on the warp. The calculation would go like this:

(32 (weft count) x 47 (warp count) x 2) x 100 = 300,800

So this rug has 300,800 points per square meter.
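The worked example translates directly into code. Note that the factor of 2 is taken from the example above (the two-step instructions alone would give 32 x 47 x 100), so treat it as this guide's counting convention:

```python
def points_per_square_meter(weft_per_10cm, warp_per_10cm):
    """Point density per square meter, following the article's worked
    example: (weft x warp x 2) across 100 ten-by-ten-cm squares."""
    return weft_per_10cm * warp_per_10cm * 2 * 100

density = points_per_square_meter(32, 47)
```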
384
1,657
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2024-38
latest
en
0.92295
http://www.cracksat.net/sat/math-multiple-choice/test1127.html
1,552,962,613,000,000,000
text/html
crawl-data/CC-MAIN-2019-13/segments/1552912201882.11/warc/CC-MAIN-20190319012213-20190319034213-00193.warc.gz
259,581,158
5,483
# SAT Math Multiple Choice Practice Test 27 ### Test Information 22 questions 26 minutes Take more free SAT math multiple choice tests available from cracksat.net. 1. If 7 times a number is 84, what is 4 times the number? A. 16 B. 28 C. 48 D. 52 E. 56 2. A painter drains 4 gallons of turpentine from a full 16-gallon jug. What percent of the turpentine remains in the jug? A. 33% B. 45% C. 50% D. 67% E. 75% 3. If each number in the following sum were increased by t, the new sum would be 4.22. What is the value of t ? A. 0.24 B. 0.29 C. 0.33 D. 0.37 E. 0.43 4. In the figure above, △ACE is equilateral, and B, D, and F are the midpoints of , , and , respectively. If the area of △ACE is 24, what is the area of the shaded region? A. 4 B. 6 C. 8 D. 12 E. 16 5. If , and , what is the value of p ? A. –4 B. - C. 0 D. E. 4 6. When 23 is divided by 3, the remainder is x. What is the remainder when 23 is divided by 2x ? A. 1 B. 2 C. 3 D. 4 E. 5 7. A monthly Internet service costs d dollars for the first 10 hours, and e dollars per hour for each hour after the first 10. Which of the following could represent the cost of the service, if h represents the total number of hours the service was used this month? A. d + e(h – 10) B. d + 10eh C. eh + 10d D. eh(d – 10) E. h(d + 10e) 8. The graph of y = f(x) is shown above. Which of the following could be the equation for f(x) ? A. f(x) = 2x B. f(x) = x² C. f(x) = 2x² D. f(x) = x - 2 E. f(x) = |2x| 9. A bookcase with 6 shelves has 20 books on the top shelf and 30 books on each of the remaining shelves. How many books are there on all 6 shelves of the bookcase? A. 120 B. 130 C. 150 D. 160 E. 170 10. If a + b = 14, b = , and c = 24, then a = A. 4 B. 6 C. 8 D. 10 E. 12 11. A pack of 10 baseball cards costs \$3. A pack of 12 basketball cards costs \$3. If Karim spends \$15 on packs of one type of card, then at most how many more basketball cards than baseball cards could he purchase? A. 
5 B. 10 C. 12 D. 15 E. 18 12. A fleet of 5 trucks must make deliveries. Each truck is loaded with k cartons. Each carton contains 60 boxes. If there are a total of 900 boxes, what is the value of k ? A. 3 B. 5 C. 7 D. 9 E. 10 13. If 35% of p is equal to 700, what is 40% of p ? A. 98 B. 245 C. 280 D. 800 E. 2,000 14. The total cost to hold a party at a banquet hall is the result when the product of the number of guests and the cost of food per person is added to the product of the hourly cost to rent the hall and the number of hours the party will last. One hundred guests have been invited, the food costs a total of \$200, and the hall charges \$50 per hour. To save money, the organizers would like to reduce the length of the party from 4 hours to 2 hours. How much money would the organizers save by reducing the length of the party? A. \$400 B. \$300 C. \$200 D. \$100 E. The price will not change. 15. If ab = 119 and ab = 7, what is the value of a ? A. 5 B. 12 C. 14 D. 17 E. 21 16. A. B. C. D. E. 17. At a track meet, Brian jumped a distance of 14 feet, 9 inches. If Mike jumped exactly 2 feet farther than Brian, how far did Mike jump? (1 foot = 12 inches.) A. 17 feet, 6 inches B. 17 feet, 5 inches C. 17 feet, 3 inches D. 17 feet, 2 inches E. 17 feet, 1 inch 18. Based on the chart above, which of the following could express the relationship between x and y ? A. y = x – 4 B. y = x – 2 C. y = 2x – 1 D. y = 2x + 2 E. y = 3x – 3 19. Based on the figure above, which of the following expressions is equal to b ? A. ac B. 180 – (a + c) C. (a + c) – 90 D. (a + c) – 180 E. 360 – (a + c) 20. If 12(10m + 8)(6m + 4)(2m) = 0, then how many different possible values of m exist? A. One B. Two C. Three D. Four E. Five 21. In a certain game, a red marble and a blue marble are dropped into a box with five equally-sized sections, as shown above. If each marble lands in a different section of the box, how many different arrangements of the two marbles are possible? A. 5 B. 20 C. 25 D. 
40 E. 100 22. If a square lies completely within a circle, which of the following must be true? I. The radius of the circle is equal in length to one side of the square. II. The area of the square is less than the area of the circle. III. All four corners of the square touch the circle. A. I only B. II only C. I and II only D. II and III only E. I, II, and III 
1,538
4,418
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.796875
4
CC-MAIN-2019-13
latest
en
0.894957
https://www.ginac.de/ginac.git/?p=ginac.git;a=commitdiff;h=0c2f0f4c6d4118e817c5b9a14a9d0f3ada82e9fe
1,566,282,187,000,000,000
application/xhtml+xml
crawl-data/CC-MAIN-2019-35/segments/1566027315222.56/warc/CC-MAIN-20190820045314-20190820071314-00541.warc.gz
826,066,781
4,838
author Richard Kreckel Fri, 20 May 2011 20:20:57 +0000 (22:20 +0200) committer Richard Kreckel Fri, 20 May 2011 20:20:57 +0000 (22:20 +0200) Before, sqrt(2)*x used to be a polynomial in x. Now, it is a polynomial in x again. check/exam_paranoia.cpp patch | blob | history ginac/add.cpp patch | blob | history ginac/add.h patch | blob | history ginac/expairseq.cpp patch | blob | history ginac/expairseq.h patch | blob | history ginac/mul.cpp patch | blob | history ginac/mul.h patch | blob | history index 972270f..52d261c 100644 (file) @@ -497,13 +497,19 @@ static unsigned exam_paranoia19() // Bug in expairseq::is_polynomial (fixed 2011-05-20). static unsigned exam_paranoia20() { +       unsigned result = 0; symbol x("x"); -       ex e = sqrt(x*x+1)*sqrt(x+1); -       if (e.is_polynomial(x)) { +       ex e1 = sqrt(x*x+1)*sqrt(x+1); +       if (e1.is_polynomial(x)) { clog << "sqrt(x*x+1)*sqrt(x+1) is wrongly reported to be a polynomial in x\n"; -               return 1; +               ++result; } -       return 0; +       ex e2 = sqrt(Pi)*x; +       if (!e2.is_polynomial(x)) { +               clog << "sqrt(Pi)*x is wrongly reported to be no polynomial in x\n"; +               ++result; +       } +       return result; } unsigned exam_paranoia() index 553c2d3..42364c0 100644 (file) @@ -260,6 +260,16 @@ bool add::info(unsigned inf) const return inherited::info(inf); } +bool add::is_polynomial(const ex & var) const +{ +       for (epvector::const_iterator i=seq.begin(); i!=seq.end(); ++i) { +               if (!(i->rest).is_polynomial(var)) { +                       return false; +               } +       } +       return true; +} + int add::degree(const ex & s) const { int deg = std::numeric_limits<int>::min(); index 7d96c8c..4d5bd0a 100644 (file) @@ -47,6 +47,7 @@ public: public: unsigned precedence() const {return 40;} bool info(unsigned inf) const; +       bool is_polynomial(const ex & var) const; int degree(const ex & s) const; int ldegree(const ex & s) const; ex 
coeff(const ex & s, int n=1) const; index 88d986e..2649b4b 100644 (file) @@ -375,28 +375,6 @@ ex expairseq::conjugate() const return result; } -bool expairseq::is_polynomial(const ex & var) const -{ -       if (is_exactly_a<add>(*this)) { -               for (epvector::const_iterator i=seq.begin(); i!=seq.end(); ++i) { -                       if (!(i->rest).is_polynomial(var)) { -                               return false; -                       } -               } -       } -       else if (is_exactly_a<mul>(*this)) { -               for (epvector::const_iterator i=seq.begin(); i!=seq.end(); ++i) { -                       if (!(i->rest).is_polynomial(var) || !(i->coeff.info(info_flags::integer))) { -                               return false; -                       } -               } -       } -       else { -               return basic::is_polynomial(var); -       } -       return true; -} - bool expairseq::match(const ex & pattern, exmap & repl_lst) const { // This differs from basic::match() because we want "a+b+c+d" to index 7e8d551..44d6e26 100644 (file) @@ -87,7 +87,6 @@ public: bool match(const ex & pattern, exmap& repl_lst) const; ex subs(const exmap & m, unsigned options = 0) const; ex conjugate() const; -       bool is_polynomial(const ex & var) const; void archive(archive_node& n) const; void read_archive(const archive_node& n, lst& syms); @@ -347,6 +347,17 @@ bool mul::info(unsigned inf) const return inherited::info(inf); } +bool mul::is_polynomial(const ex & var) const +{ +       for (epvector::const_iterator i=seq.begin(); i!=seq.end(); ++i) { +               if (!i->rest.is_polynomial(var) || +                   (i->rest.has(var) && !i->coeff.info(info_flags::integer))) { +                       return false; +               } +       } +       return true; +} + int mul::degree(const ex & s) const { // Sum up degrees of factors index 65f59bd..d28b627 100644 (file) @@ -49,6 +49,7 @@ public: public: unsigned precedence() const {return 50;} bool info(unsigned inf) const; +       bool 
is_polynomial(const ex & var) const; int degree(const ex & s) const; int ldegree(const ex & s) const; ex coeff(const ex & s, int n = 1) const;
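Paraphrasing the rule the new mul::is_polynomial encodes, outside of C++: every factor's base must itself be a polynomial in the variable, and a factor that actually contains the variable must carry an integer exponent. A rough Python model of that predicate (the names and the flag-based factor representation are mine, not GiNaC's):

```python
from fractions import Fraction

def product_is_polynomial(factors):
    # Each factor is modelled as (base_is_polynomial, base_contains_var, exponent);
    # the two flags stand in for the recursive rest.is_polynomial(var) and
    # rest.has(var) calls made by the C++ code in the diff above.
    for base_is_polynomial, base_contains_var, exponent in factors:
        if not base_is_polynomial:
            return False  # e.g. a base like sqrt(x+1)
        if base_contains_var and Fraction(exponent).denominator != 1:
            return False  # non-integer power of something containing the var
    return True

# sqrt(Pi)*x: factors Pi^(1/2) (var-free) and x^1 -> polynomial in x
print(product_is_polynomial([(True, False, Fraction(1, 2)), (True, True, 1)]))  # True
# sqrt(x*x+1): the base contains x and the power is 1/2 -> not a polynomial
print(product_is_polynomial([(True, True, Fraction(1, 2))]))  # False
```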
1,544
4,142
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.75
3
CC-MAIN-2019-35
longest
en
0.093607
http://bankersdaily.in/seating-arrangement-for-sbi-po-set-16/
1,631,919,746,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00167.warc.gz
5,965,536
35,118
## Seating Arrangement For SBI PO : Set –  16 1-5) Study the following information and answer these questions. A, B, C, D, E, F, G and H are sitting around a circular table facing the centre. No two males and no two females are immediate neighbours of each other, and they like some colours Pink, Yellow, Black, White, Violet, Green, Golden, Silver, but not necessarily in the same order. A is the wife of H and likes pink colour. A sits third to the left of E, and E does not like white or Violet. F sits second to the right of D. D does not like Green colour and is not an immediate neighbour of either A or E. H and C are immediate neighbours of each other; C likes white colour and H likes Violet colour. The person who likes White colour is an immediate neighbour of the person who likes Green colour. F is not an immediate neighbour of his wife B and likes silver colour, and his wife does not like black colour. D’s choice is Yellow. 1) Which of the following groups consists of only female members? a) CBGA b) BAGD c) BDAC d) DCBG e) None of these 2) Four of the following five are alike in a certain way and so form a group. Which is the one that does not belong to that group? a) H b) F c) E d) G e) D 3) How many people sit between B and F when counted in anticlockwise direction from B? a) One b) Two c) Three d) Four e) None of these 4) Who sits third to the left of B? a) F b) H c) D d) A e) None of these 5) Which of the following is true about G? a) G is male b) G sits exactly between F and H c) G sits third to the left of E d) G sits second to the right of B e) None of these Answers: 1) e 2) d 3) b 4) b 5) d 6-10) Study the following information carefully and answer the questions given below: Eight students, P, Q, R, S, T, U, V and W, are sitting around a rectangular table in such a way that two students sit on each of the four sides of the table facing the centre. Students sitting on opposite sides are exactly opposite to each other. S faces North and sits exactly opposite W. 
T is on the immediate left of W. P and V sit on the same side. V is exactly opposite Q, who is on the immediate right of R. P is next to the left of S. 6) Which of the following statements is definitely true? a) P is facing north. b) T is sitting opposite U. c) U is to the left of V. d) R is to the left of P. e) None of these. 7) Who is sitting opposite T? a) S b) P c) U d) P or S e) None of these 8) Which of the following pairs of students has both the students sitting on the same side, with the first student sitting to the right of the second student? a) SU b) RQ c) UR d) PV e) None of these 9) Who is next to T in anticlockwise direction? a) V b) Q c) U d) P or U e) None of these 10) Who is sitting opposite P? a) V b) S c) T d) P e) None of these Answers: 6) b 7) c 8) d 9) c 10) e
796
2,828
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2021-39
latest
en
0.950929
https://betechie.in/tag/jee-advanced/
1,618,822,385,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00500.warc.gz
241,130,260
7,083
## Preparation strategies for JEE Advanced 2021 The question format of JEE exams is getting more difficult year by year. Here’s how students can prepare for the upcoming examination: ## JEE 2021: All About the New Exam Pattern and Marking Scheme With JEE 2021 already upon us, the new pattern and marking scheme is what every engineering aspirant is worried about. Here’s a detailed know-how on the same: ## Last-Minute Maths Revision Tips for JEE 2021 With engineering entrance exams fast approaching, here are a few Mathematics tips to help students ace JEE Mains 2021: In addition to effective planning, continuous practice and efficient time management are what help aspirants successfully crack the JEE Advanced examination. Here are a few ways to optimise your preparation to excel and achieve the best seat in B.Tech Admissions 2021. Physics 1) General: Units and dimensions, dimensional analysis; least count, significant figures; Methods of measurement and error analysis for physical quantities pertaining to the following experiments: Experiments based on using Vernier calipers and screw gauge (micrometer), Determination of g using simple pendulum, Young’s modulus by Searle’s method, Specific heat of a liquid using... ## JEE - Syllabus The syllabus for JEE Main and JEE Advanced is the same. Mathematics. Integral Calculus: Area Under Curves, Definite Integration, Differential Equation, Indefinite Integration. Algebra: Binomial Theorem, Complex Numbers, Mathematical Induction, Matrices and Determinants, Permutation and Combination, Progressions, Set Theory and Relations, Probability, Theory of Equation. Coordinate Geometry: Circle, Ellipse, Hyperbola, Parabola, Point and Straight Line, Vector. Differential Calculus: Continuity and Differentiability, Differential Coefficient, Differentiation and Application of Derivatives, Functions, Limits. Trigonometry: Inverse Trigonometric Functions, Trigonometrical Equations...
423
2,054
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2021-17
latest
en
0.81999
https://www.urionlinejudge.com.br/judge/en/profile/132128
1,611,230,071,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703524743.61/warc/CC-MAIN-20210121101406-20210121131406-00312.warc.gz
1,058,263,209
5,809
# PROFILE Check out all the problems this user has already solved. Problem Problem Name Ranking Submission Language Runtime Submission Date 1190 Right Area 01643º 5677098 C 0.000 11/18/16, 8:17:57 AM 1189 Left Area 01477º 5677097 C 0.000 11/18/16, 8:17:41 AM 1188 Inferior Area 01576º 5677096 C 0.000 11/18/16, 8:17:28 AM 1187 Top Area 01914º 5677095 C 0.000 11/18/16, 8:17:12 AM 1186 Below the Secundary Diagonal 01928º 5677094 C 0.000 11/18/16, 8:16:49 AM 1182 Column in Array 02617º 5677093 C 0.000 11/18/16, 8:16:12 AM 1181 Line in Array 03195º 5677091 C 0.000 11/18/16, 8:15:58 AM 1180 Lowest Number and Position 03844º 5677089 C 0.000 11/18/16, 8:15:40 AM 1179 Array Fill IV 02103º 5677088 C 0.000 11/18/16, 8:15:28 AM 1178 Array Fill III 03289º 5677086 C 0.000 11/18/16, 8:14:52 AM 1177 Array Fill II 03335º 5677085 C 0.000 11/18/16, 8:14:31 AM 1183 Above the Main Diagonal 02461º 5677084 C 0.000 11/18/16, 8:14:16 AM 1184 Below the Main Diagonal 01821º 5677081 C 0.000 11/18/16, 8:14:00 AM 1185 Above the Secundary Diagonal 01728º 5677080 C 0.000 11/18/16, 8:13:38 AM 1176 Fibonacci Array 02772º 5677078 C 0.000 11/18/16, 8:12:58 AM 1175 Array change I 03866º 5677077 C 0.000 11/18/16, 8:12:42 AM 1174 Array Selection I 04182º 5677075 C 0.000 11/18/16, 8:12:20 AM 1173 Array fill I 04618º 5677073 C 0.000 11/18/16, 8:11:53 AM 1172 Array Replacement I 04752º 5677072 C 0.000 11/18/16, 8:11:35 AM 1165 Prime Number 01029º 5677071 C 0.000 11/18/16, 8:11:21 AM 1164 Perfect Number 02234º 5677070 C 0.000 11/18/16, 8:11:05 AM 1160 Population Increase 05094º 5677068 C 0.004 11/18/16, 8:10:24 AM 1159 Sum of Consecutive Even... 02314º 5677067 C 0.000 11/18/16, 8:10:08 AM 1158 Sum of Consecutive Odd... 
02235º 5677066 C 0.000 11/18/16, 8:09:54 AM 1157 Divisors I 02844º 5677063 C 0.000 11/18/16, 8:09:37 AM 1156 S Sequence II 02083º 5677059 C 0.000 11/18/16, 8:08:15 AM 1155 S Sequence 02383º 5677057 C 0.000 11/18/16, 8:08:02 AM 1154 Ages 02826º 5677055 C 0.000 11/18/16, 8:07:41 AM 1153 Simple Factorial 03703º 5626318 C 0.000 11/10/16, 3:08:43 PM 1151 Easy Fibonacci 03513º 5626315 C 0.000 11/10/16, 3:08:29 PM
1,027
2,124
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2021-04
latest
en
0.297477
https://www.slideshare.net/smacdermaid/quadrilaterals-5623479
1,532,148,303,000,000,000
text/html
crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00087.warc.gz
987,455,544
35,037
# Quadrilaterals 8,481 views Published on Lesson 8-7 Holt Math Course 2 Published in: Technology, Business ### Quadrilaterals 1. Classifying Quadrilaterals Math 2 Lowell Middle School Mrs. MacDermaid 2. Quadrilateral • A quadrilateral is a four-sided polygon. 3. Special Quadrilaterals • Parallelogram • Rectangle • Rhombus • Square • Trapezoid 4. Parallelogram • Both pairs of opposite sides are parallel and congruent. Both pairs of opposite angles are congruent. 5. Rectangle • Parallelogram with four right angles. 6. Rhombus • Parallelogram with four congruent sides. 7. Square • Parallelogram with four congruent sides and four right angles. 8. Trapezoid • Exactly one pair of opposite sides is parallel. 9. Classifying Quadrilaterals • Quadrilaterals can have more than one name because the special quadrilaterals sometimes share properties. • The best name is the one that is the most specific. 10. How the Quadrilaterals are Related (Venn Diagram) Parallelograms Squares Rectangles Rhombuses Quadrilaterals Trapezoids 11. How the Quadrilaterals are Related (Concept Map) Parallelograms Squares Rectangles Rhombuses Quadrilaterals Trapezoids Other 12. Try This! • Give all the names that apply to each quadrilateral. Then give the name that best describes it. 13. Try This! • Give all the names that apply to each quadrilateral. Then give the name that best describes it. • The figure has opposite sides that are parallel, so it is a parallelogram. • The figure has four right angles, so it is a rectangle. • Rectangle is more specific, so it is the best name. 14. Try This! 
• Give all the names that apply to each quadrilateral. Then give the name that best describes it. 15. Try This! • Give all the names that apply to each quadrilateral. Then give the name that best describes it. • The figure has exactly one pair of opposite sides that is parallel, so it is a trapezoid. • Trapezoid is the name that best describes this quadrilateral. 16. Clip Art Credits • Slide #2 Microsoft Office Clip Art Gallery http://office.microsoft.com/en-us/images/results.aspx?qu=geometric%20shapes#ai:MC900048066| http://office.microsoft.com/en-us/images/results.aspx?qu=geometric%20shapes#ai:MC900048064| http://office.microsoft.com/en-us/images/results.aspx?qu=geometric%20shapes#ai:MC900048068| http://office.microsoft.com/en-us/images/results.aspx?qu=geometric%20shapes#ai:MC900048065 • All other Slides Original line drawings by Sharon MacDermaid
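The "most specific name wins" rule from the slides can be sketched as a toy function; the property flags used here are hypothetical simplifications, not part of the original lesson:

```python
def best_name(parallel_pairs, right_angles, congruent_sides):
    # Pick the most specific name that the given properties support,
    # mirroring the classification hierarchy on the slides.
    if parallel_pairs == 2:  # both pairs of opposite sides parallel
        if right_angles == 4 and congruent_sides == 4:
            return "square"
        if right_angles == 4:
            return "rectangle"
        if congruent_sides == 4:
            return "rhombus"
        return "parallelogram"
    if parallel_pairs == 1:  # exactly one pair of parallel sides
        return "trapezoid"
    return "quadrilateral"

print(best_name(2, 4, 4))  # → square
print(best_name(1, 0, 0))  # → trapezoid
```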
721
2,836
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2018-30
longest
en
0.770572
https://socratic.org/questions/how-do-you-convert-73-mi-h-km-s#632841
1,669,707,828,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00636.warc.gz
550,456,864
6,046
# How do you convert 73 mi/h = ____km/s? Jun 20, 2018 $3.26 \cdot 10^{-2}\ \text{km/s}$. #### Explanation: Well, we know that $\text{1 mile = 1.609 km}$ and $\text{1 hour = 3600 seconds}$ So, we have, $\text{73 miles/hr} = \frac{73 \cdot 1.609}{3600}\ \text{km/s}$ $= \text{0.0326 km/s}$ $= 3.26 \cdot 10^{-2}\ \text{km/s}$
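The arithmetic above can be re-checked with the same constants (1 mile = 1.609 km, 1 hour = 3600 seconds):

```python
# Convert 73 mi/h to km/s using the conversion factors from the answer.
mph = 73
km_per_s = mph * 1.609 / 3600
print(round(km_per_s, 4))  # → 0.0326
```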
149
352
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2022-49
longest
en
0.404072
https://thekidsworksheet.com/phase-change-worksheet-pdf-answers/
1,669,542,808,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00163.warc.gz
600,028,503
22,015
# Phase Change Worksheet Pdf Answers Phase changes and latent heat how much energy does it take to boil water. Use the graph to answer the following questions 1 23. ### Phase changes occur because of the energy of molecular motion. Phase change worksheet pdf answers. Phase diagram worksheet. Dilutions worksheet name from phase change worksheet answers source. Combining hydrogen and oxygen to make water is a physical change. 1 how many joules are required to heat 250 grams of liquid water from 0. Why does the temperature of h 2 o not increase when it is boiling. In a physical change the makeup of matter is changed. As heat is added to a solid the molecules break out of their bonds and begin to move freely causing. At point a the beginning of observations the substance exists in a solid state. What is latent heat. Use the graph to answer the following questions. If possible discuss your theory with your classmates and teacher. Material in this phase has volume and shape. Phase change worksheet the graph was drawn from data collected as a substance was heated at a constant rate. Attached is a list of needed values to solve problems 1. Multiple answers needed for this question 22 if i had a quantity of this substance at a pressure of 2 00 atm and a temperature of 1500 c what phase change s would occur if i decreased the pressure to 0 25 atm. Part i phase changes note. Based on what you have observed explain why you think phase changes occur. Evaporation is a physical change. The graph was drawn from data collected as a substance was heated at a constant rate. Heat with phase change worksheet answer sheet. Material in this phase has volume and shape. Burning wood is a physical change. Evaporation occurs when liquid water changes into a gas. Use the graph to answer the following questions. 
At point a the beginning of observations the substance exists in a solid state. Pre nursing entrance exam teas exam may 2014 from phase change worksheet answers source. Phase change worksheet name date period the graph was drawn from data collected as a substance was heated at a constant rate. Use the graph to answer the following questions. At point a the beginning of observations the substance exists in a solid state. Some can be used more than once gas temperature infinite slower liquid melt vaporizing heat solid definite faster cool move energy the graph was drawn from data collected as a substance was heated at a constant rate. Changing the size and shapes of pieces of wood would be a chemical change. Material in this phase has. Phase change worksheet word bank. Explain your answer by drawing a heating cooling curve for water. Chem 16 2 le answer key j4 feb 4 2011 from phase change worksheet answers source.
898
4,735
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2022-49
latest
en
0.932353