url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://quizlet.com/explanations/questions/a-bar-having-a-square-cross-section-of-30-mm-by-30-mm-is-2-m-long-and-is-held-upward-if-it-has-a-mas-2a350496-00ff-4802-8e04-4887e625611b
|
#### Question
A bar having a square cross section of 30 mm by 30 mm is 2 m long and is held upward. If it has a mass of 5 kg/m, determine the largest angle θ, measured from the vertical, at which it can be supported before it is subjected to a tensile stress along its axis near the grip.
Verified
#### Step 1
1 of 7
We have the following data:
An inclined bar with angle $\theta$ with respect to the y-axis
$\bullet$ Cross-section of the bar Dimensions: 30 mm $\times$ 30 mm $\bullet$ Length of the bar $L$ = 2 m $\bullet$ Mass per unit length $m$ = 5 $\frac{\text{kg}}{\text{m}}$
Required: $\Rightarrow$ Largest angle $\theta$ without developing any tensile stress
## Create an account to view solutions
2,117 solutions
#### Physics for Scientists and Engineers: A Strategic Approach with Modern Physics
4th EditionRandall D. Knight
3,509 solutions
#### Mechanics of Materials
10th EditionR.C. Hibbeler
1,723 solutions
|
2022-12-09 00:25:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6233807802200317, "perplexity": 1289.7356717191922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711368.1/warc/CC-MAIN-20221208215156-20221209005156-00841.warc.gz"}
|
https://www.sencha.com/forum/archive/index.php/t-182096.html
|
PDA
View Full Version : "sencha slice theme" throwing undefined errors and halting on 4.1?
gav
20 Feb 2012, 3:12 PM
Preamble: I've done custom themes in 4.0.1/2 just fine with sliced images using Tools SDK 1.2.2, everything worked swimmingly, so I'm familiar with the process.
I was unable to run sencha slice theme on my somewhat involved app that we've recently ported to 4.1, so I created a completely stub app using the Ext MVC architecture example ( http://www.sencha.com/learn/architecting-your-app-in-ext-js-4-part-1 and http://www.sencha.com/learn/architecting-your-app-in-ext-js-4-part-2 ).
index.html:
<html>
<link rel="stylesheet" href="resources/css/my-ext-theme.css" type="text/css">
<script type="text/javascript" src="ext-4.1/ext-all-debug.js"></script>
<script type="text/javascript" src="app/Application.js"></script>
<body>
</body>
</html>
app/Application.js:
Ext.application({
name: 'Panda',
autoCreateViewport: true,
launch: function() {
}
});
And a very simple Viewport class with all the components removed from that same example:
Ext.define('Panda.view.Viewport', {
extend: 'Ext.container.Viewport',
requires: [],
layout: 'fit',
initComponent: function() {
this.items = {
xtype: 'panel',
dockedItems: [{
dock: 'top',
xtype: 'toolbar',
height: 80,
items: [{
xtype: 'component',
}]
}]
};
this.callParent();
}
});
I followed the directions here for theme slicing: http://www.sencha.com/learn/theming/ .
I can do the CSS generation fine, all succeeds there, but then I try running the "slice theme" and get the following:
$/Applications/SenchaSDKTools-1.2.2/command/sencha slice theme -d ext-4.1 -c resources/css/my-ext-theme.css -o resources/images/ -v Sencha Theme Generator Copyright (c) 2011 Sencha Inc. TypeError: 'undefined' is not a function Line: 0 Source: undefined TypeError: 'undefined' is not a function Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Observable Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Observable Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.data.Model Line: 0 Source: undefined This happens whether I use SDK Tools 1.2.2 or SDK Tools 2.0 Preview. Am I doing anything wrong, or is there a bug here? I can bundle the app if needed, but 99% of everything needed should be pasted above. mitchellsimoens 21 Feb 2012, 5:14 AM It's saying Ext.Loader isn't enabled gav 21 Feb 2012, 5:29 AM I see the message, but look at the first line of app/Application.js, that I pasted above: Ext.Loader.setConfig({enabled: true}); That's the best way I know how to enable it. Does it need to be placed somewhere else as well? westy 23 Feb 2012, 6:43 AM I'm getting the same errors, and some of the comments here confuse me. I was under the impression that slicing has nothing to do with any html or an application as such. How can it from the command it's given? e.g.: sencha slice theme -d ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3 -c css\Altus.css -o ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3\resources\themes\images\altus-blue -v All you give it is built CSS file, it has no knowledge of an application... Anyway, my errors are: C:\path\to\app>sencha slice theme -d ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3 -c css\Altus .css -o ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3\resources\themes\images\altus-blue -v Sencha Theme Generator Copyright (c) 2011 Sencha Inc. TypeError: 'undefined' is not a function Line: 0 Source: undefined TypeError: 'undefined' is not a function Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Obser vable Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Obser vable Line: 0 Source: undefined TypeError: 'undefined' is not an object Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.data.Model Line: 0 Source: undefined At that point it hangs, doesn't error, doesn't do anything more. This needs to work for 4.1 to be evaluated, and quite frankly I'm surprised it hasn't been sorted since I reported the same issue 6 weeks ago with 4.1.0-beta-1. Cheers, Westy westy 23 Feb 2012, 9:02 AM Just tried with SDK tools 2.0.0 beta and similar problem: C:\Altus\repos\Altus\web\resources>sencha slice theme -d ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3 -c css\Altus .css -o ..\..\..\ThirdPartyReferences\Ext\ext-4.1.0-beta-3\resources\themes\images\altus-blue -v Sencha Theme Generator Copyright (c) 2011 Sencha Inc. TypeError: 'undefined' is not a function Line: 0 Source: undefined TypeError: 'undefined' is not a function Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Obser vable Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Obser vable Line: 0 Source: undefined TypeError: 'undefined' is not an object Line: 0 Source: undefined Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.data.Model Line: 0 Source: undefined Then hangs. zombeerose 23 Feb 2012, 9:10 AM I'm getting the exact same barrage of errors from running SDK 2.0 Beta against Ext 4.1 Beta 2 and Beta 3. If I revert my ext directory to version 4.0.7, everything works perfectly. Here is my command: sencha.bat slice theme -d htdocs\includes\library\extjs\ext -c htdocs\includes\library\extjs\portal\resources\css\theme.css -o htdocs\includes\library\extjs\ext\resources\themes\images\custom -v farion 24 Feb 2012, 1:58 AM I have exactly the same problem (Mac OSX). Even if I use the original extjs css. #$ sencha slice theme -d extjs/ -c extjs/resources/css/ext-all.css -o resources/images/ -v
Sencha Theme Generator
Copyright (c) 2011 Sencha Inc.
TypeError: 'undefined' is not a function
Line: 0
Source: undefined
TypeError: 'undefined' is not a function
Line: 0
Source: undefined
Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Observable
Line: 0
Source: undefined
Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.util.Observable
Line: 0
Source: undefined
Error: Ext.Loader is not enabled, so dependencies cannot be resolved dynamically. Missing required class: Ext.data.Model
Line: 0
Source: undefined
Edit: I just upgraded to ExtJS 4.1 Beta3 and the Developer Tools 2.0 Beta, but the bug still exists.
dougi
24 Feb 2012, 6:34 AM
same problem on windows xp
dougi
24 Feb 2012, 7:04 AM
i add a manifest file under the resources folder named manifest.js and references it through the options -m . It removed some errors but not all.
-- error messages --
TypeError: 'undefined' is not a function
Line: 0
Source: undefined
TypeError: 'undefined' is not a function
Line: 0
-- manifest.js --
/// add my customs components themes
Ext.manifest = {
widgets: [
{
xtype: 'widget.window',
ui : 'custom'
}
]
};
});
gav
25 Feb 2012, 4:51 AM
dougi: Does that stop the process blocking/never finishing for you, or does it still not finish even with the manifest? I still haven't found a solution to this, making it impossible to move forward, as we use custom themes that have to work on IE8.
westy
25 Feb 2012, 7:30 AM
I still haven't found a solution to this, making it impossible to move forward, as we use custom themes that have to work on IE8.
That's why I'm so surprised that no-one's jumped on this.
IE7 and 8 support is not optional, it's an absolute requirement for 90% of developers using Ext surely. It's been nearly two months now that it's not been possible to split a theme and still little interest in fixing it it seems...
Whilst we'd all love to target modern browsers it's simply not realistic out here in the real world.
gav
25 Feb 2012, 7:45 AM
I could get away without IE7 here, but unfortunately IE8 is a requirement. However, this bug's an all or nothing one. :)
zombeerose
25 Feb 2012, 11:45 AM
@gav
I run two slice commands - one for the theme & one with a manifest. Both freeze and yield no results.
I have submitted this thread as a support ticket so it hopefully gets more attention.
dougi
28 Feb 2012, 8:25 AM
Sorry for the delay.
it still not finish even with the manifest. it freezes.
gav
17 Mar 2012, 7:28 AM
Has Ext 4.1 RC1 resolved this for any of you? It hasn't helped here.
youss.imzourh
19 Mar 2012, 1:51 PM
Nope, it still not working. Damn it!
Hope on the next release!
westy
21 Mar 2012, 3:13 AM
Just to add to this, it seems that creating JSB3 files is broken in the 2.0.0 beta version of the SDK too.
That and it was horrendous to get working due to path issues or something.
To spell it out, again, without an updated, working, stable SDK it is impossible to move to 4.1!
dougi
22 Mar 2012, 2:10 AM
i have got the same errors with 4.1 RC1 on windows XP. The process never ends.
It's a major problem and for what i know it 's the same problem for macos and window users. So it's currently a dead end.
So i ask to the Sencha team. Can we expect the problem to be fixed with the final release.
It's really a bad timing because i'm trying to convince my boss to buy Extjs.
Thanks.
mitchellsimoens
22 Mar 2012, 4:09 AM
Slicer has been fixed for the next release! It just got fixed earlier this week so it should be good to go! The fix hasn't been merged in as it's going through what seems to be a rigorous QA process as we are ensuring we have pixel perfectness but it's at least creating the images now.
gav
22 Mar 2012, 4:12 AM
Exciting to hear, Mitchell! Is there a planned beta release for the next version of the SDK Tools that includes this fix, or just it's done when it's done?
mitchellsimoens
22 Mar 2012, 4:15 AM
Exciting to hear, Mitchell! Is there a planned beta release for the next version of the SDK Tools that includes this fix, or just it's done when it's done?
There isn't anything planned but if I were a betting man we would want it working for 4.1.0 GA that is support to come very soon.
dougi
22 Mar 2012, 4:38 AM
Good news!
zombeerose
3 Apr 2012, 8:35 AM
Any more news about the slicer? Is the next release of the SDK still dependent upon the Ext GA release?
mitchellsimoens
3 Apr 2012, 8:55 AM
I haven't tested it myself but there is a new SDK Tools version out. As I said, I have not tested and I haven't heard (haven't asked) if it is part of that release.
zombeerose
3 Apr 2012, 12:01 PM
So far my own tests with ExtJS 4.1 have not been favorable. No errors, no feedback, it just runs and exits with code 0.
I suspect it is not ready based on what Jacky mentions: "Please note that this release is only meant to be used with Sencha Touch SDK v2.0.1-RC (http://www.sencha.com/forum/showthread.php?192166-Sencha-Touch-2.0.1-RC-Now-Available) and later."
mitchellsimoens
3 Apr 2012, 12:09 PM
Ok, just got off a call with an engineering manager and I asked about slicer and what the status is.
So we just released SDK Tools 2 beta 2 but the slicer fix was not included in that. We just released Ext JS 4.1.0 RC2 (http://www.sencha.com/blog/ext-js-4-1-rc-2-released) a few minutes ago and GA will be the next release. Our goal is to have the SDK Tools 2 with slicer working when GA comes out in a few weeks. Currently we don't see any issues preventing this but of course you never know what will creep it's ugly head up.
zombeerose
3 Apr 2012, 2:14 PM
Ok - thank you for the update. Since our client base is primarily IE :((, the slicer is a requirement before we can deploy.
Off to play with RC2 ... :)
youss.imzourh
11 May 2012, 4:20 AM
That's working like a charm now with ExtJS 4.1.0 final and Sencha 2.0.0 Beta 3 !
Brilliant ! Thanks.
|
2015-08-31 06:50:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25230905413627625, "perplexity": 6966.111979075342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065828.38/warc/CC-MAIN-20150827025425-00282-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:05915416&type=pdf&format=complete
|
# zbMATH — the first resource for mathematics
Proofs of power sum and binomial coefficient congruences via Pascal’s identity. (English) Zbl 1230.05014
Summary: A well-known and frequently cited congruence for power sums is $1^n+ 2^n+\cdots+ p^n\equiv\begin{cases} -1\pmod p\;&\text{if }(p-1)\mid n,\\ 0\pmod p\;&\text{if }(p-1)\nmid n,\end{cases}$ where $$n\geq 1$$ and $$p$$ is prime. We survey the main ingredients in several known proofs. Then we give an elementary proof, using an identity for power sums proven by B. Pascal in the year 1654. An application is a simple proof of a congruence for certain sums of binomial coefficients, due to Ch. Hermite [J. Reine Angew. Math. 81, 93–95 (1875; JFM 07.0131.01)] and P. Bachmann [Niedere Zahlentheorie. Zweiter Teil, Teubner, Leipzig (1910; JFM 41.0221.10) (p. 53); Reprint. Bronx, N. Y.: Chelsea (1968; Zbl 0253.10001)].
##### MSC:
11A07 Congruences; primitive roots; residue systems 11B65 Binomial coefficients; factorials; $$q$$-identities 05A10 Factorials, binomial coefficients, combinatorial functions 05A19 Combinatorial identities, bijective combinatorics
Full Text:
|
2021-08-04 10:26:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540333271026611, "perplexity": 4650.359513769293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00402.warc.gz"}
|
https://www.physicsforums.com/threads/tensor-indices-proving-lorentz-covariance.812462/
|
# Tensor indices (proving Lorentz covariance)
Tags:
1. May 6, 2015
### VintageGuy
1. The problem statement, all variables and given/known data
So, I need to show Lorentz covariance of a Proca field E-L equation, conceptually I have no problems with this, I just have to make one final step that I cannot really justify.
2. Relevant equations
"Proca" (quotation marks because of the minus next to the mass part, I saw on the internet there is also the plus convention) field is defined as:
$${\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}m^2V_{\mu}V^{\mu}$$
where $V_{\mu}$ is the massive field, and $F_{\mu\nu}$ the appropriate analogy to the EM field tensor. This leads to E-L:
$$\partial^{\mu}F_{\mu\nu}-m^2V_{\nu}=0$$
3. The attempt at a solution
So when I transform the equation according to: $V^{\mu}(x) \rightarrow V'^{\mu}(x')=\Lambda^{\mu}_{\,\, \nu}V^{\nu}(x)$, everything turns out okay but this one part that looks like: $-\partial^{\mu}\Lambda_{\nu}^{\,\, \alpha}\partial_{\alpha}V_{\mu}(x)$, and fr the proof to be over I need it to look like:
$$-\partial^{\mu}\Lambda_{\nu}^{\,\, \alpha}\partial_{\alpha}V_{\mu}(x)=-\partial^{\mu}\partial_{\nu}V'_{\mu}(x')$$
and I can't seem to wrap my head around it, there must me something I'm not seeing...
EDIT: initialy I transformed the derivatives as well, these are derivatives of the field over the "old" coordinates (x not x')
2. May 6, 2015
### Orodruin
Staff Emeritus
You should keep doing that. Otherwise your equations are expressed in some weird combination of frames.
3. May 6, 2015
### VintageGuy
I just figured it out, for some reason I was approaching the equation as though it was the Lagrangian density... Thanks, solved.
|
2017-12-14 21:47:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8637291789054871, "perplexity": 588.7827873905884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550986.47/warc/CC-MAIN-20171214202730-20171214222730-00468.warc.gz"}
|
https://www.physicsforums.com/threads/friction-of-an-object-on-a-moving-board.599122/
|
Homework Help: Friction of an object on a moving board
1. Apr 22, 2012
grusini
1. The problem statement, all variables and given/known data
A body of mass $m_A=2 kg$ is placed on a long board of mass $m_B=8 kg$ at distance $d=1 m$ from the rear edge of the board. The friction coefficient between the body and the board is $μ=0.2$. A force of magnitude $30 N$ is applied to the front edge of the board and the body start moving towards the rear edge. How much time will it take to fall off the board?
2. Relevant equations
The force of friction is given by $F_f=F_n\cdot μ$ where F_n is the normal force exerted by the object on the surface.
3. The attempt at a solution
I tried to write down Newton's equation of motion (on the x-axis) for the body and the board as follows:
Board: $F=(m_A+m_B)a_1$
Body: $F\frac{m_A}{m_A+m_B}-F_f=m_A a_2$ where $F_f=m_Agμ$.
With these equations the problems doesn't come out right...
2. Apr 22, 2012
Staff: Mentor
What forces act on the body? Apply Newton's 2nd law.
What forces act on the board? Apply Newton's 2nd law.
(Don't treat 'board + body' as a single system, since parts are in relative motion.)
3. Apr 22, 2012
grusini
On the board: $F$ (and the weight of $A$ which is equilibrated by the board itself).
On the body: The force exerted by the board on the body, directed along the direction of $F$ and of a "certain" magnitude and the friction force.
Last edited by a moderator: Apr 22, 2012
4. Apr 22, 2012
Staff: Mentor
All we care about are the horizontal forces, since vertical forces will cancel.
You are missing the horizontal force of the body on the board. (Newton's 3rd law.)
The only horizontal force on the body is the friction force from the board.
5. Apr 22, 2012
grusini
So the body exerts a horizontal force $F_f$ on the board in the opposite direction of $F$? Then Newton's 2nd law for the board is
$F-F_f=m_Ba_1$
and the Newton's law for the body would be:
$F_f=m_Aa_2$?
6. Apr 22, 2012
Staff: Mentor
Right. And you also know how to calculate the friction force.
|
2018-09-25 18:38:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5456739664077759, "perplexity": 482.85354439164496}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.83/warc/CC-MAIN-20180925182856-20180925203256-00453.warc.gz"}
|
https://www.hankcs.com/nlp/application-of-dependency-parsing-in-deep-learning.html
|
依存句法分析在深度学习中的应用
DCNNs
线性拼接
\label{eq:seq_con}
\widetilde{ \bf x}_{i,j} = {\bf x}_i \oplus {\bf x}_{i+1}\oplus \cdots \oplus {\bf x}_{i+j}
DCNNs (Dependency-based CNN) (Ma et al. (2015))做了2种简单的改进,即基于路径和基于兄弟节点,如下图所示:
GCN
Graph Convolutional Networks
GCN是一种编码图数据的网络结构,给定一张$$n$$个节点的有向图,可以将其表示为邻接矩阵$$\bf A$$,其中$$A_{ij}=1$$表示存在从$$i$$$$j$$的边。在$$L$$层的GCN中,记第$$l$$层节点$$i$$的输入为$$h_i^{(l-1)}$$,输出为$$h_i^{(l)}$$。那么,图卷积操作定义如下: \begin{align} h_i^{(l)} = \sigma\big( \sum_{j=1}^n A_{ij} W^{(l)}{h}_j^{(l-1)} + b^{(l)} \big), \label{eqn:conv} \end{align}
$$A_{ij}$$只在邻接节点处等于$$1$$,所以图卷积实际上是让节点从邻接节点处获取总结性的信息。
soft pruning
其中,$$Q$$$$K$$ 都是上一层的表示 $$\mathbf{h}^{(l-1)}$$。用这些$$\mathbf{\tilde{A}}$$代替$$\mathbf{{A}}$$就可实现soft pruning了。这个机制的效果如下所示:
References
Rui Cai, Xiaodong Zhang, and Houfeng Wang. 2016. Bidirectional recurrent convolutional neural network for relation classification. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 1: Long papers), pages 756–765, Berlin, Germany, August. Association for Computational Linguistics.
Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 241–251, Florence, Italy, July. Association for Computational Linguistics.
Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 2: Short papers), pages 285–290, Beijing, China, July. Association for Computational Linguistics.
Mingbo Ma, Liang Huang, Bowen Zhou, and Bing Xiang. 2015. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 2: Short papers), pages 174–179, Beijing, China, July. Association for Computational Linguistics.
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers), pages 1556–1566, Beijing, China, July. Association for Computational Linguistics.
Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1785–1794, Lisbon, Portugal, September. Association for Computational Linguistics.
|
2020-06-04 05:08:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104869365692139, "perplexity": 10682.427302695489}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00392.warc.gz"}
|
https://leanprover-community.github.io/archive/stream/113488-general/topic/apply.20induction_lemma.2C.20clear.20n.html
|
## Stream: general
### Topic: apply induction_lemma, clear n
#### Kevin Buzzard (Nov 11 2020 at 19:44):
Reviewing a PR and I see apply nat.strong_induction_on n, clear n,. This reminded me of some code I was writing last week which had the line apply span_induction hx; clear hx x,. This is perhaps quite a common idiom? The reason one wants to clear the variable is because one is about to embark on a proof or proofs of results related to the original goal and ones instinct is to use the same variable names again -- for example I want to prove results about elements of submodules and I want to use x and y for these elements, so it's annoying if I don't clear the original x which is now useless to me. Is this style OK or is it worth modding some induction tactic to make this happen automatically? I'm not entirely sure what I want here, but given that I've now seen this twice in one week it made me wonder.
#### Mario Carneiro (Nov 11 2020 at 20:29):
I think ssreflect has something for this, a la have {h1} h2 := foo h1 which is like have h2 := foo h1, clear h1 except it works even if you reuse the variable names
#### Mario Carneiro (Nov 11 2020 at 20:33):
I don't see how we can offer something better than ; clear though
#### Mario Carneiro (Nov 11 2020 at 20:33):
in the examples, apply span_induction hx clearing hx x would be no better
#### Shing Tak Lam (Nov 11 2020 at 20:41):
fwiw induction n using nat.strong_induction_on with n hn works (does the same as apply nat.strong_induction_on n, clear n, intros n hn)
#### Kevin Buzzard (Nov 11 2020 at 21:42):
Yes, I realised when I was posting that I couldn't really see a solution which would be saving too many characters!
Last updated: May 15 2021 at 23:13 UTC
|
2021-05-16 00:04:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34017854928970337, "perplexity": 1807.5228948574372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00494.warc.gz"}
|
https://math.stackexchange.com/questions/3177721/determining-whether-a-relation-is-an-equivalence-relation/3177732
|
# Determining whether a relation is an equivalence relation
Define a relation $$R$$ on the set of functions from $$R$$ to $$R$$ as follows:
$$(f,g) \in R \text{ if and only if } f(x) − g(x) \geq 0 \text{ for all } x \in R$$
Is this relation reflexive? Symmetric? Transitive? Is it an equivalence relation? Explain.
So far I have that the relations is reflexive because $$f(x)-f(x) \geq 0$$, which is true.
But I'm not quite sure if the relation is symmetric or transitive as I am not quite familiar.
• Welcome to MSE. $R$ is not symmetric, because $(f,g)\in R \not\implies (g,f)\in R$, so $R$ is not an equivalence relation – J. W. Tanner Apr 7 '19 at 3:46
• Does the condition $f(x)-g(x)\ge0$ look "symmetric" in the functions $f$ and $g$? – Angina Seng Apr 7 '19 at 3:48
• I don't quite understand why it is not symmetric. However, is it true that the relation is reflexive and transitive? – ph-quiett Apr 7 '19 at 3:51
• For example, "$<$" is not symmetric on real numbers because $a<b$ does not imply $b<a$ – J. W. Tanner Apr 7 '19 at 4:00
• @ ph-quiett Consider $x ^ 2 + 1$ and $x ^ 2$ to disprove the symmetry – Minz Apr 7 '19 at 4:02
Reflexive $$f(x)-f(x)\geq 0 \forall x\in \mathbb{R}$$ Yes it is reflexive.
Transitive $$f(x)-g(x)\geq 0 \forall x\in \mathbb{R}$$ $$g(x)-h(x)\geq 0 \forall x\in \mathbb{R}$$ Add above equations, $$\Longrightarrow f(x)-h(x)\geq 0 \forall x\in \mathbb{R}$$ Yes it is transitive.
$$f(x)-g(x)\geq 0 \forall x\in \mathbb{R}$$ $$g(x)-f(x)\leq 0 \forall x\in \mathbb{R}$$ Hence, $$(f,g)\in R$$ & $$(g,f)\in R$$ iff $$g=f$$ Hence, this relation is not symmetric.
|
2021-06-21 03:39:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850405216217041, "perplexity": 364.45209106415837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00339.warc.gz"}
|
https://scicomp.stackexchange.com/questions/18683/how-to-solve-energy-balance-equation-by-numerical-method
|
# How to solve Energy Balance equation by numerical method
Good Day
I am new to heat transfer technique please give me some suggestion on solving energy balance equation
$$a \frac{\partial T_p}{\partial t}=\frac{\partial}{\partial x}\left(b\frac{\partial T_p}{\partial x}\right)+\frac{\partial}{\partial z}\left(c\frac{\partial T_p}{\partial z}\right)+b$$
which is discretized as
$$a\left(\frac{T_p-T_{p_0}}{\Delta t}\right)=c\left(\frac{T_u-T_p}{\Delta X_u \Delta X}\right)+d\left(\frac{T_D-T_p}{\Delta X_d \Delta X}\right)+e\left(\frac{T_w-T_p}{\Delta Z_w \Delta Z}\right)+f\left(\frac{T_E-T_p}{\Delta Z_E \Delta Z}\right)+b$$
$b=S_c+S_pT_p$
Where $T_p$ is the temperature of solar panel $a,c,d,e,f$ are the constant value.
I am not getting how to proceed with this equation. Please suggest some hints at solutions.
• You need to read an introductory textbook about finite difference methods. The second equation you show above is the equation the temperature at every grid point has to satisfy. This leads to a linear system that you can then solve for the temperature of the next time step. – Wolfgang Bangerth Jan 18 '15 at 15:39
• Can you suggest me some reference book/article for this sir. – Ambaresh Jan 18 '15 at 16:19
• Any book on finite difference methods will do, given that you are missing one of the very first steps of what to do to solve these equations. Go to your library and see what they have. – Wolfgang Bangerth Jan 18 '15 at 18:33
## 1 Answer
The problem with giving you the next step in this solution is that it is very easy to naively create a Finite Difference Scheme and have it become unstable. The things (without going into the math too much) which matter in this regard are your time step, grid size and how quick your variables are changing.
In order to have your scheme be stable, it is often preferred to use an implicit time integration. This means that each iteration in your loop (looping over the time of interest), this Tp0 value will be your b vector in the prototypical Ax=b linear system. To form your A matrix you will need to read more about difference schemes. I would defer you to googling about "5 point stencils" for 2D FD Approximations. You'll find a full list of equations.
Honestly, I do not understand why the discretized equation below the balance law looks as it does. To me, this looks like a slightly retooled Unsteady Heat Conduction equation in 2D. If that is the case, that is usually not the typical way to discretize a second order differential equation with differences. (Though I do not know what Tu, Td, Tw, or Te are.
|
2020-08-04 22:22:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44738999009132385, "perplexity": 337.57295019746186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735885.72/warc/CC-MAIN-20200804220455-20200805010455-00180.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-8-8-factoring-by-grouping-practice-and-problem-solving-exercises-page-520/23
|
## Algebra 1
$(w^{3}+6w)(3w-2)$
$3w^{4}=3.w.w.w.w$ and $2w^{3}=2.w.w.w$ Hence, $GCF=$$w.w.w=w^{3} 18w^{2}=2.3.3.w.w and 12w=2.2.3.w Hence, GCF=$$2.3.w=6w$ After grouping : $=(3w^{4}-2w^{3})+(18w^{2}-12w)$ $=w^{3}(3w-2)+6w(3w-2)$ $=(w^{3}+6w)(3w-2)$ $=(w)(w^{2}+6)(3w-2)$
|
2018-05-25 17:19:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9766823053359985, "perplexity": 13183.784346896364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867140.87/warc/CC-MAIN-20180525160652-20180525180652-00181.warc.gz"}
|
http://www.scientificlib.com/en/Mathematics/LX/BareissAlgorithm.html
|
Hellenica World
# .
In mathematics, the Bareiss algorithm, named after Erwin Bareiss, is an algorithm to calculate the determinant or the echelon form of a matrix with integer entries using only integer arithmetic; any divisions that are performed are guaranteed to be exact (there is no remainder). The method can also be used to compute the determinant of matrices with (approximated) real entries, avoiding the introduction any round-off errors beyond those already present in the input.
During the execution of Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix. This allows, using Hadamard inequality, to bound the size of these integers. Otherwise, Bareiss algorithm may be viewed as a variant of Gaussian elimination and needs roughly the same number of arithmetic operations.
It follows that, for an n × n matrix of maximum (absolute) value 2L for each entry, the Bareiss algorithm runs in O(n3) elementary operations with an O(n n/2 2nL) bound on the absolute value of intermediate values needed. Its computational complexity is thus O(n5L2 (log(n)2 + L2)) when using elementary arithmetic or O(n4L (log(n) + L) log(log(n) + L))) by using fast multiplication.
The general Bareiss algorithm is distinct from the Bareiss algorithm for Toeplitz matrices.
References
Bareiss, Erwin H. (1968), "Sylvester's Identity and multistep integer-preserving Gaussian elimination" (PDF), Mathematics of Computation 22 (102): 565–578, doi:10.2307/2004533.
Mathematics Encyclopedia
|
2020-11-30 08:36:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255598187446594, "perplexity": 1221.4071290561242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141211510.56/warc/CC-MAIN-20201130065516-20201130095516-00229.warc.gz"}
|
http://www.businesswalk360.com/dl00ff8/4e4bd7-which-of-the-following-is-a-microeconomic-question%3F
|
Donna Benedicto The Main Event, Ffxiv Macro If Statement, How To Cancel Idfc Fastag, Toto Africa Violin Sheet Music, Pxzvc Bad Idea Ft Shiloh Dynasty, " />
# which of the following is a microeconomic question?
a. Answer: 2 question Which of the following is a microeconomic question? c. What are the total production levels in the economy? a. b) is a microeconomic question, so the answer is b. Government budget C. Car industry D. Foreign exchange rate - 16240677 What price to charge for an automobile b. c. D) All of the above are microeconomic … The answer cannot be determined from the information given. Unemployment Was 6.8 Percent Of The Labor Force Last Year. … A. general price level B. c. How can we best encourage economic growth? 10) A) Why Do Economies Experience Periods Of High Inflation? Which of the following is a microeconomic decision? Question: 10) Which Of The Following Is A Microeconomics Question? Ask your question. It is more concerned with the individual than the nation. Which 2 of the following are microeconomic problems? © copyright 2003-2020 Study.com. The demand for a certain product is given by p... A monopoly sells its good in the United States,... Show that the two utility function below generate... You are the manager of a firm that receives... Eric lives in San Diego and loves to eat desserts.... For parts (a) - (c) below, write down a... Find an article in a recent newspaper or magazine... Luke's utility is U(x1,x2) = 2x1 + \ln x2. 2. Log in. The dollar sales to attain that target profit is closest to. Will a new type of electronic reader or tablet increase the number of buyers? Which of the following is a microeconomic question? You can specify conditions of storing and accessing cookies in your browser. Microeconomic Theory 1 • Basic analytical framework of modern economics: − Economic environments: Number of agents, individuals’ characteristics (preference, technology, endowment), information structures, institutional economic environments ... which results in the following F.O.C.’s If the price of this good falls from P 1 to P 2 , then consumer surplus will _____ by areas _____. C. The U.S. Economy Is Experiencing High Fiscal And Trade Deficits. The … Think of an organization with which you are familiar and share two aspects that ar What is the overall price level in the econom… Show more Which of the following is a microeconomic question? Unlimited wants exceed limited resources. Explanation: The other questions are macroeconomic question. Economists understand that people respond to: answer. What are the variables that determine the price of a specific good? â¦, bit balance in allowance for doubtful accounts. What should the Federal government do to reduce the trade deficit with Japan? e company's monthly target profit is $28,000. c. Gross Domestic Product. Which among the following statement is INCORRECT? Create your account. b. A stock paying$5 in annual dividends currently sells for $80 and has an expected return of 14%. answer. â¦. What are the variables that determine the price of a specific good? The overall welfare (happiness) of a country. â¦, e YTM on these bonds is 6.5 percent, what is the current bond price, Change management failures can be traced to many different aspects. Join now. 1. Middle School. â¦, year from now after the next dividend has been paid, Candle Stix estimates that 2% of the$100,000 balance in accounts receivable is uncollectible. What should be done differently. How can we reduce the unemployment rate among Hispanic men? answer! a. a general rise in interest rates b. a drop in inflation c. an increase in total production in the United States d. a drop in the nation's unemployment rate e. an increase in the price of the Ford Taurus What might investors expect to pay for the stock one Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. B. D. â¦, tion only covering the cost of one project. Click hereto get an answer to your question ️ Give two examples of microeconomic studies. Flag question Question text When making a decision "at the margin," you will NOT consider undertaking more of an activity if the Marginal Benefit from it is __ the Marginal Cost. Question: Which of the following industries is most prone to the occurrence of natural monopolies? Log in. A family decides to drive its child to ... D. All of these are examples of microeconomic issues. The production possibilities curve bows outward from the origin because: opportunity costs increase as the production of a good increases. Correct Answer: An increase in spending by foreigners on the country's exports. a. The following question refers to the diagram below, which illustrates an individual’s demand curve for a good. Sqeekers Co. issued 10-year bonds a year ago at a coupon rate of 8.2 percent. D. The general price level increased by 4 percent last year. A subject matter of microeconomic issues 16240677 Examsbook.com is an ultimate one-stop haven of knowledge,... Motivated by cost and benefit considerations its child to... d. all of the which of the following is a microeconomic question? is a microeconomic?! Of plumbers lead to higher airline ticket prices question, so the answer can not be determined from information... Subject matter of microeconomic 1 level Increased by 2.5 percent last year 's exports to year-end adjusting entries, is... Of scarce or limited resources, ordering, and presentation of content is as! Price level in the entire economy shortage of plumbers lead to higher airline ticket prices bonds a year at... Your Degree, get access to this video and Our entire Q & a library of coffee question 4Incorrect 0.00... We have a constitutional amendment to balance the Federal government do to reduce the unemployment rate among Hispanic men that. Bringing Tuition-Free College to the occurrence of natural monopolies the government prevent the merger of two large firms all the! ) is a $400 de â¦, bit balance in allowance for doubtful accounts Examsbook.com. Affect the economy the following is a macroeconomic issue rather than a microeconomic issue? a production possibilities bows! Welfare ( happiness ) of a country 's domestic economy microeconomics is the study of at... A coupon rate of 8.2 percent refer to an individual company or bussines, while macroeconomics to. Examsbook.Com is an ultimate one-stop haven of knowledge constitutional amendment to balance the Federal budget company or,. The government prevent the merger of two large firms macroeconomics focuses on issues that affect the economy College the! Profit is$ 720,720 and th ⦠mark 0.00 out of 1: opportunity costs increase as the of! A rise in the economy from the origin because: opportunity costs increase as which of the following is a microeconomic question? of! Is the overall price level in the economy as a whole, rather the individual to reduce the unemployment among! How can we reduce the trade deficit with Japan b ) Why do Economies Experience of. The Law of demand in economics balance the Federal government do to reduce the unemployment rate among Hispanic men is. Are... what is the study of economics at the which of the following is a microeconomic question? level ) a ) Should have! Are mainly motivated by cost and benefit considerations: an increase in spending by on., while macroeconomics refers to economics variables of a country 's exports five forms of elasticity can be depicted experts! Type of electronic reader or tablet increase the number of buyers Transferable &! Payments and have a par value of the Labor Force last year to the.... Scarce or limited resources airlines likely lead to higher airline ticket prices of knowledge electric, phone, cable tend! The Recession, we have all that you need to know to crack it that the! The Community the above you can specify conditions of storing and accessing cookies in your browser value! Ultimate one-stop haven of knowledge ) will Federal Reserve Intervention Lower the Inflation remain! Value of the Mexican peso rise in the entire economy High Fiscal and trade Deficits question: ). Is 63 % building improvements coming in second, we have a par value of 1,000! 6.8 percent of the following is a microeconomic Statement tend to turn into natural monopolies most easily $and! Of natural monopolies most easily End of the following is a microeconomic topic answer! For 'Which of the following questions is a. a microeconomic question, so answer! Crack it and accessing cookies in your browser a microeconomics question happiness ) of a microeconomic Statement prevent merger! Hereto get an answer to your question Which one of the following is a phenomenon... Will a rise in the economy$ 28,000 $1,000 result in a decrease in economic growth be the. Th Identify Which of the following industries is most prone to the of. Of 14 % economic growth be in the economy Timeline & Importance, Short-Run costs vs, Sources &,... Of coffee question 4Incorrect mark 0.00 out of 1 ) Should we have a amendment! Thanks to everybody for their help: ) a a specific good function is U (,... Affected by the state of the following is primarily a macroeconomic topic:.... Following would have the greatest positive impact on a linear demand curve, the... Scholars® Bringing Tuition-Free College to the building, additional purchases of food, and purchasing a for! By consumers is affected by the state of the which of the following is a microeconomic question? are microeconomic … the answer can be! Microeconomic … the answer is:$ 20 and a cup of coffee question 4Incorrect mark 0.00 out 1!, tion only covering the cost of one project foreign exchange market question mark rise the. Cup of coffee question 4Incorrect mark 0.00 out of 1 video and Our Q. Foreign exchange rate - 16240677 Examsbook.com is an example of a macroeconomic question because considers... Question ️ Give two examples of microeconomic 1 bit balance in allowance for doubtful accounts produces sells! Feedback the correct answer: what are the total production levels in th Which! The merger of two large firms by consumers is affected by the state of the Recession is closest.! What factors are contributing to the rise of unemployment in the economy, Short-Run costs vs annual dividends currently for! And a cup of coffee question 4Incorrect mark 0.00 out of 1 ticket?. 14 % get your Degree, get access to this video and entire. Property of their respective owners the answers to estudyassistant.com Which of the is... Wrong answers - 16240677 Examsbook.com is an example of a country 's exports the correct answer is all. Conversely, macroeconomics is the overall welfare ( happiness ) of a specific good one answer to empty the with. To crack it is $28,000 Q & a library do to reduce the unemployment rate among Hispanic men &! To estudyassistant.com Which of the Mexican peso rise in the economy more Which of the Recession the Law of in! Question Which of the following is a microeconomics question your tough homework study... Following would have the greatest positive impact on a day to day basis by individuals and firms is as. Foreign exchange market question mark Identify Which of the Labor Force last year an... Par value of$ 1,000 with the dona â¦, tion only covering the cost one... Individuals are mainly motivated by cost and benefit considerations: opportunity costs increase the... Ordering, and presentation of content is known as ________ people face problem... 4 percent last year cookies in your browser function is U ( x, y ) =x+4y Which the! For food delivery issue rather than a microeconomic question increase in spending by foreigners on country! Remain relatively stable this year answer your tough homework and study questions the.... Answer is: $20 and a cup of coffee question 4Incorrect mark 0.00 out of 1 is! A good increases this video and Our entire Q & a library, and purchasing a vehicle food... With Japan then consumer surplus will _____ by areas _____ unemployment in the economy as a,. 20 and a cup of coffee question 4Incorrect mark 0.00 out of 1 instructions: you may more! Accessing cookies in your browser Federal government do to reduce the unemployment rate among Hispanic men is primarily macroeconomic! What... Matt 's utility function is U ( x, y ) =x+4y the merger of airlines!: what are the variables that determine the price of a country 's exports accessing cookies in your browser attain... Percent last year, so the answer is: select one: a for correct answers and click empty! Falls from P 1 to P 2, then consumer surplus will _____ by areas _____ their... Constitutional amendment to balance the Federal government do to reduce the unemployment rate among Hispanic men releases new. These are examples of microeconomic issues for the wrong answers? a: a this video and Our entire &. The general price level in the economy your question Which one of the Mexican peso rise in tax result. Monopolies most easily grow faster than others is affected by the state of the iPhone College! Known as ________ of a country to empty the box with a check mark for answers. The … question 13 Which of the Recession semiannual payments and have a par value of the is. The Community is$ 28,000 the study of economics at the personal level tough homework study. Get access to this video and Our entire Q & a library,,! Ratio is 63 % issued 10-year bonds a year ago at a coupon rate 8.2. Elasticity can be depicted ) all of these are examples of microeconomic issues the forms! Of a country individual company or bussines, while macroeconomics refers to economics variables of a good.... Included improvements to the rise of unemployment in the economy Increased Since the of! Or limited resources: a. microeconomic decisions are made on a linear demand curve, all the five of... Building, additional purchases of food, and purchasing a vehicle for food.. Output Increased by 2.5 percent last year are contributing to the building, purchases... Market question mark an answer for 'Which of the macroeconomy of buyers 5 Which! Coupon rate of 8.2 percent question: 5 ) Which of the following industries is prone... Homework and study questions considers how overall spending by consumers is affected the... Working Scholars® Bringing Tuition-Free College to the building, additional purchases of food, and presentation of is. \$ 720,720 and th ⦠the econom… Show more Which of the following would have the positive. Building, additional purchases of food, and purchasing a vehicle for food delivery small businesses and are. ( a ) what... Matt 's utility function is U (,.
|
2021-07-30 23:33:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2767114043235779, "perplexity": 3214.085415171999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00145.warc.gz"}
|
https://stats.meta.stackexchange.com/questions/2079/how-to-increase-reputation-with-very-low-reputation-points
|
# How to increase reputation with very low reputation points?
I just logged into stack exchange for the first time to answer a very basic question about logistic models in R. I found out that I cannot answer the question because I don't have enough reputation points. So, I am curious what are the fastest ways to gain the 50 reputation points needed to answer this basic question about how to get a pseudo $r^2$ for a logistic regression?
The question is: How to calculate pseudo-$R^2$ from R's logistic regression?
The answer is just to use the pscl package with the pR2 command but I am really confused about how to get more points. Any useful tips for increasing reputation are appreciated.
• I cannot reproduce this circumstance: according to the help, you need only the minimum of one point to answer a question. How exactly did you try to answer and what caused that to fail? – whuber Jun 20 '14 at 15:26
• Which apparently I can do on my own questions but not on other people's posts until I have 50 points. – user48729 Jun 20 '14 at 16:04
• Playing around some more with voting up led me to my answer - stats.stackexchange.com/help/whats-reputation – user48729 Jun 20 '14 at 16:12
• I guess that I am just still confused about how to get off of 1 reputation point because a user needs 5 user points to "answer" a question. I guess that the only way is to ask good questions? – user48729 Jun 20 '14 at 16:18
• According to this page you should be able to provide an answer, you just can't provide a comment until 50. At 5 you can participate in meta but you are here already because your question was migrated. I don't see anything or recall anything that required 5 points to answer a question. – Meadowlark Bradsher Jun 20 '14 at 16:28
I think you are confused between answering a question in a comment (50 rep) and answering as the site intends for everyone to be able to do (1 rep). This response was done in the text box below that you see as 'Your Answer' with several formatting options readily on display. This is where you should be able to provide answers even with only 1 rep.
It appears that what you are attempting to do is answer in a comment which is where I had previously answered you. This requires 50 reputation and is generally not supposed to be where an answer that is intended to be final or complete should go.
Providing answers in comments is for partial answers, asking for clarification, interrogatively arriving at answers, providing guesses and other interactions that are not explicitly intended to be fully authoritative.
Welcome to Cross Validated!
• Thank you meadowlark but you need to have 5 reputation points to answer in the "Your Answer" text box. Until you have gained 5 reputation points, you see "You must have at least 5 reputation on Cross Validated to answer a question." Which according to the StatExchange email that I received a few minutes ago, indicates that you first need to have an upvoted post in order to be able to answer a question. So, would you be willing to upvote this post just to let me answer posts? I am a stats prof and I am extremely amused (and somewhat distracted) by this minor inconvenience. – user48729 Jun 20 '14 at 17:37
• We are sympathetic and really appreciate your bearing with us. We (the community members) have little control over these basic aspects of how the SE system works, but this thread helps me appreciate the wisdom of most of the restrictions. You ran into trouble because you were trying to create material that would have taken considerable intervention on the community's part to clean up. I hope you won't mind spending a few minutes reading over our help center: it will help you decide if this is a good place for you to be and it will help you get up to speed quickly if you decide to participate. – whuber Jun 20 '14 at 17:54
• whuber, I believe (not sure) you'd need to ping @user48729, for calling his attention on someone else's post. – Andre Silva Jun 20 '14 at 18:23
• @whuber I am curious to know what exactly happened. The documentation says he should be able to answer but he was not? I can't see the question he is referring to. Is there a link? – Meadowlark Bradsher Jun 20 '14 at 18:38
• @MeadowlarkBradsher The original post was here: stats.stackexchange.com/questions/82105/… But, I have tried to answer other questions today and have found this note at the bottom of my page "You must have at least 5 reputation on Cross Validated to answer a question." Thanks for upvoting my post and then explaining why it isn't raising reputation points. – user48729 Jun 20 '14 at 18:45
• @MeadowlarkBradsher would it be appropriate to post this to the stats.stackexchange.com site? I guess that I am now confused by the difference between the meta site and the regular site. Thank you for being a good community citizen and pointing out many of these basic things. – user48729 Jun 20 '14 at 18:52
• User48729: (1) according to our help, you need only 1 point to answer a question on the main site, but 5 to answer here on meta. The role of the meta site is explained at stats.stackexchange.com/help/whats-meta. @Andre: One does not need to ping either the thread originator nor the answerer; in fact, an attempt to do so at the beginning of a comment will automatically be deleted! Both those people will always be notified of any comments made. Pinging serves to notify others who might have been part of a comment conversation. – whuber Jun 20 '14 at 19:07
• This answer is a good example of how to make more growth out of smaller value questions. When the answer you provide is compelling then it grows in value over time. When it evokes discussion there is also point growth there. The natural product of these is a rule that for a sufficiently good question (not hard to find here), a great answer can elicit response and value that increases over time. Best of luck. – EngrStudent Jul 3 '14 at 15:40
To clarify (though all the parts of this information is here in various places):
• On the main site, you can normally answer a question with 1 reputation.
• Here on meta you need 5 reputation to answer
• You need 50 reputation to comment in either location.
$\,$
To gain reputation quickly:
• answer questions on the main site. Good answers will generally get upvoted; this is the fast route to reputation. An average answer should typically get you about 30 points or so (but even an experienced answerer with a higher average will get 0 points sometimes). [However, it might take a while to figure out the kinds of answers that tend to be worth more - my average was below the site average for a long while. Other people may be faster on the uptake than I was.]
There are many unanswered questions, so there's plenty of opportunity for points. An even slightly determined person can get 50 points in a session.
• ask good questions on the main site. This is often a little slower, but a very good question gets a lot of attention, and one that attracts good answers gets a lot more. You don't need to be able to answer questions to garner a lot of reputation.
• if you have good reputation on other SE sites and the site knows they belong to the same person (i.e. you register the accounts from the same email), you should get a bonus (this one doesn't look like it applies to you but may apply to other people who read this question).
There's a list of sources of reputation in the help (I know you already found this though - again, I am putting it here in an answer for later readers).
• I guess it also helps to answer the pure statistics questions. At least in my experience answers to econometrics questions attract much less attention and upvotes. – Andy Jun 21 '14 at 12:29
• R related question were good hunting ground, but nowadays the easiest ones are moved to the stackoverflow. Econometrics and time series really do attract much less attention. – mpiktas Jul 2 '14 at 8:21
• Among my top 100 tags, I find simulation (0.33 upvotes per answer, 24 answers) does much worse for me than time-series (2.4 upvotes per answer, 48 answers), in spite of my feeling that simulation is a subject I am a bit more knowledgeable on. However, they're both down the low end (in my top 100 tags my average is roughly twice that for time series.) – Glen_b Jul 2 '14 at 9:30
You can also earn reputation by editing the questions and answers. Your edit will be placed in a queue, but on stats.SE the queues disappear quickly. You can read more about how editing works here: How do suggested edits work?
|
2020-10-25 02:43:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37820079922676086, "perplexity": 658.05839126429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00114.warc.gz"}
|
https://tacoemojishirt.com/1-2-6-24-120/
|
Want to improve this question? add details and clarify the trouble by editing and enhancing this post.
Closed 5 years ago.
You are watching: 1-2-6-24-120
I was playing v No Man"s Sky once I ran into a series of numbers and was asked what the following number would be.
$$1, 2, 6, 24, 120$$
This is for a terminal assess password in the game no man sky. The 3 options they provide are; 720, 620, 180
The next number is $840$. The $n$th hatchet in the sequence is the smallest number v $2^n$ divisors.
Er ... The next number is $6$. The $n$th hatchet is the the very least factorial many of $n$.
No ... Wait ... It"s $45$. The $n$th term is the biggest fourth-power-free divisor the $n!$.
Hold on ... :)
Probably the answer they"re feather for, though, is $6! = 720$. But there are lots of various other justifiable answers!
After some testing I discovered that these numbers space being multiply by their corresponding number in the sequence.
For example:
1 x 2 = 22 x 3 = 66 x 4 = 2424 x 5 = 120Which would typical the next number in the sequence would be
120 x 6 = 720and for this reason on and so forth.
Edit: thanks to
GEdgar in the comments for helping me do pretty cool discovery around these numbers. The totals are additionally made increase of multiplying each number as much as that existing count.
For Example:
2! = 2 x 1 = 23! = 3 x 2 x 1 = 64! = 4 x 3 x 2 x 1 = 245! = 5 x 4 x 3 x 2 x 1 = 1206! = 6 x 5 x 4 x 3 x 2 x 1 = 720
The next number is 720.
The succession is the factorials:
1 2 6 24 120 = 1! 2! 3! 4! 5!
6! = 720.
(Another way to think of it is each term is the term prior to times the following counting number.
See more: Red Dead Redemption 2 Look On My Works, American Dreams
T0 = 1; T1 = T0 * 2 = 2; T2 = T1 * 3 = 6; T3 = T2 * 4 = 24; T4 = T3 * 5 = 120; T5 = T4 * 6 = 720.
$\begingroup$ it's yet done. You re welcome find one more answer , a small bit initial :) perhaps with the amount of the number ? note also that it begins with 1 2 and ends v 120. Perhaps its an opportunity to concatenate and add zeroes. An excellent luck $\endgroup$
## Not the prize you're feather for? Browse other questions tagged sequences-and-series or ask your very own question.
Is there any kind of discernible pattern for the number perform $\frac73, \frac54, 1, \frac89, ...$
site design / logo © 2021 stack Exchange Inc; user contributions license is granted under cc by-sa. Rev2021.10.11.40423
|
2021-10-25 13:16:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25780951976776123, "perplexity": 732.780380271917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00010.warc.gz"}
|
https://amathew.wordpress.com/tag/simplicial-complexes/
|
There is an clever and interesting combinatorial (homology-free) approach to the proof of the well-known Brouwer fixed point theorem. Recall that this theorem states that:
Theorem 1 Any continuous ${f: B^n \rightarrow B^n}$ (for ${B^n}$ the unit ball in euclidean ${n}$-space ${\mathbb{R}^n}$) has a fixed point.
The first idea that suggests that a combinatorial approach might tackle the Brouwer theorem is that the set
$\displaystyle \left\{ f: B^n \rightarrow B^n \ \mathrm{with} \ \mathrm{no} \ \mathrm{fixed \ pt } \right\}$
is open in the set of continuous maps ${B^n \rightarrow B^n}$ (with the uniform topology). So if we can show that any continuous map ${f: B^n \rightarrow B^n}$ can be uniformly approximated by maps that do have fixed points to an arbitrary degree, then it will follow that ${f}$ itself has a fixed point.
Now one way you could take this is to assume that ${f}$ is differentiable. And indeed, there are differential-topological proofs of Brouwer’s theorem. This is not the purpose of the present post, though. We will replace the continuous ball ${B^n}$ with a simplicial complex. (more…)
|
2021-06-16 11:59:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9383233785629272, "perplexity": 125.50905281017747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00380.warc.gz"}
|
https://en.wikipedia.org/wiki/Vincenty%27s_formulae
|
# Vincenty's formulae
Vincenty's formulae are two related iterative methods used in geodesy to calculate the distance between two points on the surface of a spheroid, developed by Thaddeus Vincenty (1975a). They are based on the assumption that the figure of the Earth is an oblate spheroid, and hence are more accurate than methods that assume a spherical Earth, such as great-circle distance.
The first (direct) method computes the location of a point that is a given distance and azimuth (direction) from another point. The second (inverse) method computes the geographical distance and azimuth between two given points. They have been widely used in geodesy because they are accurate to within 0.5 mm (0.020″) on the Earth ellipsoid.
## Background
Vincenty's goal was to express existing algorithms for geodesics on an ellipsoid in a form that minimized the program length (see the first sentence of his paper). His unpublished report (1975b) mentions the use of a Wang 720 desk calculator, which had only a few kilobytes of memory. To obtain good accuracy for long lines, the solution uses the classical solution of Legendre (1806), Bessel (1825), and Helmert (1880) based on the auxiliary sphere. (Vincenty relied on formulation of this method given by Rainsford, 1955.) Legendre showed that an ellipsoidal geodesic can be exactly mapped to a great circle on the auxiliary sphere by mapping the geographic latitude to reduced latitude and setting the azimuth of the great circle equal to that of the geodesic. The longitude on the ellipsoid and the distance along the geodesic are then given in terms of the longitude on the sphere and the arc length along the great circle by simple integrals. Bessel and Helmert gave rapidly converging series for these integrals, which allow the geodesic to be computed with arbitrary accuracy.
In order to minimize the program size, Vincenty took these series, re-expanded them using the first term of each series as the small parameter, and truncated them to ${\displaystyle O(f^{3})}$. This resulted in compact expressions for the longitude and distance integrals. The expressions were put in Horner (or nested) form, since this allows polynomials to be evaluated using only a single temporary register. Finally, simple iterative techniques were used to solve the implicit equations in the direct and inverse methods; even though these are slow (and in the case of the inverse method it sometimes does not converge), they result in the least increase in code size.
## Notation
Define the following notation:
a length of semi-major axis of the ellipsoid (radius at equator); (6378137.0 metres in WGS-84) ƒ flattening of the ellipsoid; (1/298.257223563 in WGS-84) b = (1 − ƒ) a length of semi-minor axis of the ellipsoid (radius at the poles); (6356752.314245 meters in WGS-84) Φ1, Φ2 latitude of the points; U1 = arctan( (1 − ƒ) tan Φ1 ), U2 = arctan( (1 − ƒ) tan Φ2 ) reduced latitude (latitude on the auxiliary sphere) L = L2 − L1 difference in longitude of two points; λ1, λ2 longitude of the points on the auxiliary sphere; α1, α2 forward azimuths at the points; α azimuth at the equator; s ellipsoidal distance between the two points; σ arc length between points on the auxiliary sphere
## Inverse problem
Given the coordinates of the two points (Φ1L1) and (Φ2L2), the inverse problem finds the azimuths α1, α2 and the ellipsoidal distance s.
Calculate U1, U2 and L, and set initial value of λ = L. Then iteratively evaluate the following equations until λ converges:
${\displaystyle \sin \sigma ={\sqrt {(\cos U_{2}\sin \lambda )^{2}+(\cos U_{1}\sin U_{2}-\sin U_{1}\cos U_{2}\cos \lambda )^{2}}}}$
${\displaystyle \cos \sigma =\sin U_{1}\sin U_{2}+\cos U_{1}\cos U_{2}\cos \lambda \,}$
${\displaystyle \sigma =\arctan {\frac {\sin \sigma }{\cos \sigma }}\,}$[1][2]
${\displaystyle \sin \alpha ={\frac {\cos U_{1}\cos U_{2}\sin \lambda }{\sin \sigma }}\,}$[3]
${\displaystyle \cos ^{2}\alpha =1-\sin ^{2}\alpha \,}$
${\displaystyle \cos(2\sigma _{m})=\cos \sigma -{\frac {2\sin U_{1}\sin U_{2}}{\cos ^{2}\alpha }}\,}$[4]
${\displaystyle C={\frac {f}{16}}\cos ^{2}\alpha {\big [}4+f(4-3\cos ^{2}\alpha ){\big ]}\,}$
${\displaystyle \lambda =L+(1-C)f\sin \alpha \left\{\sigma +C\sin \sigma \left[\cos(2\sigma _{m})+C\cos \sigma (-1+2\cos ^{2}(2\sigma _{m}))\right]\right\}\,}$
When λ has converged to the desired degree of accuracy (10−12 corresponds to approximately 0.06mm), evaluate the following:
${\displaystyle u^{2}=\cos ^{2}\alpha \left({\frac {a^{2}-b^{2}}{b^{2}}}\right)}$
${\displaystyle A=1+{\frac {u^{2}}{16384}}\left\{4096+u^{2}\left[-768+u^{2}(320-175u^{2})\right]\right\}}$
${\displaystyle B={\frac {u^{2}}{1024}}\left\{256+u^{2}\left[-128+u^{2}(74-47u^{2})\right]\right\}}$
${\displaystyle \Delta \sigma =B\sin \sigma {\Big \{}\cos(2\sigma _{m})+{\tfrac {1}{4}}B{\big [}\cos \sigma {\big (}-1+2\cos ^{2}(2\sigma _{m}){\big )}-{\tfrac {B}{6}}\cos(2\sigma _{m})(-3+4\sin ^{2}\sigma ){\big (}-3+4\cos ^{2}(2\sigma _{m}){\big )}{\big ]}{\Big \}}}$
${\displaystyle s=bA(\sigma -\Delta \sigma )\,}$
${\displaystyle \alpha _{1}=\arctan \left({\frac {\cos U_{2}\sin \lambda }{\cos U_{1}\sin U_{2}-\sin U_{1}\cos U_{2}\cos \lambda }}\right)}$[2]
${\displaystyle \alpha _{2}=\arctan \left({\frac {\cos U_{1}\sin \lambda }{-\sin U_{1}\cos U_{2}+\cos U_{1}\sin U_{2}\cos \lambda }}\right)}$[2]
Between two nearly antipodal points, the iterative formula may fail to converge; this will occur when the first guess at λ as computed by the equation above is greater than π in absolute value.
## Direct Problem
Given an initial point (Φ1, L1) and initial azimuth, α1, and a distance, s, along the geodesic the problem is to find the end point (Φ2, L2) and azimuth, α2.
Start by calculating the following:
${\displaystyle U_{1}=\arctan \left((1-f)\tan \phi _{1}\right)\,}$
${\displaystyle \sigma _{1}=\arctan \left({\frac {\tan U_{1}}{\cos \alpha _{1}}}\right)\,}$[2]
${\displaystyle \sin \alpha =\cos U_{1}\sin \alpha _{1}\,}$
${\displaystyle \cos ^{2}\alpha =1-\sin ^{2}\alpha \,}$
${\displaystyle u^{2}=\cos ^{2}\alpha \left({\frac {a^{2}-b^{2}}{b^{2}}}\right)\,}$
${\displaystyle A=1+{\frac {u^{2}}{16384}}\left\{4096+u^{2}\left[-768+u^{2}(320-175u^{2})\right]\right\}}$
${\displaystyle B={\frac {u^{2}}{1024}}\left\{256+u^{2}\left[-128+u^{2}(74-47u^{2})\right]\right\}}$
Then, using an initial value ${\displaystyle \sigma ={\tfrac {s}{bA}}}$, iterate the following equations until there is no significant change in σ:
${\displaystyle 2\sigma _{m}=2\sigma _{1}+\sigma \,}$
${\displaystyle \Delta \sigma =B\sin \sigma {\Big \{}\cos(2\sigma _{m})+{\tfrac {1}{4}}B{\big [}\cos \sigma {\big (}-1+2\cos ^{2}(2\sigma _{m}){\big )}-{\tfrac {B}{6}}\cos(2\sigma _{m})(-3+4\sin ^{2}\sigma ){\big (}-3+4\cos ^{2}(2\sigma _{m}){\big )}{\big ]}{\Big \}}}$
${\displaystyle \sigma ={\frac {s}{bA}}+\Delta \sigma \,}$
Once σ is obtained to sufficient accuracy evaluate:
${\displaystyle \phi _{2}=\arctan \left({\frac {\sin U_{1}\cos \sigma +\cos U_{1}\sin \sigma \cos \alpha _{1}}{(1-f){\sqrt {\sin ^{2}\alpha +(\sin U_{1}\sin \sigma -\cos U_{1}\cos \sigma \cos \alpha _{1})^{2}}}}}\right)\,}$[2]
${\displaystyle \lambda =\arctan \left({\frac {\sin \sigma \sin \alpha _{1}}{\cos U_{1}\cos \sigma -\sin U_{1}\sin \sigma \cos \alpha _{1}}}\right)\,}$[2]
${\displaystyle C={\frac {f}{16}}\cos ^{2}\alpha {\big [}4+f(4-3\cos ^{2}\alpha ){\big ]}\,}$
${\displaystyle L=\lambda -(1-C)f\sin \alpha \left\{\sigma +C\sin \sigma \left[\cos(2\sigma _{m})+C\cos \sigma (-1+2\cos ^{2}(2\sigma _{m}))\right]\right\}\,}$
${\displaystyle L_{2}=L+L_{1}\,}$
${\displaystyle \alpha _{2}=\arctan \left({\frac {\sin \alpha }{-\sin U_{1}\sin \sigma +\cos U_{1}\cos \sigma \cos \alpha _{1}}}\right)\,}$[2]
If the initial point is at the North or South pole, then the first equation is indeterminate. If the initial azimuth is due East or West, then the second equation is indeterminate. If a double valued atan2 type function is used, then these values are usually handled correctly.
## Vincenty's modification
In his letter to Survey Review in 1976, Vincenty suggested replacing his series expressions for A and B with simpler formulas using Helmert's expansion parameter k1:
${\displaystyle A={\frac {1+{\frac {1}{4}}(k_{1})^{2}}{1-k_{1}}}}$
${\displaystyle B=k_{1}(1-{\tfrac {3}{8}}(k_{1})^{2})}$
where ${\displaystyle k_{1}={\frac {{\sqrt {1+u^{2}}}-1}{{\sqrt {1+u^{2}}}+1}}}$
## Nearly antipodal points
As noted above, the iterative solution to the inverse problem fails to converge or converges slowly for nearly antipodal points. An example of slow convergence is (Φ1L1) = (0°, 0°) and (Φ2L2) = (0.5°, 179.5°) for the WGS84 ellipsoid. This requires about 130 iterations to give a result accurate to 1 mm. Depending on how the inverse method is implemented, the algorithm might return the correct result (19936288.579 m), an incorrect result, or an error indicator. An example of an incorrect result is provided by the NGS online utility, which returns a distance that is about 5 km too long. Vincenty suggested a method of accelerating the convergence in such cases (Rapp, 1973).
An example of a failure of the inverse method to converge is (Φ1L1) = (0°, 0°) and (Φ2L2) = (0.5°, 179.7°) for the WGS84 ellipsoid. In an unpublished report, Vincenty (1975b) gave an alternative iterative scheme to handle such cases. This converges to the correct result 19944127.421 m after about 60 iterations; however, in other cases many thousands of iterations are required.
Newton's method has been successfully used to give rapid convergence for all pairs of input points (Karney, 2013).
## Notes
1. ^ σ isn't evaluated directly from sin σ or cos σ to preserve numerical accuracy near the poles and equator
2. The arctan quantity should be evaluated using a two argument atan2 type function.
3. ^ If sin σ = 0 the value of sin α is indeterminate. It represents an end point equal to, or diametrically opposite the start point.
4. ^ The start and end point are on the equator. In this case, C = 0 so the value of ${\displaystyle \cos(2\sigma _{m})}$ is not used. The limiting value is ${\displaystyle \cos(2\sigma _{m})=-1}$.
## References
• Bessel, Friedrich W. (2010). "The calculation of longitude and latitude from geodesic measurements (1825)". Astron. Nachr. 331 (8): 852–861. arXiv:0908.1824. Bibcode:2010AN....331..852K. doi:10.1002/asna.201011352. English translation of Astron. Nachr. 4, 241–254 (1825).
• Helmert, Friedrich R. (1964). Mathematical and Physical Theories of Higher Geodesy, Part 1 (1880). St. Louis: Aeronautical Chart and Information Center. Retrieved 2011-07-30. English translation of Die Mathematischen und Physikalischen Theorieen der Höheren Geodäsie, Vol. 1 (Teubner, Leipzig, 1880).
• Karney, Charles F. F. (January 2013). "Algorithms for geodesics". Journal of Geodesy. 87 (1): 43–55. arXiv:1109.4448. Bibcode:2013JGeod..87...43K. doi:10.1007/s00190-012-0578-z. Addenda.
• Legendre, Adrien-Marie (1806). "Analyse des triangles tracės sur la surface d'un sphėroïde". Mém. de l'Inst. Nat. de France (1st sem.): 130–161. Retrieved 2011-07-30.
• Rainsford, H. F. (1955). "Long geodesics on the ellipsoid". Bulletin géodésique. 37: 12–22. Bibcode:1955BGeod..29...12R. doi:10.1007/BF02527187.
• Rapp, Ricahrd H. (March 1993). Geometric Geodesy, Part II (Technical report). Ohio State University. Retrieved 2011-08-01.
• Vincenty, Thaddeus (April 1975a). "Direct and Inverse Solutions of Geodesics on the Ellipsoid with application of nested equations" (PDF). Survey Review. XXIII (176): 88–93. doi:10.1179/sre.1975.23.176.88. Retrieved 2009-07-11. In selecting a formula for the solution of geodesics it is of primary importance to consider the length of the program, that is the amount of core which it will occupy in the computer along with trigonometric and other required functions.
• Vincenty, Thaddeus (August 1975b). Geodetic inverse solution between antipodal points (PDF) (Technical report). DMAAC Geodetic Survey Squadron. doi:10.5281/zenodo.32999.
• Vincenty, Thaddeus (April 1976). "Correspondence". Survey Review. XXIII (180): 294.
• Geocentric Datum of Australia (GDA) Reference Manual (PDF). Intergovernmental committee on survey and mapping (ICSM). February 2006. ISBN 0-9579951-0-5. Retrieved 2009-07-11.
|
2019-03-23 10:16:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 38, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8542709350585938, "perplexity": 1894.85771896601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202781.83/warc/CC-MAIN-20190323101107-20190323123107-00135.warc.gz"}
|
https://www.codecogs.com/library/computing/stl/numerics/valarray.php
|
I have forgotten
• https://me.yahoo.com
# Valarray
Is like a vector, but is optimized to get good performance for the processing of value arrays
View version details
### Key Facts
Gyroscopic Couple: The rate of change of angular momentum () = (In the limit).
• = Moment of Inertia.
• = Angular velocity
• = Angular velocity of precession.
Blaise Pascal (1623-1662) was a French mathematician, physicist, inventor, writer and Catholic philosopher.
Leonhard Euler (1707-1783) was a pioneering Swiss mathematician and physicist.
## Definition
The valarray template is defined in the standard header <valarray>, and in the nonstandard backward-compatibility header <valarray.h>.
namespace std {
template <class T>
class valarray;
}
## Description
A valarray is a representation of the mathematical concept of a linear sequence of values.
It is like a vector but is designed for high speed numerics at the expense of some programming ease and general purpose use.
A simple example of creating a valarray is:
std::valarray<int> val1(10); // valarray of ten ints with value 0
std::valarray<float> val2(8.3, 10); // valarray of ten floats with value 8.3
## Performance
Valarray has many features that make it ideally suited for use with vector processors in traditional vector supercomputers and SIMD units in consumer-level scalar processors, and also ease vector mathematics programming even in scalar computers.
## Valarray Operations
Create, Copy and Destroy Operations
Operation Effect
valarray() Default constructor, creates an empty valarray
valarray(size_t n) Creates a valarray that contains n elements
valarray(const T& val,size_t n) Creates a valarray with n elements initialized by val
valarray(const T* a, size_t n) Creates a valarray with n elements initialized by the values of the elements in array a
valarray(const valarray& va) Copy constructor
~valarray() Destroys all elements and frees the memory
Assignment Operations
Operation Effect
operator =(const valarray& va) Assigns the elements of the valarray va
operator =(const T& value) Assigns value to each element of the valarray
Member Functions
Operation Effect
size() const Returns the actual number of elements
resize(size_t n) Change the size of the valarray to n
min() const Returns the minimum value of all elements
max() const Returns the maximum value of all elements
sum() const Returns the sum of all elements
shift(int n) const Returns a new valarray in which all elements are shifted by n positions
cshift(int n) const Returns a new valarray in which all elements are shifted cyclically by n positions
apply(T op (T)) const Returns a new valarray with all elements processed by op()
Element Access
Operation Effect
operator [ ](size_t idx) Return the valarray element that has index idx
Transcendental Functions
Operation Effect
abs Absolute value of valarray elements
acos Arc cosine of valarray elements
asin Arc sine of valarray elements
atan Arc tangent of valarray elements
atan2 Atan2 of valarray elements
cos Cosine of valarray elements
cosh Hyperbolic cosine of valarray elements
exp Exponential of valarray elements
log Natural logarithm of valarray elements
log10 Common logarithm of valarray elements
pow Power of valarray elements
sin Sine of valarray elements
sinh Hyperbolic sine of valarray elements
sqrt Square root of valarray elements
tan Tangent of valarray elements
tanh Hyperbolic tangent of valarray elements
### References:
• Nicolai M. Josuttis: "The C++ Standard Library"
Example:
##### Example - Valarray
Problem
The following program illustrates a simple use of valarrays.
Workings
#include <iostream>
#include <valarray>
using namespace std;
// print valarray
template <class T>
void printValarray (const valarray<T>& va)
{
for (int i=0; i<va.size(); i++)
{
cout << va[i] << ' ';
}
cout << endl;
}
int main()
{
// define two valarrays with ten elements
valarray<double> va1(10), va2(10);
// assign values 0.0, 1.1, up to 9.9 to the first valarray
for (int i=0; i<10; i++)
{
va1[i] = i * 1.1;
}
// assign -1 to all elements of the second valarray
va2 = -1;
// print both valarrays
printValarray(va1);
printValarray(va2);
// print minimum, maximum, and sum of the first valarray
cout << "min(): " << val.min() << endl;
cout << "max(): " << val.max() << endl;
cout << "sum(): " << val.sum() << endl;
// assign values of the first to the second valarray
va2 = va1;
// remove all elements of the first valarray
va1.resize (0);
// print both valarrays again
printValarray(va1);
printValarray(va2);
}
Solution
Output:
0 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1
min():0
max(): 9.9
sum(): 49.5
0 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9
|
2018-12-11 06:47:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38231924176216125, "perplexity": 12192.998818130938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823588.0/warc/CC-MAIN-20181211061718-20181211083218-00392.warc.gz"}
|
http://digitalhaunt.net/Florida/compute-the-sum-of-squares-error.html
|
Offering a wide range of services from system repair to major network installations, we strive to provide you with the best turnaround in the area. All labor is guaranteed and performed by certified technicians. We keep a large inventory of systems and parts in stock. We are authorized resellers for most major manufacturers and work directly with their distributors to get you the best price and selection. Our expertise is not limited to just computers. We can assist with all your electronic needs including multimedia systems, video surveillance, digital signage, point of sale, gaming systems, and smart phone repairs.
Commercial Services
Address 4320 S Hopkins Ave, Titusville, FL 32780 (321) 264-4871 http://www.novatechcomputers.com
# compute the sum-of-squares error Cape Canaveral, Florida
And then the mean of group 3, 5 plus 6 plus 7 is 18 divided by 3 is 6. This will determine the distance for each of cell i's variables (v) from each of the mean vectors variable (xvx) and add it to the same for cell j. The total sum of squares = treatment sum of squares (SST) + sum of squares of the residual error (SSE) The treatment sum of squares is the variation attributed to, or Magpakita nang higit pa Wika: Filipino Lokasyon ng content: Pilipinas Restricted Mode: Naka-off Kasaysayan Tulong Naglo-load...
So let's calculate the grand means. David Hays 18,083 (na) panonood 6:17 Total Sum of Squares - Tagal: 4:01. The first step in constructing the test statistic is to calculate the error sum of squares. Contents 1 One explanatory variable 2 Matrix expression for the OLS residual sum of squares 3 See also 4 References One explanatory variable In a model with a single explanatory variable,
MrNystrom 30,072 (na) panonood 14:10 Standard Error of the Estimate used in Regression Analysis (Mean Square Error) - Tagal: 3:41. So we have m groups here and each group here has n members. At the 3rd stage cells 7 & 15 are joined together with a SSE of 0.549566. That is: $SS(E)=SS(TO)-SS(T)$ Okay, so now do you remember that part about wanting to break down the total variationSS(TO) into a component due to the treatment SS(T) and a component due
And, I'm not gonna prove things rigorously here but I want you to show, I wanna show you where some of these strange formulas that show up in statistics would actually And let me show you in a second that it's the same thing as the mean of the means of each of these data sets. So degrees of freedom, we remember, you have this many, however many data points you have minus 1 degrees of freedom. This is just for the first stage because all other SSE's are going to be 0 and the SSE at stage 1 = equation 7.
Can the adjusted sums of squares be less than, equal to, or greater than the sequential sums of squares? But either way now that we've calculated it we can actually figure out the total sum of squares. SSE is a measure of sampling error. Sometimes, the factor is a treatment, and therefore the row heading is instead labeled as Treatment.
The Sums of Squares In essence, we now know that we want to break down the TOTAL variation in the data into two components: (1) a component that is due to Because we want to compare the "average" variability between the groups to the "average" variability within the groups, we take the ratio of the BetweenMean Sum of Squares to the Error rows or columns)). This is actually the same as saying equation 5 divided by 2 to give: 7.
These numbers are the quantities that are assembled in the ANOVA table that was shown previously. Residual sum of squares From Wikipedia, the free encyclopedia Jump to: navigation, search This It is the unique portion of SS Regression explained by a factor, given all other factors in the model, regardless of the order they were entered into the model. So our total sum of squares And actually if we wanted the variance here we would divide this by the degrees of freedom. Converting the sum of squares into mean squares by dividing by the degrees of freedom lets you compare these ratios and determine whether there is a significant difference due to detergent.
And then 5 plus 6 plus seven is 18. Remarks The time series is homogeneous or equally spaced. That is, the error degrees of freedom is 14−2 = 12. The means of each of the variables is the new cluster center.
So up here this first is gonna be equal to, 3 minus 4 the difference is 1, you square it, you're gonna get, er, it's actually a negative 1, you square In the learning study, the factor is the learning method. (2) DF means "the degrees of freedom in the source." (3) SS means "the sum of squares due to the source." Or if you want to talk in terms of general, you want to talk in general, there are m times n, so that is total number of samples, minus 1 degrees Choose Calc > Calculator and enter the expression: SSQ (C1) Store the results in C2 to see the sum of the squares, uncorrected.
Continuing in the example; at stage 2 cells 8 &17 are joined because they are the next closest giving an SSE of 0.458942. Your email Submit RELATED ARTICLES Find the Error Sum of Squares when Constructing the Test… Business Statistics For Dummies How Businesses Use Regression Analysis Statistics Explore Hypothesis Testing in Business Statistics Where dk.ij = the new distance between clusters, ci,j,k = the number of cells in cluster i, j or k; dki = the distance between cluster k and i at the Well the first thing we got to do is we have to figure out the mean of all of this stuff over here.
The mean lifetime of the Electrica batteries in this sample is 2.3. So it's going to be equal to: 3 minus 4, the 4 is this 4 right over here, squared plus 2 minus 4 squared plus 1 minus 4 squared, now I'll Y is the forecasted time series data (a one dimensional array of cells (e.g. At the 4th stage something different happens.
First we compute the total (sum) for each treatment. \begin{eqnarray} T_1 & = & 6.9 + 5.4 + \ldots + 4.0 = 26.7 \\ & & \\ T_2 & = Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Squared Euclidean distance is the same equation, just without the squaring on the left hand side: 5. So what's this going to be equal to.
That is, the number of the data points in a group depends on the group i. Similarly, you find the mean of column 2 (the Readyforever batteries) as And column 3 (the Voltagenow batteries) as The next step is to subtract the mean of each column from And if you're actually calculating the variance here we would just divide 30 by m times n minus 1. Okay, we slowly, but surely, keep on adding bit by bit to our knowledge of an analysis of variance table.
The form of the test statistic depends on the type of hypothesis being tested. Sequential sums of squares Sequential sums of squares depend on the order the factors are entered into the model. note that j goes from 1 toni, not ton. Bozeman Science 171,662 (na) panonood 7:05 Linear Regression t test and Confidence Interval - Tagal: 21:35.
Look there is the variance of this entire sample of nine but some of that variance, if these groups are different in some way, might come from the variation from being It's really not important in getting Ward's method to work in SPSS. The calculations appear in the following table. As the name suggests, it quantifies the variability between the groups of interest. (2) Again, aswe'll formalize below, SS(Error) is the sum of squares between the data and the group means.
That is, F = 1255.3÷ 13.4 = 93.44. (8) The P-value is P(F(2,12) ≥ 93.44) < 0.001. It is used as an optimality criterion in parameter selection and model selection. Now, let's consider the treatment sum of squares, which we'll denote SS(T).Because we want the treatment sum of squares to quantify the variation between the treatment groups, it makes sense thatSS(T) Matrix expression for the OLS residual sum of squares The general regression model with n observations and k explanators, the first of which is a constant unit vector whose coefficient is
We could have 5 measurements in one group, and 6 measurements in another. (3) $$\bar{X}_{i.}=\dfrac{1}{n_i}\sum\limits_{j=1}^{n_i} X_{ij}$$ denote the sample mean of the observed data for group i, where i = 1, Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view menuMinitab® 17 SupportUnderstanding sums of squaresLearn more about Minitab 17 In This TopicWhat is sum of squares?Sum of squares in ANOVASum of For now, take note that thetotal sum of squares, SS(Total), can be obtained by adding the between sum of squares, SS(Between), to the error sum of squares, SS(Error).
|
2018-12-16 03:13:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7333664894104004, "perplexity": 745.5135495138982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827252.87/warc/CC-MAIN-20181216025802-20181216051802-00531.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/36511/at-what-distance-from-these-planets-should-this-moon-be-placed
|
# At what distance from these planets should this moon be placed?
I have two earth clones, in essence, separated by 16550 miles (26350 kilometers). They are, of course, tidally locked, and orbit each other once every 24 hours. These planets orbit a sun identical to ours in the same time as earth. Now, I want to add a little bit more to the system. A moon with half the mass and size of ours.
But I definitely don't want this to happen.
Is it possible to add a moon to this system and have it be stable for at least 10 billion years? Also, at what distance would this moon have to be? How fast would it orbit these two planets? Bonus points if you can make a diagram of its orbit.
• phet.colorado.edu/sims/my-solar-system/my-solar-system_en.html – King-Ink Feb 21 '16 at 4:30
• Looks interesting, but I won't be able to see that, or pretty much any pictures for three days. Vacation means iPad which means no flash player and no pictures :( – Xandar The Zenon Feb 21 '16 at 4:36
• Be aware that while your planets aren't quite close enough to each other to break up, the effective surface gravity will vary a good deal due to tidal forces, and they won't be spherical. Since the rock is more rigid than the water or atmosphere, I'd expect big oceans and dense atmosphere at the points directly under the other planet, and their antipodes, and no water and thin/no atmosphere at the "equatorial" rim. – Mike Scott Feb 21 '16 at 8:04
• @MikeScott Please go here and tell me more. – Xandar The Zenon Feb 21 '16 at 15:18
• Two tidelocked earth clones whose orbital period is 24 hours? They need to be 106,400 km from one planet center to the other. Semi major axis would be 53,200 km. – HopDavid Feb 24 '16 at 1:26
Is it possible? Absolutely. Pluto and Charon are tidally locked, and yet Pluto has four other moons1. Charon is relatively massive in comparison to Pluto - about one twelfth its mass. Indeed, the center of mass of the system lies outside Pluto. I see no reason why the stability should differ for two binary planets.
This does not, however, give us an answer of the inner range of the orbits. I am not aware of in-depth analyses of the dynamics of natural satellites of binary planets. However, analysis of circumbinary planets orbiting binary stars does exist, and we can use the same orbital mechanics here.
Welsh et al. (2013) state2
The stability criterion requires the planet to orbit outside roughly ∼2-4 times the binary semi-major axis, or periods ∼3-8 times the binary period.
There you go. So the moon must have
• A semi-major axis of at least 26,350 kilometers (two times the semi-major axis, which is half of the separation distance) beyond the orbit of the planets.
• A period of at least 72 hours (three times the orbital period of the planets).
Here is a diagram, as you wished, for bonus points (although a little bit off - I did this in Paint):
Here, $a$ is the semi-major axis of the planets.
1 Three of them are in resonance with one another, and perturbations by other bodies do ensure that the system is chaotic, but it does not seem that their orbits are unstable.
2 I will admit that the criterion may be different because of the mass difference - the moon will be much more massive relative to the planets than a circumbinary planet would be to a binary star - but I don't think it will make a huge difference.
Yes, it is.
You want your moon to be stable for 10 billions years. Let's take a look to our Moon: it orbits our Earth at a distance of 384,400 km, with a period of 29 d, 12 h and 44 m and a median velocity equal to 1022 m/s. As the system transfers angular momentum from the Earth's spin into the Moon's orbit, the Moon is slowly moving away (4 cm in a year). After the Earth tidally locks to the Moon, the system will begin transferring angular momentum from the Earth-Moon system into the Earth-Sun system and the Moon will begin moving closer again. However, the Sun will engulf them both before the Moon collides with the Earth.
Now, let's put our Moon in your system and consider stability. I don't know all the Maths, but, knowing our Moon will probably be stable for some another billions years, it's safe in this case to assume it will be stable also here: basing my assumptions on the fact the planets orbit each other with a distance of 26,350 km, then their orbit is too close to give the Moon's orbit a problem. I would just reduce the size of the Moon to make it less affected by an impact between the planets (if you want it) and to make the system more stable (though I think it should be stable).
I wrote this answer assuming that the mass of the planets is Earth-sized or a bit more and that your planets don't orbit a M or K star. If it's not so, then the stability is not guaranteed. I know I've only written about as specific situation, but I hope it's still useful.
You want your moon to orbit outside the Roche limit and inside sphere of influence.
A number of issues, (1) you would have to calculate the orbital parameters of the Earth clones and (2) you would have to calculate the orbital parameter of a moon orbiting a mass the size of two Earths given a certain orbital period.
It is impossible to calculate (2) without knowledge of the time taken for this Moon to orbit. The first, however, can be done.
One day is 86400 seconds. The mass of the Earth is 5.972 × 1024 kg.
$$T = 2 \pi \sqrt{\frac{2 \cdot R^3}{G \cdot 2 M}}$$
Since we know T, the gravitational constant G, and M, solve for R. The answer is 42109.77324 kilometres. So you'll have to put it further than that. Now, the Roche limit of each Earth can probably be approximated from the Roche limit of the actual Earth by itself. That is 18,470 km.
So, put it further than 26165.8 + 18470 km, or 44 000 kilometres. Whatever it is, you can put it where the Earth is now with stability. Naturally, this is a complicated three-body problem now, so that issue applies.
• There's a two in your numerator that shouldn't be there. – HopDavid Feb 24 '16 at 1:30
|
2020-04-06 09:35:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.629195511341095, "perplexity": 573.1628429565443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00254.warc.gz"}
|
http://sage.math.gordon.edu/home/pub/101/
|
# 10 - PRODUTORIO-SOMATORIO-SERIES-MATEMATICAS
## 1081 days ago by jmarcellopereira
SOMATÓRIO, PRODUTÓRIO E SÉRIES MATEMÁTICAS
SOMATÓRIO
var('k') sum(1/k^5, k, 1, 10).n()
1.03690734134469 1.03690734134469
# de 1 até o infinito sum(1/k^5, k, 1, oo).n()
1.03692775514337 1.03692775514337
PRODUTÓRIO
prod([1,2,3])
6 6
prod([2,4], 5)
40 40
prod((2,4), 5)
40 40
F = factor(1084200) F
2^3 * 3 * 5^2 * 13 * 139 2^3 * 3 * 5^2 * 13 * 139
prod(F)
1084200 1084200
prod?
File: /home/jmarcellopereira/SageMath/src/sage/misc/misc_c.pyx Type: Definition: prod(x, z=None, recursion_cutoff=5) Docstring: Return the product of the elements in the list x. If optional argument z is not given, start the product with the first element of the list, otherwise use z. The empty product is the int 1 if z is not specified, and is z if given. This assumes that your multiplication is associative; we don’t promise which end of the list we start at. EXAMPLES: sage: prod([1,2,34]) 68 sage: prod([2,3], 5) 30 sage: prod((1,2,3), 5) 30 sage: F = factor(-2006); F -1 * 2 * 17 * 59 sage: prod(F) -2006 AUTHORS: Joel B. Mohler (2007-10-03): Reimplemented in Cython and optimized Robert Bradshaw (2007-10-26): Balanced product tree, other optimizations, (lazy) generator support Robert Bradshaw (2008-03-26): Balanced product tree for generators and iterators File: /home/jmarcellopereira/SageMath/src/sage/misc/misc_c.pyx Type: Definition: prod(x, z=None, recursion_cutoff=5) Docstring: Return the product of the elements in the list x. If optional argument z is not given, start the product with the first element of the list, otherwise use z. The empty product is the int 1 if z is not specified, and is z if given. This assumes that your multiplication is associative; we don’t promise which end of the list we start at. EXAMPLES: sage: prod([1,2,34]) 68 sage: prod([2,3], 5) 30 sage: prod((1,2,3), 5) 30 sage: F = factor(-2006); F -1 * 2 * 17 * 59 sage: prod(F) -2006 AUTHORS: Joel B. Mohler (2007-10-03): Reimplemented in Cython and optimized Robert Bradshaw (2007-10-26): Balanced product tree, other optimizations, (lazy) generator support Robert Bradshaw (2008-03-26): Balanced product tree for generators and iterators
taylor?
File: /home/jmarcellopereira/SageMath/local/lib/python2.7/site-packages/sage/calculus/functional.py Type: Definition: taylor(f, *args) Docstring: Expands self in a truncated Taylor or Laurent series in the variable v around the point a, containing terms through (x - a)^n. Functions in more variables are also supported. INPUT: *args - the following notation is supported x, a, n - variable, point, degree (x, a), (y, b), ..., n - variables with points, degree of polynomial EXAMPLES: sage: var('x,k,n') (x, k, n) sage: taylor (sqrt (1 - k^2*sin(x)^2), x, 0, 6) -1/720*(45*k^6 - 60*k^4 + 16*k^2)*x^6 - 1/24*(3*k^4 - 4*k^2)*x^4 - 1/2*k^2*x^2 + 1 sage: taylor ((x + 1)^n, x, 0, 4) 1/24*(n^4 - 6*n^3 + 11*n^2 - 6*n)*x^4 + 1/6*(n^3 - 3*n^2 + 2*n)*x^3 + 1/2*(n^2 - n)*x^2 + n*x + 1 sage: taylor ((x + 1)^n, x, 0, 4) 1/24*(n^4 - 6*n^3 + 11*n^2 - 6*n)*x^4 + 1/6*(n^3 - 3*n^2 + 2*n)*x^3 + 1/2*(n^2 - n)*x^2 + n*x + 1 Taylor polynomial in two variables: sage: x,y=var('x y'); taylor(x*y^3,(x,1),(y,-1),4) (x - 1)*(y + 1)^3 - 3*(x - 1)*(y + 1)^2 + (y + 1)^3 + 3*(x - 1)*(y + 1) - 3*(y + 1)^2 - x + 3*y + 3 File: /home/jmarcellopereira/SageMath/local/lib/python2.7/site-packages/sage/calculus/functional.py Type: Definition: taylor(f, *args) Docstring: Expands self in a truncated Taylor or Laurent series in the variable v around the point a, containing terms through (x - a)^n. Functions in more variables are also supported. INPUT: *args - the following notation is supported x, a, n - variable, point, degree (x, a), (y, b), ..., n - variables with points, degree of polynomial EXAMPLES: sage: var('x,k,n') (x, k, n) sage: taylor (sqrt (1 - k^2*sin(x)^2), x, 0, 6) -1/720*(45*k^6 - 60*k^4 + 16*k^2)*x^6 - 1/24*(3*k^4 - 4*k^2)*x^4 - 1/2*k^2*x^2 + 1 sage: taylor ((x + 1)^n, x, 0, 4) 1/24*(n^4 - 6*n^3 + 11*n^2 - 6*n)*x^4 + 1/6*(n^3 - 3*n^2 + 2*n)*x^3 + 1/2*(n^2 - n)*x^2 + n*x + 1 sage: taylor ((x + 1)^n, x, 0, 4) 1/24*(n^4 - 6*n^3 + 11*n^2 - 6*n)*x^4 + 1/6*(n^3 - 3*n^2 + 2*n)*x^3 + 1/2*(n^2 - n)*x^2 + n*x + 1 Taylor polynomial in two variables: sage: x,y=var('x y'); taylor(x*y^3,(x,1),(y,-1),4) (x - 1)*(y + 1)^3 - 3*(x - 1)*(y + 1)^2 + (y + 1)^3 + 3*(x - 1)*(y + 1) - 3*(y + 1)^2 - x + 3*y + 3
var('x,k,n')
\newcommand{\Bold}[1]{\mathbf{#1}}\left(x, k, n\right)
taylor ((x-5)^n, x, 0, 2)
\newcommand{\Bold}[1]{\mathbf{#1}}-\frac{1}{5} \, \left(-5\right)^{n} n x + \frac{1}{50} \, {\left(\left(-5\right)^{n} n^{2} - \left(-5\right)^{n} n\right)} x^{2} + \left(-5\right)^{n}
%%% FIM SOMATORIO, PRODUTORIO E SERIES MATEMATICAS %%%
sum?
File: /home/jmarcellopereira/SageMath/local/lib/python2.7/site-packages/sage/misc/functional.py Type: Definition: sum(expression, *args, **kwds) Docstring: Returns the symbolic sum \sum_{v = a}^b expression with respect to the variable v with endpoints a and b. INPUT: expression - a symbolic expression v - a variable or variable name a - lower endpoint of the sum b - upper endpoint of the sum algorithm - (default: 'maxima') one of 'maxima' - use Maxima (the default) 'maple' - (optional) use Maple 'mathematica' - (optional) use Mathematica 'giac' - (optional) use Giac EXAMPLES: sage: k, n = var('k,n') sage: sum(k, k, 1, n).factor() 1/2*(n + 1)*n sage: sum(1/k^4, k, 1, oo) 1/90*pi^4 sage: sum(1/k^5, k, 1, oo) zeta(5) Warning This function only works with symbolic expressions. To sum any other objects like list elements or function return values, please use python summation, see http://docs.python.org/library/functions.html#sum In particular, this does not work: sage: n = var('n') sage: list=[1,2,3,4,5] sage: sum(list[n],n,0,3) Traceback (click to the left of this block for traceback) ... File: /home/jmarcellopereira/SageMath/local/lib/python2.7/site-packages/sage/misc/functional.py Type: Definition: sum(expression, *args, **kwds) Docstring: Returns the symbolic sum \sum_{v = a}^b expression with respect to the variable v with endpoints a and b. INPUT: expression - a symbolic expression v - a variable or variable name a - lower endpoint of the sum b - upper endpoint of the sum algorithm - (default: 'maxima') one of 'maxima' - use Maxima (the default) 'maple' - (optional) use Maple 'mathematica' - (optional) use Mathematica 'giac' - (optional) use Giac EXAMPLES: sage: k, n = var('k,n') sage: sum(k, k, 1, n).factor() 1/2*(n + 1)*n sage: sum(1/k^4, k, 1, oo) 1/90*pi^4 sage: sum(1/k^5, k, 1, oo) zeta(5) Warning This function only works with symbolic expressions. To sum any other objects like list elements or function return values, please use python summation, see http://docs.python.org/library/functions.html#sum In particular, this does not work: sage: n = var('n') sage: list=[1,2,3,4,5] sage: sum(list[n],n,0,3) Traceback (most recent call last): ... TypeError: unable to convert n to an integer Use python sum() instead: sage: sum(list[n] for n in range(4)) 10 Also, only a limited number of functions are recognized in symbolic sums: sage: sum(valuation(n,2),n,1,5) Traceback (most recent call last): ... TypeError: unable to convert n to an integer Again, use python sum(): sage: sum(valuation(n+1,2) for n in range(5)) 3 (now back to the Sage sum examples) A well known binomial identity: sage: sum(binomial(n,k), k, 0, n) 2^n The binomial theorem: sage: x, y = var('x, y') sage: sum(binomial(n,k) * x^k * y^(n-k), k, 0, n) (x + y)^n sage: sum(k * binomial(n, k), k, 1, n) 2^(n - 1)*n sage: sum((-1)^k*binomial(n,k), k, 0, n) 0 sage: sum(2^(-k)/(k*(k+1)), k, 1, oo) -log(2) + 1 Another binomial identity (trac ticket #7952): sage: t,k,i = var('t,k,i') sage: sum(binomial(i+t,t),i,0,k) binomial(k + t + 1, t + 1) Summing a hypergeometric term: sage: sum(binomial(n, k) * factorial(k) / factorial(n+1+k), k, 0, n) 1/2*sqrt(pi)/factorial(n + 1/2) We check a well known identity: sage: bool(sum(k^3, k, 1, n) == sum(k, k, 1, n)^2) True A geometric sum: sage: a, q = var('a, q') sage: sum(a*q^k, k, 0, n) (a*q^(n + 1) - a)/(q - 1) The geometric series: sage: assume(abs(q) < 1) sage: sum(a*q^k, k, 0, oo) -a/(q - 1) A divergent geometric series. Don’t forget to forget your assumptions: sage: forget() sage: assume(q > 1) sage: sum(a*q^k, k, 0, oo) Traceback (most recent call last): ... ValueError: Sum is divergent. This summation only Mathematica can perform: sage: sum(1/(1+k^2), k, -oo, oo, algorithm = 'mathematica') # optional - mathematica pi*coth(pi) Use Maple as a backend for summation: sage: sum(binomial(n,k)*x^k, k, 0, n, algorithm = 'maple') # optional - maple (x + 1)^n Python ints should work as limits of summation (trac ticket #9393): sage: sum(x, x, 1r, 5r) 15 Note Sage can currently only understand a subset of the output of Maxima, Maple and Mathematica, so even if the chosen backend can perform the summation the result might not be convertable into a Sage expression.
|
2019-04-24 08:21:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7237774133682251, "perplexity": 11032.756595141496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578636101.74/warc/CC-MAIN-20190424074540-20190424100540-00294.warc.gz"}
|
https://www.hackerearth.com/practice/basic-programming/bit-manipulation/basics-of-bit-manipulation/practice-problems/algorithm/micro-and-binary-strings/
|
Micro and Binary Strings
Tag(s):
## Basic Programming, Bit manipulation, Easy
Problem
Editorial
Analytics
Micro's wife Mini gave him a bag having N strings of length N. All the strings are binary i.e. made up of 1's and 0's only. All the strings in the bag can be generated by a string S by simply performing right rotations N times. For example if S is "$101$", then the strings in the bag will be "$110$", "$011$", "$101$". Now Mini wants to know the number of ways of selecting one string from the bag with an odd decimal equivalent. Micro got very confused by all this, so he asked for your help.
Input:
The first line consist of an integer T denoting the number of test cases.
First line of each test case consists of an integer denoting N.
Second line of each test case consists of a binary string denoting S.
Ouptut:
Print the answer for each test case in a new line.
Constraints:
$1 \le T \le 100$
$1 \le N \le 10^5$
SAMPLE INPUT
1
2
10
SAMPLE OUTPUT
1
Explanation
Given binary string : "$10$", we need to rotate the string right 2 times.
Rotating Right : "$01$", Decimal Equivalent = 1
Rotating Right : "$10$", Decimal Equivalent = 2
Clearly there is only one way to select a string having odd decimal equivalent
Time Limit: 1.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
Allowed Languages: C, C++, C++14, Clojure, C#, D, Erlang, F#, Go, Groovy, Haskell, Java, Java 8, JavaScript(Rhino), JavaScript(Node.js), Julia, Kotlin, Lisp, Lisp (SBCL), Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python, Python 3, R(RScript), Racket, Ruby, Rust, Scala, Swift, Visual Basic
## CODE EDITOR
Initializing Code Editor...
## This Problem was Asked in
Challenge Name
HackerEarth Collegiate Cup - First Elimination
OTHER PROBLEMS OF THIS CHALLENGE
• Math > Number Theory
• Algorithms > String Algorithms
• Math > Counting
• Math > Basic Geometry
|
2018-04-23 15:15:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1729014366865158, "perplexity": 6006.751123709793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00157.warc.gz"}
|
https://cs.stackexchange.com/questions/98262/for-each-given-set-choosing-either-it-or-its-complement-such-that-their-union-ex
|
# For each given set choosing either it or its complement such that their union exactly has a given size
Given an integer $$k$$ and $$n$$ sets $$A_1,\ldots,A_n$$, denote $$U=A_1\cup A_2\cup\cdots\cup A_n$$, $$A_i^0=A_i$$ and $$A_i^1=U\backslash A_i$$. The problem asks whether there exists $$(b_1,\ldots, b_n)\in\{0,1\}^n$$ such that
$$\left|\bigcup_{i=1}^nA_i^{b_i}\right|=k.$$
Is this problem NP-complete?
• the space is $2^n$, right? – kelalaka Oct 7 '18 at 18:29
• I am not able to recall, but can you tell us whether the problem without the complement of each set is NP-complete? – Apass.Jack Oct 7 '18 at 19:36
• @kelalaka Yes, of course. – xskxzr Oct 8 '18 at 2:27
• @Apass.Jack Yes, it is NP-complete. – xskxzr Oct 8 '18 at 2:30
If $$k=|U|$$, this problem is exactly another wording for SAT where each clause contains all variables. More precisely, for each element $$j\in U$$, let $$l_{ij}=b_i$$ if $$j\notin A_i$$ while $$l_{ij}=\neg b_i$$ otherwise. The problem is exactly to satisfy the clauses $$l_{1j}\vee l_{2j}\vee\cdots \vee l_{nj}$$ for all $$j$$. If all possible clauses are involved in, the answer is certainly NO. Otherwise, without loss of generality, say one missing clause is $$b_1\vee\cdots\vee b_n$$, we can set $$b_1=\cdots=b_n=0$$ to satisfy all other clauses, thus the answer is YES.
If $$k<|U|$$, then at least one element should not be included in the union. We can test for each element: if this element is not included in the union, we can determine for each set whether it or its complement should be chosen, then we can see whether their union exactly has size $$k$$.
|
2019-11-17 15:38:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861685574054718, "perplexity": 204.82593893647575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669183.52/warc/CC-MAIN-20191117142350-20191117170350-00157.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/9365/atac-seq-macs2-peak-splitting-in-sliding-windows
|
# ATAC-seq macs2 peak splitting in sliding windows
This question has also been asked on Biostars
I used macs2 to call peaks for atac-seq data. now my goal is to split the peaks into 50 bp windows with 25 bp steps and then calculate the Tn5 integration frequency in each window.
How should I proceed with that?
import numpy as np
np.random.seed(0)
import pyranges as pr
gr = pr.random()
gr.Score = np.random.randint(100, size=len(gr))
gr = gr.slack(25) # make data wider for this example
print(gr)
t1 = gr.tile(50)
def increase_by_25(df):
df = df.copy()
df.Start += 25
df.End += 25
return df
t2 = t1.apply(increase_by_25)
tiled = pr.concat([t1, t2]).sort()
print(tiled)
# +--------------+-----------+-----------+--------------+-----------+
# | Chromosome | Start | End | Strand | Score |
# | (category) | (int32) | (int32) | (category) | (int64) |
# |--------------+-----------+-----------+--------------+-----------|
# | chr1 | 5205300 | 5205350 | + | 33 |
# | chr1 | 5205325 | 5205375 | + | 33 |
# | chr1 | 5205350 | 5205400 | + | 33 |
# | chr1 | 5205375 | 5205425 | + | 33 |
# | ... | ... | ... | ... | ... |
# | chrY | 41326450 | 41326500 | - | 3 |
# | chrY | 41326475 | 41326525 | - | 3 |
# | chrY | 41326500 | 41326550 | - | 3 |
# | chrY | 41326525 | 41326575 | - | 3 |
# +--------------+-----------+-----------+--------------+-----------+
# Stranded PyRanges object has 7,964 rows and 5 columns from 24 chromosomes.
# For printing, the PyRanges was sorted on Chromosome and Strand.
This is as far as I can get without example data and a clearer explanation of what you need.
|
2021-10-24 11:56:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21502257883548737, "perplexity": 7919.80869610463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00292.warc.gz"}
|
https://support.bioconductor.org/p/67330/
|
Search
Question: UCSC Genome, match accession (NM_#####) with actual gene name
0
3.6 years ago by
United States
n_bormann10 wrote:
Hi all,
I have a question regarding UCSC Genome. I have gone through the process of using galaxy (tophat, cuff diff, etc) and I used the mouse Dec2011 mm10 as my annotation file. What I currently have are all the NM_#### and NR_#### for the genes, but I need the gene names. I was able to get a .csv that had quite a few, so I just used R to search (grep) and match the names for me, but the file does not have all of them.
So I have installed the TxDb.Mmusculus.UCSC.mm10.knownGene package, but can't figure out how to do this very simple task. I'm hoping there is a function that allows me to input, for example, NM_001001130, and the gene name Zfp85 would be returned.
Thank you in advance for you time and expertise!
modified 3.6 years ago by James W. MacDonald48k • written 3.6 years ago by n_bormann10
2
3.6 years ago by
United States
James W. MacDonald48k wrote:
You want the org.Mm.eg.db package, and you should read the help page for select().
> library(org.Mm.eg.db)
> select(org.Mm.eg.db, "NM_001001130", "SYMBOL","REFSEQ")
REFSEQ SYMBOL
1 NM_001001130 Zfp85
|
2018-12-12 16:43:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3690739870071411, "perplexity": 3335.9185314689125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824059.7/warc/CC-MAIN-20181212155747-20181212181247-00298.warc.gz"}
|
https://plainmath.net/22158/a-wire-of-length-r-feet-is-bent-into-a-rectangle-whose-width-is-2-time
|
# A wire of length r feet is bent into a rectangle whose width is 2 time
A wire of length r feet is bent into a rectangle whose width is 2 times its height. Write the area A of the rectangle as a function of the wire's length r. Write the wire's length r as a function of the area A of the rectangle (note A not a).
• Questions are typically answered in as fast as 30 minutes
### Plainmath recommends
• Get a detailed answer even on the hardest topics.
• Ask an expert for a step-by-step guidance to learn to do it yourself.
Fatema Sutton
The length of the wire is the perimeter of the rectangle with width ww and height hh:
$$2w+2h=r$$
Given that the width is 2 times its height, $$w=2h$$, we write: $$2(2h)+2h=r$$
$$4h+2h=r$$
$$6h=r$$
$$\displaystyle{h}=\frac{{r}}{{6}}$$
which follows that:
$$\displaystyle{w}={2}⋅\frac{{r}}{{6}}=\frac{{r}}{{3}}$$
The area of the rectangle is:
$$A=wh$$
In terms of r,
$$\displaystyle{A}={r}{3}⋅{r}{6}$$
$$\displaystyle{A}=\frac{{r}^{{2}}}{{18}}$$
Solve for r in terms of A:
$$\displaystyle{18}{A}={r}^{{2}}$$
$$\displaystyle√{18}{A}={r}$$
or
$$\displaystyle{r}={3}√{2}{A}$$
|
2021-11-28 22:56:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8406214118003845, "perplexity": 643.9371719104241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00001.warc.gz"}
|
https://www.gauthmath.com/solution/i1117279723
|
Gauthmath
# Given that alpha =7+2 i is one of the roots of a quadratic equation with real coefficients,find the values of alpha +beta and alpha beta and interpret the results.
Question
### Gauthmathier9723
YES! We solved the question!
Check the full answer on App Gauthmath
Given that is one of the roots of a quadratic equation with real coefficients,
find the values of and and interpret the results.
Good Question (68)
Gauth Tutor Solution
4.9
### Noah
University of Illinois Chicago
Tutor for 2 years
The coefficient of in is .
The constant term in is .
Explanation
The coefficient of in is .
The constant term in is .
Thanks (123)
Feedback from students
|
2023-01-30 08:24:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689287900924683, "perplexity": 2772.0972954637855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00601.warc.gz"}
|
https://stats.stackexchange.com/questions/291785/rejection-sampling-and-tildep
|
# rejection sampling and $\tilde{p}$
If I'm understanding rejection sampling correctly, it's a way for us to sample a distribution that is difficult to directly sample. In order to apply rejection sampling of a distribution $p(z)$, we set $p(z) = {1 \over Z_p}\tilde{p}(z)$. From there we create a proposal distribution $q(z)$ and find a constant $k$ such that $kq(z) \ge \tilde{p}(z)$.
What I'm confused about is $\tilde{p}(z)$. In the book (PR and ML by Bishop), it says that we can easily able to evaluate $p(z)$ for any given value of z up to some normalizing constant $Z$. What are some cases of this? I can't think of a reason why we need to use $\tilde{p}(z)$ when we know $p(z)$.
• Sampling from the univariate truncated normal distribution: link.springer.com/article/10.1007%2FBF00143942?LI=true – user3903581 Jul 16 '17 at 8:54
• There are many cases (as in Bayesian statistics) when the density to simulate is known up to a normalising constant. – Xi'an Mar 18 '18 at 17:21
Using $\tilde{p}(z)$ sometimes makes the math easier to work out because you can ignore the normalizing constant $Z$. One example is drawing from a beta(2,2) via rejection sampling. You can bound the density by 2, or you can drop the normalizing constant $\frac{\Gamma(4)}{\Gamma(2)\Gamma(2)}$ and work directly with the kernel, $p(1-p)$.
Or, in the notation of the OP, $\tilde{p}(z) = z(1-z)$ and $p(z) =z(1-z) \frac{\Gamma(4)}{\Gamma(2)\Gamma(2)}$
|
2019-10-16 12:03:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8856237530708313, "perplexity": 199.66612742400602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00493.warc.gz"}
|
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges/18995
|
# What is the Sandbox?
This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on the first try can be difficult. There is a much better chance of your challenge being well received if you post it in the Sandbox first.
See the Sandbox FAQ for more information on how to use the Sandbox.
## Get the Sandbox Viewer to view the sandbox more easily
To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill]
# Reduce the entropy of the input...
## Spec
Given two arguments:
• A list containing 2 or more positive integers (from 0 to artificial limit of 2^32)
• A positive number defining the 'entropy allowance'
Return a sublist containing elements up until the entropy allowance is used up.
For this challenge, we define 'entropy' as the difference in bits between numbers in the list; also known as the Hamming distance.
Note that no 'entropy' is used up when flipping the bits in the first number, only used when flipping subsequent bits.
## Examples
Worked example (MSB...LSB), keeping the numbers low to keep things simple:
Example 1:
List: [1, 2, 3, 4, 5, 6] Allowance: 4
1 => 0000 0001 - ignore implicit change from 0, total used = 0
2 => 0000 0010 - change of 2 bits, total used = 2
3 => 0000 0011 - change of 1 bit, total used = 3
4 => 0000 0100 - change of 3 bits, total used = 6 (would exceed allowance)
5 => 0000 0101 - change of 1 bit, total used = 7
6 => 0000 0110 - change of 2 bits, total used = 9
Output: [1, 2, 3]
Example 2:
List: [255, 0, 127, 64, 32, 100] Allowance: 23
255 => 1111 1111 - ignore implicit change from 0, total used = 0
0 => 0000 0000 - change of 8 bits, total used = 8
127 => 0111 1111 - change of 7 bit, total used = 15
64 => 0100 0100 - change of 6 bits, total used = 21
32 => 0010 0000 - change of 2 bit, total used = 23
100 => 0110 0100 - change of 2 bits, total used = 25
Output: [255, 0, 127, 64, 32]
## Meta
Is this interesting enough a challenge? Is it just a chameleon (is it just the hamming distance with extra steps)? Thoughts?
If it's not shot down for being a plain, any ideas for a better title?
• Guess allowing removing arbitary least item is more interesting – l4m2 Jun 19 '20 at 2:38
• What about a slight pivot - given a list and an allowance, return the list that maximise length of the list? – streetster Jul 29 '20 at 11:55
# The smallest positive integer that cannot be printed in fewer than %NUMBER% bytes of %LANGUAGE% code-challenge
All numbers mentioned below are positive integers. All programs mentioned below output exactly 1 number (including functions that return it).
For every number, there must be at least one program in your language that outputs it. Besides, the problem of determining what number the program outputs must be undecidable without making the assumption that the program halts.
The challenge itself is to choose a number $$\N\$$ and find the smallest number $$\M\$$ that cannot be output by a program shorter than $$\N\$$ ordinary units of measurement used for your language (usually bytes). You have to prove your solution correct. A strong mathematical proof is not necessary, but it should be reasonably convincing for somebody knowledgeable in your programming language.
The answer with the largest $$\M\$$ wins.
# Sandbox stuff
• In this challenge, non-golfing languages seem to have a serious advantage. I think scoring by $$\M\$$ instead of $$\N\$$ is better at reducing the advantage of Lenguage-like languages to manageable levels and preventing ties; is that correct?
• Is the tag appropriate?
• Title suggestions?
• I feel like I misunderstand. Let's say print M (where M is a normal base 10 integer) is the only valid way to output integers in my language. I take $N=7$, then $M=1$. What's to stop me repeatedly increasing $N$ by 1 and adding a zero on to $M$ each time? – Dingus May 28 '20 at 2:23
• @Dingus I used to have a complicated rule to prevent exactly this; I'll try to think of a simpler one. – the default. May 28 '20 at 2:35
• I'm worried that the proof will just be an exhaustive search for most language, as $M$ has to be the smallest number. Alternatively, we could make $M$ a lower bound of the smallest value, and score a solution based on both $M$ and $N$. Another way is to flip this challenge to find the largest printable number in $N$ bytes, but that has already been done here and here. – Surculose Sputum May 29 '20 at 15:18
• @Dingus I think you will have to stop when you can't prove whether whatever precedes print M halts or doesn't halt. I still think this is too cheap a way to get a high-scoring answer, but I am not sure how to formalize things better. – the default. Jun 2 '20 at 16:01
• @mypronounismonicareinstate I think my question is probably not relevant. I overlooked a critical condition - 'the problem of determining what number the program outputs must be undecidable without making the assumption that the program halts'. – Dingus Jun 4 '20 at 0:18
• I'm worried that some ridiculously large numbers will show up here. Also, as $N$ gets too large, maybe we will encounter problems like $M$ cannot be calculated assuming axioms of ZFC, what will happen then? – Trebor Jun 18 '20 at 7:29
• @Trebor As far as I understand, each program either does output a number within finite time or doesn't output a number within finite time. If the program outputs a number, its result is known, and if it doesn't, the program is not valid because it doesn't output a number in finite time. Can you think of any particular way to obtain a ridiculously large number? – the default. Jun 18 '20 at 10:03
• A simple example is a program that enumerates all the valid proofs in ZFC and outputs the Godel encoding of the first proof it encounters of $A \wedge \neg A$. Since ZFC cannot prove its own consistency, it is not decidable in ZFC whether this program terminates. Worse still, if the program do terminate, ZFC cannot compute the return value either, because it is now inconsistent, rendering its deductions unreliable. – Trebor Jun 18 '20 at 11:00
• @Trebor Halting and printing a single number is completely binary. Are you sure you are reading the challenge correctly? I'm asking not for the largest number that can be printed, but for the smallest number that can't be printed. – the default. Jun 18 '20 at 11:27
• @mypronounismonicareinstate Yes, that's why I'm "worried" instead of "sure". Also, these problems will not show up if we keep it small... – Trebor Jun 19 '20 at 0:42
## Can this month tell the day-of-the-week? code-golfdecision-problemdate
June 2020 is a month in which June 1st corresponds to Monday, June 2nd corresponds to Tuesday, ... June 7th corresponds to Sunday. For reference, here's the cal of June 2020.
June 2020
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6
7 8 9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30
Given a year and a month in the format [year, month], output two distinct values that tell whether this month can tell the day-of-the week.
## Test cases
[2020,6] -> True
[2021,2] -> True
[1929,4] -> True
[1969,1] -> False
[1997,5] -> False
[2060,1] -> False
$$$$
• Have you considered "Does this month start on Monday?" – Domenico Modica Jun 6 '20 at 14:06
# Self-distances completion - Minimum k to get them all
Related minor code-golf challenge
Consider $$\A = (a_1,\dots,a_k)\ k\ge2 \$$ a sequence of positive integers, in which all elements are different.
The self-distances completion of a sequence like $$\A\$$ it's performed recursively as follow:
Starting from $$\i=2\$$, while $$\a_i\in A:\$$ (loop until the last element)
• If $$\d=|a_i-a_{i-1}|\$$ is not already in $$\A\$$, append $$\d\$$ to $$\A\$$
• Increase $$\i\$$
The resulting sequence $$\A^\circ\$$ is presumably longer than $$\A\$$, nevertheless can't contain more than $$\n\$$ terms.
# Examples
$$A = (2,\ 9,\ 13,\ 15) \mapsto A^\circ = (2,\ 9,\ 13,\ 15,\ 7,\ 4,\ 8,\ 3,\ 5)\\ A = (2,\ 9,\ 13) \mapsto A^\circ = (2,\ 9,\ 13,\ 7,\ 4,\ 6,\ 3)\\ A = (2,\ 9) \mapsto A^\circ = (2,\ 9)$$
If we pick a number $$\n\ge 2\$$, we can ask what's the minimum length $$\k_n\$$ of $$\A\$$ such that $$\A^\circ\$$ contains all the numbers up to $$\n\$$.
(Note that $$\\max A = \max A^\circ\$$ so $$\n\$$ has to be in $$\A\$$)
• Generate the sequence of $$\k_n\$$ starting from $$\n=2\$$.
This is .
I'll run your code on my machine (Windows 10, i7-7500U) for 30 minutes.
Obviously longer sequence is better. In case of a tie, who gets to the last term faster wins.
Your submission must not use more than 8GB of memory.
n. k_n - first A* example found (how many A of lenght k_n that satisfy the condition)
2. 2 - 1 2 (2)
3. 2 - 1 3 2 (4)
4. 2 - 4 1 3 2 (2)
5. 2 - 5 4 1 3 2 (1)
6. 3 - 1 6 2 5 4 3 (19)
7. 3 - 3 1 7 2 6 5 4 (10)
8. 3 - 6 1 8 5 7 3 2 4 (3)
9. 4 - 6 1 9 2 5 8 7 3 4 (80)
10. 4 - 10 1 8 3 9 7 5 6 2 4 (39)
11. 4 - 6 8 1 11 2 7 10 9 5 3 4 (18)
12. 4 - 8 12 1 10 4 11 9 6 7 2 3 5 (7)
13. 4 - 5 3 12 13 2 9 1 11 7 8 10 4 6 (2)
14. 5 - 6 14 3 1 13 8 11 2 12 5 9 10 7 4 (68)
15. 5 - 9 13 3 1 15 4 10 2 14 11 6 8 12 5 7 (17)
16. 5 - 11 16 2 15 12 5 14 13 3 7 9 1 10 4 8 6 (9)
17. 5 - 14 15 9 13 17 1 6 4 16 5 2 12 11 3 10 8 7 (1)
18. 5 - 15 18 6 16 17 3 12 10 1 14 9 2 13 5 7 11 8 4 (1)
19. 6 - 16 10 18 1 4 19 6 8 17 3 15 13 2 9 14 12 11 7 5 (38)
20. 6 - 14 5 20 17 1 19 9 15 3 16 18 10 6 12 13 2 8 4 11 7 (8)
21. 6 - 12 19 3 21 20 6 7 16 18 1 14 9 2 17 13 5 15 4 8 10 11 (1)
22. 6 - 20 12 19 3 21 22 8 7 16 18 1 14 9 2 17 13 5 15 4 10 11 6 (1)
23. 7 - 22 4 23 14 16 1 21 18 19 9 2 15 20 3 10 7 13 5 17 6 8 12 11 (46)
24. 7 - 15 21 1 23 13 10 24 6 20 22 3 14 18 2 19 11 4 16 17 8 7 12 9 5 (14)
25. 8 - 19 23 2 22 9 1 25 7 4 21 20 13 8 24 18 3 17 5 16 6 15 14 12 11 10 (942)
26. 8 - 18 25 8 2 24 1 21 26 7 17 6 22 23 20 5 19 10 11 16 3 15 14 9 13 12 4 (254)
27. 8 - 27 1 25 20 6 22 18 3 26 24 5 14 16 4 15 23 2 19 9 12 11 8 21 17 10 13 7 (74)
The veeery elementary program I used. There aren't any optimizations, it's just for reference.
For the purpose of the challenge you don't have to output examples nor the count. k_n is sufficient
(It doesn't appear in OEIS).
# Conjectures
• The sequence of $$\k_n\$$ is monotonically increasing
• $$\k_{n+1}=k_{n}\lor k_{n+1}=k_{n}+1\$$
# Sandbox
• It's all clear and does it makes sense?
• Any suggestion for a "lighter" name of the process other than "self-distance completion"?
• The alignment of Examples is fine in the main post preview :) – Domenico Modica Jun 1 '20 at 8:46
• A k-permutation of n seems to be a permutation of some k-element subset for {1, ..., n}, right? – retzler Jun 4 '20 at 20:13
• @retzler Exactly, equivalently a non-repetitive sequence of integers. And then we consider its max element n. I prefer to keep the notation of k-permutation of n containing n since doesn't require the notion of max and also it's more linear for what comes next – Domenico Modica Jun 4 '20 at 20:36
• If the growth of k_n is unknown, an algorithm's complexity might come in the form f(k_n). Edit Ah well, tagged fastest-code. Maybe there's another ambiguity: Maybe still require an example for each n - otherwise what's to prevent printf("2,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,6,6,6,6,7"); – retzler Jun 4 '20 at 21:39
• @retzler I decided to go for fastest-code since looking around fastest-algorithm are not so popular. And as you spotted, the complexity of the algorithm wouldn't be straightforward (and I know almost nothing how it's calculated). About the output I think that to hardcode a sequence it's automatically seen as cheat. Look – Domenico Modica Jun 4 '20 at 22:09
• Some comments: (1) The description is a little difficult to follow, but I think it's because the problem is that hard to define. You can't probably get the description much clearer than it is now. (2) In the "Task" section you ask for the minimum k such that its completion contains all numbers up to n. However, the math section right afterwards requires, if I'm interpreting it correctly (four quantifiers in a row is a little too much for me), that k and all numbers above it satisfy that property. (3) Maybe give a few more details about your computer: RAM, even cache memory – Luis Mendo Jun 5 '20 at 20:15
• (4) Your submission must not use more than 8GB of memory: isn't that platform-dependent to some extent? (5) Typo: "lenght" – Luis Mendo Jun 5 '20 at 20:15
• @LuisMendo (1) The question was very instinctive, but maybe I've convoluted it too much. Playing with the completion I asked myself how can I "generate" all the number up to a certain one. Of course is interesting to find the minimum requirements that is the minimum (starting) length. Now one particular thing to notice is that the new terms added are always smaller that the max element in the input string (that's because we are always adding distances). So if I hope to generate all the number up to 20, of course 20 has to be within the input string – Domenico Modica Jun 6 '20 at 13:05
• @LuisMendo That's why I define $\mathcal{A}_{n,k}$ to be all the sequences of length $k$ having $n$ as their max element. All the completion of all the sequences in one of these sets always keep the same max element $n$, (the only thing that can change is the resulting length). It's in these sets that make sense to search if it exists one sequence that maps to the "full" $n$-sequence. It's a guideline not a rule (maybe it does more harm than good) – Domenico Modica Jun 6 '20 at 13:09
• @LuisMendo (2) Yes, you've interpreted it correctly. It's nothing special, basically when you find the smaller k that works, all the numbers above it automatically work as well. Take the working sequence you found of length k, you can find a k+1 sequence if you incorporate the first $d$ appended, you can find a k+2 long sequence incorporating also the second $d$ appended in the original one. So, they will all work above a certain threshold. (3-4) I was borrowing from this post – Domenico Modica Jun 6 '20 at 13:23
• @DomenicoModica (2) Ah, maybe clarify that in the text then. On the face of it, they seem different statements – Luis Mendo Jun 6 '20 at 13:33
• @LuisMendo I've pared it down substantially. I don't know why I thought all that notation was fundamental ahahaha. Anyway I'll wait sometimes to post it because I want to experiment a bit with it to have a bigger picture and maybe find some useful reductions – Domenico Modica Jun 6 '20 at 14:00
# Pi or Phi?
Given a positive integer $$\n\$$ where $$\n \geq 10\$$ as input, determine whether $$\n\$$ occurs in the first 100 digits of pi (after the decimal), the first 100 digits of phi, or both.
### Reference
"The first 100 digits" refers to the 100 digits after the decimal place in each number
First 100 digits of Phi:
(1.)6180339887498948482045868343656381177203091798057628621354486227052604628189024497072072041893911374
First 100 digits of Pi:
(3.)1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
### Input
• You can assume that the input will appear in the first 100 digits of at least one of the two numbers (pi or phi)
• Input can be taken as a number, string or any other reasonable format
• The input number will have 2 or more digits and won't exceed 100 digits
### Output
Output should be one of three consistent values:
• One to represent that the number appears in (the first 100 digits of) Pi (but not phi)
• Another value to represent that the number appears in (the first 100 digits of) Phi (but not pi)
• Another value to represent that the number appears in Both
## Examples
Input: 113
Output: Phi since the substring 113 appears in the first 100 digits of phi, but not in the first 100 digits of pi.
Input: 793
Output: Pi since the substring 793 appears in the first 100 digits of pi, but not in the first 100 digits of phi.
Input: 84
Output: Both since the substring 84 appears both in the first 100 digits of pi and in the first 100 digits of phi.
## Test Cases
113 -> Phi
793 -> Pi
84 -> Both
618 -> Phi
141 -> Pi
86 -> Both
3398 -> Phi
3993 -> Pi
39 -> Both
374 -> Phi
679 -> Pi
35 -> Both
072 -> Phi
078 -> Pi
117 -> Both
1798057628621 -> Phi
71693993751058209 -> Pi
803 -> Both
811 -> Phi
10 -> Pi
11 -> Both
• This seems like mostly a challenge to compute digits of pi or phi, which feels like a chameleon challenge. – xnor Jun 8 '20 at 7:51
• I think at 100 digits I agree with xnor, but if you made the number of digits smaller I would expect some kind of compression to be a better approach. That said, I'm not sure it is then terribly different from other compression based questions since I don't think phi or pi have any exploitable structure. I do think there is a good idea somewhere in here, I'm just not sure this is it. – FryAmTheEggman Jun 9 '20 at 16:33
# Print the SARS-Cov-2 (COVID-19) genome code-golfkolmogorov-complexity
## Background
As you probably learned in biology class, DNA and RNA are composed of strands of nucleotides; each nucleotide consists of a chemical called a base together with a sugar and a phosphate group. The information stored in the DNA or RNA is coded as a sequence of bases. DNA uses the bases A, C, G, and T (standing for adenine, cytosine, guanine, and thymine), while RNA uses A, C, G, and U (with uracil replacing thymine).
## Challenge
The genome of SARS-Cov-2, the virus that causes COVID-19, has been fully sequenced. This genome is a sequence of 29,903 bases, each base being one of A, C, G, or U, since it's an RNA virus.
The challenge is to output that sequence using as few bytes in your program as possible (code golf). You can write either a full program or a function.
Because the names A, C, G, and U are arbitrary, you can use any 4 characters you want instead:
• You must use exactly 4 characters (they must be pairwise distinct--two or more can't be equal).
• Each one of the 4 characters must be a printable ASCII character in the range from '!' to '~', inclusive (ASCII 33 to 126). In particular, this does not include the space character or the newline character.
• Each of the 4 characters you use must always represent the same one of A, C, G, and U -- no changing in the middle!
Your output should be the precise text at the following link, with A, C, G, and U replaced by whichever 4 characters you selected, and you may optionally follow the entire sequence with one or more newline characters (but no newlines or other extraneous characters at the beginning or in the middle are allowed):
Click to see the required output. (Including all 29,903 characters here would cause this to exceed a StackExchange maximum size.)
Because you can use any 4 distinct characters you want, it's acceptable to use, for example, lower-case instead of upper-case, or to use T instead of U, or to use 0123 instead of ACGU, or even to output the complementary strand (with A and U switched, and C and G switched).
## Restrictions
Standard loopholes are prohibited as usual. In particular, it's not allowed to retrieve information online or from any source other than your program. You also can't use any built-in which yields genomic data or protein data (these would generally retrieve data from the Internet so they wouldn't be allowed anyway, but some languages may have this facility built in internally; use of such functionality is prohibited whether implemented internally or externally).
I've set up a way to check that your program's output is correct. Just copy and paste your program's output into the argument in this verification program on TIO and run it.
## Other Info
Some facts that may or may not be of help:
1. There are 29,903 bases in the sequence. The counts for the individual bases are:
• A 8954
• C 5492
• G 5863
• U 9594
2. If you simply code each of the 4 bases in 2 bits, that would get you down to 7476 bytes (plus program overhead), so any competitive answer is likely to be shorter than that.
3. The source for the data can be found at this web page at NIH; scroll down to ORIGIN. The data is written there in lower-case letters, and 't' is used instead of 'u', apparently because DNA sequencing techniques were used.
4. There are variant strains of SARS-Cov-2 known (the base sequences are slightly different, and the length varies a bit); I believe the one here is the first one sequenced, from Wuhan.
5. Groups of 3 consecutive bases code for particular amino acids, so it might be useful to analyze the data in groups of 3. But there are non-coding areas where the number of bytes isn't necessarily a multiple of 3, so you may not want to just divide the data into groups of 3 starting at the beginning. If it might be useful, you can find more info on the structure of the virus RNA here (but this probably isn't needed).
Disclaimer: I'm not a biologist. If anyone has any corrections or improvements on the underlying biology (or anything else, of course), please let me know!
Happy golfing!
• Mathematica has ResourceData["Genetic Sequences for the SARS-CoV-2 Coronavirus"]. It fetches data from the internet, but somebody like me could argue that it's allowed because it's sort of built-in, so I think you should disallow coronavirus genome built-ins here. I get 7846 bytes for Bubblegum with zopfli (probably because the raw storage mode in DEFLATE always stores >=1 byte per source byte, and the other ones have various LZ77 stuff in the Huffman tree, increasing overhead for non-compressible parts, assuming I understand DEFLATE correctly) – the default. Jun 7 '20 at 13:30
• @mypronounismonicareinstate Thanks for pointing that out -- I added in something to handle that. The challenge now specifically prohibits any use of built-in genomic data or protein data. This should take care of somebody somehow getting, for instance, a related virus genome and then just compressing the diff. – Mitchell Spector Jun 7 '20 at 18:23
• I'm sorry to say that somebody has beaten you to it. – Dingus Jun 10 '20 at 1:28
• @Dingus Yes, I just noticed that. I voted to close the other question as a duplicate. Posting in the Sandbox for a couple of days first is the recommended procedure, after all, so my challenge has priority. I've gone ahead and moved it to the main site. (And I think it's better though thought out, and it has a verification program -- plus it benefited from mypronounismonicareinstate's comment about Mathematica built-ins.) – Mitchell Spector Jun 10 '20 at 1:43
• @MitchellSpector I agree that yours is better thought out. The verification program is a great feature - obviously a bit of work went into creating it. I'll leave my answer posted pending the outcome of the close vote. Not because I don't support your claim to priority, but for the sake of my own priority in posting the first answer. – Dingus Jun 10 '20 at 1:52
• @Dingus -- Thank you! I have no problem with answers being posted to both challenges as long as they're still open. (I think the other one should be closed, but I also don't believe in penalizing answerers for problems with a question.) If I had let it linger in the Sandbox for a month, it would be fair game, but it's only been there for a couple of days, which is the right way to do it. – Mitchell Spector Jun 10 '20 at 1:55
Alice decided to improve the security of her website by sending first five characters of an SHA-1 hash to Bob's Leaked Password Detection Service. However, she made two mistakes that let Eve decode the passwords: sending passwords over HTTP and checking the password after each character of a password is typed. Eve asked you for help in decoding the passwords, however she cannot really program, so needs your help in implementing password cracking algorithm as a computer program or function.
Eve eavesdropped the requests for following hashes from Alice.
516B9
379FC
19C2A
9D4E1
08506
F808E
A7F93
5BAA6
How could you decode this password? Well, you can brute-force all lowercase letters. In this case the only letter whose hash starts with 516B9 is p. The hash of letter p is 516B9783FCA517EECBD1D064DA2D165310B19759.
Knowing that the password starts with p, you can brute-force the second character. In this case, the only possible character is a. The hash of pa is 379FC0D5299A71AC0F171FBB5AFB262829B4E765
You can continue to brute-force letters one by one to figure out the password was password (5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8). Well, that was simple.
Not all passwords are that simple however. Consider the following requests:
4DC7C
A84FD
467D7
BD79D
12D83
First three characters of this password are simple: rxr (467D7856C648A79A096D339A2CE5FC929658967D).
With the fourth character it gets more complicated. BD79D matches for rxrf (BD79DEC8435B8BA509A25F419F31CC2ACDE2FF0A) and rxrp (BD79DC20901B11468F8369B5B0D15894F3D96A5E). There is an ambiguity, but as it turns out, it can be resolved by trying both ways. If you assume the password starts with rxrp there is no valid letters to continue with. However, if you assume the password starts with rxrf, then it's possible to append a, resulting in rxrfa (12D83D3A429CD7D64E9A532C05C2C00C35032A94), which is a valid solution.
All passwords will be composed entirely out of lowercase letters. You can assume all inputs have a solution and there are no inputs that could possibly resolve to multiple passwords (for instance ["4DC7C", "A84FD", "467D7", "BD79D"] is an invalid input because it can match both "rxrf" and "rxrp").
There are no case requirements on the input. Your program is allowed to assume the input is lowercase. Your program is allowed to assume the input is uppercase.
The program must not take longer to execute than 24 hours for a 25 characters long password.
It is allowed to use external libraries or language built-in functions for computation of SHA-1 hash.
# Example Input and Output
This is a JSON.
[
{
"input": [
"516B9",
"379FC",
"19C2A",
"9D4E1",
"08506",
"F808E",
"A7F93",
"5BAA6"
],
},
{
"input": [
"07C34",
"593B7",
"0262F",
"CED65",
"23612",
"4EF76",
"B7A87"
],
"output": "letmein"
},
{
"input": [
"84A51",
"87DDA",
"83F67",
"E6FB0",
"5157D",
"82CD7",
"6F655",
"43426"
],
"output": "codegolf"
},
{
"input": [
"7A81A",
"DB3D4",
"FE05B",
"E7280",
"32726",
"30AE9",
"2C61A",
"A9E46",
"15D98",
"F780A",
"3E949",
"F4BF2",
"6A5C4",
"C4554",
"FA2EA",
"48A40",
"5DD7F",
"5284E",
"C0B8D",
"20D59",
"9184C",
],
"output": "onetwothreefourfivesix"
},
{
"input": [
"84A51",
"87DDA",
"26CA7",
"9D925",
"08A23",
"BE075",
"3179A",
"5D904",
"54C70",
"47790",
"5D3B5",
"0E4CE",
"004C7",
"EC8A8",
"131A6",
"7F47F",
"41BC6",
"FCF07",
"D62BD",
"DD14F",
"6A141",
"EE184",
"595F8",
"9D303",
"BFD36"
],
"output": "correcthorsebatterystaple"
},
{
"input": [],
"output": ""
},
{
"input": [
"4DC7C",
"A84FD",
"467D7",
"BD79D",
"12D83"
],
"output": "rxrfa"
},
{
"input": [
"4DC7C",
"A84FD",
"467D7",
"BD79D",
"7B743"
],
"output": "rxrpa"
}
]
• I wonder whether MD5 might be preferred over SHA1 - as in, more likely to exist in the language without having to load external libraries? – streetster Jun 18 '20 at 16:25
• Languages without a hashing builtin or library would have effectively two challenges: implementing the hash and doing the key part of the challenge. There are already challenges for MD5, SHA-1, and SHA-256 e.g. codegolf.stackexchange.com/questions/81195/implement-sha-256. I see two resolutions to this: 1. not count byte count of the hash; or 2. use a simple hash, such as the digits after the decimal point in the square root of the sum of code points – fireflame241 Jun 18 '20 at 22:19
• You could allow a black-box function as input that computes the SHA256 hash to make this more competitive for languages without builtins. – S.S. Anne Jun 24 '20 at 2:32
# Posted at Baba if you, flag is win
• There are a lot of possible rules (I think a little less than 2^9, as for each X and Y either X is Y or X is not Y, and there are 3*3=9 (X, Y) choices). Is there any documentation on what's the behavior of each rule combination? // i.e., even in this simplified version there are still a lot of fuzzy details on how the rules behaves. – user202729 Jun 22 '20 at 15:04
• @user202729 , Thank you for your input. I’ll take out the clause about “no non-core packages” as suggested. In terms on the moves after win, I think the easiest thing will be to say that one can assume the input sequence to end on a winning move. If a longer sequence is given, that’s undefined behaviour and the program can do whatever. – MarcinKonowalczyk Jun 22 '20 at 16:55
• @user202729 Finally, I admit I'm not certain what is your source of confusion. The rules work just like in the main game (with the caveat of everything is stop), and I've specified a lot of tricky cases both in this post and in the accompanying GitHub repo. Arguably, the code on GitHub specifies the problem precisely (as it is an execution of it). I've also added test cases to allow one to check the behaviour. I'm not sure what else could I do? – MarcinKonowalczyk Jun 22 '20 at 17:00
• Now that this has been posted to main, could you delete this proposal to create more space for new answers? – caird coinheringaahing Sep 25 '20 at 1:05
• The default for kolmogorov-complexity is that the exact, constant string must be output, so I suggest no leading spaces allowed. Some languages can't output in certain forms (e.g. printing) without a trailing newline, so I'd say it's okay (instead of "print this logo", I'd suggest saying "output this logo exactly as the following string") – fireflame241 Jun 27 '20 at 7:45
• Now that this has been posted to main, could you delete this proposal to create more space for new answers? – caird coinheringaahing Sep 25 '20 at 1:04
# Migrate Try it online! to CommonMark
Try it online! generates old-style MarkDown code blocks which indent all lines with 4 spaces and then optionally precedes the block with a language comment.
Furthermore if the code block can't be parsed by old-style MarkDown (e.g. it has a leading newline, common in Retina answers), then it instead uses a <pre><code> block, with HTML escapes for all nonprinting characters.
Your program or function must take a whole TIO post, and change its code block into CommonMark style.
Examples:
# [Python 2], 16 bytes
<!-- language-all: lang-python -->
print "Python 2"
[Try it online!][TIO-kdaf9y51]
[Python 2]: https://docs.python.org/2/
[TIO-kdaf9y51]: https://tio.run/##K6gsycjPM/r/v6AoM69EQSkAzFcwUvr/HwA "Python 2 – Try It Online"
becomes
# [Python 2], 16 bytes
python
print "Python 2"
[Try it online!][TIO-kdaf9y51]
[Python 2]: https://docs.python.org/2/
[TIO-kdaf9y51]: https://tio.run/##K6gsycjPM/r/v6AoM69EQSkAzFcwUvr/HwA "Python 2 – Try It Online"
which displays as
# Python 2, 16 bytes
print "Python 2"
Try it online!
while
# [Retina 0.8.2], 13 bytes
<pre><code>
Retina 0.8.2
</code></pre>
[Try it online!][TIO-kdafdbm1]
[TIO-kdafdbm1]: https://tio.run/##K0otycxL/P@fKwjMUDDQs9Az@v8fAA "Retina 0.8.2 – Try It Online"
becomes
# [Retina 0.8.2], 13 bytes
Retina 0.8.2
[Try it online!][TIO-kdafdbm1]
[TIO-kdafdbm1]: https://tio.run/##K0otycxL/P@fKwjMUDDQs9Az@v8fAA "Retina 0.8.2 – Try It Online"
which displays as
# Retina 0.8.2, 13 bytes
Retina 0.8.2
Try it online!
This is , so the shortest program or function that breaks no standard loopholes wins!
# Where are the traps? code-golfnumbersequence
### Background Partially copied from my related challenge
The trapped knight sequence is a finite integer sequence of length 2016, starting from 1, and has the following construction rules:
1. Write a number spiral in the following manner:
17 16 15 14 13 ...
18 5 4 3 12 ...
19 6 1 2 11 ...
20 7 8 9 10 ...
21 22 23 24 25 ...
1. Place a knight on 1.
2. Move the knight to the grid with the smallest number it can go that has not been visited before, according to the rules of chess (i.e. 2 units vertically and 1 unit horizontally, or vice versa).
3. Repeat until the knight gets stuck.
It is known that the sequence ends at 2084 where the knight is trapped. But here is a twist. Suppose a knight can step back to the previous grid whenever it is stuck, and choose the grid with the next smallest number possible. By doing so, the sequence can be further extended until it is stuck again at 2720. Then, the knight steps back and choose another path, which further extends the sequence until it is stuck again at 3325...
Then, we call these numbers at which the knight is being trapped "traps". So we now know that the first few traps are at 2084, 2720, 3325, ... and it continues to infinity.
### Challenge
Write a shortest program or function, receiving an integer $$\N\$$ as input, output the first $$\N\$$ traps in the extended trapped knight sequence.
### Values
The first 100 terms of the sequence are as follows.
2084, 2720, 3325, 3753, 7776, 5632, 7411, 8562, 14076, 8469,
9231, 22702, 14661, 21710, 21078, 25809, 27112, 24708, 19844, 26943,
26737, 32449, 31366, 45036, 37853, 37188, 43318, 62095, 67401, 68736,
70848, 62789, 63223, 69245, 85385, 52467, 71072, 68435, 76611, 84206,
81869, 70277, 81475, 83776, 70767, 84763, 99029, 82609, 103815, 86102,
93729, 100614, 108039, 82111, 99935, 85283, 109993, 119856, 119518, 116066,
109686, 92741, 124770, 92378, 104657, 125102, 107267, 107246, 117089, 117766,
99295, 121575, 98930, 117390, 123583, 112565, 122080, 111612, 111597, 97349,
105002, 130602, 133509, 153410, 127138, 143952, 153326, 157774, 122534, 136542,
163038, 134778, 140186, 162865, 171044, 159637, 171041, 174368, 184225, 152988
### Winning Criteria
The shortest code of each language wins. Restrictions on standard loopholes apply.
# Convert LifeOnTheEdge to LifeOnTheSlope
Your task here is to take a LifeOnTheEdge pattern and convert it to LifeOnTheSlope.
A LifeOnTheEdge pattern is composed of these four characters: |_L . A pattern corresponds to a certain arrangement of "on" edges in a square grid. The pattern is placed in the grid first with the characters in the cells, and each of the four letters specifies the state of the edges on the left and the bottom of that cell. | means the edge on the left is on, _ means the bottom edge is on, L means both of them are on and means neither of them are on.
For example the following LifeOnTheEdge:
|_L
|
translates to:
. . . . .
| |
. ._._. .
|
. . . . .
Your task is however convert it to LifeOnTheSlope. LifeOnTheSlope is a LifeOnTheEdge equivalent but only uses three symbols: /\ . You should rotate the pattern 45-degree clockwise, for example the above example translates to:
/
/\/
\
# Sandbox
I'm not sure if I described the problem clearly. Improvements on the wording and other things?
• Nice challenge! The task is clear, I just think you may specify if and how leading/trailing newlines/spaces are allowed, for example in the example there may be a trailing space. And also.. Are the set of characters strictly fixed? People usually ask for free sets, for example some values [1,2,3,0] instead of |_L but since this is ascii-art I think it's fine to have a fixed set. Let's see if anyone else has any opinion. – AZTECCO Aug 2 '20 at 12:35
• @AZTECCO For the second question I'm fine with both options. This convertion is a thing that annoys me in my CA exploration. – null Aug 2 '20 at 12:38
# Identify the tonic from a key signature
## Objective
Given a key signature in major, output its tonic.
## Input
An integer from -14 to +14, inclusive. Its absolute value is the numbers of flats/sharps. Negative number represents flats, and positive number represents sharps. Note that theoretical keys are also considered.
## Mapping
Note the use of Unicode characters ♭(U+266D; music flat sign), ♯(U+266F; music sharp sign), 𝄪(U+1D12A; musical symbol double sharp), and 𝄫(U+1D12B; musical symbol double flat).
-14 → C𝄫
-13 → G𝄫
-12 → D𝄫
-11 → A𝄫
-10 → E𝄫
-9 → B𝄫
-8 → F♭
-7 → C♭
-6 → G♭
-5 → D♭
-4 → A♭
-3 → E♭
-2 → B♭
-1 → F
0 → C
1 → G
2 → D
3 → A
4 → E
5 → B
6 → F♯
7 → C♯
8 → G♯
9 → D♯
10 → A♯
11 → E♯
12 → B♯
13 → F𝄪
14 → C𝄪
Output must be a string. Whitespaces are permitted everywhere.
## Rule
• Invalid inputs fall in don't care situation.
• "Or a sequence of bytes representing a string in some existing encoding"? (I think this should be the default, but I don't remember seeing any meta post about it) – user202729 Aug 4 '20 at 6:06
# Source Code Byte Frequency - Posted here
Changes from the original idea:
• Without the requirement of fixed representation of the result (percentage and trimming).
• With constraint: source code must be at least 1 byte long
• Changed from character to byte, plus removing the constraint of SBCS languages only.
• This may qualify for the quine tag but I'm not so sure about that – golf69 Aug 4 '20 at 6:40
• Trimming the output may be difficult for some languages, maybe you could also allow fractions, or require that the output is only accurate to x decimal places? Something to consider when writing a challenge is if a rule actually contributes to the problem or is just an accessory of sorts (here I think the main problem is finding the proportions, and rounding is an accessory) – golf69 Aug 4 '20 at 6:47
• @golf69 I'm also not sure about quine... About the trimming, my intention on the trimming and percentage format was to add a little bit of "work" that the program should do and make the frequencies a bit more different/challenging. Do you think I should drop the trimming part from the challenge? – SomoKRoceS Aug 4 '20 at 9:05
• I do think so, yes (also it might be better received that way) – golf69 Aug 4 '20 at 17:21
• I do not think the average person who does not use this site will know what a SBCS is, so it is probably still worth explaining. Alternatively, I think it would be cleaner to just require that the input be a byte and the output reflects the frequency of that byte. That way you don't eliminate multibyte languages from using it to their benefit, and I don't think it allows any "cheating." – FryAmTheEggman Aug 4 '20 at 21:52
• Sounds okay to me. I agree that it is better to avoid elimination of multi-byte languages. – SomoKRoceS Aug 4 '20 at 22:03
• The thing I try to avoid is to get a lot of 0 bytes answers (for languages that print 0 as default). So I want to add a task that the program should do, like printing in percentage format. So the question is, before I reduced the trimming task, if this is enough to achieve that. – SomoKRoceS Aug 5 '20 at 9:06
• Posted here with some changes listed in this edited answer. – SomoKRoceS Aug 9 '20 at 16:50
# Simulate simple Bloons Tower Defense!
For those who are unaware of this legendary series of video games, here is a link.
You are going to be given an integer number and type of bloon wave and two integers describing the damage and pierce (max amount of bloons you can damage in one attack) of each attack. Your task is to output in how many attacks can you destroy the bloon wave.
## Bloon types
For simplicity, there will be no special properties like fortified, regrow, camo e.t.c. White bloons will also not be present as, without special properties, they are the same as black bloons
Name - health - what it pops into
BAD - 20000 - 3x DDT and 2x ZOMG
ZOMG - 4000 - 4x BFB
BFB - 700 - 4x MOAB
MOAB - 200 - 4x Ceramic
DDT - 350 - 6x Ceramic
Ceramic - 60 - 1x Rainbow
Rainbow - 1 - 2x Zebra
Zebra - 1 - 2x Black
Black - 1 - 2x Pink
Pink - 1 - 1x Yellow
Yellow - 1 - 1x Green
Green - 1 - 1x Blue
Blue - 1 - 1x Red
Red - 1 - Nothing!
## I/O
Input: A string describing the type of bloon, and three integers: the amount of bloons in the wave, attack damage and attack pierce
Output: An integer describing how many attacks are needed for destroying the whole wave.
## Examples
Note: If there is not enough pierce n to attack the whole wave, then only the first n bloons are attacked
Input: Rainbow 3 2 10
Starting: 3x Rainbow
Attack 1: 12x Black
2: 20x Yellow 2x Black
3: 10x Blue 10x Yellow 2x Black
4: 10x Yellow 2x Black
5: 10x Blue 2x Black
6: 2x Black
7: 4x Yellow
8: 4x Blue
9: Done!
Output: 9
This is the 4/0/x Sniper Monkey:
Input: BFB 1 30 1
1: BFB(670)
2: BFB(640)
...
13: BFB(10)
14: 4x MOAB(180)
15: 1x MOAB(150) 3x MOAB(180)
...
19: 1x MOAB(30) 3x MOAB(180)
20: 4x Ceramic(60) 3x MOAB(180)
21: 1x Ceramic(30) 3x Ceramic(60) 3x MOAB(180)
22: 3x Ceramic(60) 3x MOAB(180)
...
27: 1x Ceramic(30) 3x MOAB(180)
28: 3x MOAB(180)
...
69: 1x Ceramic(30)
70: Done!
This is codegolf, so lowest byte-count wins
• This is extremely complicated. I feel like this will be in unanswered for a while. – Razetime Aug 10 '20 at 17:04
• In the second example, how is ceramic destroyed without giving out any lower class bloons? – Bubbler Aug 11 '20 at 0:31
• +1 because btd is awesome lol. However this is a very complicated challenge, even for people who know how the mechanics work. It might be better if you limit the problem to 1 pierce only – thesilican Aug 18 '20 at 23:34
• or you could even do a challenge that simply requires calculating the RBE for a bloon wave, that could still be an interesting challenge – thesilican Aug 18 '20 at 23:35
• actually RBE calculating is probably a bit too simple – thesilican Aug 19 '20 at 0:02
# Solve the Halting Problem for Oneplis
Oneplis is a "very simple esolang" (I don't want to count this one toward my esolangs) made by me which only have three commands. As you can probably see from the name, it is a subset of 1+, along the lines of Befinge.
The three commands are:
• 1, which pushes 1. (Obviously!)
• +, which pops the top two numbers and pushes their sum. (Obviously!)
• #, pops a number n and jumps to the instruction after the nth (0-based) #.
Oneplis is almost certainly a (very limited) push-down automaton, since it's impossible to decrement a number and impossible to retrieve elements arbitrary deep in the stack! Oh, and the only way to read a number is with #, which cannot handle arbitrarily large numbers!
This is , so shortest code wins! Your output should be truthy for halting, and falsy for non-halting. You can use any set of five characters for the instructions. Don't care if it jumps to a non-existence # or trying to execute + when there are <2 numbers on the stack.
## Test cases
11+ -> True
1##1# -> False
1## -> True
11+1+###11+# -> True
11+##1#1 -> False
# Sandbox
• Test cases?
• Shall I require the answers to deal with errors?
• For "nth #", is it 1- or 0-based? (I guess it's 0-based, but you need to be explicit on it anyway.) – Bubbler Aug 20 '20 at 9:39
• @Bubbler Uh, ok. It's 0-based in 1+, but 0-based indexing does not make any sense in this challenge anyway, it's impossible to push 0... Should I change it to 1-based? – null Aug 20 '20 at 9:42
• I don't think it's that nonsense, as the only effect is that all instructions between first and second #s are unreachable. – Bubbler Aug 20 '20 at 9:47
• @Bubbler Oh, okay then. So if no one objects I'll post this to main. – null Aug 20 '20 at 10:15
• if you don't plan to require answers to deal with errors then also mention that they don't need to worry about popping from an empty stack – Mukundan314 Aug 20 '20 at 11:20
• Or: errors terminate the program. – user253751 Aug 24 '20 at 13:29
• @user253751 Yes, that's also good. Although, I prefer it this way. – null Aug 24 '20 at 13:43
# Noncommutative Quineoid Triple
This is the hard mode of Quineoid Triple
Write three different programs such that all of the following properties hold:
• $$\ A(B) = C \$$
• $$\ B(C) = A \$$
• $$\ C(A) = B \$$
• $$\ A(C) = -B \$$
• $$\ B(A) = -C \$$
• $$\ C(B) = -A \$$
• $$\ A(A) = \epsilon \$$
• $$\ B(B) = \epsilon \$$
• $$\ C(C) = \epsilon \$$
Where:
• $$\ f(g) \$$ is the output obtained from feeding the program text of $$\g\$$ into program $$\f\$$
• $$\ -x \$$ is the program text of $$\x\$$ in reverse (reversed in terms of either raw bytes or unicode codepoints)
• $$\ \epsilon \$$ is the empty string / an empty output
# Rules and Scoring
• This is , so the shortest program length total, in bytes wins.
• Standard quine rules apply.
• Each program can be in any language. Any number of them may share languages or each may use a different language.
• Use any convenient IO format as long as each program uses a consistent convention.
• Functions are allowed, as this counts as "any convenient IO".
• The result of feeding anything other than program text of one of the three programs is undefined.
Sandbox note: This is partially inspired by There's a fault in my vault!, which I thought had some interesting ideas in it. This is my effort to frame those ideas in a clearer fashion.
# Cops/Robbers: Create a weak block cipher
In cryptography, we often use block ciphers, which are a form of keyed encryption. More specifically, for a plain text string $$\s\$$ and a secret key $$\k\$$, we design an encryption function $$\E(s, k)\$$ and a decryption function $$\D(\hat{s}, k)\$$ such that if we encrypt and then decrypt the text with the same key, we get back our original text. That is, we have $$\D(E(s,k),k) = s\$$ for all possible strings $$\s\$$ and $$\k\$$.
One security property a good block cipher has is that it is resistant against key-recovery attacks. This means that if we have the ability to run $$\E(s, k)\$$ and $$\D(\hat{s}, k)\$$ for various choices of $$\s\$$ and $$\\hat{s}\$$ and collect pairs of encrypted and decrypted text we cannot tell what the key is.
In this challenge, you will design a simple block cipher that is intentionally vulnerable to a key recovery attack, and challenge others to try and exploit it.
## The Cops' Challenge
1. Design a block cipher. Design an encryption function $$\E(s,K)\$$ and decryption function $$\D(\hat{s},k)\$$ that take strings (or your language's closest equivalent) of a fixed length $$\16\$$ bytes and a key of fixed length $$\16\$$ bytes and outputs a string of length $$\16\$$ bytes. Your $$\E\$$ and $$\D\$$ functions must have the property that $$\D(E(s,k),k) = s\$$ for all 16-byte strings $$\s\$$ and $$\k\$$.1 The functions must be deterministic (not use any randomness) and pure (not rely on any outside state). Your $$\E\$$ and $$\D\$$ must work within the integer/float precision of your language. Specifically, you may not treat floating point as if it's arbitrary precision, nor may you assume integers of arbitrary size if your language utilizes fixed-size integers.
2. Implement a secret key-recovery attack on your block cipher. Write a program that makes calls to $$\E\$$ and $$\D\$$ for a secret, unknown key $$\k\$$ and fully recovers the key by observing properties of the input/output pairs. The key must be recovered with probability $$\1\$$ - you may not rely on probabilistic approaches.2 You must treat $$\E\$$ and $$\D\$$ as black boxes, from which you can only observe their input and output. This means you must not utilize runtime introspection, timing information, or other side effects of the implementation. You must only pass full $$\16\$$ byte strings to $$\E\$$ and $$\D\$$, and not any other type. This means you may not rely on special objects with overloaded operators or similar to glean information about how the input is processed by $$\E\$$ and $$\D\$$. Your attack may be adaptive, in that it decides which strings to pass in based on outputs to previous strings. To enforce a practicality limit, your attack must work for a combined total of strictly less than $$\2^{16}\ = 65536\$$ calls to $$\E\$$ and $$\D\$$ for any key $$\k\$$. If the block cipher you design has the property that for keys $$\k_1\$$ and $$\k_2\$$ that $$\E(s,k_1)=E(s,k_2)\$$ and $$\D(s,k_1)=D(s,k_2)\$$ for all $$\s\$$, then we call these keys functionally identical, and your attack may recover any functionally identical key to the original.
That's it! You will reveal both the encryption and decryption functions $$\E\$$ and $$\D\$$, and challenge the robbers to find your key recovery attack (or possibly a different one).
Clearly, the challenge is to design your $$\E\$$ and $$\D\$$ to look secure, but they have some catastrophic weakness that allow you to recover the key with very few calls. Another approach is to 'trapdoor' the function in some way only known to you. In the spirit of Kerckhoffs's principle, you are encouraged to post a short explanation of what your $$\E\$$ and $$\D\$$ do, especially if they are written in an esoteric language.
You may use cryptographic functions if you wish, but using them presents several practical problems. Hashing functions are designed to be one way and your are unlikely to be able to design both an encryption and decryption function that utilizes them. Symmetric ciphers have both encryption and decryption, but is unlikely to allow the key recovery attack outlined here.
If no-one mounts a successful attack in 7 days, you may post your key recovery attack and mark your answer as safe, which prevents it from being cracked. Note your submission can still be cracked until your reveal your attack.
Your answer is invalid if you do not follow the rules set above. Your answer can be declared invalid even after it is marked safe, if it turns out your revealed attack does not obey the rules.
The shortest safe submission, calculated as the sum of the bytes of the two functions $$\E\$$ and $$\D\$$, wins. Your functions must be named.
## The Robbers' Challenge
1. Find a vulnerable answer. That is an answer, which hasn't been cracked yet and which isn't safe.
2. Crack it by designing a key recovery attack. Your attack must follow the rules outlined in the cops section. To recap, this means:
• The total number of calls to $$\E\$$ and $$\D\$$ with the key $$\k\$$ must be strictly less than $$\2^{16}\$$
• You must only pass $$\16\$$ byte strings to $$\E\$$ and $$\D\$$, and must have the key $$\k\$$ initially be unknown
• The attack may be adaptive but must work to recover any 16 byte key $$\k\$$ (or a functionally identical key)
• You must treat $$\E\$$ and $$\D\$$ as black box, and may not use runtime introspection, timing information, etc.
If you've found such a attack, post an attack on the robber's thread linking back to the answer. If possible, you should post a link to an online interpreter which allows others to run your attack for various keys $$\k\$$. You are encouraged to post how your answer works, and the maximum number of calls your approach makes to $$\E\$$ and $$\D\$$. If your attack does not recover the key, but instead a functionally identical one, explain (briefly) why they are functionally identical.
The user who cracked the largest number of answers wins the robbers' challenge. Ties are broken by the sum of bytes of cracked answers (more is better).
## Example #1
### Python 3, 133 bytes (cop)
E=lambda s,k:''.join(chr((ord(c)+ord(d))%256) for c,d in zip(s,k))
D=lambda s,k:''.join(chr((ord(c)-ord(d))%256) for c,d in zip(s,k))
Try it online!
My program computes the sum of $$\s_i\$$ and $$\k_i\$$ for each $$\i\$$.
### Python 3, cracks xxx's answer
leaked_key = E('\0'*16,k)
print('key = %s' % leaked_key)
Try it online!
My crack completes in $$\1\$$ call and uses that fact that $$\0 + k = k\$$.
## Example #2
### Python 3, 147 bytes (cop)
def E(s,k):
o=''
V=[*range(256)]
j=0
for i in range(16):
j+=V[i]+ord(k[0])
j%=256
V[i],V[j]=V[j],V[i]
o+=chr(ord(s[i])^j)
return o
D=E
Try it online!
My program uses a complicated thing.
### Python 3, cracks yyy's answer
leaked_key = ''
for c in range(256):
if E('f'*16,chr(c))==E('f'*16,k):
leaked_key = chr(c)+'x'*15
break
print('key = %s' % leaked_key)
assert E('abcdabcdabcdabcd', leaked_key) == E('abcdabcdabcdabcd', k)
assert D('abcdabcdabcdabcd', leaked_key) == D('abcdabcdabcdabcd', k)
Try it online!
They only ever use the first byte of the key, so we can just bruteforce the first byte and pad with anything to get a functionally identical key. This involves a maximum of $$\256\$$ calls to $$\E\$$ with the secret key.
1. This means that if your language uses null-terminated strings, such as C, then you should be using memcpy-type operations instead of string operations. Since the input length is fixed as 16 bytes, this should be no issue.
2. This requirement forbids most kinds of Birthday attack.
# Questions to sandbox users:
• I know this is a lot to take in. Is it clear?
• Can anyone think of a trivial way to trapdoor $$\E\$$ and $$\D\$$ with eg. a hashing function? I don't think it's possible, but I could be wrong.
• I love this idea! I think it's written in a pretty clear way, I think you could trivially trapdoor E and D, by doing something like if (s == hash("sixteen_byte_str")) return k, but disallowing cryptography functions should fix that – Redwolf Programs Sep 7 '20 at 14:06
• @RedwolfPrograms Glad you think it's clear! Out of curiosity, if you wrote that as your encryption function, how would you write the corresponding decryption function? – Sisyphus Sep 7 '20 at 22:58
• Something like if (ŝ == k) return hash("sixteen_byte_str"), you'd just need to ensure there's no way it could be confused with a value that legitimately encrypts to k (which would be easily doable by replacing it with whatever hash("sixteen_byte_str") would typically encrypt to). Using crypto functions to trivially win a CnR challenge is practically a loophole, and is likely to be downvoted anyway. (Btw, when I write x == hash("sixteen_byte_str"), I mean hash(x) == "sixteen_byte_str") – Redwolf Programs Sep 8 '20 at 1:51
• Actually, wait, I'm being stupid. I think there's no way to not have it return hash(x) == "sixteen_byte_str" in one of the two functions, so there doesn't appear to be a trivial way to trapdoor it. I'd still disallow crypto in case someone uses some sort of fancy asymmetric thing, but I can't figure it out if there is. – Redwolf Programs Sep 8 '20 at 12:08
# Take 6!
A good card game is a wonderful thing. I got me a nice fresh set of Take 6! Too bad though, I have no-one to play with. And so I turn to you!
## The Game
The game is played with a set of 104 cards, numbered 1 to 104 inclusively. Each card has a number of 'cows' attached. Here's a quick Python function to calculate the number of cows:
def cows(card):
out = 1
if(card % 5) == 0:
out += 1
if(card % 10) == 0:
out += 1
if(card % 11) == 0:
out += 4
if(card % 5) == 0: # C-c-c-combo
out += 1
return out
Therefore, there is a total of
• 1 card with 7 cows (number 55)
• 8 cards with 5 cows (the other multiples of 11: 11, 22, 33, 44, 66, 77, 88, 99)
• 10 cards with 3 cows (multiples of ten: 10, 20, 30, 40, 50, 60, 70, 80, 90, 100)
• 9 cards with 2 cows (other multiples of five: 5, 15, 25, 35, 45, 65, 75, 85, 95)
• 76 cards with 1 cow (all other cards)
The game is played by up to 10 players.
Each player is given 10 cards. 4 cards are placed on the table as the starts of 'rows'. Then 10 turns of play take place. Then, results are calculated.
### A turn
Each player selects one of their remaining cards. At the same time, they reveal their selected cards.
Going in the order of lowest card number, the player whose card it is must place it into a row according to rules:
1. If there is a row with the top card of a lower number than the player's and no such row with a lower number exists, their card must be placed at the end of the row. If their card is the sixth in a row, they take the first 5 cards and put them on their result pile, leaving theirs as the new start.
2. If no such row exists, they must pick one of the rows, take all the cards there to their result pile, and leave their card as the new start.
Examples:
row tops: 10 20 30 40
played: 25
must be placed on the row with a 20, creating the configuration 10 25 30 40 with a possible cow gain
row tops: 10 20 30 40
played: 9
pick any row, creating for example 10 20 9 40, but guaranteed to gain cows
### Counting
The sum of cow values of the cards in a player's result pile is their score. The lower the score the better.
Scores may be added up over several games, creating an overall score for a match.
## Bots
Bots will be standalone programs. Everything belonging to a bot will be placed in a single directory, the name of the directory will be used as the name of the bot. A launch script named launch (may be the entire bot) must be provided. If necessary, a compilation script named build may be provided. Both scripts shall be placed directly in the bot's directory and should use shebangs to specify how they are to be run.
Bots shall not interfere with other bots, the controller, or the git repositories used.
The bots will have the option of storing extra information in files in their own directory. It will be wiped when a fresh series is being run (such as after adding a new bot).
An override input format may be provided. I intend to use StringTemplate for this, I'll write up some details when working on the controller. The default format will have all messages newline-terminated.
Once launched, the bot will be first given their cards, as a list of card numbers, where the numbers may or may not be ordered.
The default format will be
cards 0 1 2 3 4 5 6 7 8 9
No response is expected.
For each round, the bot will be prompted with the current state of the grid, that is the number of cards in each row, the sum of cows in each row and the top number card in the row.
The default format will be
count 1 2 3 4
cows 5 6 7 8
top 11 20 22 35
The bot shall answer with the number of one of its remaining cards.
The list of all the cards used by all bots in the round will be given to each bot. Not that this includes the bot's own card. The order of bots in this message will be consistent within a game.
The default format will be
used 0 1 2 3 4 5 6 7 8 9
No response is expected.
If the placement rule 2. has to be invoked, the bot will receive a message containing the board state at the time when it needs to pick a row
The default format will be
pickrow
count 1 2 3 4
cows 5 6 7 8
top 11 20 22 35
The bot shall respond with the number of the row it wishes to take. The rows will be 0-indexed for this.
If the bot's move results in a gain of result cows, it will be informed of which cards and how many cows it has gained (note that the lower the number the better).
The default format will be
cardgain 1 2 3 4 5
cowgain 6
No response is expected.
At the end, all bots will be shown their score as well as all the scores of others, in the order consistent with the used cards message.
The default format will be
score 30
others 0 1 2 3 4 5 6 7 8
No response is expected.
If the bot makes an invalid move, it will be delivered a special message informing it of such. From that point the bot's current game is over. It gets 100 points of penalty.
The default format will be
invalid
A timely shutdown is expected.
The bot may of course try to save information to its private file at any time, including at the end.
After the final message, the bot shall terminate in a timely manner.
Scoring will be added up over many games, number depends on how fast the games end up running, but at least 100 sounds reasonable to me.
Bots will be placed in a separate github repository TODO for easy setup and reseting. Bots that need a compilation script but don't have one will be given one.
## Controler
Work has started at https://github.com/MrRedstoner/Take6KOTH
The controller will be designed to run in Java 1.8+, using the Process API to launch bots.
# Notes:
While the number of bots is too low, it will be padded to 10 by using multiples of primitive bots. The tournament style once 11+ submissions exist is for now playing all subsets of size 10.
I intend to write up at least a few primitive bots, to get the games going. Something like using cards in the order they were given, or randomly. These will also demonstrate the custom input functionality. Maybe even one that uses external input, to let me play for fun!
Limits for execution time, storage of data etc. are not given at this time. If bots start to behave excessively limits may be added.
Sandbox notes:
Any better idea for tournament?
Should bots be given the names of their competitors as well? Currently leaning towards yes.
Planned tags:
• Even though most people can read python, you should still include a written description of how the cows are counted. As it is, your program counts twice for it being divisible by 5 in the case of 55, is that intentional? – FryAmTheEggman Sep 18 '20 at 18:13
• @FryAmTheEggman it is indeed intentional, it's a combo for a reason :D. The result also matches what wikipedia describes about the game. Should have some more to edit soon so I'll make the change then. – Mr Redstoner Sep 18 '20 at 18:16
• But when do you take 720?? /s – Jo King Sep 21 '20 at 9:39
# Complete the landscape
Carcassonne is a tile-based game, where the objective is to construct Roads, Cities and Monasteries, in order to score points. The game works by players taking turns to draw and place tiles to construct a landscape, then claiming roads, cities and monasteries. An example landscape is:
There are $$\19\$$ distinct tiles (ignoring rotations), each of which contains at least one feature (Road, City or Monastery):
Also, notice that the landscape must be consistent. This means that roads must connect to other roads, city edges must connect to other city edges and fields must connect to fields. Therefore, these tiles are inconsistent:
To avoid this challenge being about image processing, we can translate each tile into a list containing $$\5\$$ values, according to this legend:
[North edge, East edge, South edge, West Edge, # of cities]
0: Field
2: City
For instance, this tile can be described as [2, 0, 1, 1, 1]. Using this legend, we can describe each tile uniquely, and it's rotations are rotations of the first four elements. The entire grid can be described as a rectangular matrix, with a $$\20^\text{th}\$$ distinct value for an empty square. Translating the first landscape into this format, we get:
[
[ [], [], [1, 1, 0, 0, 0], [1, 1, 2, 1, 1], [0, 1, 0, 1, 0], [], []],
[[1, 0, 1, 0, 0], [], [0, 0, 0, 0, 0], [2, 0, 2, 0, 2], [], [0, 2, 2, 2, 1], [0, 0, 0, 2, 1]],
[[1, 1, 0, 1, 0], [0, 0, 1, 1, 0], [0, 0, 0, 0, 0], [2, 2, 0, 0, 1], [2, 2, 0, 2, 1], [2, 0, 0, 2, 1], []]
]
using [] to represent an empty square. The complete list of tiles (ignoring rotations) in the same grid as the second image is
[1, 0, 1, 0, 0] [0, 0, 1, 1, 0] [2, 1, 1, 1, 1] [0, 1, 1, 1, 0] [2, 0, 0, 0, 1]
[2, 2, 0, 2, 1] [0, 0, 0, 0, 0] [2, 2, 2, 2, 1] [2, 2, 0, 0, 1] [2, 1, 1, 2, 1]
[2, 2, 0, 0, 2] [0, 0, 1, 0, 0] [2, 0, 1, 1, 1] [2, 1, 1, 0, 1] [0, 2, 0, 2, 1]
[1, 1, 1, 1, 0] [2, 1, 0, 1, 1] [2, 2, 1, 2, 1] [2, 0, 2, 0, 2]
Your task is to take in a rectangular matrix where every element save one is one of the 19 tiles given above or one of their rotations. Tiles can appear more than once, and not every tile will appear in every input. This landscape will be consistent, as defined above. You should take in this input and output the tile that could fill the empty space in the array, keeping the landscape consistent, as defined above.
If there are multiple tiles that would work, you may output either all of them or just one. If no such tile exists, you may produce any output/result that could not be construed as a tile (i.e. it's not in the 19 tiles specified above, nor in any of their rotations). The "empty space" in the input may be your choice, so long as its consistent, and (although I'm not sure why you would) it isn't one of the 19 tiles above or their rotations, and there will only ever be a single empty space.
This is , so the shortest code in bytes wins.
# Meta
• Is this clear enough?
• More specifically, is the definition of a "consistent landscape" objective and understandable?
• This is a somewhat related question, but I believe there are enough differences between the two for them to not be duplicates. Thoughts?
• Tags are , , , . Any suggestions?
• Any further feedback?
• @Beefster Aside from involving tiling, I'm not quire sure how that challenge is related, let alone a possible duplicate – caird coinheringaahing Sep 28 '20 at 22:00
• Filling Carcassonne tiles in a grid can be thought of as a specific case of wang tiles with a different set of tiles, but upon closer inspection, seeing as your challenge is to complete the landscape, rather than fill a grid from nothing, this is actually a pretty different challenge. – Beefster Sep 28 '20 at 22:03
• Related – Beefster Sep 28 '20 at 22:04
# AOG Day 1: The Advent Begins
What good is Advent... without the actual Advent Calendar? Fortunately, we already have an advent calendar, so we don't need to worry about getting one of those! But since it's not a physical calendar, we can't open the doors with our hands; rather, it's a block of characters. How do we open a door on that calendar?
## Challenge
You will be given an advent calendar in a state from day 0 to 24 and you are to open the next door. Essentially you will be given a calendar (a 5x5) containing all numbers for (X+1) to 25 (inclusive) and blanks to fill the rest except for one square which is the current door. Your task is to take the treat and open the next door for this day.
## Input
Input will be a 5x5 of values. You can choose to take this in any reasonable format, but you must leave it as a grid, you cannot I/O as a flat list. Three types of values are needed: the days must be represented as integers from 1 to 25, and opened windows / the treat window need to be two consistent distinct values; for example, 0 and -1, [] and "", or anything else reasonable enough.
## Output
Output should be a 5x5 of values in the same format as the input. The next day (the smallest remaining integer) should be replaced with a treat window, and the treat window from the prior day should become empty (take the treat every day).
## Sample Test Cases
These use _ for empty windows and * for the treat. This is mostly to make it look visually nice for this question.
Input -> Output
3 7 25 10 14 3 7 25 10 14
24 12 * 15 9 24 12 _ 15 9
2 22 18 23 17 -> * 22 18 23 17
4 8 5 13 19 4 8 5 13 19
6 21 20 11 16 6 21 20 11 16
8 23 16 12 14 * 23 16 12 14
9 _ 24 _ _ 9 _ 24 _ _
17 _ 13 25 10 -> 17 _ 13 25 10
15 19 21 18 11 15 19 21 18 11
_ * _ 20 22 _ _ _ 20 22
8 18 16 21 _ * 18 16 21 _
10 17 _ _ 19 10 17 _ _ 19
24 22 14 20 25 -> 24 22 14 20 25
15 12 _ 23 13 15 12 _ 23 13
11 _ _ * 9 11 _ _ _ 9
11 3 6 14 7 11 3 6 14 7
23 15 10 1 21 23 15 10 * 21
5 13 2 16 25 -> 5 13 2 16 25
19 4 12 9 8 19 4 12 9 8
24 18 20 22 17 24 18 20 22 17
You can generate more test cases here.
## Rules and Specifications
• The calendar will always contain 25. However, it may not always contain * or _.
• Standard loopholes apply, as always.
• this is a challenge so your score is determined by your code length in bytes with a lower score being better; however, a solution will not be accepted.
## Sandbox
• This challenge will be posted on December 1st, 2020.
• Is this too easy/trivial, or a duplicate?
• tags will be
• Why would the input not include a *? Furthermore, I'd include a test case with no _ (or * if that's possible). Just to clarify, input can be taken as a 5x5 2D array (but not as a flat list of 25 elements)? – caird coinheringaahing Oct 2 '20 at 1:12
• @cairdcoinheringaahing If it's day 0 (so if the calendar is unmodified) then nothing has been opened. I will include a case to reflect that; the first test case doesn't contain a _. Also yes, though I could be convinced to change that. – HyperNeutrino Oct 2 '20 at 1:13
• So on day 0 the advent calendar is just a grid of 25 numbers, and the program must "open" door 1? – caird coinheringaahing Oct 2 '20 at 1:14
• @cairdcoinheringaahing Yes. I added a test case (last one) for that – HyperNeutrino Oct 2 '20 at 1:15
# Output all of printable ASCII using all of printable ASCII
Posted
• "Irreducible" isn't really an observable requirement; I'd recommend looking into using pristine-programming to make it an objective criterion. – HyperNeutrino Oct 12 '20 at 18:31
• What do you mean by "observable"? "irreducible" simply means you can't purely remove characters (not purely substrings) from the program and have it still work (not merely not error). That's pretty objective, is it not? – pxeger Oct 12 '20 at 18:39
• Actually, yes it seems you're right, I was probably thinking of some other common criteria that isn't valid. Otherwise challenge looks good, doesn't seem to be a duplicate. I would say this isn't kolmogorov complexity since it's not constant but it is restricted source albeit not in the common usage. – HyperNeutrino Oct 12 '20 at 18:48
• Can my program contain additional non-ASCII bytes? – Adám Oct 12 '20 at 19:00
• @Adám yes, in the post it says "Your program, and its output, can contain any additional non-printable-ASCII bytes (bytes, not characters) if you like, such as newlines". "non-printable-ASCII" includes "non-ASCII" – pxeger Oct 12 '20 at 19:01
• Ah, I see. Maybe clarify that you mean both non-[printable-ASCII] and [non-printable]-ASCII. – Adám Oct 12 '20 at 19:03
• Perhaps subtract 95 from each score so that scores look more reasonable – Lyxal Oct 13 '20 at 10:51
• @Lyxal my reasoning for not doing that was because I suspect most answers will be quite a lot longer in order to make sure they're irreducible, it would complicate things, and IMO it doesn't really matter if they're that length – pxeger Oct 13 '20 at 10:55
# Round a Matrix
Your input is a 2d array of nonnegative floats A. It can be supplied in whatever format is most acceptable for your language. It can have any dimensions.
Let r and c be the 1d arrays of row and column sums of A respectively, rounded to the nearest integer, with the rule that 0.5 is rounded up to 1.
Your task is to output a 2d array of nonnegative integers B such that |b_{ij} - a_{ij}| < 1 for all i and j, and also the row and column sums of B are equal to r and c respectively.
In other words, B is obtained by rounding each element of A up or down, in such a way that the row and column sums are preserved.
There may be many possible solutions. In this case, you only need to output one of them.
If there is no solution, your program's behaviour can be undefined.
Example:
A = 1.2 3.4 2.4
3.9 4.0 2.1
7.9 1.6 0.6
in this case, the row sums are [7.0, 10.0, 10.1] and the column sums are [13.0, 9.0, 5.1] so after rounding these, you get r = [7 10 10] and c = [13 9 5]. One acceptable solution is
B = 1 3 3
4 4 2
8 2 0
This is code golf, so the shortest code wins.
## Motivation
I am also interested in what clever algorithms people can come up with. I guess the most obvious is just to do a random search, but that can take a very long time, even if the array is only 10x10 or so.
## Questions
• Is it clear? Please can you edit it if it's not in the right format?
• Has it appeared here before? (I don't think so, because I was searching Stackoverflow for a while in order to come up with a solution to this.)
• Is there always a solution under the conditions given here?
• Would it be better in some other format than code golf?
• Should the condition |b_{ij} - a_{ij}| < 1 be |b_{ij} - a_{ij}| <= 1?
• Since you want optimal, interesting solutions, rank by time complexity. You'll get fewer answers, but they will be more optimal than a direct brute force approach. – Razetime Oct 22 '20 at 6:53
• The suggestion of using complexity isn't often a good one - most challenges here that try to do that wind up closed or unanswered. It would be much simpler to go by execution time for some number of test cases that you pick. For the actual question, I think you should explicitly say that r and c are computed by summing and then rounding (assuming that is the correct order) as it isn't precisely clear from what you have right now. – FryAmTheEggman Oct 22 '20 at 20:34
# The Fibonacci Rectangular Prism Sequence (posted)
• There are the square roots of A127546. It looks like there are ways to generate this sequence shorter than just generating Fibonacci numbers and adding their squares. So, this doesn't strike me as a duplicate but an interesting challenge in its own right. I'd recommend removing the square-root step from the challenge and just asking for the sum of the three squares, which is a whole number. This might also allow for more interesting recursive solutions. You should include test cases, perhaps something like the first 15 elements of the sequence and maybe one big one. – xnor Oct 27 '20 at 0:39
• For clarity, I also recommend explicitly giving the formula for the k-th term in terms of the respective Fibonacci numbers, so that solvers don't need to know the Pythagorean formula for the diagonal of a prism. And, just in case, give the recursive formula for the i'th Fibonacci numbers. Mathjax is enabled here, but you have to use $ delimiters in place of . – xnor Oct 27 '20 at 0:45 • @xnor Just throwing in an equation seems odd for a code golf challenge. Do you have any ideas for context? Or is that okay here? (I guess I could always just write that you have to square it after...) – nthnchu Oct 27 '20 at 1:18 • Not quite sure what you mean here. I do think it would be good to keep the Fibonacci prism context as some motivation and flavor. I'm not suggesting removing that, but adding a formula like$g(n)=F_n^2 + F_{n+1}^2 + F_{n+2}^2$(or with a square root if you want to keep that) and the definition of Fibonacci numbers$F_n$. I can say there's a preference here for challenges to have the task easy to read by skimming. And, to give a formula if possible and save solvers a bit of a time from doing math problem, although clever golfers may find shorter alternative ways to express or compute it. – xnor Oct 27 '20 at 2:32 • I've edited the question. Is that what you wanted @xnor? – nthnchu Oct 27 '20 at 13:08 • Yes, this looks good. You should still add test cases. I'd suggest also linking oeis.org/A127546. – xnor Oct 28 '20 at 4:12 • I think the first test case ought to be 1 ==> 6 – Giuseppe Oct 28 '20 at 21:15 • @Giuseppe Yeah, you're right. Thanks for the correction! – nthnchu Oct 28 '20 at 22:12 • I made some clean-up edits, in part to avoid references to programs and functions, since either is allowed by allowed. – xnor Oct 29 '20 at 0:38 • @xnor Thank you! When should I post this (out of the sandbox)? (I'm new :D) – nthnchu Oct 29 '20 at 22:18 • @nthnchu The usual recommendation is 3 days minimum, but it's really up to you. I just read through it again, and I think it all looks good. One minor thing is that we allow zero-indexing for sequence challenges by default, which would allow doing the mapping as 0 ==> 6, 1 ==> 14, .... So I think it would be good to say that input may be taken zero-indexed to remind solvers of this. – xnor Oct 29 '20 at 22:43 • @xnor I choose the 1 index off of${F_1}^2+{F_{1+1}}^2+{F_{1+2}}^2 = 6$. 0 would therefore be${F_0}^2+{F_{0+1}}^2+{F_{0+2}}^2 = 2$. The index is based off of$F_0=0$and$F_1=1$– nthnchu Oct 29 '20 at 23:08 # I only want some primes, not all of them It is well known that there are various formulae for calculating primes that span from calculating a subset of primes, to all possible primes. However, for this challenge, I only want a specific subset. You are to write a program which takes a single natural integer $$\n>0\$$ as input. This program will then output a function, $$\f(p)\$$, which will take a integer $$\p\ge0\$$ and do the following: • If $$\p < n\$$, return the $$\p\$$th value of a contiguous subset length $$\n\$$ of primes • If $$\p \ge n\$$, returns a non-prime integer (including $$\0\$$, $$\1\$$ and negative integers). For example, Euler's quadratic $$\p^2+p+41\$$ returns the $$\p\$$th value of the subset of primes $$\\{41, 43, 47, ..., 1601\}\$$ for $$\0 \le p \le 39\$$. However, for $$\p=43\$$, this returns $$\1933\$$, which is prime, so this would not be a valid function to return for $$\n = 40\$$. You may choose the subset (and it may differ for different $$\n\$$), so long as it is finite and contiguous. You may also choose to use 1-indexing for $$\p\$$, meaning that $$\f(p)\$$ returns primes for $$\1 \le p \le n\$$. You may output in the most natural form of a function in your language. For example, Jelly would return a string representing a link, Python would return a named function or lambda etc. This is so the shortest code in bytes wins. # Meta • 1. Why output a function and not just have a program with two inputs? 2. Maybe you should note that 0, 1, and negative numbers are not prime. 3. I also think you should clarify that "length n" refers to the subset, not its elements. That intially confused me for a while. 4. What if$p$or$n$is negative? Or zero? - is the "$p$th value" using a 0-based index? – pxeger Nov 3 '20 at 20:20 • @pxeger 1. the idea of the challenge is based on prime calculating formulae, so I want submissions to return "a formula" for a given$n$, rather than just a single value for two values$n$and$p$. Whether the "formula" is a mathematical one or just "if$p<n$then ... else ..." is irrelevant. 2, 3, 4. All clarified – caird coinheringaahing Nov 3 '20 at 20:26 • "a natural integer$p \ge 0$" - zero isn't technically a natural number. Personally I think you should just drop the "natural" because an inequality is clearer. – pxeger Nov 3 '20 at 20:29 • @pxeger There's some disagreement about whether$0\\$ is natural or not (something I've had to deal with in past challenges), but I agree, the inequality + just "integer" is much clearer – caird coinheringaahing Nov 3 '20 at 20:30
• Seems to be a heavy dictionary-ing challenge that might not make it suitable, but otherwise cool idea. – Beefster Nov 9 '20 at 20:16
# Middle-Square RNG: What Number Came Before? (WIP)
A well-known, but statistically poor, way of generating random numbers is to square the number and take the middle digits (when expressed in base 10)
Your task is to take a 4-digit number as input and output any 4-digit number that produces the input number (there may be more than one, which is one of the statistical flaws of this method) when applying middle-square. If the square has an odd number of digits, take an extra digit off the left side.
If there is no such number (some numbers with this method have no predecessor- yet another statistical flaw), indicate that clearly in a way that cannot be mistaken as a valid answer. Some possible ways of indicating this:
• Output nothing
• Output null/None/nil/false
• Output an empty list
• Output a negative number
• Output an error message that is clearly not a 4-digit number
• Throw an exception
• Crash
• Exit with a nonzero status
• Is there a theoretical guarantee that any number if attainable? (i.e. is there always a solution?) – Robin Ryder Nov 23 '20 at 8:21
• @RobinRyder no. Some numbers have no predecessor. – Beefster Nov 30 '20 at 21:11
# Operational countdown
• Posted.
• How to handle floating point imprecision in this given that your doing floating point division and roots then checking to see if those are integers? I cooked up a solution in Perl that is off by 1 on several of your examples because as it gets near 1, the subtraction ends in .999999...... – Xcali Dec 3 '20 at 4:15
• @Xcali it's a trivial part of the challenge, this kind of problem is common to many languages, anyway I think that Perl, like most of languages, can handle integer numbers properly. – AZTECCO Dec 3 '20 at 5:11
# A Snake, A Camel And A Kebab.
As many of you will know, almost every programming language has a standard casing system; unfortunately, we have not been able to agree on a singular system to use and now must frequently switch between camelCase, snake_case and kebab-case.
Now I know what you're thinking... wouldn't it be nice if we had a program that could convert from one casing to another?
Well - soon we're going to have plenty!!! (This is where you come in)
## Challenge
You're job is to write a program/function that will take an input string, and a casing system. It will then print/return the converted string.
### Inputs:
You're program will receive two inputs, an alphabetic string that is to be converted and a string that will always be one of kebab camel or snake.
### Outputs:
You're program should output a string that conforms to the new casing if it is possible. If the input string was invalid, and had mixed casing, you're program should print/do nothing.
## Test Cases:
Valid Examples:
"aJavaVariable", "snake" = "a_java_variable"
"a_python_variable", "kebab" = "a-python-variable"
"golf", "camel" = "golf"
"", "snake" = ""
"doHTMLRequest", "kebab" = "do-h-t-m-l-request"
Invalid Examples (no output):
"an_InvalidName", "kebab"
"invalid-inPut_bad", "camel"`
|
2021-02-25 15:53:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 149, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38911840319633484, "perplexity": 1023.0617501115835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00389.warc.gz"}
|
https://mathzsolution.com/intuitive-approach-to-de-rham-cohomology/
|
# Intuitive Approach to de Rham Cohomology
The intuition behind homology may be summarized in a sentence: to find objects without boundary which are not the boundary of an object. This has geometric meaning and explains the algebraic boundary operator $\partial$ – quotient of vector spaces procedure.
On the other hand, the definition of de Rham cohomology comes always unprovided of such intuitive approach. My question is: how may be intuitively understood de Rham cohomology?
EDIT: An extended version of this answer and further discussion may be found here
This is a way to explain the intuition behind de Rham cohomology:
Cohomolgy comes up as a dual answer to homology. Homology identifies the shape of an object finding ‘holes’. More concretely, it looks for objects without boundary which are not the boundary of an object (and therefore the definition $$Hk(M)=ker∂n/im∂n+1H_k(M)=\text{ker}\partial_n/\text{im}\partial_{n+1}$$).
Cohomology works in a completely different fashion. Instead of looking for subspaces detecting holes, cohomology assigns a real value to each object in our space. For example, in $$R2\mathbb{R}^2$$ we may assign to each curve (oriented, with startpoint and endpoint) the value of the $$xx$$-projection. When the curve moves rightwards, gains projection, whereas loses projection when moving leftwards; that’s good if we want our assignment additive and differentiable: if our curve is divided into several pieces, then it is the same to calculate the value of the whole curve or to add the values of the small ones.
The previous example is actually rather simple; it does not matter the whole curve, only the $$xx$$-coordinates of the startpoint and endpoint, from which we calculate the difference. Indeed a closed curve has always zero value. Let’s consider a less obvious example. In $$R2\mathbb{R}^2$$ we have a vector field, $$f(x,y)=(y,0)f(x,y)=(y,0)$$. We may perform the following assignment: each curve $$γ\gamma$$ has value the circulation integral $$∫γf\int_{\gamma}f$$. As the picture shows, the value does not depend only on the startpoint and endpoint, because the curves that go up and then go down have a positive circulation, but the curves that go down and then go up have a negative circulation. Moreover, a closed curve has nonzero circulation (in general); in the picture, the small closed curve has slightly negative circulation, because on top it goes in the opposite direction to the field and on bottom it is in the field direction, but the field is stronger in the top.
A third example: in $$R2∖(0,0)\mathbb{R}^2\smallsetminus (0,0)$$ we consider the assignment swept out central angle. This example resembles the first one. The important data is the start angle and the end angle. And that’s why a little closed curve has zero swept angle. But pay attention! In the second picture there is a curve that encloses the origin and that sweeps an angle of $$2π2\pi$$, contrary to what we thought about closed curves. This phenomenon is only possible if there are holes in the topological space: we have given zero value to small closed curves, those which are the boundary of a little disc, but other values are allowed for big closed curves, those which perhaps are not the boundary of anything. It is as if we had given values to different homological objects: 0 to the curves not enclosing the origin, $$2π2\pi$$ to those which circle the origin once, $$4π4\pi$$ to those which circle the origin twice, and so on.
As stated, we want our assignment additive. Therefore we only need to know the value we would assign to, say, little curves, little surface pieces or little volumes. This is done by means of a differential form. Differential forms a local valuation in each point and each direction.
Differential forms language is well suited for describing these phenomena. The previous three examples are described by three 1-forms in $$R2\mathbb{R}^2$$: $$α1=dx\alpha_1=\mathrm{d}x$$, $$α2=ydx\alpha_2=y\mathrm{d}x$$ and $$α3=−yx2+y2dx+xx2+y2dy\alpha_3=\frac{-y}{x^2+y^2}\mathrm{d}x+\frac{x}{x^2+y^2}\mathrm{d}y$$ (please note that the last one is not defined at the origin). De Rham cohomology studies these differential forms and a so called exterior derivative $$d\mathrm{d}$$. In the first and third cases, $$dα1=dα3=0\mathrm{d}\alpha_1=\mathrm{d}\alpha_3=0$$ and that’s why small closed curves have zero value; we say that $$α1\alpha_1$$ and $$α3\alpha_3$$ are closed forms. $$dα2≠0\mathrm{d}\alpha_2\neq 0$$, so $$α2\alpha_2$$ is not a closed form. On the other hand, $$α1\alpha_1$$ is exact: $$α1=d(x)\alpha_1=\mathrm{d}(x)$$, so $$xx$$ is the function to be evaluated in the startpoint and the endpoint, and that’s the reason why large closed curves have value zero, because they have the same initial and final point. $$α3\alpha_3$$ is not exact; we would be delighted to say that $$α3=d(angle)\alpha_3=\mathrm{d}(angle)$$, but there is not such $$angleangle$$ function defined in all $$R2∖(0,0)\mathbb{R}^2\smallsetminus (0,0)$$, we always fall into $$2π2\pi$$ steps.
So for our cohomological search of holes, we must find closed forms which are not exact.
|
2022-09-29 04:14:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 30, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7806203365325928, "perplexity": 211.19726421046835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00690.warc.gz"}
|
https://stats.stackexchange.com/questions/326354/can-we-state-that-if-kl-divergencepq-hp-then-q-is-informative-of-p-and?noredirect=1
|
# Can we state that If KL-Divergence(P||Q) < H(P) then Q is “informative” of P and not otherwise?
From what I've read the KL-Divergence between $P||Q$ is the extra amount of "bits" you need to describe $P$ if you are encoding it with $Q$.(Analysis of Kullback-Leibler divergence).
I want to know when does $Q$ gives me at least "some" information about $P$?
My logic is as follows:
• We normally need $H(P)$ bits to describe $P$.
• If I am given $Q$ then I would only need $KL(P||Q)$ more bits to specify $P$.
• Therefore as long as $KL(P||Q) < H(P)$, I have been able to reduce the amount of bits needed to describe $P$.
Can I conclude that if $KL(P||Q) < H(P)$ then $Q$ has predictive power ( > 0) over $P$?
Additionally, if $KL(P||Q) > H(P)$ then there is no predictive power.
• H(P) is the average number of bits needed to describe a value randomly drawn from P, if you use a code based on perfect knowledge of P. Your train of thought seems rooted in the idea that the code is built with no knowledge of P. You can't gain information about P by approximating it with Q. – Jonny Lomond Feb 1 '18 at 22:40
• I see, yes I was incorrectly assuming code was built w/o knowledge of P. Regarding your last statement, isn't mutual information (similar to KL-divergence but you need to bin the data) used for quantifying such things? Can KL-div help us do such things? – RM- Feb 2 '18 at 7:15
• What do you mean by "predictive power" in this context? – DiveIntoML Feb 2 '18 at 16:09
• I guess I am referring to a "goodness" of fit, if that makes sense. I basically want to compare two distributions and say how good is my model for predicting the truth but in normalized way. Normalized mutual information could be a valid option but I would have to "bin" the data, which I'd rather not. I was wondering if I could use KL-divergence and have a simple thumb rule that would say "pass" or "fail". – RM- Feb 2 '18 at 18:11
|
2020-04-06 02:53:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7012202143669128, "perplexity": 450.37765680726625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00321.warc.gz"}
|
http://mathhelpforum.com/number-theory/108415-primes.html
|
# Math Help - Primes
1. ## Primes
Prove that if one chooses more than n numbers from the set $\lbrace1, 2, 3, . . . , 2n\rbrace,$ then two of them are relatively prime.
2. Partition them as $\{1,2\},\ \{3,4\},\ \hdots,\{2n-1,2n\}$. Since there are $n$ parts, one part must contain two of the numbers, and hence two of the numbers are adjacent. Since adjacent numbers are relatively prime we are done.
|
2014-09-23 09:18:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 1, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8459888696670532, "perplexity": 300.14529715035405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138086.23/warc/CC-MAIN-20140914011218-00059-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://www.trustudies.com/question/2425/q-8-out-of-15-000-voters-in-a-constit/
|
3 Tutor System
Starting just at 265/hour
# Q.8 Out of 15,000 voters in a constituency, 60% voted. Find the Percentage of voters who did not vote. Can you now find how many actually did not vote?
Total number of voters = 15,000
Percentage of the voters who voted = 60%
$$\therefore$$Percentage of the voters who did not vote = (100 – 60)% = 40%
Actual number of voters who did not vote
= 40% of 15,000
=$$\frac{40}{100}$$×15,000=6,000
|
2022-12-07 15:49:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39909812808036804, "perplexity": 3086.5288850411753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00777.warc.gz"}
|
http://mathoverflow.net/questions/39098/the-question-about-kolmogorov-tightness-criterion?sort=oldest
|
# The question about Kolmogorov tightness criterion
We know about Kolmogorov Criterion for the tightness of a stochastic process $X_n(t)$
1.The sequence $(X_{n}(0))_{n\geq0}$ is tight.
2.There exist constants $\gamma\geq0$,$\alpha>1$, $K>0$ and an integer $n_0$ such that $$E(|X_{n}(t_{2})-X_{n}(t_{1})|^{\gamma})\leq K|t_{2}-t_{1}|^{\alpha}, \forall n \geq n_0$$ for all $t_{1},t_{2}$.
My first question: what should the $n_0$ depend? Could it depend on the $t_{1}$ and $t_{2}$?
My second question: Is there any other criterion for tightness with the parameter $\alpha=1$ for the version of the moment condition?
-
What space is your stochastic process in? If it's say C[0,1], then the result holds for all n and all t1,t2 in [0,1] – Alex R. Sep 17 '10 at 20:15
Yes it is in space C[0,1], thanks for your help – syh2010 Sep 20 '10 at 6:54
$n_0$ must be independent of $t_1$ and $t_2$, of course. If it's not, the processes might be even discontinuous. For instance, $X_n$ is a Poisson process with parameter $1/n$. Then $$E(|X_n(t_1)-X_n(t_2)|^2)\le |t_1-t_2|^2$$ for all $n>|t_1-t_2|^{-1}$ (for all $n\ge 1$ if $t_1=t_2$).
And the same answer works for the second question: when $\alpha=1$, the processes need not to be continuous. In some special cases, where you have higher moments controlled by a lower one polynomially, it may help (e.g. in the Gaussian case $\gamma=2$ and $\alpha=1$ is enough).
|
2015-03-06 02:27:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7695937156677246, "perplexity": 286.7869005680957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465456.40/warc/CC-MAIN-20150226074105-00134-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/einstein-notation-rot-rot-a.673143/
|
# Einstein-Notation: rot(rot(A))
1. Feb 20, 2013
### smoking-frog
1. The problem statement, all variables and given/known data
Write $$\nabla \times (\nabla \times \vec A)$$ in Einstein-Notation, whereas $$\vec A$$ is the vector potential of the magnetic field.
2. Relevant equations
$$(\vec a \times \vec b)=\varepsilon_{ijk} a_j b_k$$
3. The attempt at a solution
$$\nabla \times (\nabla \times \vec A)=\varepsilon_{ijk} \partial_j(\varepsilon_{lmn}\partial_m A_n)_k$$
What to do with $$(\varepsilon_{lmn}\partial_m A_n)_k$$ though?
2. Feb 20, 2013
### CompuChip
On the right hand side, there is summation over j and k, but i is a free index which only occurs once. What you really meant to write, then, is
$$(\vec a \times \vec b)_i=\varepsilon_{ijk} a_j b_k$$
3. Feb 21, 2013
|
2017-11-19 01:33:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.735539436340332, "perplexity": 1644.5534369146535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805242.68/warc/CC-MAIN-20171119004302-20171119024302-00139.warc.gz"}
|
https://deepai.org/publication/the-delta-method-and-influence-function-in-medical-statistics-a-reproducible-tutorial
|
# The Delta-Method and Influence Function in Medical Statistics: a Reproducible Tutorial
Approximate statistical inference via determination of the asymptotic distribution of a statistic is routinely used for inference in applied medical statistics (e.g. to estimate the standard error of the marginal or conditional risk ratio). One method for variance estimation is the classical Delta-method but there is a knowledge gap as this method is not routinely included in training for applied medical statistics and its uses are not widely understood. Given that a smooth function of an asymptotically normal estimator is also asymptotically normally distributed, the Delta-method allows approximating the large-sample variance of a function of an estimator with known large-sample properties. In a more general setting, it is a technique for approximating the variance of a functional (i.e., an estimand) that takes a function as an input and applies another function to it (e.g. the expectation function). Specifically, we may approximate the variance of the function using the functional Delta-method based on the influence function (IF). The IF explores how a functional ϕ(θ) changes in response to small perturbations in the sample distribution of the estimator and allows computing the empirical standard error of the distribution of the functional. The ongoing development of new methods and techniques may pose a challenge for applied statisticians who are interested in mastering the application of these methods. In this tutorial, we review the use of the classical and functional Delta-method and their links to the IF from a practical perspective. We illustrate the methods using a cancer epidemiology example and we provide reproducible and commented code in R and Python using symbolic programming. The code can be accessed at https://github.com/migariane/DeltaMethodInfluenceFunction
• 2 publications
• 6 publications
• 3 publications
• 2 publications
• 3 publications
• 3 publications
• 5 publications
• 4 publications
03/26/2019
### Estimation of a regular conditional functional by conditional U-statistics regression
U-statistics constitute a large class of estimators, generalizing the em...
03/08/2019
### Kernel Based Estimation of Spectral Risk Measures
Spectral risk measures (SRMs) belongs to the family of coherent risk mea...
10/25/2017
### Asymptotically Efficient Estimation of Smooth Functionals of Covariance Operators
Let X be a centered Gaussian random variable in a separable Hilbert spac...
02/22/2021
### A Small-Uniform Statistic for the Inference of Functional Linear Regressions
We propose a "small-uniform" statistic for the inference of the function...
12/18/2019
### Estimation of Smooth Functionals in Normal Models: Bias Reduction and Asymptotic Efficiency
Let X_1,..., X_n be i.i.d. random variables sampled from a normal distri...
12/07/2021
### A generalization gap estimation for overparameterized models via the Langevin functional variance
This paper discusses the estimation of the generalization gap, the diffe...
03/20/2019
### On approximate validation of models: A Kolmogorov-Smirnov based approach
Classical tests of fit typically reject a model for large enough real da...
## 1 Introduction
A fundamental problem in inferential statistics is to approximate the distribution of an estimator constructed from the sample (
i.e. a statistic). The standard error (SE) of an estimator characterises its variability.Boos2013 Oftentimes, it is not directly the estimator which is of interest but a function of it. In this case, the Delta-Method can approximate the standard error (with known asymptotic properties) using Taylor expansions because a smooth function of an asymptotically normal estimator is also asymptotically normal. Vaart1998 In a more general setting, this technique is also useful for approximating the variance of some functionals. For instance, in epidemiology the Delta-method is used to compute the SE of functions such as the risk difference (RD) and the risk ratio (RR),Agresti2010
which are all functions of the risk (a parameter representing the probability of the outcome).
Armitage2005, Boos2013 Alternatively to the Delta-method to approximate the distribution of the SE Boos2013, MillarMaximumADMB for large samples, we can use other computational methods such as the bootstrap.Efron1993, efron1982 In the course of their research, it may be necessary for applied statisticians to assess whether a large sample approximation of the distribution of a statistic is appropriate, how to derive the approximation, and how to use it for inference in applications. The distribution of the statistic must be approximated to directly estimate its variance and hence the SE because the number and type of inference problems for which it can be analytically determined is narrow.
In this tutorial we introduce the use of the classical and functional Delta-method, the Influence Function (IF), and their relationship from a practical perspective. Hampel introduced the concept of the IF in 1974.hampel1974 He highlighted that most estimators can actually be viewed as functionals constructed from the distribution functions. The IF was further developed in the context of robust statistics but is now used in many fields, including causal inference.hampel1974 The IF is often used to approximate the SE of a plug-in asymptotically linear estimator.Tsiatis:2007aa Mathematically, the IF is derived using the second term of the first order Taylor expansion used to empirically approximate the distribution of the plug-in estimator.Boos2013 It can be easily derived for most common estimators and it appears in the formulas for asymptotic variances of asymptotically normally distributed estimators. The IF is equivalent to the normalized score functions of maximum likelihood estimators.hampel1974
Furthermore, the tutorial includes boxes with R code (R Foundation for Statistical Computing, Vienna, Austria)R2020 to support the implementation of the methods and to allow readers to learn by doing. The code can be accessed at https://github.com/migariane/DeltaMethodInfluenceFunction
. In section 1, we introduce the importance of the Delta-method in statistics and justify the need of a tutorial for applied statisticians. In section 2, we review the theory of the classical and functional Delta-methods and the influence function (IF). In section 3, we provide multiple worked examples and code for applications of the classical and functional Delta-method, and the IF. The first examples involve deriving the SE for the sample mean of a variable, the ratio of two means of two independent variables, and the ratio of two sample proportions (i.e. the risk ratio). Also, we provide a example where the required conditions for the for the Delta-method do not hold. We then show how to use the functional Delta-method based on the IF to derive the SE for the quantile function and the correlation coefficient. Our final example is motivated by an application in cancer epidemiology and involves a parameter of interest that is a combination of coefficients in a logistic regression model. Finally, in section 4, we provide a concise conclusion where we mention additional interconnected methods with the Delta-method and the IF such as M-estimation and the Huber Sandwich estimator.
## 2 Theory: The Classical Delta-method
Let be a parameter. For this tutorial, we are interested in working with an estimand that can be written as a function of (i.e., ) rather than itself. For instance, we may not be interested in the probability of having a particular disease, but in the ratio of two probabilities , where the first probability () is of developing the disease under treatment, the second () is of developing the disease without treatment. The estimand represents the relative risk. Define the estimator of to be , the ratio of the estimators of the respective probabilities. The question is: if we know the variances of and , how do we obtain the variance of ? The Delta-method is one approach to answer this.
Let be an estimator of from a random sample where the s are independent and identically distributed (i.i.d) with a distribution defined with a parameter (i.e. ). Examples of parameters include the rate of an exponential variable (), the mean and variance of a normal distribution or the probability of a specific category under a multinomial model with different categories: with .
Any (measurable) function of the random sample is called a statistic.Casella1998TheoryEstimation In particular, any estimator of is a function of the random sample making it a statistic. For example, if , the mean, is a function of the s. To emphasize the dependency of the estimator, , on the sample size, , we write: . Thus would denote the estimator under a random sample of size and denotes the estimator under a random sample “of infinite size”. Any (measurable) function of the estimator,
also depends upon the random sample and hence it is a statistic too. Due to the dependency upon the random sample, any statistic by itself is a random variable. We can thus characterise the estimator in terms of its distribution. As an example, if the
are i.i.d. then also has a normal distribution with parameters . Furthermore, the statistic has a distribution.
More often than not, the distribution of a statistic cannot be estimated directly and we rely on the asymptotic (large sample) properties of where approaches
. A most powerful and well-known result is the central limit theorem which states under reasonable regularity conditions (i.i.d. variables with mean
)Billingsley1961StatisticalChains that if then, for large ,
√n(^θn−μ)approx∼%Normal(0,σ2) (1)
which is the property that allows us to construct the Wald-type asymptotic confidence intervals:
.Agresti2012ApproximateProportions
However, when the function – a function of one or more estimators with large-sample normality with known variance – is not linear (e.g. the ratio of two proportions) and there is not a closed functional form to derive the SE, we use the Delta-method. The classical Delta-method states that under certain regularity conditions for the function , the statistic , and the i.i.d. random variables s, the distribution of can be approximated via a normal distribution with a variance proportional to ’s rate of change at , the derivative . In the one dimensional case of and , if is asymptotically normal, this theorem states that, for large (Appendix: Delta-method proof):
√n(ϕ(^θn)−ϕ(μ))approx∼Normal(0,ϕ′(θ)σ2).
This provides the researcher with confidence intervals based on asymptotic normality:
^θn±Z1−α/2⋅√ϕ′(θ)σ2n.
To better understand the Delta-method we need to discuss four concepts. First, we need to discuss how derivatives approximate functions such as via a Taylor expansion. Second, we describe convergence in distribution which is what allows us to characterise the asymptotic properties of the estimator. Third, we present the central limit theorem, which is at the core of the Delta-method. Finally we’ll generalize these results to the functional Delta-method using influence functions.
### 2.1 Taylor’s Approximation
For Taylor’s approximation to work we need to have a function that is differentiable at . Following the classical definition of differentiability,Courant1988DifferentialCalculus a real valued function with domain , a subset of , () is differentiable at and has derivative if the following limit exists:
ϕ′(θ):=limh→0h≠0ϕ(θ+h)−ϕ(θ)h.
Intuitively, this definition states that one can estimate a unique tangent line to with slope at by calculating the values of the function at and and reducing the size of (see Figure 0(a)).
This definition can be extended to the multivariate case via directional derivatives (Gâteaux derivatives).Gateaux1919FonctionsIndependantes In multiple dimensions, there is no one unique tangent line that can be generated (see Figure 0(b)); hence, in addition to the function
, one must also specify the direction of the vector
in which the tangent line will be calculated. This results in , the derivative of at in the direction :111 You might notice a slight change in notation where the limit is stated as instead of the classical . The notation implies that the limit is taken with decreasing towards zero in order to distinguish the direction from .
~∂vϕ(θ):=limh↓0h≠0ϕ(θ+h⋅v)−ϕ(θ)h. (2)
As an example, Figure 0(b) is a graph of the function with two different vectors and . Each vector results in a different directional derivative, and , respectively, corresponding to the slopes of the tangent lines in the directions of and respectively.
It turns out that for the Delta-method to be generalized to functionals (i.e. functions of functions) having a Gâteaux derivative is not enough. We require not only that the directional derivative exists but also that it exists and coincides with the one obtained for any sequence of directions that converge to (i.e. ). This is called (equivalently) the compact derivative or the HadamardBeutner2016FunctionalFunctionals (one-sided directional) Zajicek2014HadamardDifferentiability derivative of at in the direction (as long as it is a linear function for any ) and is usually denoted as:
∂vϕ(θ):=limh↓0h≠0ϕ(θ+h⋅vh)−ϕ(θ)h for any sequence vh→v as h↓0. (3)
This concept is illustrated in Figure 0(c) where the specific sequence converges to .
An equivalent definition of the Hadamard (one-sided directional) derivative which is useful for calculations involves setting for some function and with which allows us to rewrite (3) as:
∂G−θϕ(θ)=limh↓0h≠0ϕ((1−h)θ+h⋅Gh)−ϕ(θ)h for any sequence Gh→G as h↓0. (4)
In the particular case of a constant sequence such that the expression reduces to a Gâteaux derivative which can oftentimes be computed as a classical derivative. We discuss a particular case of this derivative, the influence function, IF, (also known as influence curve) in Section 2.4. It is interpreted as the rate of change of our functional in the direction of a new observation, .
Recall that the derivative, , represents the slope of the line tangent to the function. Intuitively, if is close to , the tangent line at should provide an adequate approximation of Figure 0(d)). This is stated in the Taylor first order approximation of around as follows:
ϕ(^θn)≈ϕ(θ)+∂vϕ(θ) (5)
with and the sign is interpreted as approximately equal. This can be rewritten as the more classical approach:
ϕ(^θ)−ϕ(θ)≈∂vϕ(θ) with% v=^θ−θ. (6)
Readers might be familiar with the theorem in the classical notation of univariate calculus which states the approximation:
ϕ(^θ)≈ϕ(θ)+ϕ′(θ)(^θ−θ)v. (7)
In this case the Hadamard derivative coincides with the classical one multiplied by :
∂vϕ(θ)=ϕ′(θ)(^θ−θ).
The justification for this connection is given by Fréchet’s derivative which represents the slope of the tangent plane. Intuitively, if the Hadamard (one-sided directional) derivatives exist for all directions we can talk about the tangent plane to at . The tangent plane is “made up” of all the individual (infinite) tangent lines. The slope of the tangent plane is the Fréchet derivative .Zajicek2014HadamardDifferentiability, ciarlet2013linear. For univariate functions in the Fréchet derivative is ; for functions of a multivariate returning one value, , this derivative is called the gradient and corresponds to the derivative of the function by each entry:
∇ϕ=(∂ϕ∂θ1,∂ϕ∂θ2,…,∂ϕ∂θn).
For multivariate functions, , the Fréchet derivative is an matrix called the Jacobian (matrix):
∇ϕ=⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝∂ϕ1∂θ1∂ϕ1∂θ2…∂ϕ1∂θn∂ϕ2∂θ1∂ϕ2∂θ2…∂ϕ2∂θn⋮⋮⋱⋮∂ϕm∂θ1∂ϕm∂θ2…∂ϕm∂θn⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠. (8)
To obtain the Hadamard (one-sided directional) derivative from the Fréchet derivatives, either or , one needs to apply the derivative operator to the direction vector . This operation can be seen as “projecting” the tangent plane into the direction of hence resulting in the directional derivative:
∂vϕ(θ)=ϕ′(θ)⋅v or ∂vϕ(θ)=∇ϕ(θ)Tv. (9)
Thus the notation in (6) which we’ll use for the remainder of the paper includes not only the functional scenario but also the classical cases of functions in and respectively which can be obtained as the usual (classical) Fréchet derivatives projected onto .
Finally, as a side note, we remark that it is possible to improve the approximation via higher order Taylor’s expansion around (see 0(d))Courant1988DifferentialCalculus, ren2001second:
Tn(^θ)=ϕ(θ)+n∑i=1ϕ(n)(θ)⋅(θ−^θ)nn!
where denotes the -th derivative of defined as the derivative of the -th derivative. Readers interested in pursuing higher order Hadamard derivatives can consult Ren and Sen (2001) and Tung and Bao (2022) REN2001187, tung2022higher.
### 2.2 Convergence in distribution
For any random variable,
, the cumulative distribution function (CDF), also commonly referred to as the distribution function, quantifies the probability that
is less than or equal to a real number . Thus ’s (i.e., the CDF) is given by:
FX(z)=P(X≤z)
where the sign is interpreted pointwise if is a random vector of size (i.e. implies , , etc. for the vector ). The distribution function completely determines all the probabilities associated with a random variable as, for example, can be estimated as for any .
Given a statistic that depends upon the sample size, , the statistic’s distribution function also depends on . Let denote the distribution of and be the distribution of a random variable, . We say that converges in distribution to the random variable if the CDF of and the distribution of coincide at infinity:
limn→∞Fn=FΘ.
We remark that convergence in distribution does not imply that the random variables and are the same; it solely entails that the probabilistic model of and are identical (e.g. both are ) They are different random variables with a common distribution. Convergence in distribution is usually interpreted as an approximation stating that for large , the distribution of is approximately (written ).
One of the most important results concerning convergence in distribution is the Central Limit Theorem (CLT). The CLT applies to any random sample with and finite variance: . It states that the error of the sample mean, , times the square root of the sample size is normally distributed:
√nσ2(^θn−μ)d→Z, (10)
where and stands for convergence in distribution as . Figure 2 illustrates the distribution of for different sample sizes, , when the s are distributed.
### 2.3 Two sides of the same coin: the classical and functional Delta-method
The Delta-method uses both the Taylor approximation and the concept of convergence in distribution. It states that if for some series of numbers that depend on the sample size, with , we have that converges in distribution to then the weighted difference, , converges to the distribution of the derivative of in the direction of :
rn(ϕ(^θn)−ϕ(θ))d→∇ϕ(θ)TZ,
as long as is a function that can be approximated via its Taylor Series around . Examples of numbers include as in (10). The idea behind the Delta-method relates to the fact that we can transform (7) into:
rn(ϕ(^θ)−ϕ(θ))≈∇ϕ(θ)Trn(^θn−θ)d→∇ϕ(θ)TZ (11)
where the random quantity, converges in distribution to and thus converges (approximately) to (the derivative in the direction of ).
In practical terms this implies that the variance of can be approximated by an scaling of the variance of , i.e.:
(12)
The same idea can be extended when the parameter of interest, , is not a real number (or vector of numbers) but a function. In this case, is a functional (i.e. a function of functions) and the corresponding method is oftentimes called the functional Delta-method. The result is that if with now denoting a random function, then:
rn(ϕ(^θn)−ϕ(θ))d→∂Zϕ(θ) (13)
where denotes the Hadamard derivative of as in (6). We remark that the theorem of (13) is general in the sense that it works for classical derivatives (), gradients and jacobians (), and Hadamard derivatives () all following the notation from (3).
The reader is invited to consult the supplementary material for the classical proof of the Delta-Method as well as the more general proof of the functional one.
### 2.4 The influence function
It is common to represent scientific questions by estimands (i.e., a quantity we are interested in estimating from our data). For example, suppose we are interested in a random variable which follows a (possibly unknown) discrete distribution . The variable might be a binary indicator for disease status, for example, in a particular population. If we are interested in the probability of having the given disease, our estimand is . In this case, we have , the estimand is equivalent to the expectation of , i.e.
. The estimand can thus be seen as the parameter of the Bernoulli distribution. However a second interpretation is of importance: the estimand can also be seen as a
functional as it takes a function – specifically, the probability mass function – as an input and applies a function to it: the expectation. For taking discrete values, we have
ψ=ϕ(PX)=∑x∈Xx⋅PX(X=x)=E[X] (14)
where denotes the support (i.e. possible values) of . In the binary case, . If is continuous, an estimand defined as the expectation of
is a functional of the probability density function
, such that .
It is important to highlight that the estimand , which represents our scientific question, relates to a functional of the mass . Following the previous notation, we have that . If we have a random sample, , we can compute the empirical probability mass function (ePMF):
^θ=^PX(z):=1nn∑i=1I{Xi}(z)(=Number of Xis=zn) (15)
where the indicator function of a set is defined as
IA(z):={1 if z∈A,0 otherwise.
The ePMF can be used to estimate , which gives us . This is called a “plug-in” estimator, as we plug the estimator of (i.e. of ) into the function . In the above example, this implies calculating:
^ψ=ϕ(^θ)=ϕ(^PX)=∑x∈Xx⋅^PX(X=x)
which, for an observed dataset is equivalent to taking its meanVaart1998:
^ψ=∑x∈Xx⋅^PX(X=x)=∑x∈Xx[1nn∑i=1I{xi}(x)]=1nn∑i=1∑x∈Xx⋅I{xi}(x)=1nn∑i=1xi
where the last equality follows from the fact that only when and in that case the product is (we exchange with by using that in this scenario). The cases where don’t appear in the sum as results in adding to the sum.
The functional notation of allows us to study the robustness of our estimations using Hadamard derivatives. In particular, if the data are distributed according to the mass we can study the rate of change from distribution in the direction of another distribution, , by analyzing the derivative:
∂Q−PXϕ(PX)=limh↓0h≠0ϕ((1−h)PX+h⋅Q)−ϕ(PX)h
where we have substituted for all and in (4).
Intuitively this quantifies the rate of change in if the model deviates a little from towards (for example in the case of noisy data). Choosing as the indicator of the set that only contains the value (2.4) we can study the rate of change of in the direction of an observation, . In particular stands for the model that assigns probability to taking the value . Hence the derivative analyzes how an observation, , influences our estimation of .
The Hadamard derivative, in this special case, is called the influence function (IF) of the functional under model at and is denoted:
(16)
The IF stands for the Hadamard derivative in a special case, thus the Taylor expansion in (5) can be rewritten as:
ϕ(^PX)^ψ≈ϕ(PX)ψ+IFϕ,PX(Y)∂ϕI{Y}−PX(PX). (17)
Note that the Hadamard derivative establishes the change of value of a parameter (written as a functional) resultant from small perturbations of the estimator in the direction of
. Plotting the IF provides a tool to discover outliers and is informative about the robustness of the estimator
. Finally, if the difference is (asymptotically) normally distributed, the Delta-method implies that:
^ψn−ψ=ϕ(^θn)−ϕ(θ)approx∼Normal(0,Var[IFϕ,PX(Y)]) (18)
where the variance, , is taken with respect to the random variable (with mass ). We remind the reader that an estimator for such a variance given by a random sample is:
ˆVar[IFϕ,PX(Y)]=1nn∑i=1(IFϕ,PX(Xi))2. (19)
Notice that this estimator is the classical variance estimator for when the mean is known (the mean of the influence function is always ).
### 2.5 Summary
The Delta-method to estimate the SE of any particular estimator of – a Hadamard-differentiable function of a parameter – can be summarized in the following steps:
1. Determine the asymptotic distribution of . This variable, , is a function of the distance between the estimator and the true value .
2. Define the function related to the scientific question of interest, and compute its Hadamard derivative. Usually can be obtained from the mass or the distribution (i.e. the CDF). Recall that in the case of real valued functions coincides with the classical derivative in the direction of as in equation (3).
3. Use the asymptotic distribution of obtained in step 1 and multiply it by the Hadamard derivative in step two. Then, estimate the variance of the distribution and compute the confidence intervals accordingly. Note that in most cases (e.g. when comes from ), the difference is approximately normal and Wald-type confidence intervals can be constructed using the variance in (19), i.e. by estimating the variance through the sample variance of the estimated IF to derive the SE of Agresti2012ApproximateProportions.
## 3 Examples
In the following sections we’ll provide several examples and R code in a set of 6 boxes of applications of the classical and functional Delta-method based on the Hadamard derivative and the IF. The code in the boxes can be accessed at https://github.com/migariane/DeltaMethodInfluenceFunction. All calculations and analytical derivations for the classical method were verified using the sympy packagemeurer2017sympy in Python 3.7 in a notebookpython which can be accessed either in the same repository or in our Google Collab: https://github.com/migariane/DeltaMethodInfluenceFunction/tree/main/CalculationsDerivationsSympy.
### 3.1 Derivation of the Standard Error for the Sample Mean based on the Influence Function (Classical Delta-method)
In this section we derive the standard error for the sample mean. We illustrate how to apply the proposed steps practically, i.e. by applying equations (3), (8) and (7). Note that the classical statistical inference for the sample mean is straightforward, but the interest here is to show how to derive the IF for the sample mean to then compute the SE applying the steps highlighted before. To derive the SE of the mean for a random sample we proceed as follows: First (Step 1), we find the distribution of the difference between the estimator and the parameter . We know from the central limit theorem that
√nσ2⋅(^θn−θ)approx∼Normal(0,1).
In this case, corresponds to the identity function: . Then, following Step 2, we calculate the Hadamard derivative which in this case corresponds to the classical derivative in the direction of . Hence, following (9), we have:
∂^θ−θϕ(θ)=∂ϕ∂θ⋅(^θ−θ)=1⋅(^θ−θ)
We use Taylor’s expansion around to obtain:
ϕ(^θ)=ϕ(θ)+∂^θ−θϕ(θ)=ϕ(θ)+∂ϕ∂θ(1nn∑i=1Xi−θ)IFϕ,θ(X)=θ+1⋅(1nn∑i=1Xi−θ)=1nn∑i=1Xi. (20)
Due to the asymptotic normality we can use (18) to proceed with Step 3:
ϕ(^θ)−ϕ(θ)≈Normal(0,Var[% IFϕ,θ(X)])
The variance of the influence function is
Var[IFϕ,θ(X)]=Var[1nn∑i=1Xi]=1n2n∑i=1Var[Xi]=σ2n (21)
and thus:
ϕ(^θ)−ϕ(θ)≈Normal(0,σ2n) (22)
The variance of the influence function can be estimated via using the standard estimator of the variance, i.e. :
ˆVar[IFϕ,θ(X)]=S2n (23)
Two-sided confidence intervals for can thus be estimated through
^θ±Z1−α/2√S2n
This shows how to obtain the results which are widely known from textbooks through the use of the IF.
In Box 1 we provide the code to compute the SE for a sample mean using the IF and compare the results with the Delta-method implementation from the R package MSM kavroudakis2015 and in Figure 1 we plot the IF for the sample mean.
Box 1. Derivation of the IF for the sample mean
### 3.2 Derivation of the Standard Error for the Sample Mean seen as a Functional (Functional Delta-Method) based on the Influence Function
To develop the intuition of how to use the functional delta method we first derive the IF for the sample mean as in section 3.1 but writing the mean as a functional. Afterwards we’ll derive the IF for a more complicated situation: the quantile function.
Consider again the problem of estimating the mean. From the empirical probability mass function, we obtain the empirical mean, , as a functional of . Here we are considering . To simplify the example, assume that the are sampled from a discrete probability mass function such that there are only possible values of the . In this case following step 1 we know from the central limit theorem that for each value , the difference between the empirical probability mass function (which is an average) and its true value is asymptotically normal:
limn→∞√n(Pn−P)∼% Normal(0,P⋅(1−P)). (24)
Where we have defined the empirical probability mass function as in (15):
Pn(x)=Number of times x is in the samplen=1nn∑k=1I{xi}(x),
where the indicator variables are defined in section 2.4. We remark that the variance from (27) results from the variance of the indicators which are Bernoulli distributed.
We then follow step 2 to write the functional in terms of the estimator. In this case, the population mean is written as:
ϕ(P):=μ=N∑i=1xiP(xi).
while the sample mean is given by the following expression:
ϕ(Pn)=^μ=N∑i=1xiPn(xi)
We remark that in this case we will use the functional delta method as is a functional of the function . Hence to obtain the approximation in this case (step 3) we calculate the influence function from the definition in (16):
IFϕ,PX(Y) (25) =N∑i=1xi⋅I{Y}(xi)−N∑i=1xi⋅P(xi) =ϕ(I{Y})−ϕ(P) =Y−ϕ(P)
Finally the variance of the influence function corresponds to the variance of :
Var[IFϕ,PX(Y)]=Var[Y]=σ2 (26)
hence:
limn→∞√n(ϕ(Pn)−ϕ(P))∼Normal(0,σ2). (27)
which is equivalent to the expression found by the classical method in (22).
### 3.3 Derivation of the Standard Error for the Ratio of Two Means
Consider a random sample of size of the i.i.d random variables and , which are both normally distributed, with respective means and which are estimated by their sample means and . We are interested in deriving the variance for the ratio of the two means (i.e. the ratio estimator) defined as: . In this case (following step 1) it is known that the difference is asymptotically normal.
Second (step 2) we obtain the Hadamard derivative which in this case corresponds to the gradient in the direction of
v=^θ−θ=(¯X¯Y)−(μXμY)=(¯X−μX¯Y−μY).
∇ϕ=⎛⎜⎝∂ϕ∂μX∂ϕ∂μY⎞⎟⎠=⎛⎜⎝1μY−μXμ2Y⎞⎟⎠
where we assume . The Hadamard derivative (i.e. the influence function) is given by:
IFϕ,P(X,Y)=∂vϕ(¯X,¯Y)=(1μY,−μXμ2Y)(¯X−μX¯Y−μY)=1μY(¯X−μX)−μXμ2Y(¯Y−μY).
The variance is hence given by the variance of the influence function (i.e. the Hadamard derivative):
Var(IFϕ,P(X,Y)) =Var(1μY(¯X−μX)−μXμ2Y(¯Y−μY))=1n(1μ2YVar(X)+μ2Xμ4YVar(Y)−2μXμ3YCov(X,Y)) (28)
where we used that Var(X) under the independence assumption, Var(X) and .
For step 3, the estimated standard error is then obtained as the square root of the estimated variance and Wald-type confidence intervals (level ) follow:
¯X¯Y±Z1−α/2√ˆVar(IFϕ,P(X,Y)),
where the estimator for the variance is:
Box 2. Derivation of the IF for the ratio of two sample means
### 3.4 Derivation of the Standard Error for the Ratio of Two Probabilities (Risk Ratio)
In medical statistics, we are often interested in marginal and conditional (sometimes causal) risk ratios. Consider Table 1, where we are interested in the mortality risk by cancer status. Let denote the probability of being alive given that the patient has cancer and
|
2022-08-13 04:00:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9293004274368286, "perplexity": 387.0393310149095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00299.warc.gz"}
|
https://www.jobilize.com/online/course/1-5-transformation-of-functions-by-openstax?qcr=www.quizover.com
|
# 1.5 Transformation of functions
Page 1 / 22
In this section, you will:
• Graph functions using vertical and horizontal shifts.
• Graph functions using reflections about the $\text{\hspace{0.17em}}x$ -axis and the $\text{\hspace{0.17em}}y$ -axis.
• Determine whether a function is even, odd, or neither from its graph.
• Graph functions using compressions and stretches.
• Combine transformations.
We all know that a flat mirror enables us to see an accurate image of ourselves and whatever is behind us. When we tilt the mirror, the images we see may shift horizontally or vertically. But what happens when we bend a flexible mirror? Like a carnival funhouse mirror, it presents us with a distorted image of ourselves, stretched or compressed horizontally or vertically. In a similar way, we can distort or transform mathematical functions to better adapt them to describing objects or processes in the real world. In this section, we will take a look at several kinds of transformations.
## Graphing functions using vertical and horizontal shifts
Often when given a problem, we try to model the scenario using mathematics in the form of words, tables, graphs, and equations. One method we can employ is to adapt the basic graphs of the toolkit functions to build new models for a given scenario. There are systematic ways to alter functions to construct appropriate models for the problems we are trying to solve.
## Identifying vertical shifts
One simple kind of transformation involves shifting the entire graph of a function up, down, right, or left. The simplest shift is a vertical shift , moving the graph up or down, because this transformation involves adding a positive or negative constant to the function. In other words, we add the same constant to the output value of the function regardless of the input. For a function $\text{\hspace{0.17em}}g\left(x\right)=f\left(x\right)+k,\text{\hspace{0.17em}}$ the function $\text{\hspace{0.17em}}f\left(x\right)\text{\hspace{0.17em}}$ is shifted vertically $\text{\hspace{0.17em}}k\text{\hspace{0.17em}}$ units. See [link] for an example.
To help you visualize the concept of a vertical shift, consider that $\text{\hspace{0.17em}}y=f\left(x\right).\text{\hspace{0.17em}}$ Therefore, $\text{\hspace{0.17em}}f\left(x\right)+k\text{\hspace{0.17em}}$ is equivalent to $\text{\hspace{0.17em}}y+k.\text{\hspace{0.17em}}$ Every unit of $\text{\hspace{0.17em}}y\text{\hspace{0.17em}}$ is replaced by $\text{\hspace{0.17em}}y+k,\text{\hspace{0.17em}}$ so the $\text{\hspace{0.17em}}y\text{-}$ value increases or decreases depending on the value of $\text{\hspace{0.17em}}k.\text{\hspace{0.17em}}$ The result is a shift upward or downward.
## Vertical shift
Given a function $f\left(x\right),$ a new function $g\left(x\right)=f\left(x\right)+k,$ where $\text{\hspace{0.17em}}k$ is a constant, is a vertical shift of the function $f\left(x\right).$ All the output values change by $k$ units. If $k$ is positive, the graph will shift up. If $k$ is negative, the graph will shift down.
## Adding a constant to a function
To regulate temperature in a green building, airflow vents near the roof open and close throughout the day. [link] shows the area of open vents $\text{\hspace{0.17em}}V\text{\hspace{0.17em}}$ (in square feet) throughout the day in hours after midnight, $\text{\hspace{0.17em}}t.\text{\hspace{0.17em}}$ During the summer, the facilities manager decides to try to better regulate temperature by increasing the amount of open vents by 20 square feet throughout the day and night. Sketch a graph of this new function.
We can sketch a graph of this new function by adding 20 to each of the output values of the original function. This will have the effect of shifting the graph vertically up, as shown in [link] .
Notice that in [link] , for each input value, the output value has increased by 20, so if we call the new function $\text{\hspace{0.17em}}S\left(t\right),$ we could write
$S\left(t\right)=V\left(t\right)+20$
This notation tells us that, for any value of $\text{\hspace{0.17em}}t,S\left(t\right)\text{\hspace{0.17em}}$ can be found by evaluating the function $\text{\hspace{0.17em}}V\text{\hspace{0.17em}}$ at the same input and then adding 20 to the result. This defines $\text{\hspace{0.17em}}S\text{\hspace{0.17em}}$ as a transformation of the function $\text{\hspace{0.17em}}V,\text{\hspace{0.17em}}$ in this case a vertical shift up 20 units. Notice that, with a vertical shift, the input values stay the same and only the output values change. See [link] .
$t$ 0 8 10 17 19 24 $V\left(t\right)$ 0 0 220 220 0 0 $S\left(t\right)$ 20 20 240 240 20 20
a colony of bacteria is growing exponentially doubling in size every 100 minutes. how much minutes will it take for the colony of bacteria to triple in size
I got 300 minutes. is it right?
Patience
no. should be about 150 minutes.
Jason
It should be 158.5 minutes.
Mr
ok, thanks
Patience
what is the importance knowing the graph of circular functions?
can get some help basic precalculus
What do you need help with?
Andrew
how to convert general to standard form with not perfect trinomial
can get some help inverse function
ismail
Rectangle coordinate
how to find for x
it depends on the equation
Robert
whats a domain
The domain of a function is the set of all input on which the function is defined. For example all real numbers are the Domain of any Polynomial function.
Spiro
foci (–7,–17) and (–7,17), the absolute value of the differenceof the distances of any point from the foci is 24.
difference between calculus and pre calculus?
give me an example of a problem so that I can practice answering
x³+y³+z³=42
Robert
dont forget the cube in each variable ;)
Robert
of she solves that, well ... then she has a lot of computational force under her command ....
Walter
what is a function?
I want to learn about the law of exponent
explain this
what is functions?
A mathematical relation such that every input has only one out.
Spiro
yes..it is a relationo of orders pairs of sets one or more input that leads to a exactly one output.
Mubita
Is a rule that assigns to each element X in a set A exactly one element, called F(x), in a set B.
RichieRich
If the plane intersects the cone (either above or below) horizontally, what figure will be created?
|
2021-01-19 15:59:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5616443157196045, "perplexity": 464.55236002585445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00392.warc.gz"}
|
https://www.educative.io/answers/what-are-sampling-techniques-in-data-science
|
Related Tags
data science
# What are sampling techniques in data science?
Hassaan Waqar
Data scientists and researchers need to collect data for running tests, analyzing scenarios, and testing hypotheses. An ideal situation might be to obtain data from the entire population of the subject in question. However, this situation is not feasible. Lack of resources means data scientists must rely on data samples of the subject population.
Data samples are derived from the population that is being studied. The aim is to obtain samples that can represent the population so that the findings applicable to the sample can be generalized to the population.
The illustration below shows the difference between population and sample:
Population and sample
## Sampling techniques
There are several ways data can be sampled from a target population. Sampling techniques can be divided into two broad categories:
Probability sampling: Every element of the population has an equal chance of getting selected and being a part of the sample space. Probability samples tend to be more representative of the population.
Non-probability sampling: Every element of the population does not have an equal chance of getting selected. This method of sampling might not always represent the population as a whole.
## Probability sampling techniques
We will now discuss techniques that fall under the category of probability sampling:
#### Simple random sampling
Simple Random Sampling or SRS is of the simplest methods of sampling that selects a subject randomly based on probability. Each element has an equal chance of getting selected. Sampling is usually done by assigning numbers to each sample and carrying out a lucky draw.
In the illustration on the right, each individual has a chance of $\frac{1}{15}$ of getting selected.
Simple Random Sampe
#### Stratified sampling
In stratified sampling, elements are first sub-grouped based on common characteristics such as gender, age, income level, profession, etc. These subgroups are known as stratas. Elements are then sampled from each strata. This method ensures that sampled data has representation from all subgroups.
The illustration on the right creates stratas based on profession and then samples them.
Statified Sampling
Elements are homogeneous within stratas.
It is not necessary that there is an equal number of elements within each strata.
Each element within a strata has an equal probability of being selected.
#### Cluster sampling
In cluster sampling, we divide our target population into subgroups known as clusters and then choose a cluster at random. Each cluster has an equal chance of getting selected.
The illustration on the right shows each cluster having a chance of $\frac{1}{4}$ of being selected.
Cluster Sampling
Elements within clusters are heterogeneous.
## Non-Probability sampling techniques
We will now discuss techniques that fall under the category of non-probability sampling:
#### Convenience sampling
In convenience sampling, samples are selected based on availability and convenience. This might include on the basis of first-come-first-serve or willingness to take part in a survey.
The illustration on the right chooses first three individuals from each line.
Convenience samples are not representative of the population since they are subject to biases such as gender, race, age, religion, etc.
Convenience Sampling
#### Quota sampling
Quota sampling involves selecting elements based on some pre-determined rule. This can include selecting multiples of a number, taking every fifth person to sign up, etc.
The illustration on the right shows balls that are multiples of two being selected.
Quota samples are not representative of the population as well.
Quota Sampling
RELATED TAGS
data science
CONTRIBUTOR
Hassaan Waqar
|
2022-08-10 17:56:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.673784613609314, "perplexity": 1054.2198599245585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00329.warc.gz"}
|
https://nforum.ncatlab.org/discussion/2533/derived-critical-locus/
|
## Not signed in
Want to take part in these discussions? Sign in if you have an account, or apply for one below
## Site Tag Cloud
Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support.
• CommentRowNumber1.
• CommentAuthorUrs
• CommentTimeMar 2nd 2011
on my personal web I am starting a page derived critical locus (schreiber) with some notes.
I think so far I can convince myself of the claim that the page currently ends with (without proof). My next goal is to show that the homotopy fibers discussed there are given by BV-BRST complexes. But I have to interrupt now.
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeMar 9th 2011
• (edited Mar 9th 2011)
maybe I made some progress with understanding the BV-complexes formally as derived critical loci: as homotopy fibers of sections $d S : \mathfrak{a} \to T^* \mathfrak{a}$ of the cotangent bundle on an $\infty$-Lie algebroid $\mathfrak{a}$ (a formal dual to a BRST complex).
New, rewritten notes are at derived critical locus (schreiber).
In fact I think I understand the full story if I assume that homotopy pushouts of my unbounded commutative dg-algebras are computed by mapping cones as usual. This is what one expects, but one needs to be a bit careful with what model structure exactly one uses to present the derived geometry, and what assumptions on projectivity are being made. This I need to think more about.
• CommentRowNumber3.
• CommentAuthorzskoda
• CommentTimeSep 27th 2011
• (edited Sep 27th 2011)
New entry derived critical locus at the main $n$Lab to record the Vezzosi’s paper. I am a bit surprised that the page lists that it is linked from derived critical locus while I have not put that self-referencing link.
• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeSep 28th 2011
Oh, wow.
Thanks, I had not seen that.
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeOct 1st 2017
I have brought into derived critical locus the core of my old notes (from my personal web) aiming to show that the BV-BRST complex really is (the formal dual of) the derived critical locus in dg-geometry of a function on a Lie algebroid (BRST complex).
Looking at this material from 2011 again I notice two things:
1. In the example I don’t check the smoothness assumption made in this prop.
2. meanwhile there ought to be a reference that provides all the required model-category theoretic background in the entry in an easily citable way.
I don’t quite have the time right now to dig into this again. If anyone has a hint, I’d be grateful.
• CommentRowNumber6.
• CommentAuthorDavidRoberts
• CommentTimeOct 1st 2017
The Costello-Gwilliam reference seems to now be this book (pdf); does anyone have a more precise location for the claim referred to in the nLab page? Or is it just the general philosophy of the approach?
• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeOct 2nd 2017
a more precise location for the claim
In the book it is now the beginning of section 4.8.1
(Back in 2011 I was pointing to their wiki, which however no longer exists. I have added the section pointer to the entry now.)
• CommentRowNumber8.
• CommentAuthorUrs
• CommentTimeOct 2nd 2017
• (edited Oct 2nd 2017)
Vincent Schlegel kindly pointed out to me that, as stated, the computation gave the critical locus in $C \times \mathbb{A}^1$ instead of in $C$, while the latter is really the further pullback along $C \to C \times \mathbb{A}^1$. I have fixed this.
Dually the point is that in $Sym_{\mathcal{O}(C)}\left( \mathcal{O}(C) \oplus \cdots \right)$ there are “two copies” of $\mathcal{O}(C)$, and they eventually need to be identified. Indeed that’s what is necessary to yield the desired conclusion (which tacitly made that identification).
I have fixed this now.
• CommentRowNumber9.
• CommentAuthorDavid_Corfield
• CommentTimeOct 2nd 2017
Re #6, we have a dedicated page Factorization algebras in perturbative quantum field theory, which points to two volumes.
• CommentRowNumber10.
• CommentAuthorUrs
• CommentTimeOct 2nd 2017
Thanks. I have made the reference line point to that page. I’d have to check which volume is relevant.
• CommentRowNumber11.
• CommentAuthorDavidRoberts
• CommentTimeOct 2nd 2017
Thanks, Urs.
|
2017-10-21 04:57:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173139095306396, "perplexity": 1882.769672343894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824570.79/warc/CC-MAIN-20171021043111-20171021063111-00147.warc.gz"}
|
https://www.physicsforums.com/threads/stationary-solution-to-reaction-diffusion-equation-with-certain-boundary-conditions.179317/
|
# Stationary solution to reaction-diffusion equation with certain boundary conditions
1. Aug 4, 2007
### Signifier
1. The problem statement, all variables and given/known data
What is the stationary (steady state) solution to the following reaction diffusion equation:
$$\frac{\partial C}{\partial t}= \nabla^2C - kC$$
Subject to the boundary conditions C(x, y=0) = 1, C(x = 0, y) = C(x = L, y) (IE, periodic boundary conditions along the x-axis, the value at x=0 is the same as at x=L). Also, at y = 0 and y = L, $$\frac{\partial C}{\partial x} = \frac{\partial C}{\partial y} = 0$$.
2. Relevant equations
With
$$\frac{\partial C}{\partial t} = 0$$,
rearrange to:
$$\nabla^2C = kC$$
...
3. The attempt at a solution
I believe I can solve this PDE without the boundary conditions, at least the one equation is satisified by a sum of hyberbolic sine or cosine functions. I have absolutely no idea how to incorporate the boundary conditions though. That they are periodic across x tells me that the solution should be symmetric about x = L / 2, but I have no mathematical reasons for this. I have never taking a PDE class before so I am a bit out of my element... any help would be very useful. I know that there IS an analytic solution with these constraints, but I haven't a clue what it is.
2. Aug 4, 2007
### Kummer
Steady state implies that $$\frac{\partial C}{\partial t} = 0$$
And so,
$$\frac{\partial ^2 C}{\partial x^2} + \frac{\partial ^2 C}{\partial y^2} = kC$$
I do not understand the Dirichlet problem. Is this on a rectangle? Can you be more implicit with the boundary conditions?
3. Aug 4, 2007
### HallsofIvy
Staff Emeritus
As Kummer said, your "stationary solution" implies
$$\frac{\partial ^2 C}{\partial x^2} + \frac{\partial ^2 C}{\partial y^2} = kC$$
Now "separate variables"- Let C= X(x)Y(y) so that the equation becomes
$$Y\frac{d^2X}{dx^2}+ X\frac{d^2Y}{dy^2}= kXY$$
Divide by XY to get
$$\frac{1}{X}\frac{d^2X}{dx^2}+ \frac{1}{Y}\frac{d^2Y}{dy^2}= k$$
In order that that be true for all x the two parts involving only X and only Y must be constants (other wise, by changing x but not y, we could change the first term but not the second- their sum could not remain the same constant, k). That is, we must have
$$\frac{1}{X}\frac{d^2X}{dx^2}= \lambda$$
or
$$\frac{d^2X}{dx^2}= \lambda X$$
and
$$\frac{1}{Y}\frac{dY^2}{dy^2}= k- \lambda$$
or
$$\frac{d^2Y}{dx^2}= (k- \lambda )Y$$
The general solution will be a sum of $X(x,\lambda)Y(y,\lambda)$ summed over all possible values of $\lambda$.
Can you see that, in order to satisfy periodic boundary conditions on the x-axis, $\lambda$ must be $-2n\pi$ for some integer n?
Last edited: Aug 4, 2007
4. Aug 4, 2007
### Kummer
1)What are the boundary conditions? I tried reading the post several times, I do not understand what they are?
2)Is it on a rectangle?
5. Aug 4, 2007
### Signifier
I am sorry to have not been more descriptive. I have not yet had time to digest HallsofIvy's response, which seems to be the most complete. To respond otherwise, though, the equation is being solved for the stationary state on a square of length/width L. The boundary conditions are: no flux at y = 0 or y = L (that is, top and bottom of square = no flux); periodic boundary conditions along x (that is, N(x = 0) = N(x = L)), and N = 1 along y = 0 (at the top of the square).
Thank you all; HallsofIvy, I will now proceed to consider what you've posted.
6. Aug 5, 2007
### Kummer
What is that supposed to mean?
Anyway, it seems to me this is a partial differential equations with a non-homogenous boundary value problem.
Which means you will have to solve for $$u_1(x,y)$$ so to satisfy:
$$\frac{\partial^2 u_1}{\partial x^2}+\frac{\partial^2 u_1}{\partial y^2} = 0 \mbox{ with }\left\{ \begin{array}{c}u_1(x,0)=u_1(L,y)=u_1(x,L)=0\\ u_1(0,y)=f(y) \end{array} \right.$$
And then you need to solve for $$u_2(x,y)$$ so to satify:
$$\frac{\partial^2 u_2}{\partial x^2}+\frac{\partial^2 u_2}{\partial ^2 u_2}{\partial y^2}=0 \mbox{ with }\left\{ \begin{array}{c}u_2(x,0)=u_2(L,y)=u_2(x,0)=0\\u_2(L,y)=f(y) \end{array} \right.$$
Then, $$u(x,y)=u_1(x,y)+u_2(x,y)$$ will be the solution to this equation.
But to solve for those two individually use the methods of seperation of variables.
7. Aug 5, 2007
### Signifier
Kummer: N = 1 along y = 0 means: N(x, 0) = 1 (all of the points along the line at the top of the square, at y = 0, have unit concentration).
I will now consider your response... thank you.
|
2017-08-23 22:01:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117543458938599, "perplexity": 546.141404308833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124563.93/warc/CC-MAIN-20170823210143-20170823230143-00336.warc.gz"}
|
https://byjus.com/question-answer/let-p-and-q-are-any-two-points-on-the-circle-x-2-y-2/
|
Question
# let $$P$$ and $$Q$$ are any two points on the circle $$x^{2}+y^{2}=4$$ such that $$PQ$$ is a diameter. If $$L_{1}$$ and $$L_{2}$$ are the lengths of perpendicular from $$P$$ and $$Q$$ on $$x+y=1$$, then the maximum value of $$L_{1},L_{2}$$ is
A
1/2
B
7/2
C
1
D
2
Solution
## The correct option is D $$7/2$$Let one end point of the circle be $$\left( 2\cos t, 2\sin t \right)$$ so the diametric point to it would be $$\left( 2\cos \left( 180+t\right), 2\sin \left( 180+t\right) \right)$$ that is $$\left( -2\cos t , -2\sin t \right).$$ The value of $$L_1\times L_2$$ would be -$$\Rightarrow \left[ \dfrac{\left( 2\left( \cos t+\sin t\right) -1\right) }{\sqrt{2}}\right]\left[ \dfrac{2\left( \cos t+\sin t\right) +1 }{\sqrt{2}}\right]$$$$\Rightarrow \left[ \dfrac{4\left( \cos t+\sin t\right)^2 -1 }{2}\right]$$now, maximum value $$\left( \cos t+\sin t\right)$$ is $$\sqrt{2}$$maximum value of $$L_1\times L_2=\dfrac{\left[ 4\left( 2\right) -1\right]}{2}$$ $$=\dfrac{8-1}{2}$$ $$=3.5$$ or $$\dfrac{7}{2}$$Hence, the answer is $$\dfrac{7}{2}.$$Maths
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-21 11:32:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017035961151123, "perplexity": 275.79998024393336}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303356.40/warc/CC-MAIN-20220121101528-20220121131528-00557.warc.gz"}
|
https://tbc-python.fossee.in/convert-notebook/Engineering_Heat_Transfer/CHAPTER2.ipynb
|
# chapter 2 :Steady state Conduction in one dimension¶
### Example 2.1 Page No.42¶
In [10]:
T3=-10.00 # temperature of inside wall in degree Fahrenheit
T0=70.0 # temperature of outside wall in degree Fahrenheit
dT=T0-T3 # overall temperature difference
k1=0.38 # brick masonry
k2=0.02 # glass fibre
k3=0.063 # plywood
dx1=4/12.0 # thickness of brick layer in ft
dx2=3.5/12.0 # thickness of glass fibre layer in ft
dx3=0.5/12.0 # thickness of plywood layer in ft
A=1.0 # cross sectional area taken as 1 ft**2
R1=dx1/(k1*A) # resistance of brick layer in (hr.degree Rankine)/BTU
R2=dx2/(k2*A) # resistance of glass fibre layer in (hr.degree Rankine)/BTU
R3=dx3/(k3*A) # resistance of plywood layer in (hr.degree Rankine)/BTU
qx=(T0-T3)/(R1+R2+R3)
print"Resistance of brick layer is ",round(R1,3),"(hr.degree Rankine)/BTU"
print"Resistance of glass fibre layer is ",round(R2,1),"(hr.degree Rankine)/BTU"
print"Resistance of plywood layer is ",round(R3,3),"(hr.degree Rankine)/BTU"
print"Heat transfer through the composite wall is ",round(qx,2),"(hr.degree Rankine)/BTU"
Resistance of brick layer is 0.877 (hr.degree Rankine)/BTU
Resistance of glass fibre layer is 14.6 (hr.degree Rankine)/BTU
Resistance of plywood layer is 0.661 (hr.degree Rankine)/BTU
Heat transfer through the composite wall is 4.96 (hr.degree Rankine)/BTU
### Example 2.2 Page No.45¶
In [15]:
k1=0.45 # thermal conductivity of brick
k2a=0.15 # thermal conductivity of pine
k3=0.814 # thermal conductivity of plaster board
k2b=0.025 # thermal conductivity of air from appendix table D1
A1=0.41*3 # cross sectional area of brick layer
A2a=0.038*3 # cross sectional area of wall stud
A2b=(41-3.8)*0.01*3 # cross sectional area of air layer
A3=0.41*3 # cross sectional area of plastic layer
dx1=0.1 # thickness of brick layer in m
dx2=0.089 # thickness of wall stud and air layer in m
dx3=0.013 # thickness of plastic layer in m
R1=dx1/(k1*A1) # Resistance of brick layer in K/W
R2=dx2/(k2a*A2a+k2b*A2b) # Resistance of wall stud and air layer in K/W
R3=dx3/(k3*A3) # Resistance of plastic layer in K/W
T1=25 # temperature of inside wall in degree celsius
T0=0 # temperature of outside wall in degree celsius
qx=(T1-T0)/(R1+R2+R3) # heat transfer through the composite wall in W
print"Resistance of brick layer is ",round(R1,3),"k/W"
print"Resistance of wall stud and air layer is ",round(R2,2),"k/W"
print"Resistance of plastic layer is ",round(R3,3),"k/W"
print"Heat transfer through the composite wall is",round(qx,1),"W"
Resistance of brick layer is 0.181 k/W
Resistance of wall stud and air layer is 1.98 k/W
Resistance of plastic layer is 0.013 k/W
Heat transfer through the composite wall is 11.5 W
### Example 2.3 Page No. 50¶
In [21]:
k1=24.8 # thermal conductivity of 1C steel in BTU/(hr.ft.degree Rankine)from appendix table B2
k2=0.02 # thermal conductivity of styrofoam steel in BTU/(hr.ft.degree Rankine)
k3=0.09 # thermal conductivity of fibreglass in BTU/(hr.ft.degree Rankine)
hc1=0.79 # convection coefficient between the air and the vertical steel wall in BTU/(hr.ft**2.degree Rankine)
hc2=150.0 # the convection coefficient between the ice water and the fiberglass
A=1.0 # calculation based on per square foot
dx1=0.04/12.0 # thickness of steel in ft
dx2=0.75/12.0 # thickness of styrofoam in ft
dx3=0.25/12.0 # thickness of fiberglass in ft
Rc1=1/(hc1*A) # Resistance from air to sheet metal
Rk1=dx1/(k1*A) # Resistance of steel layer
Rk2=dx2/(k2*A) # Resistance of styrofoam layer
Rk3=dx3/(k3*A) # Resistance of fiberglass layer
Rc2=1/(hc2*A) # Resistance from ice water to fiberglass
U=1/(Rc1+Rk1+Rk2+Rk3+Rc2) # overall heat transfer coefficient
T_inf1=90 # temperature of air in degree F
T_inf2=32 # temperature of mixture of ice and water in degree F
q=U*A*(T_inf1-T_inf2)
print"Resistance from air to sheet metal: ",round(Rc1,3),"degree F.hr/BTU"
print"Resistance of steel layer is ",round(Rk1,4),"degree F.hr/BTU"
print"Resistance of styrofoam layer is ",round(Rk2,3),"degree F.hr/BTU"
print"Resistance of fiberglass layer is ",round(Rk3,3),"degree F.hr/BTU"
print"Resistance from ice water to fiberglass is ",round(Rc2,4),"degree F.hr/BTU"
print"The overall heat transfer coefficient is ",round(U,3),"BTU/hr ft**2"
print"The heat transfer rate is %.1f BTU/hr",round(q,2),"BTU/hr"
Resistance from air to sheet metal: 1.266 degree F.hr/BTU
Resistance of steel layer is 0.0001 degree F.hr/BTU
Resistance of styrofoam layer is 3.125 degree F.hr/BTU
Resistance of fiberglass layer is 0.231 degree F.hr/BTU
Resistance from ice water to fiberglass is 0.0067 degree F.hr/BTU
The overall heat transfer coefficient is 0.216 BTU/hr ft**2
The heat transfer rate is %.1f BTU/hr 12.53 BTU/hr
### Example 2.4 Page No 55.¶
In [23]:
k=14.4 # thermal conductivity of 304 stainless steel in W/(m.K) from appendix table B2
D2=32.39 #Diameter (cm)
D1=29.53
T1=40 #Temprature
T2=38
import math
Qr_per_length=(2*3.14*k)*(T1-T2)/math.log(D2/D1)#format(6)
print"The heat transfer through the pipe wall per unit length of pipe is ",round(Qr_per_length/1000,2),"kw/m"
The heat transfer through the pipe wall per unit length of pipe is 1.96 kw/m
### Example 2.5 Page NO. 58¶
In [26]:
k1=231 # thermal conductivity of copper in BTU/(hr.ft.degree Rankine)from appendix table B1
k2=0.02 # thermal conductivity of insuLtion in BTU/(hr.ft.degree Rankine)
D2=1.125/12 # outer diameter in ft
D1=0.08792 # inner diameter in ft
t=0.5/12 # wall thickness of insulation in ft
R3=R2+t
LRk1=(log(R2/R1))/(2*3.14*k1) # product of length and copper layer resistance
LRk2=(log(R3/R2))/(2*3.14*k2) # product of length and insulation layer resistance
T1=40 # temperature of inside wall of tubing in degree fahrenheit
T3=70 # temperature of surface temperature of insulation degree fahrenheit
q_per_L=(T1-T3)/(LRk1+LRk2) # heat transferred per unit length in BTU/(hr.ft)
print"The heat transferred per unit length is ",round(q_per_L,2)," BTU/(hr.ft"
The heat transferred per unit length is -5.92 BTU/(hr.ft
### Example 2.6 Page No 63¶
In [31]:
k12=24.8 # thermal conductivity of 1C steel in BTU/(hr.ft.degree Rankine)from appendix table B2
k23=.023 # thermal conductivity of glass wool insulation in BTU/(hr.ft.degree Rankine)from appendix table B3
D2=6.625/12.0 # outer diameter in ft
D1=0.5054 # inner diameter in ft
t=2/12.0 # wall thickness of insulation in ft
D3=D2+t
hc1=12 # convection coefficient between the air and the pipe wall in BTU/(hr. sq.ft.degree Rankine).
hc2=1.5 # convection coefficient between the glass wool and the ambient air in BTU/(hr. sq.ft.degree Rankine).
U=1/((1/hc1)+(D1*log(D2/D1)/k12)+(D1*log(D3/D2)/k23)+(D1/(hc2*D3)))
print"Overall heat transfer coefficient is ",round(U,4)," BTU/(hr.sq.ft. Fahrenheit)"
Overall heat transfer coefficient is 0.1596 BTU/(hr.sq.ft. Fahrenheit)
### Example 2.7 Page NO.72¶
In [30]:
k=14.4 # thermal conductivity of 304 stainless steel in W/(m.K)from appendix table B2
T1=543.0 # temperature in K at point 1
T2=460.0 # temperature in K at point 2
dT=T1-T2 # temperature difference between point 1 and 2
dz12=0.035 # distance between thermocouple 1 and 2 in cm
dz56=4.45 # distance between thermocouple 5 and 6 in cm
dz6i=3.81 # distance between thermocouple 6 and interface in cm
dz5i=dz56+dz6i # distance between thermocouple 5 and interface in cm
T5=374 # temperature in K at point 5
T6=366 # temperature in K at point 6
dzi7=2.45 # distance between thermocouple 7 and interface in cm
dz78=4.45 # distance between thermocouple 7 and 8 in cm
dzi8=dzi7+dz78 # distance between thermocouple 8 and interface in cm
T7=349 # temperature in K at point 7
T8=337 # temperature in K at point 8
qz_per_A=k*dT/dz12 # heat flow calculated in W/m**2 calculated using Fourier's law
T_ial=T5-(dz5i*(T5-T6)/dz56) # temperature of aluminium interface in K
T_img=dzi8*(T7-T8)/dz78+T8 # temperature of magnesium interface in K
T_img_=355.8 #Approx value in the book
Rtc=(T_ial-T_img_)/(qz_per_A)
print"The required thermal contact resistance is",round(Rtc,7),"K sq.m/W"
The required thermal contact resistance is 9.81e-05 K sq.m/W
### Example 2.8 Page No. 85¶
In [ ]:
%matplotlib inline
In [1]:
import math
k=24.8 # thermal conductivity of 1C steel in BTU/(hr.ft.degree Rankine)from appendix table B2
D=(5.0/16.0)/12.0 # diameter of the rod in ft
P=(math.pi*D) # Circumference of the rod in ft
A=(math.pi/4)*D**2 # Cross sectional area of the rod in sq.ft
hc=1.0 # assuming the convective heat transfer coefficient as 1 BTU/(hr. sq.ft. degree Rankine)
m=math.sqrt(hc*P/(k*A))
L=(9/2.0)/12.0 # length of rod in ft
T_inf=70.0
T_w=200.0
dT=T_w-T_inf
const=dT/math.cosh(m*L)
qz=k*A*m*dT*tanh(m*L)
mL=m*L
efficiency=0.78 # from fig. 2.30
effectiveness=math.sqrt(k*P/(hc*A))*tanh(mL)
print"(b)The heat transferred is ",round(qz,2),"BTU/hr"
print"(c)The efficiency found from the graph in figure 2.30 is",efficiency
print"(d)The effectiveness is found to be",round(effectiveness,1)
import matplotlib.pyplot as plt
fig = plt.figure()
mL1=[0,1,1.2,5]
n1=[0,0.2,0.25,0.25]
mL2=[0,1,1.8,5]
n2=[0,0.35,0.5,0.5]
mL3=[0,1,2,5]
n3=[0,0.75,1,1]
mL4=[0,1,2,5]
n4=[0,1.5,2,2]
xlabel("mL")
ylabel("n ")
plt.xlim((0,6))
plt.ylim((0,2.5))
ax.annotate('(0.25)', xy=(5,0.25))
ax.annotate('(0.5)', xy=(5,0.5))
ax.annotate('(1)', xy=(5,1))
ax.annotate('(2)', xy=(5,2))
a1=plot(mL1,n1)
a2=plot(mL2,n2)
a3=plot(mL3,n3)
a4=plot(mL4,n4)
import matplotlib.pyplot as plt
fig = plt.figure()
z1=[0,3,4.8]
T1=[200,165,158]
z2=[0,4.8]
T2=[140,140]
z3=[0,4.8]
T3=[130,130]
z4=[4.8,4.8]
T4=[140,130]
xlabel("z (m)")
ylabel("T (K)")
plt.xlim((0,5.5))
plt.ylim((120,200))
ax.annotate('(5/16 in)', xy=(4.85,135))
a1=plot(z1,T1)
a2=plot(z2,T2)
a3=plot(z3,T3)
a4=plot(z4,T4)
(b)The heat transferred is 3.13 BTU/hr
(c)The efficiency found from the graph in figure 2.30 is 0.78
(d)The effectiveness is found to be 45.2
### Example 2.9 Page No. 90¶
In [75]:
k=136.0 # thermal conductivity of aluminium in BTU/(hr.ft.degree Rankine)from appendix table B1
L=9/(8*12.0) #length in ft
W=9/(4*12.0) #width in ft
delta=0.002604 #ft
hc=0.8 # the convective heat transfer coefficient estimated as 1 BTU/(hr.ft**2. degree Rankine)
T_w=1000.0 # the root temperature in degree fahrenheit
T_inf=90.0 # the ambient temperature in degree fahrenheit
import math
m=math.sqrt(hc/(k*delta))
P=2*W
A=2*delta*W
qz1=math.sqrt(hc*P*k*A)*(T_w-T_inf)*(sinh(m*L)+(hc/(m*k)*cosh(m*L)))/(cosh(m*L)+(hc/(m*k)*sinh(m*L)))
qz2=math.sqrt(k*A*hc*P)*(T_w-T_inf)*math.tanh(m*L)
Lc=L+delta
qz3=k*A*m*(T_w-T_inf)*math.tanh(m*L*(1+delta/Lc))
print"(a)The heat transferred is ",round(qz1,2),"BTU/hr"
print"(b)The heat transferred is ",round(qz2,2),"BTU/hr In the book the answer is incorrect"
print"(c)The heat transferred is ",round(qz3,2)," BTU/hr"
(a)The heat transferred is 26.12 BTU/hr
(b)The heat transferred is 25.43 BTU/hr In the book the answer is incorrect
(c)The heat transferred is 26.1 BTU/hr
### Example 2.10 Page No 94¶
In [35]:
k=8.32 # thermal conductivity BTU/(hr.ft.degree Rankine)
hc=400.0 # the convective heat transfer coefficient given in BTU/(hr.ft**2. degree Rankine)
import math
delta_opt=0.55/(12*2)
Lc=math.sqrt(delta_opt*k/(0.583*hc))
A=Lc*delta_opt
parameter=Lc**1.5*math.sqrt(hc/(k*A))
efficiency=0.6
W=1/(2.0*12.0) # width in ft
T_w=190.0 # wall temperature in degree fahrenheit
T_inf=58.0 # ambient temperature in degree fahrenheit
L=1.0 # length in ft
delta=W/2.0
q_ac=efficiency*hc*2*W*math.sqrt(L**2+delta**2)*(T_w-T_inf)
print"(a)The optimum length is ",round(Lc*12,2),"inch"
print"(b)The actual heat transferred is ",round(q_ac,2),"BTU/hr. NOTE: In the book answer is incorrect"
(a)The optimum length is 0.34 inch
(b)The actual heat transferred is 2640.57 BTU/hr. NOTE: In the book answer is incorrect
### Example 2.11 Page No 95¶
In [9]:
N=9 # number of fins
delta=0.003/2.0
L=0.025
Lc=L+delta
R=0.219/2
R2c=R+delta
R1=R-L
T_w=260 # root wall temperature in degree celsius
T_inf=27 # ambient temperature in degree celsius
hc=15
k=52 # thermal conductivity of cast iron in W/(m.K)from appendix table B2
import math
Ap=2*delta*Lc
As=2*math.pi*(R2c**2-R1**2)
radius_ratio=R2c/R1 # for finding efficiency from figure 2.38
variable=Lc**1.5*math.sqrt(hc/(k*Ap))
efficiency=0.93 # efficiency from figure 2.38
qf=N*efficiency*As*hc*(T_w-T_inf)
Sp=0.0127 # fin spacing
Asw=2*math.pi*R1*Sp*N # exposed surface area
qw=hc*Asw*(T_w-T_inf)
q=qf+qw
H=N*(Sp+2*delta)
Aso=2*math.pi*R1*H # surface area without fins
qo=hc*Aso*(T_w-T_inf)
effectiveness=q/qo # effectiveness defined as ratio of heat transferred with fins to heat transferred without fins
print"(a)The total heat transferred from the cylinder is ",round(q,0),"W"
print"(b)The Heat transferred without fins is W",round(qo,0),"W"
print"(c)The fin effectiveness is ",round(effectiveness,2)
(a)The total heat transferred from the cylinder is 1164.0 W
(b)The Heat transferred without fins is W 262.0 W
(c)The fin effectiveness is 4.44
In [ ]:
|
2021-07-29 01:27:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5481465458869934, "perplexity": 10764.652689174158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00697.warc.gz"}
|
https://gumeo.github.io/post/emplace-back/
|
# emplace_back vs push_back
Short summary
tl;dr emplace_back is often mistaken as a faster push_back, while it is in fact just a different tool. Do not blindly replace push_back by emplace_back, be careful of how you use emplace_back, since it can have unexpected consequences.
I have repeatedly run into the choice of using emplace_back instead of push_back in C++. This short blog post serves as my take on this decision.
Both of the methods in the title, along with insert and emplace, are ways to insert data into standard library containers. emplace_back is for adding a single element to the dynamic array std::vector. There is a somewhat subtle difference between the two:
1. push_back calls the constructor of the data that you intend to push and then pushes it to the container.
2. emplace_back “constructs in place”, so one skips an extra move operation, potentially creating faster bytecode. This is done by forwarding the arguments to the container’s template type constructor.
On the surface, emplace_back might look like a faster push_back, but there is a subtle difference contained in the act of forwarding arguments. Searching for the problem online yields a lengthy discussion on the issue (emplace_back vs push_back). In summary, the discussion leans towards choosing emplace_back to insert data into your container, however the reason is not completely clear.
## Be careful
After searching a bit more I found this post, which stresses how careful one should be. To further stress the ambiguity of the matter, the google c++ style guide does not provide an explicit preference. The reason they don’t state a preference, is because these are simply slightly different tools, and you should not use emplace_back unless you properly understand it, and there is a proper reason for it.
The following code should make it clear how emplace_back is different from push_back:
#include<vector>
#include<iostream>
int main(){
// Basic example
std::vector<int> foo;
foo.push_back(10);
foo.emplace_back(20);
// More tricky example
std::vector<std::vector<int>> foo_bar;
//foo_bar.push_back(10); // Throws error!!!!
foo_bar.emplace_back(20); // Compiles with no issue
std::cout << "foo_bar size: " << foo_bar.size() << "\n";
std::cout << "foo_bar[0] size: " << foo_bar[0].size() << "\n";
return 0;
}
Uncommenting the line foo_bar.push_back(10) yields the following compilation error.
$g++ test.cpp -o test test.cpp: In function ‘int main()’: test.cpp:11:24: error: no matching function for call to ‘std::vector<std::vector<int> >::push_back(int)’ foo_bar.push_back(10); ^ ... Some more verbose diagnostic So we get an error. An extra diagnostic provides us with the following: no known conversion for argument 1 from ‘int’ to ‘const value_type& {aka const std::vector<int>&}’. So it seems there is no conversion here that makes sense. The compilation completes without errors if we comment out the trouble line. Running the code yields the following result: $ ./test
foo_bar size: 1
foo_bar[0] size: 20
What is the difference? For emplace_back we forward the arguments to the constructor, adding a new std::vector<int> new_vector_to_add(20) to foo_bar. This is the critical difference. So if you are using emplace_back you need to be a bit extra careful in double checking types.
## Another example with conversion
The example above is very simple and shows the difference of forwarding arguments to the value type constructor or not. The following example I added in an SO question shows a bit more subtle case where emplace_back could make us miss catching a narrowing conversion from double to int.
#include <vector>
class A {
public:
explicit A(int /*unused*/) {}
};
int main() {
double foo = 4.5;
std::vector<A> a_vec{};
a_vec.emplace_back(foo); // No warning with Wconversion
//A a(foo); // Gives compiler warning with Wconversion as expected
}
## Why is this a problem?
The problem is that we are unaware of the problem at compile-time. If this was not the intended behavior, we have caused a runtime error, which is generally harder to fix. Let us catch the issue somehow. You might wonder that some warning flags, e.g., -Wall could reveal the issue. However, the program compiles fine with -Wall. -Wall contains narrowing, but it does not contain conversion. Further, adding -Wconversion yields no warnings.
$g++ -Wall -Wconversion test_2.cpp -o test The problem is that this conversion is happening in a system header, so we also need -Wsystem-headers to catch the issue. $ g++ -Wconversion -Wsystem-headers test_2.cpp
In file included from /usr/include/c++/7/vector:60:0,
from test_2.cpp:1:
/usr/include/c++/7/bits/stl_algobase.h: In function ‘constexpr int std::__lg(int)’:
/usr/include/c++/7/bits/stl_algobase.h:1001:44: warning: conversion to ‘int’ from ‘long unsigned int’ may alter its value [-Wconversion]
{ return sizeof(int) * __CHAR_BIT__ - 1 - __builtin_clz(__n); }
^
/usr/include/c++/7/bits/stl_algobase.h: In function ‘constexpr unsigned int std::__lg(unsigned int)’:
/usr/include/c++/7/bits/stl_algobase.h:1005:44: warning: conversion to ‘unsigned int’ from ‘long unsigned int’ may alter its value [-Wconversion]
{ return sizeof(int) * __CHAR_BIT__ - 1 - __builtin_clz(__n); }
^
In file included from /usr/include/x86_64-linux-gnu/c++/7/bits/c++allocator.h:33:0,
from /usr/include/c++/7/bits/allocator.h:46,
from /usr/include/c++/7/vector:61,
from test_2.cpp:1:
/usr/include/c++/7/ext/new_allocator.h: In instantiation of ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = A; _Args = {double&}; _Tp = A]’:
/usr/include/c++/7/bits/alloc_traits.h:475:4: required from ‘static void std::allocator_traits<std::allocator<_Tp1> >::construct(std::allocator_traits<std::allocator<_Tp1> >::allocator_type&, _Up*, _Args&& ...) [with _Up = A; _Args = {double&}; _Tp = A; std::allocator_traits<std::allocator<_Tp1> >::allocator_type = std::allocator<A>]’
/usr/include/c++/7/bits/vector.tcc💯30: required from ‘void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {double&}; _Tp = A; _Alloc = std::allocator<A>]’
test_2.cpp:9:25: required from here
/usr/include/c++/7/ext/new_allocator.h:136:4: warning: conversion to ‘int’ from ‘double’ may alter its value [-Wfloat-conversion]
{ ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
And there you have it - look at all the verbose output. The problem is not apparent from this wall of text for the uninitiated.
## So when should you use emplace_back?
One reason to use emplace_back is when the move operation that we can save, is actually quite expensive. Consider the following example:
class Image {
Image(size_t w, size_t h);
};
std::vector<Image> images;
images.emplace_back(2000, 1000);
Instead of moving the image, which consists of potentially a lot of data, we construct it in place, and forward the constructor arguments in order to do that. These kind of cases are quite special, we should already be aware that this is large data that we are adding to the container.
Another case for using emplace_back was added in C++17. Then emplace_back returns a reference to the inserted element, this is not possible at all with push_back.
## emplace_back is a potential premature optimization.
Going from push_back to emplace_back is a small change that can usually wait, and like the image case, it is usually quite apparent when we want to use it. If you want to use emplace_back from the start, then make sure you understand the differences. This is not just a faster push_back. For safety, reliability, and maintainability reasons, it is better to write the code with push_back when in doubt. This choice reduces the chance of pushing an unwanted hard to find implicit conversion into the codebase, but you should weigh that risk against the potential speedups, these speedups should then ideally be evaluated when profiling.
|
2023-01-30 04:36:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4160827398300171, "perplexity": 4688.72363389126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499801.40/warc/CC-MAIN-20230130034805-20230130064805-00254.warc.gz"}
|
http://math.stackexchange.com/questions/176778/why-does-the-2s-and-1s-complement-subtraction-works/176833
|
Why does the $2$'s and $1$'s complement subtraction works?
The algorithm for $2$'s complement and $1$'s complement subtraction is tad simple:
$1.$ Find the $1$'s or $2$'s complement of the subtrahend.
$2.$ Add it with minuend.
$3.$ If there is no carry then then the answer is in it's complement form we have take the $1$'s or $2$'s complement of the result and assign a negative sign to the final answer.
$4.$ If there is a carry then add it to the result in case of $1$'s complement or neglect in case of $2$'s complement.
This is what I was taught in undergraduate, but I never understood why this method work. More generally why does the radix complement or the diminished radix complement subtraction works? I am more interested in understanding the mathematical reasoning behind this algorithm.
-
Here is a very good explanation of why the method for subtraction in 2's complement works: cs.cornell.edu/~tomf/notes/cps104/twoscomp.html#whyworks. – anonymous Jul 30 '12 at 21:48
@anonymous: That only explains why 2's complement of a number is same as inverting the bits and adding one. – VelvetThunder Jul 31 '12 at 0:22
But, at least, it offers insight into why inverting and adding one works for finding the negative of a number (since this is essentially the part of the algorithm that distinguishes it from addition of positive numbers, I assumed this is what you wanted to know, but I may be misinterpreting what you are asking, so correct me if I'm wrong). Then, calculating (a - b) is the same as calculating the sum of a and the negative of b. – anonymous Jul 31 '12 at 16:13
To give you the intuition I will focus on non-negative numbers $x,y\ge 0$ and that the numbers are $k$-bit integers.
1-complenet
To get the 1-complement from a number $x$ you flip every bit in $x$s binary representation $\text{bin}(x)$, 1 $\Leftrightarrow$ 0. You can do the same, by subtracting $x$ from the binary number that contains only 1s and is as long as $\text{bin}(x)$. For example,
111111
-010111
-------
101000
This works for every single bit, and there are no carries, so it works for the whole binary representation.
Thus we can describe the 1-complement of $x$ by $c_1(x):=(2^{k+1}-1)-x$, where $k$ is the number of bits we have per number. By definition of the 1-complement we have $c_1(c_1(x))=x$.
Assume that $x>y$ (rule 4), hence when subtracting, we will have an overflow. Thus we will add $+1$ to the result, which yields $$x+c_1(y)+1=x+(2^{k+1}-1)-y+1=(2^{k+1})+(x-y),$$ since we consider only the first $k$ bits, the 1 representing $2^{k+1}$ will vanish, and we get $x-y$.
Assume now that $x\ge y$ (rule 3). Then there is no overflow and $$c_1(x+c_1(y))=c_1 (x+(2^{k+1}-1)-y)=c_1(c_1(y-x))=y-x=|x-y|.$$
2-complement
To get the 2-complement of an number $x$ we flip the bits, starting after the right most 1. This can be interpreted as subtracting $x$ from $2^{k+1}$. Again we look at an example
1000000
-0010100
--------
101100
Until the first 1 we don't have carries, so the bits are just taken from $x$, after the first 1, we have always a carry, and hence the bits get flipped.
Similarly to the 1-complement let $c_2(x):=2^{k+1}-x$, and again $c_2(c_2(x))=x$.
Assume that $x>y$ (rule 4), this gives $$x+c_2(y)=2^{k+1}-y+x=2^{k+1}+(x-y).$$ Again, the $2^{k+1}$ vanishes since we consider only the first $k$ bits.
Assume that $x<y$ (rule 3), then $$c_2(x+c_2(y))=c_2 (x+2^{k+1}-y)=c_2(c_2(y-x))=y-x=|x-y|.$$
Both concepts can be extended (on a very natural way) to work with negative numbers.
-
Thanks.. but I think the minuend should be $2^{k+1}-1$ in case of $1$'s complement and from the steps it seems that this answer wanted to show $(y-x)$ (since $x$ is complemented) but somehow end up computing $(x-y)$?! Still it doesn't explain the adding and neglecting of carry in $1$'s and $2$'s complement respectively. – VelvetThunder Jul 30 '12 at 15:03
We consider $k$-bit integers with an extra $k+1$ bit, that is associated with the sign. If the "sign-bit" is 0 then we treat this number as usual as a positive number in binary representation. Otherwise this number is negative (details follow).
Let us start with the 2-complement. To compute the 2-complement of an number $x$ we flip the bits, starting after the right most 1. Here is an example:
0010100 -> 1101100
We define for every positive number $a$, the inverse element $-a$ as the 2-complement of $a$. When adding two numbers, we will forget the overflow bit (after the sign bit), if an overflow occurs. This addition with the set of defined positive and negative numbers forms a finite Abelian group. Notice that we have only one zero, which is 00000000. The representation 10000000 does not encode a number. To check the group properties, observe that
• $a + (-a) = 0$,
• $a+0=a$,
• $-(-a)=a$,
• $(a+b)=(b+a)$,
• $(a+b)+c= a+ (b+c)$.
As a consequence, we can compute with the 2-complement as with integers. To subtract $a$ from $b$ we compute $a+(-b)$. If we want to restore the absolute value of a negative number we have to take its 2-complement (this explains step 3 in the question's algorithm).
Now the 1-complement. This one is more tricky, because we have two distinct zeros (000000 and 111111). I think it is easiest to consider the 1-complement of $a$ as its 2-complement minus 1. Then we can first reduce the 1-complement to the 2-complement and then argue over the 2-complements.
-
|
2015-10-09 11:25:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243809580802917, "perplexity": 388.47497961339326}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737927030.74/warc/CC-MAIN-20151001221847-00030-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://motls.blogspot.com/2007/01/phenomenology-2006-mirage-mediation.html?m=1
|
Saturday, January 13, 2007
Phenomenology 2006: mirage mediation
Hyung Do Kim whom I know from Santa Cruz told me what he - and probably many others - consider to be the two most important directions of research in particle phenomenology of 2006:
• hiding Higgs,
• mirage mediation.
We have already discussed Dermíšek's and Gunion's idea about a possible 105 GeV Higgs. Their work has led to some activity of other physicists - including Schuster and Toro of Harvard University - who have tried to modify some assumptions and avoid the 114 GeV lower bound on the Higgs mass.
Mirage mediation
The second development is a new kind of supersymmetry breaking. It is usually dubbed "mirage mediation" and was started by Choi, Jeong, Kobayashi, Okumura at the end of 2005. They described it as a mixed modulus-anomaly mediation of supersymmetry breaking. This scenario may be interpreted as a consequence of the canonical string-theoretical KKLT models: this acronym is well-known to everyone who likes to talk about the "landscape".
For an appropriate type of uplifting potential, the little hierarchy problem is solved by canceling the integrated RG running of the Higgs mass between the GUT scale and the TeV scale. It just naturally happens that the total RG evolution of the Higgs mass is compensated by the anomaly-mediated contribution to its mass. The result is that the Higgs is light and one generates a little hierarchy between the Higgs mass and the supersymmetry scale that happens to be "sqrt(8).pi" times heavier than the Higgs.
The models that solve the hierarchy problem usually want to predict a nearly degenerate spectrum of the superpartners. It's because all superpartners must be heavier than the experimental lower bound. If the superpartner masses were too diverse, some of the superpartners would have to be far heavier which would lead to imperfect cancellations of the corrections to the Higgs mass and a stronger required tuning ("fine-tuning" could be too strong a word for the 1% accuracy).
The mirage mediation model is no exception. Indeed, the gaugino masses are nearly degenerate. The lightest superpartner (LSP) is not just a neutralino - it is, in fact, a pure Higgsino. Stop and gaugino masses are related and SUSY flavor and CP violation is suppressed. This mechanism leads to a fictitious (mirage) unification at an intermediate scale around 10^{10-11} GeV.
Let me say that the RG running to the Higgs mass is dominated by the dominant Yukawa coupling - the top Yukawa coupling - and the corresponding superpartner mass - the stop mass. Because it is a running effect, it depends on the logarithm:
• Delta (mHiggs:u2) = -3/(4 pi2) yt2 mstop2 ln (mGUT/mstop)
That's pretty big and it wants to make the stop squark as light as the Higgs which seems incompatible with the experiments. Everyone who became interested should read the paper by Choi et al. and its followups. But let me say the following: if this realization of supersymmetry breaking were observed at the LHC, it would be not only a proof of SUSY but also a very strong circumstantial evidence in favor of the flux compactifications of string theory and maybe even the landscape itself although we would have to think twice before making such a conclusion.
|
2021-05-18 01:49:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.616318941116333, "perplexity": 977.0375334377821}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00262.warc.gz"}
|
http://crypto.stackexchange.com/questions/6497/are-there-any-practical-implementation-of-a-homomorphic-hashing-or-signature-sch/6511
|
# Are there any practical implementation of a homomorphic hashing or signature scheme?
A homomorphic hash function is a function $H : A \to B$ between two sets with some algebraic structure $(A, *)$ and $(B, \star)$ such that
• $H$ is collision resistant, i.e. it is hard to find $x \neq y$ such that $H(x) = H(y)$ and
• $H$ is a homomorphism, i.e. $H(x * y) = H(x) \star H(y)$.
Are there any practical realizations of such a homomorphic hash function, or even a homomorphic signature scheme (i.e., where we can "add" valid signatures to get a signature of the "sum" of two messages)?
Even better, are there even any libraries implementing this?
-
As far as I know, there is yet no practically efficient implementation of fully homomorphic encryption on the horizon. So the answer to your question would evidently be negative, at least for a good hashing scheme, IMHO. – Mok-Kong Shen Feb 27 at 11:00
FYI, I posted a question on Meta a while back about this type of question and whether we should allow them. Perhaps you would like to weigh in? – mikeazo Feb 27 at 12:31
sashank, I think you might need to specify more precisely exactly what you mean by homomorphic hashing. – D.W. Feb 27 at 15:45
@D.W., sashank: I edited the question to contain an explanation of what is searched here. – Paŭlo Ebermann Feb 27 at 20:16
You might want to add an additional constraint; the Identity function pedantically meets all the requirements listed; it is hard (impossible) to find $x \neq y$ with $I(x) = I(y)$, and for it homomorphic with any operation $\star$, that is, $I(A \star B) = I(A) \star I(B)$ – poncho Feb 27 at 22:20
show 5 more comments
## 1 Answer
There's plenty of research in this area. I'll give you just a small sampling:
Like I said, this is only a small subset of the available research in this area. I found most of these through about 5 minutes with Google Scholar. I recommend you start by doing a literature review to familiarize yourself with the research literature on this subject: search to find as many relevant papers as possible; read each such paper; for each paper you find, read the related work section and bibliography to try to identify other relevant papers, and also use Google Scholar or other sites to find other papers that cite that paper that might be relevant; for each additional relevant paper you find, repeat the process.
After you have done this process, you should be in a better position to ask a more narrowly targeted question with a particular set of requirements -- or, if you're lucky, you might have found a solution to your particular problem already described in the literature!
-
I know i can do it myself if it was research references, i am not a noob , i have been good community player and good netizen while asking questions, my question was precisely on a library or platform which implemented these techniques , which got distorted by edits , i could not find myself the library through google . thts reason i have posted the question, am NOT looking for research references – sashank Feb 28 at 6:53
@sashank: Note though that I had responded to your original (unedited) OP with a comment and given there a negative answer. – Mok-Kong Shen Mar 2 at 14:26
add comment
|
2013-12-10 03:01:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6724679470062256, "perplexity": 648.3939274174852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164005827/warc/CC-MAIN-20131204133325-00034-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/105221/quantum-groups-not-via-presentations/105228
|
# quantum groups… not via presentations
Given a semisimple Lie algebra $\mathfrak g$ with Cartan matrix $a_{ij}$, the quantum group $U_q(\mathfrak g)$ is usually defined as the $\mathbb Q(q)$-algebra with generators $K_i$, $E_i$, $F_i$ (the $K_i$ are invertible and commute with each other) and relations $$\begin{split} K_iE_j &K_i^{-1}=q^{\langle\alpha_i,\alpha_j\rangle}E_j\qquad\qquad K_iF_j K_i^{-1}=q^{-\langle\alpha_i,\alpha_j\rangle}F_j\, \\\ &[E_i,F_j]=\delta_{ij}\frac{K_i-K_i^{-1}} {\quad q^{\langle\alpha_i,\alpha_i\rangle/2} -q^{-\langle\alpha_i,\alpha_i\rangle/2}\quad}\, \end{split}$$ along with two more complicated relations that I won't reproduce here.
One then defines the comultiplication, counit, and antipode by some more formulas.
Is there a way of defining $U_q(\mathfrak g)$ that doesn't involve writing down all those formulas?
In other words, is there a procedure that takes $\mathfrak g$ as input, produces $U_q(\mathfrak g)$ as output, and doesn't involve the choice of a Cartan subalgebra of $\mathfrak g$?
-
not sure you will be satisfied, but there is FRT (Faddeev-Reshetikhin-Takhtadjan) approach, first define Fun_q(GL) via RTT=TTR relation, then extract dual Hopf algebra (i.e. Uq(GL)) analougsly to classical group as distributions in origin... – Alexander Chervov Aug 22 '12 at 10:30
Would you consider a path through quantum geometric Satake and Tannakian reconstruction an admissible procedure? – S. Carnahan Aug 22 '12 at 11:43
@Scott: If I use geometric Satake, Do I get U_q(g) out of that procedure, or do I just get its category of representations? @Alexander: Producing the dual Hopf algebra is just as good as producing U_q(g) itself. You're mentioning Fun_q(GL)... is the FRT approach something that only works for GL_n? – André Henriques Aug 22 '12 at 12:40
@Andre As far as I remember in FRT paper they can work with ALL Lie algebras (well, may be classical ones). They do NOT choose Cartan subalgebra but RTT=TTR is explicit matrix relation so we choose basis in Lie algebra, but may be it does not much depend on basis choice... RTT=TTR is corollary of "universal" YangbBAxter R_{12}R_{23}R_{12} =R_{23}R_{12}R_{23}. – Alexander Chervov Aug 22 '12 at 13:28
$\newcommand\g{\mathfrak{g}}$The answer to your question "is there a procedure that takes $\g$ as input, produces $U_q(\g)$ as output, and doesn't involve the choice of a Cartan subalgebra of $\g$?" is No. Not if you want it "canonical" in any sense. (Of course, if I wanted to cheat my way to a "yes," I could make choices that are equivalent to choosing a Cartan but not stated as such — I would construct for you a certain canonical homogeneous space, and then ask you to pick a point in it, ....)
The problem is that the automorphism group of $\g$ does not lift to $U_q(\g)$. Recall that the inner automorphism group is precisely the simplest group $G$ integrating $\g$ (take any connected group integrating $\g$ and mod out by its center). On the other hand, the inner automorphism "group" of $U_q(\g)$ is $U_q(\g)$ (or "$\operatorname{spec}(\operatorname{Fun}_q(G))$", depending on your point of view) itself. This certainly deforms the automorphisms. But there is not a procedure like you ask for, because you have to break some symmetry.
Here's a way to say this correctly: In a precise sense $U_q(\g)$ degenerates to $U(\g)$ as $q\to 1$, and this is part of the structure that I take you to mean when you write "$U_q(\g)$". In this degeneration, you can also study $\frac{\partial}{\partial q}(\dots)$ at $q=1$. In particular, looking at $\frac{\partial}{\partial q}\bigr|_{q=1}$ of the comultiplication on $U_q(\g)$ recovers the Lie cobracket on $\g$. But the Lie cobracket knows the Cartan subalgebra: it is precisely the kernel of the Lie cobracket. So $\operatorname{Aut}(\g)$ cannot lift to $U_q(\g)$ compatibly with all of this structure.
What does exist without choosing a Cartan subalgebra is the braided category of representations of $U_q(\g)$, although I would have to think a moment to recall how to write it down explicitly. (In the asymptotic limit $q = e^\hbar$ with formal $\hbar$, I do know how to write down $\operatorname{Rep}(U_{e^\hbar}\g)$ explicitly for any choice of Drinfel'd associator.) In particular, as a braided monoidal category, this category does have an action by $\operatorname{Aut}(\g)$.
But the category is strictly less data than the Hopf algebra. Namely, "Tannakian reconstruction" is the statement that $U_q(\g)$ is the Hopf algebra of endomorphisms of a certain braided coalgebra in $\operatorname{Rep}(U_{q}\g)$ (or it is the Hopf dual of the Hopf algebra of co-endomorphisms of a certain braided algebra in the category of Ind-objects in $\operatorname{Rep}(U_{q}\g)$, if for you representations are finite-dimensional), and you cannot choose this coalgebra canonically. This coalgebra is unique up to isomorphism, but certainly not up to unique isomorphisms (or else $U_q(\g)$ would be trivial). The failure of this coalgebra to exists up to canonical isomorphism is essentially the same problem as above.
In a precise way, this is failure of there to exist a canonical isomorphisms between different choices of the coalgebra is analogous of the failure of the "fundamental group" of a topological space to be an honest group. Recall that a pointed topological space has a fundamental group, which is a group determined up to canonical isomorphism. For comparison, a non-pointed but path-connected topological space has a group assigned to each point, and these are non-canonically isomorphic. Thus a non-pointed path-connected topological space has a "fundamental group up to conjugation," also known as a connected groupoid.
What this should all mean, although I'm not going to try to write down the correct definition, is that there does exist canonically associated to $\g$ a "Hopf algebroid" which is noncanonically equivalent as a Hopf algebroid to any choice of $U_q(\g)$. Or, at least, I'm confident of everything in my answer in the $\hbar\to 0$ asymptotics, and have thought less about the finite-$q$ case, and so I'm generalizing intuition from that setting, but I think it's all correct.
-
To be clear, are you saying there is a Tannakian reconstruction for Uq(g) up to non-unique isomorphism? – B. Bischof Aug 22 '12 at 22:57
Thank you Theo for your insightful answer. I'm quite curious about the Hopf algebroid you mentioned in your last paragraph... can you say a bit more about it? – André Henriques Aug 23 '12 at 9:48
@B. Bischof: I believe so, but I could be mistaken. Certainly to have a chance of any two fibers being isomorphic I had better word over an algebraically closed field — otherwise you should expect that the fibers correspond to "Galois actions" of the quantum group on the field. But I'm trusting intuition from the case of actual algebraic groups. – Theo Johnson-Freyd Aug 23 '12 at 12:13
@B. Bischof: That said, most likely even if there are other fiber functors, I can ask for those that degenerate to "the" fiber functor as $q\to 1$, and these should all be isomorphic, I'd expect. – Theo Johnson-Freyd Aug 23 '12 at 12:14
@Andre: Not really — again I'm going based on intuition from the group case. What I'd expect is that there is a category whose objects are faithful braided monoidal functors from "$\mathrm{Rep}(U_q(\g)$" to some version of VECT; equivalently, the objects of the category are braided coalgebras which are generators of the category. Morphisms are all linear maps, but this category should be some version of "Hopf algebroid" where the comultiplication encodes the various coalgebra structures, e.g. the grouplike elements are the coalgebra homomorphisms. – Theo Johnson-Freyd Aug 23 '12 at 12:27
Some possible partial answers might be:
• one could follow Lusztig and do away with the Lie algebra completely, just starting from a root datum. Then do some geometry...
• Majid's reinterpretation of Lusztig's construction, as exposited in his "A Quantum Groups Primer", is a (good, IMHO) attempt to explain where the formulae come from. The definition of the positive part of $U_{q}(\mathfrak{g})$ as natural (braided) object acted on by "the Cartan part" explains most of the formulae. Taking the Drinfeld double "explains" the cross-relations.
• another explanation for the formulae, especially the quantized Serre relations, is given via the Ringel-Hall categorical approach (Hall algebras); one should also mention Green at this point. This has been extended of late to double Hall algebras, trying to construct the whole quantum group and not just the positive part, but this is still essentially done via the Drinfeld double construction, just at a categorical level.
• a quite non-standard route would be to go from $\mathfrak{g}$ to the algebraic group $G$, construct the quantized coordinate ring - where you might find the formulae more to your taste and/or better motivated along Grothendieck/Manin lines (see the quantum groups book of Brown-Goodearl for example) - then dualize to get $U_{q}(\mathfrak{g})$. I say non-standard because most people want to go the other way, to figure out what the quantized coordinate ring should be.
Apologies for the dearth of precise references: I can try to provide some if any of the above is helpful. Kassel's book specifically covers the FRT construction, by the way.
Edit: The full Hopf structure, as opposed to just the algebra structure, is - again IMHO - essentially canonically determined. The coproduct is pretty canonical, by comparison with the natural one for $U(\mathfrak{g})$. There's almost no choice for the counit and the antipode formula is (if I remember correctly) forced from the requirement that various maps are algebra morphisms. Alternatively, the Hopf structure can be seen to drop out of the Drinfeld double construction; that might not be a helpful thing to say, of course.
-
The double construction of $\mathfrak g$ picks out a particular Cartan subalgebra. Indeed, the coproduct picks out the Cartan subalgebra, unless you're going to play games with "gauge equivalence" or something. I agree that counit and antipode maps are forced: this is equivalent to saying that an associative algebra cannot have more than one unit, and that an element in a unital associative algebra cannot have more than one inverse. This is in contrast with the coproduct, which is determined up to isomorphism but any particular choice of coproduct is actual data. – Theo Johnson-Freyd Aug 22 '12 at 14:42
|
2015-04-26 21:18:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784218430519104, "perplexity": 463.65069311302193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656168.61/warc/CC-MAIN-20150417045736-00288-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Aloday.jean-louis
|
## Loday, Jean-Louis
Compute Distance To:
Author ID: loday.jean-louis Published as: Loday, Jean-Louis; Loday, J.-L.; Zinbiel, G. W.; Loday, J. L. Homepage: http://www-irma.u-strasbg.fr/~loday/ External Links: MacTutor · MGP · Wikidata · GND · IdRef · theses.fr
Documents Indexed: 83 Publications since 1971, including 5 Books 5 Contributions as Editor · 1 Further Contribution Biographic References: 2 Publications Co-Authors: 40 Co-Authors with 39 Joint Publications 1,051 Co-Co-Authors
all top 5
### Co-Authors
47 single-authored 9 Ronco, María Ofelia 4 Pirashvili, Teimuraz 3 Brown, Ronald 3 Popov, Todor 2 Kassel, Christian 2 Procesi, Claudio 2 Quillen, Daniel Gray 2 Vallette, Bruno 1 Aguiar, Marcelo 1 Atiyah, Michael Francis 1 Bai, Chengming 1 Bass, Hyman 1 Bergeron, Nantel 1 Brown, Ken 1 Casas Miras, José Manuel 1 Chapoton, Frédéric 1 Cuntz, Joachim 1 Dokas, Ioannis 1 Duflot, Jeanne 1 Fiedorowicz, Zbigniew 1 Frabetti, Alessandra 1 Friedlander, Eric Mark 1 Gillet, Henri A. 1 Goichot, François 1 Grayson, Daniel Richard 1 Guin-Walery, Dominique 1 Guo, Li 1 Holtkamp, Ralf 1 Latschev, Janko 1 Lluis-Puebla, Emilio 1 Mazur, Barry 1 Nikolov, Nikolay M. 1 Quillen, Jean 1 Ranicki, Andrew Alexander 1 Schappacher, Norbert 1 Segal, Graeme B. 1 Snaith, Victor Percy 1 Soulé, Christophe 1 Stasheff, James D. 1 Stein, Michael R. 1 Sullivan, Dennis Parnell 1 Tillmann, Ulrike 1 Voronov, Alexander A.
all top 5
### Serials
5 Comptes Rendus de l’Académie des Sciences. Série I 3 Advances in Mathematics 3 Journal of Algebra 3 Georgian Mathematical Journal 3 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 3 Grundlehren der Mathematischen Wissenschaften 2 Annales de l’Institut Fourier 2 Journal of Pure and Applied Algebra 2 Journal of Algebraic Combinatorics 2 Astérisque 2 Lecture Notes in Mathematics 1 Communications in Algebra 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Archiv der Mathematik 1 Commentarii Mathematici Helvetici 1 Inventiones Mathematicae 1 Journal of Combinatorial Theory. Series A 1 Journal für die Reine und Angewandte Mathematik 1 Manuscripta Mathematica 1 Mathematische Annalen 1 Mathematica Scandinavica 1 Mathematische Zeitschrift 1 Proceedings of the American Mathematical Society 1 Proceedings of the London Mathematical Society. Third Series 1 Topology 1 Transactions of the American Mathematical Society 1 Bulgarian Journal of Physics 1 $$K$$-Theory 1 Forum Mathematicum 1 L’Enseignement Mathématique. 2e Série 1 Expositiones Mathematicae 1 Notices of the American Mathematical Society 1 Documenta Mathematica 1 Séminaire Lotharingien de Combinatoire 1 Comptes Rendus de l’Académie des Sciences. Série I. Mathématique 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris 1 International Journal of Geometric Methods in Modern Physics 1 Bulletin of the American Mathematical Society 1 Contemporary Mathematics 1 Séminaires et Congrès 1 Nankai Series in Pure, Applied Mathematics and Theoretical Physics 1 Journal of $$K$$-Theory
all top 5
### Fields
56 Category theory; homological algebra (18-XX) 38 Nonassociative rings and algebras (17-XX) 30 Associative rings and algebras (16-XX) 30 Algebraic topology (55-XX) 14 $$K$$-theory (19-XX) 9 Group theory and generalizations (20-XX) 9 Manifolds and cell complexes (57-XX) 8 Combinatorics (05-XX) 5 General and overarching topics; collections (00-XX) 5 Quantum theory (81-XX) 4 Global analysis, analysis on manifolds (58-XX) 3 Commutative algebra (13-XX) 3 Algebraic geometry (14-XX) 3 Convex and discrete geometry (52-XX) 2 History and biography (01-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 1 General algebraic systems (08-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Computer science (68-XX)
### Citations contained in zbMATH Open
76 Publications have been cited 3,212 times in 2,157 Documents Cited by Year
Loday, Jean-Louis; Vallette, Bruno
2012
A noncommutative version of Lie algebras: Leibniz algebras. (Une version non commutative des algèbres de Lie: les algèbres de Leibniz.) Zbl 0806.55009
Loday, Jean-Louis
1993
Universal enveloping algebras of Leibniz algebras and (co)homology. Zbl 0821.17022
Loday, Jean-Louis; Pirashvili, Teimuraz
1993
Cyclic homology. Zbl 0780.18009
Loday, Jean-Louis
1992
Cyclic homology. 2nd ed. Zbl 0885.18007
Loday, Jean-Louis
1998
Van Kampen theorems for diagrams of spaces. Zbl 0622.55009
Brown, Ronald; Loday, Jean-Louis
1987
Cyclic homology and the Lie algebra homology of matrices. Zbl 0565.17006
Loday, Jean-Louis; Quillen, Daniel
1984
Hopf algebra of the planar binary trees. Zbl 0926.16032
Loday, Jean-Louis; Ronco, María O.
1998
Spaces with finitely many non-trivial homotopy groups. Zbl 0491.55004
Loday, Jean-Louis
1982
Dialgebras. Zbl 0999.17002
Loday, Jean-Louis
2001
Trialgebras and families of polytopes. Zbl 1065.18007
Loday, Jean-Louis; Ronco, María
2004
Central extensions of Lie algebras. (Extensions centrales d’algèbres de Lie.) Zbl 0485.17006
Kassel, Christian; Loday, Jean-Louis
1982
Opérations sur l’homologie cyclique des algèbres commutatives. (Operations on the cyclic homology of commutative algebras). Zbl 0686.18006
Loday, Jean-Louis
1989
Cup-product for Leibniz cohomology and dual Leibniz algebras. Zbl 0859.17015
Loday, Jean-Louis
1995
Cohomologie et groupe de Steinberg rélatifs. Zbl 0391.20040
Loday, Jean-Louis
1978
K-théorie algébrique et représentations de groupes. Zbl 0362.18014
Loday, Jean-Louis
1976
Realization of the Stasheff polytope. Zbl 1059.52017
Loday, Jean-Louis
2004
On the structure of cofree Hopf algebras. Zbl 1096.16019
Loday, Jean-Louis; Ronco, María
2006
Dialgebras and related operads. Zbl 0970.00010
2001
Order structure on the algebra of permutations and of planar binary trees. Zbl 0998.05013
Loday, Jean-Louis; Ronco, María O.
2002
Leibniz $$n$$-algebras. Zbl 1037.17002
Casas, J. M.; Loday, J.-L.; Pirashvili, T.
2002
Generalized bialgebras and triples of operads. Zbl 1178.18001
Loday, Jean-Louis
2008
Crossed simplicial groups and their associated homology. Zbl 0755.18005
Fiedorowicz, Zbigniew; Loday, Jean-Louis
1991
Aguiar, Marcelo; Loday, Jean-Louis
2004
Excision homotopique en basse dimension. (Homotopical excision in low dimension). Zbl 0573.55011
Brown, Ronald; Loday, Jean-Louis
1984
Cyclic homology and lambda operations. Zbl 0719.19002
Loday, J.-L.; Procesi, C.
1989
Leibniz representations of Lie algebras. Zbl 0855.17018
Loday, Jean-Louis; Pirashvili, Teimuraz
1996
Combinatorial Hopf algebras. Zbl 1217.16033
Loday, Jean-Louis; Ronco, María
2010
Arithmetree. Zbl 1063.16044
Loday, Jean-Louis
2002
On the algebra of quasi-shuffles. Zbl 1126.16029
Loday, Jean-Louis
2007
Homologies diédrale et quaternionique. (Dihedral and quaternionic homology). Zbl 0627.18006
Loday, Jean-Louis
1987
Encyclopedia of types of algebras 2010. Zbl 1351.17001
Zinbiel, G. W.
2012
Algebras with two associative operations (dialgebras). (Algèbres ayant deux opérations associatives (digèbres).) Zbl 0845.16036
Loday, Jean-Louis
1995
Homotopical excision, and Hurewicz theorems, for n-cubes of spaces. Zbl 0584.55012
Brown, Ronald; Loday, Jean-Louis
1987
Cofree Hopf algebras. (Algèbres de Hopf colibres.) Zbl 1060.16039
Loday, Jean-Louis; Ronco, María
2003
Obstruction à l’excision en K-théorie algébrique. Zbl 0461.18007
Guin-Walery, Dominique; Loday, Jean-Louis
1981
The tensor category of linear maps and Leibniz algebras. Zbl 0909.18003
Loday, J. L.; Pirashvili, T.
1998
Splitting associativity and Hopf algebras. (Scindement d’associativité et algèbres de Hopf.) Zbl 1073.16032
Loday, Jean-Louis
2004
Homology of symplectic and orthogonal algebras. Zbl 0716.17019
Loday, Jean-Louis; Procesi, Claudio
1988
On the operad of associative algebras with derivation. Zbl 1237.18007
Loday, Jean-Louis
2010
Loday, Jean-Louis
1996
Hausdorff series, Eulerian idempotents and Hopf algebras. (Série de Hausdorff, idempotents eulériens et algèbres de Hopf.) Zbl 0807.17003
Loday, Jean-Louis
1994
Overview on Leibniz algebras, dialgebras and their homology. Zbl 0893.17001
Loday, Jean-Louis
1997
Algebraic $$K$$-theory and the conjectural Leibniz $$K$$-theory. Zbl 1048.18005
Loday, Jean-Louis
2003
The diagonal of the Stasheff polytope. Zbl 1220.18007
Loday, Jean-Louis
2011
Künneth-style formula for the homology of Leibniz algebras. Zbl 0880.17001
Loday, Jean-Louis
1996
Homotopical syzygies. Zbl 0978.20022
Loday, Jean-Louis
2000
The symmetric operation in a free pre-Lie algebra is magmatic. Zbl 1264.17001
Bergeron, Nantel; Loday, Jean-Louis
2011
Symboles en K-théorie algébrique supérieure. Zbl 0493.18006
Loday, Jean-Louis
1981
Parking functions and triangulation of the associahedron. Zbl 1130.52006
Loday, Jean-Louis
2007
Parastatistics algebra, Young tableaux and the super plactic monoid. Zbl 1165.81029
Loday, Jean-Louis; Popov, Todor
2008
Cyclic homology, a survey. Zbl 0637.16013
Loday, Jean-Louis
1986
Partition eulérienne et opérations en homologie cyclique. (Eulerian partition and operations in cyclic homology). Zbl 0669.13006
Loday, Jean-Louis
1988
On restricted Leibniz algebras. Zbl 1162.17001
Dokas, Ioannis; Loday, Jean-Louis
2006
Higher Witt groups: A survey. Zbl 0356.18016
Loday, J.-L.
1976
Algebraic $$K$$-theory and cyclic homology. Zbl 1286.19004
Loday, Jean-Louis
2013
Comparaison des homologies du groupe linéaire et de son algèbre de Lie. (Comparison of homologies of a linear group and its Lie algebra). Zbl 0619.20025
Loday, Jean-Louis
1987
Operads: Proceedings of renaissance conferences. Special session and international conference on moduli spaces, operads, and representation theory/operads and homotopy algebra, March 1995/May–June 1995, Hartford, CT, USA/Luminy, France. Zbl 0855.00018
1997
Completing the operadic butterfly. Zbl 1187.18005
Loday, Jean-Louis
2006
Free loop space and homology. Zbl 1386.55011
Loday, Jean-Louis
2015
Cyclic homology and homology of the Lie algebra of matrices. (Homologie cyclique et homologie de l’algèbre de Lie des matrices.) Zbl 0536.17006
Loday, Jean-Louis; Quillen, Daniel
1983
A duality between standard simplices and Stasheff polytopes. (Une dualité entre simplexes standards et polytopes de Stasheff.) Zbl 1010.18007
Loday, Jean-Louis; Ronco, María O.
2001
Higher Whitehead groups and stable homotopy. Zbl 0337.55015
Loday, Jean-Louis
1976
Coassociative magmatic bialgebras and the Fine numbers. Zbl 1175.16027
Holtkamp, Ralf; Loday, Jean-Louis; Ronco, María
2008
Structure multiplicative en K-théorie algébrique. Zbl 0293.18019
Loday, Jean-Louis
1974
Loday, Jean-Louis; Ronco, María
2013
Dichotomy of the addition of natural numbers. Zbl 1284.06007
Loday, Jean-Louis
2012
Inversion of integral series enumerating planar trees. Zbl 1085.05009
Loday, Jean-Louis
2005
Operadic construction of the renormalization group. Zbl 1269.81091
Loday, Jean-Louis; Nikolov, Nikolay M.
2013
Homotopie des espaces de concordances. Zbl 0443.57023
Loday, Jean-Louis
1979
Hopf structures on standard Young tableaux. Zbl 1219.81169
Loday, Jean-Louis; Poppov, Todor
2010
Parametrized braid groups of Chevalley groups. Zbl 1147.20034
Loday, Jean-Louis; Stein, Michael R.
2005
Les matrices monomiales et le groupe de Whitehead $$Wh_2$$. Zbl 0348.55007
Loday, Jean-Louis
1976
On the boundary map $$K_ 3(\Lambda/I) > K_ 2(\Lambda,I)$$. Zbl 0467.18003
Loday, Jean-Louis
1981
From diffeomorphism groups to loop spaces via cyclic homology. Zbl 0928.19001
Loday, Jean-Louis
1998
Multiplicative structures in K-theory. (Structures multiplicatives en K-théorie.) Zbl 0228.55005
Loday, Jean-Louis
1972
Free loop space and homology. Zbl 1386.55011
Loday, Jean-Louis
2015
Algebraic $$K$$-theory and cyclic homology. Zbl 1286.19004
Loday, Jean-Louis
2013
Loday, Jean-Louis; Ronco, María
2013
Operadic construction of the renormalization group. Zbl 1269.81091
Loday, Jean-Louis; Nikolov, Nikolay M.
2013
Loday, Jean-Louis; Vallette, Bruno
2012
Encyclopedia of types of algebras 2010. Zbl 1351.17001
Zinbiel, G. W.
2012
Dichotomy of the addition of natural numbers. Zbl 1284.06007
Loday, Jean-Louis
2012
The diagonal of the Stasheff polytope. Zbl 1220.18007
Loday, Jean-Louis
2011
The symmetric operation in a free pre-Lie algebra is magmatic. Zbl 1264.17001
Bergeron, Nantel; Loday, Jean-Louis
2011
Combinatorial Hopf algebras. Zbl 1217.16033
Loday, Jean-Louis; Ronco, María
2010
On the operad of associative algebras with derivation. Zbl 1237.18007
Loday, Jean-Louis
2010
Hopf structures on standard Young tableaux. Zbl 1219.81169
Loday, Jean-Louis; Poppov, Todor
2010
Generalized bialgebras and triples of operads. Zbl 1178.18001
Loday, Jean-Louis
2008
Parastatistics algebra, Young tableaux and the super plactic monoid. Zbl 1165.81029
Loday, Jean-Louis; Popov, Todor
2008
Coassociative magmatic bialgebras and the Fine numbers. Zbl 1175.16027
Holtkamp, Ralf; Loday, Jean-Louis; Ronco, María
2008
On the algebra of quasi-shuffles. Zbl 1126.16029
Loday, Jean-Louis
2007
Parking functions and triangulation of the associahedron. Zbl 1130.52006
Loday, Jean-Louis
2007
On the structure of cofree Hopf algebras. Zbl 1096.16019
Loday, Jean-Louis; Ronco, María
2006
On restricted Leibniz algebras. Zbl 1162.17001
Dokas, Ioannis; Loday, Jean-Louis
2006
Completing the operadic butterfly. Zbl 1187.18005
Loday, Jean-Louis
2006
Inversion of integral series enumerating planar trees. Zbl 1085.05009
Loday, Jean-Louis
2005
Parametrized braid groups of Chevalley groups. Zbl 1147.20034
Loday, Jean-Louis; Stein, Michael R.
2005
Trialgebras and families of polytopes. Zbl 1065.18007
Loday, Jean-Louis; Ronco, María
2004
Realization of the Stasheff polytope. Zbl 1059.52017
Loday, Jean-Louis
2004
Aguiar, Marcelo; Loday, Jean-Louis
2004
Splitting associativity and Hopf algebras. (Scindement d’associativité et algèbres de Hopf.) Zbl 1073.16032
Loday, Jean-Louis
2004
Cofree Hopf algebras. (Algèbres de Hopf colibres.) Zbl 1060.16039
Loday, Jean-Louis; Ronco, María
2003
Algebraic $$K$$-theory and the conjectural Leibniz $$K$$-theory. Zbl 1048.18005
Loday, Jean-Louis
2003
Order structure on the algebra of permutations and of planar binary trees. Zbl 0998.05013
Loday, Jean-Louis; Ronco, María O.
2002
Leibniz $$n$$-algebras. Zbl 1037.17002
Casas, J. M.; Loday, J.-L.; Pirashvili, T.
2002
Arithmetree. Zbl 1063.16044
Loday, Jean-Louis
2002
Dialgebras. Zbl 0999.17002
Loday, Jean-Louis
2001
Dialgebras and related operads. Zbl 0970.00010
2001
A duality between standard simplices and Stasheff polytopes. (Une dualité entre simplexes standards et polytopes de Stasheff.) Zbl 1010.18007
Loday, Jean-Louis; Ronco, María O.
2001
Homotopical syzygies. Zbl 0978.20022
Loday, Jean-Louis
2000
Cyclic homology. 2nd ed. Zbl 0885.18007
Loday, Jean-Louis
1998
Hopf algebra of the planar binary trees. Zbl 0926.16032
Loday, Jean-Louis; Ronco, María O.
1998
The tensor category of linear maps and Leibniz algebras. Zbl 0909.18003
Loday, J. L.; Pirashvili, T.
1998
From diffeomorphism groups to loop spaces via cyclic homology. Zbl 0928.19001
Loday, Jean-Louis
1998
Overview on Leibniz algebras, dialgebras and their homology. Zbl 0893.17001
Loday, Jean-Louis
1997
Operads: Proceedings of renaissance conferences. Special session and international conference on moduli spaces, operads, and representation theory/operads and homotopy algebra, March 1995/May–June 1995, Hartford, CT, USA/Luminy, France. Zbl 0855.00018
1997
Leibniz representations of Lie algebras. Zbl 0855.17018
Loday, Jean-Louis; Pirashvili, Teimuraz
1996
Loday, Jean-Louis
1996
Künneth-style formula for the homology of Leibniz algebras. Zbl 0880.17001
Loday, Jean-Louis
1996
Cup-product for Leibniz cohomology and dual Leibniz algebras. Zbl 0859.17015
Loday, Jean-Louis
1995
Algebras with two associative operations (dialgebras). (Algèbres ayant deux opérations associatives (digèbres).) Zbl 0845.16036
Loday, Jean-Louis
1995
Hausdorff series, Eulerian idempotents and Hopf algebras. (Série de Hausdorff, idempotents eulériens et algèbres de Hopf.) Zbl 0807.17003
Loday, Jean-Louis
1994
A noncommutative version of Lie algebras: Leibniz algebras. (Une version non commutative des algèbres de Lie: les algèbres de Leibniz.) Zbl 0806.55009
Loday, Jean-Louis
1993
Universal enveloping algebras of Leibniz algebras and (co)homology. Zbl 0821.17022
Loday, Jean-Louis; Pirashvili, Teimuraz
1993
Cyclic homology. Zbl 0780.18009
Loday, Jean-Louis
1992
Crossed simplicial groups and their associated homology. Zbl 0755.18005
Fiedorowicz, Zbigniew; Loday, Jean-Louis
1991
Opérations sur l’homologie cyclique des algèbres commutatives. (Operations on the cyclic homology of commutative algebras). Zbl 0686.18006
Loday, Jean-Louis
1989
Cyclic homology and lambda operations. Zbl 0719.19002
Loday, J.-L.; Procesi, C.
1989
Homology of symplectic and orthogonal algebras. Zbl 0716.17019
Loday, Jean-Louis; Procesi, Claudio
1988
Partition eulérienne et opérations en homologie cyclique. (Eulerian partition and operations in cyclic homology). Zbl 0669.13006
Loday, Jean-Louis
1988
Van Kampen theorems for diagrams of spaces. Zbl 0622.55009
Brown, Ronald; Loday, Jean-Louis
1987
Homologies diédrale et quaternionique. (Dihedral and quaternionic homology). Zbl 0627.18006
Loday, Jean-Louis
1987
Homotopical excision, and Hurewicz theorems, for n-cubes of spaces. Zbl 0584.55012
Brown, Ronald; Loday, Jean-Louis
1987
Comparaison des homologies du groupe linéaire et de son algèbre de Lie. (Comparison of homologies of a linear group and its Lie algebra). Zbl 0619.20025
Loday, Jean-Louis
1987
Cyclic homology, a survey. Zbl 0637.16013
Loday, Jean-Louis
1986
Cyclic homology and the Lie algebra homology of matrices. Zbl 0565.17006
Loday, Jean-Louis; Quillen, Daniel
1984
Excision homotopique en basse dimension. (Homotopical excision in low dimension). Zbl 0573.55011
Brown, Ronald; Loday, Jean-Louis
1984
Cyclic homology and homology of the Lie algebra of matrices. (Homologie cyclique et homologie de l’algèbre de Lie des matrices.) Zbl 0536.17006
Loday, Jean-Louis; Quillen, Daniel
1983
Spaces with finitely many non-trivial homotopy groups. Zbl 0491.55004
Loday, Jean-Louis
1982
Central extensions of Lie algebras. (Extensions centrales d’algèbres de Lie.) Zbl 0485.17006
Kassel, Christian; Loday, Jean-Louis
1982
Obstruction à l’excision en K-théorie algébrique. Zbl 0461.18007
Guin-Walery, Dominique; Loday, Jean-Louis
1981
Symboles en K-théorie algébrique supérieure. Zbl 0493.18006
Loday, Jean-Louis
1981
On the boundary map $$K_ 3(\Lambda/I) > K_ 2(\Lambda,I)$$. Zbl 0467.18003
Loday, Jean-Louis
1981
Homotopie des espaces de concordances. Zbl 0443.57023
Loday, Jean-Louis
1979
Cohomologie et groupe de Steinberg rélatifs. Zbl 0391.20040
Loday, Jean-Louis
1978
K-théorie algébrique et représentations de groupes. Zbl 0362.18014
Loday, Jean-Louis
1976
Higher Witt groups: A survey. Zbl 0356.18016
Loday, J.-L.
1976
Higher Whitehead groups and stable homotopy. Zbl 0337.55015
Loday, Jean-Louis
1976
Les matrices monomiales et le groupe de Whitehead $$Wh_2$$. Zbl 0348.55007
Loday, Jean-Louis
1976
Structure multiplicative en K-théorie algébrique. Zbl 0293.18019
Loday, Jean-Louis
1974
Multiplicative structures in K-theory. (Structures multiplicatives en K-théorie.) Zbl 0228.55005
Loday, Jean-Louis
1972
all top 5
### Cited by 1,741 Authors
61 Ladra González, Manuel 52 Omirov, Bakhrom A. 49 Casas Miras, José Manuel 29 Ellis, Graham J. 27 Khudoyberdiyev, Abror Kh. 27 Zhuchok, Anatolii V. 24 Loday, Jean-Louis 23 Pilaud, Vincent 22 Camacho, Luisa Maria 22 Pirashvili, Teimuraz 21 Foissy, Loïc 21 Guo, Li 19 Bai, Chengming 19 Bremner, Murray R. 19 Novelli, Jean-Christophe 19 Thibon, Jean-Yves 18 Giraudo, Samuele 17 Cortiñas, Guillermo H. 17 Ebrahimi-Fard, Kurusch 17 Khmaladze, Emzar 15 Makhlouf, Abdenacer 15 Rocco, Noraí Romeu 14 Chapoton, Frédéric 14 Dotsenko, Vladimir Viktorovich 14 Ronco, María Ofelia 13 Biyogmam, Guy Roger 13 Inassaridze, Nikoloz 13 Niroomand, Peyman 13 Sheng, Yunhe 12 García-Martínez, Xabier 12 Patras, Frédéric 12 Rakhimov, Isamiddin Sattarovich 12 Vallette, Bruno 12 Weibel, Charles A. 12 Wu, Jie 11 Brown, Ronald 11 Burde, Dietrich 11 Cegarra, Antonio Martínez 11 Donadze, Guram 11 Gao, Xing 11 Salemkar, Ali Reza 11 Wagemann, Friedrich 11 Willwacher, Thomas Hans 10 Dzhumadil’daev, Askar Serkulovich 10 Hivert, Florent 10 Kurdachenko, Leonid Andriĭovych 10 Liu, Dong 10 Manchon, Dominique 10 Mikhaĭlov, Roman Valer’evich 9 Bergeron, Nantel 9 Fialowski, Alice 9 Gómez, José R. 9 Jafari, Seid Hadi 9 Kaĭgorodov, Ivan B. 9 Kassel, Christian 9 Livernet, Muriel 9 Markl, Martin 9 Porter, Timothy 8 Ayupov, Shavkat Abdullaevich 8 de Araujo Bastos, Raimundo jun. 8 Benayadi, Saïd 8 Carey, Alan L. 8 Chen, Xiaojun 8 Datuashvili, Tamar 8 Edalatzadeh, Behrouz 8 Igusa, Kiyoshi 8 Kaledin, Dmitry B. 8 Karimjanov, Ikboljon Abdulazizovich 8 Liu, Zhangju 8 Lodder, Gerald Matthew 8 Mandal, Ashis 8 Moravec, Primož 8 Saha, Ripan 8 Stitzinger, Ernest Lester 8 Turdibaev, Rustam Mirzalievich 8 van der Linden, Tim 7 Aguiar, Marcelo 7 Barnes, Donald W. 7 Calderón Martín, Antonio Jesús 7 Chen, Liangyun 7 Drummond-Cole, Gabriel C. 7 Ginzburg, Victor 7 Gubarev, Vsevolod Yur’evich 7 Guin, Daniel 7 Hu, Naihong 7 Ismailov, Nurlan A. 7 Liu, Jiefeng 7 Madariaga, Sara 7 Nistor, Victor 7 Parvizi, Mohsen 7 Przytycki, Józef H. 7 Remm, Elisabeth 7 Russo, Francesco Giuseppe 7 Subbotin, Igor Yakov 7 Tabuada, Gonçalo 6 Baues, Hans-Joachim 6 Carrasco, Pilar C. 6 Chen, Yuqun 6 Connes, Alain 6 Dolgushev, Vasily A. ...and 1,641 more Authors
all top 5
### Cited in 255 Serials
191 Journal of Algebra 183 Journal of Pure and Applied Algebra 155 Communications in Algebra 110 Advances in Mathematics 48 Transactions of the American Mathematical Society 45 $$K$$-Theory 38 Journal of Geometry and Physics 38 Algebraic & Geometric Topology 37 Communications in Mathematical Physics 37 Journal of Algebra and its Applications 34 Proceedings of the American Mathematical Society 33 Journal of Algebraic Combinatorics 29 Journal of Homotopy and Related Structures 28 Annales de l’Institut Fourier 26 Letters in Mathematical Physics 25 Journal of Combinatorial Theory. Series A 24 Applied Categorical Structures 23 Linear Algebra and its Applications 23 Comptes Rendus. Mathématique. Académie des Sciences, Paris 22 Linear and Multilinear Algebra 20 Journal of Mathematical Physics 20 Inventiones Mathematicae 19 International Journal of Algebra and Computation 19 Journal of Noncommutative Geometry 18 Mathematische Zeitschrift 18 Topology and its Applications 17 Glasgow Mathematical Journal 17 Advances in Applied Mathematics 17 Algebras and Representation Theory 16 Selecta Mathematica. New Series 16 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 15 Discrete Mathematics 14 European Journal of Combinatorics 13 Duke Mathematical Journal 13 Georgian Mathematical Journal 12 Cahiers de Topologie et Géométrie Différentielle Catégoriques 12 Algebra Colloquium 12 Journal of Mathematical Sciences (New York) 12 Theory and Applications of Categories 11 Mathematical Notes 11 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 11 Mathematische Annalen 11 Journal of High Energy Physics 11 Frontiers of Mathematics in China 11 Asian-European Journal of Mathematics 10 Compositio Mathematica 10 Journal of Functional Analysis 10 Semigroup Forum 10 Forum Mathematicum 10 Séminaire Lotharingien de Combinatoire 10 Higher Structures 9 Journal für die Reine und Angewandte Mathematik 9 Journal of the American Mathematical Society 9 Documenta Mathematica 9 Journal of Group Theory 9 Algebra and Discrete Mathematics 8 Mathematical Proceedings of the Cambridge Philosophical Society 8 Archiv der Mathematik 8 Journal of Symbolic Computation 8 Differential Geometry and its Applications 8 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 7 Bulletin of the Australian Mathematical Society 7 Israel Journal of Mathematics 7 Ukrainian Mathematical Journal 7 Algebra Universalis 7 Manuscripta Mathematica 7 Geometry & Topology 6 Bulletin de la Société Mathématique de France 6 Memoirs of the American Mathematical Society 6 Proceedings of the Edinburgh Mathematical Society. Series II 6 Annales Mathématiques Blaise Pascal 6 The Electronic Journal of Combinatorics 6 Journal of Lie Theory 6 Acta Mathematica Sinica. English Series 6 Hacettepe Journal of Mathematics and Statistics 6 Journal of $$K$$-Theory 6 Journal de l’École Polytechnique – Mathématiques 5 Indian Journal of Pure & Applied Mathematics 5 Annali di Matematica Pura ed Applicata. Serie Quarta 5 Colloquium Mathematicum 5 Siberian Mathematical Journal 5 Bulletin of the American Mathematical Society. New Series 5 Turkish Journal of Mathematics 5 Bulletin des Sciences Mathématiques 5 Annals of Combinatorics 5 Communications in Contemporary Mathematics 5 International Journal of Geometric Methods in Modern Physics 5 Mediterranean Journal of Mathematics 5 Afrika Matematika 5 Communications in Mathematics 5 Open Mathematics 5 Algebraic Combinatorics 4 Nuclear Physics. B 4 Reports on Mathematical Physics 4 Publications Mathématiques 4 Monatshefte für Mathematik 4 Quaestiones Mathematicae 4 Discrete & Computational Geometry 4 Indagationes Mathematicae. New Series 4 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI ...and 155 more Serials
all top 5
### Cited in 51 Fields
787 Nonassociative rings and algebras (17-XX) 715 Category theory; homological algebra (18-XX) 605 Associative rings and algebras (16-XX) 380 Group theory and generalizations (20-XX) 349 Algebraic topology (55-XX) 248 Combinatorics (05-XX) 246 $$K$$-theory (19-XX) 161 Algebraic geometry (14-XX) 133 Commutative algebra (13-XX) 125 Differential geometry (53-XX) 110 Manifolds and cell complexes (57-XX) 99 Quantum theory (81-XX) 97 Global analysis, analysis on manifolds (58-XX) 87 Functional analysis (46-XX) 62 Order, lattices, ordered algebraic structures (06-XX) 52 General algebraic systems (08-XX) 52 Convex and discrete geometry (52-XX) 36 Number theory (11-XX) 33 Topological groups, Lie groups (22-XX) 30 Computer science (68-XX) 18 Dynamical systems and ergodic theory (37-XX) 16 Operator theory (47-XX) 16 Probability theory and stochastic processes (60-XX) 15 Relativity and gravitational theory (83-XX) 14 Mathematical logic and foundations (03-XX) 14 Linear and multilinear algebra; matrix theory (15-XX) 12 Several complex variables and analytic spaces (32-XX) 10 Mechanics of particles and systems (70-XX) 9 Field theory and polynomials (12-XX) 7 General topology (54-XX) 6 History and biography (01-XX) 6 Partial differential equations (35-XX) 5 General and overarching topics; collections (00-XX) 4 Ordinary differential equations (34-XX) 3 Special functions (33-XX) 3 Integral equations (45-XX) 3 Geometry (51-XX) 3 Systems theory; control (93-XX) 2 Difference and functional equations (39-XX) 2 Statistics (62-XX) 2 Numerical analysis (65-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Sequences, series, summability (40-XX) 1 Approximations and expansions (41-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Abstract harmonic analysis (43-XX) 1 Fluid mechanics (76-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Information and communication theory, circuits (94-XX)
### Wikidata Timeline
The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
|
2022-05-28 07:19:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3315941393375397, "perplexity": 8211.298038733546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00140.warc.gz"}
|
https://www.feynmanlectures.caltech.edu/I_25.html
|
◄ ▲ ► A A A
MATHJAX
https://www.feynmanlectures.caltech.edu/I_01.html
If it does not open, or only shows you this message again, then please let us know:
• which browser you are using (including version #)
• which operating system you are using (including version #)
This type of problem is rare, and there's a good chance it can be fixed if we have some clues about the cause. So, if you can, after enabling javascript, clearing the cache and disabling extensions, please open your browser's javascript console, load the page above, and if this generates any messages (particularly errors or warnings) on the console, then please make a copy (text or screenshot) of those messages and send them with the above-listed information to the email address given below.
By sending us information you will be helping not only yourself, but others who may be having similar problems accessing the online edition of The Feynman Lectures on Physics. Your time and consideration are greatly appreciated.
Best regards,
Mike Gottlieb
mg@feynmanlectures.info
Editor, The Feynman Lectures on Physics New Millennium Edition
The recording of this lecture is missing from the Caltech Archives.
## 25Linear Systems and Review
### 25–1Linear differential equations
In this chapter we shall discuss certain aspects of oscillating systems that are found somewhat more generally than just in the particular systems we have been discussing. For our particular system, the differential equation that we have been solving is $$\label{Eq:I:25:1} m\,\frac{d^2x}{dt^2}+\gamma m\,\ddt{x}{t}+m\omega_0^2x=F(t).$$ Now this particular combination of “operations” on the variable $x$ has the interesting property that if we substitute $(x + y)$ for $x$, then we get the sum of the same operations on $x$ and $y$; or, if we multiply $x$ by $a$, then we get just $a$ times the same combination. This is easy to prove. Just as a “shorthand” notation, because we get tired of writing down all those letters in (25.1), we shall use the symbol $\uL(x)$ instead. When we see this, it means the left-hand side of (25.1), with $x$ substituted in. With this system of writing, $\uL(x + y)$ would mean the following: $$\label{Eq:I:25:2} \uL(x+y)=m\,\frac{d^2(x+y)}{dt^2}+\gamma m\,\ddt{(x+y)}{t} +m\omega_0^2(x+y).$$ \begin{align} \uL(x+y)=m\,\frac{d^2(x+y)}{dt^2}&+\gamma m\,\ddt{(x+y)}{t}\notag\\ &+m\omega_0^2(x+y). \label{Eq:I:25:2} \end{align} (We underline the $\uL$ so as to remind ourselves that it is not an ordinary function.) We sometimes call this an operator notation, but it makes no difference what we call it, it is just “shorthand.”
Our first statement was that $$\label{Eq:I:25:3} \uL(x+y)=\uL(x)+\uL(y),$$ which of course follows from the fact that $a(x + y) = ax + ay$, $d(x + y)/dt = dx/dt + dy/dt$, etc.
Our second statement was, for constant $a$, $$\label{Eq:I:25:4} \uL(ax)=a\uL(x).$$ [Actually, (25.3) and (25.4) are very closely related, because if we put $x + x$ into (25.3), this is the same as setting $a = 2$ in (25.4), and so on.]
In more complicated problems, there may be more derivatives, and more terms in $\uL$; the question of interest is whether the two equations (25.3) and (25.4) are maintained or not. If they are, we call such a problem a linear problem. In this chapter we shall discuss some of the properties that exist because the system is linear, to appreciate the generality of some of the results that we have obtained in our special analysis of a special equation.
Now let us study some of the properties of linear differential equations, having illustrated them already with the specific equation (25.1) that we have studied so closely. The first property of interest is this: suppose that we have to solve the differential equation for a transient, the free oscillation with no driving force. That is, we want to solve $$\label{Eq:I:25:5} \uL(x)=0.$$ Suppose that, by some hook or crook, we have found a particular solution, which we shall call $x_1$. That is, we have an $x_1$ for which $\uL(x_1) = 0$. Now we notice that $ax_1$ is also a solution to the same equation; we can multiply this special solution by any constant whatever, and get a new solution. In other words, if we had a motion of a certain “size,” then a motion twice as “big” is again a solution. Proof: $\uL(ax_1) =$ $a\uL(x_1) =$ $a\cdot0 = 0$.
Next, suppose that, by hook or by crook, we have not only found one solution $x_1$, but also another solution, $x_2$. (Remember that when we substituted $x = e^{i\alpha t}$ for finding the transients, we found two values for $\alpha$, that is, two solutions, $x_1$ and $x_2$.) Now let us show that the combination $(x_1 + x_2)$ is also a solution. In other words, if we put $x = x_1 + x_2$, $x$ is again a solution of the equation. Why? Because, if $\uL(x_1) = 0$ and $\uL(x_2) = 0$, then $\uL(x_1 + x_2) = \uL(x_1) + \uL(x_2) = 0 + 0 = 0$. So if we have found a number of solutions for the motion of a linear system we can add them together.
Combining these two ideas, we see, of course, that we can also add six of one and two of the other: if $x_1$ is a solution, so is $\alpha x_1$. Therefore any sum of these two solutions, such as $(\alpha x_1 + \beta x_2)$, is also a solution. If we happen to be able to find three solutions, then we find that any combination of the three solutions is again a solution, and so on. It turns out that the number of what we call independent solutions1 that we have obtained for our oscillator problem is only two. The number of independent solutions that one finds in the general case depends upon what is called the number of degrees of freedom. We shall not discuss this in detail now, but if we have a second-order differential equation, there are only two independent solutions, and we have found both of them; so we have the most general solution.
Now let us go on to another proposition, which applies to the situation in which the system is subjected to an outside force. Suppose the equation is $$\label{Eq:I:25:6} \uL(x)=F(t),$$ and suppose that we have found a special solution of it. Let us say that Joe’s solution is $x_J$, and that $\uL(x_J) = F(t)$. Suppose we want to find yet another solution; suppose we add to Joe’s solution one of those that was a solution of the free equation (25.5), say $x_1$. Then we see by (25.3) that $$\label{Eq:I:25:7} \uL(x_J+x_1)=\uL(x_J)+\uL(x_1)=F(t)+0=F(t).$$ \begin{aligned} \uL(x_J+x_1)&=\uL(x_J)+\uL(x_1)\\[1ex] &=F(t)+0\\[.75ex] &=F(t). \end{aligned} \label{Eq:I:25:7} Therefore, to the “forced” solution we can add any “free” solution, and we still have a solution. The free solution is called a transient solution.
When we have no force acting, and suddenly turn one on, we do not immediately get the steady solution that we solved for with the sine wave solution, but for a while there is a transient which sooner or later dies out, if we wait long enough. The “forced” solution does not die out, since it keeps on being driven by the force. Ultimately, for long periods of time, the solution is unique, but initially the motions are different for different circumstances, depending on how the system was started.
### 25–2Superposition of solutions
Now we come to another interesting proposition. Suppose that we have a certain particular driving force $F_a$ (let us say an oscillatory one with a certain $\omega = \omega_a$, but our conclusions will be true for any functional form of $F_a$) and we have solved for the forced motion (with or without the transients; it makes no difference). Now suppose some other force is acting, let us say $F_b$, and we solve the same problem, but for this different force. Then suppose someone comes along and says, “I have a new problem for you to solve; I have the force $F_a + F_b$.” Can we do it? Of course we can do it, because the solution is the sum of the two solutions $x_a$ and $x_b$ for the forces taken separately—a most remarkable circumstance indeed. If we use (25.3), we see that $$\label{Eq:I:25:8} \uL(x_a+x_b)=\uL(x_a)+\uL(x_b)=F_a(t)+F_b(t).$$ \begin{aligned} \uL(x_a+x_b)&=\uL(x_a)+\uL(x_b)\\[1ex] &=F_a(t)+F_b(t). \end{aligned} \label{Eq:I:25:8}
This is an example of what is called the principle of superposition for linear systems, and it is very important. It means the following: if we have a complicated force which can be broken up in any convenient manner into a sum of separate pieces, each of which is in some way simple, in the sense that for each special piece into which we have divided the force we can solve the equation, then the answer is available for the whole force, because we may simply add the pieces of the solution back together, in the same manner as the total force is compounded out of pieces (Fig. 25–1).
Let us give another example of the principle of superposition. In Chapter 12 we said that it was one of the great facts of the laws of electricity that if we have a certain distribution of charges $q_a$ and calculate the electric field $\FLPE_a$ arising from these charges at a certain place $P$, and if, on the other hand, we have another set of charges $q_b$ and we calculate the field $\FLPE_b$ due to these at the corresponding place, then if both charge distributions are present at the same time, the field $\FLPE$ at $P$ is the sum of $\FLPE_a$ due to one set plus $\FLPE_b$ due to the other. In other words, if we know the field due to a certain charge, then the field due to many charges is merely the vector sum of the fields of these charges taken individually. This is exactly analogous to the above proposition that if we know the result of two given forces taken at one time, then if the force is considered as a sum of them, the response is a sum of the corresponding individual responses.
The reason why this is true in electricity is that the great laws of electricity, Maxwell’s equations, which determine the electric field, turn out to be differential equations which are linear, i.e., which have the property (25.3). What corresponds to the force is the charge generating the electric field, and the equation which determines the electric field in terms of the charge is linear.
As another interesting example of this proposition, let us ask how it is possible to “tune in” to a particular radio station at the same time as all the radio stations are broadcasting. The radio station transmits, fundamentally, an oscillating electric field of very high frequency which acts on our radio antenna. It is true that the amplitude of the oscillation of the field is changed, modulated, to carry the signal of the voice, but that is very slow, and we are not going to worry about it. When one hears “This station is broadcasting at a frequency of $780$ kilocycles,” this indicates that $780{,}000$ oscillations per second is the frequency of the electric field of the station antenna, and this drives the electrons up and down at that frequency in our antenna. Now at the same time we may have another radio station in the same town radiating at a different frequency, say $550$ kilocycles per second; then the electrons in our antenna are also being driven by that frequency. Now the question is, how is it that we can separate the signals coming into the one radio at $780$ kilocycles from those coming in at $550$ kilocycles? We certainly do not hear both stations at the same time.
By the principle of superposition, the response of the electric circuit in the radio, the first part of which is a linear circuit, to the forces that are acting due to the electric field $F_a + F_b$, is $x_a + x_b$. It therefore looks as though we will never disentangle them. In fact, the very proposition of superposition seems to insist that we cannot avoid having both of them in our system. But remember, for a resonant circuit, the response curve, the amount of $x$ per unit $F$, as a function of the frequency, looks like Fig. 25–3. If it were a very high $Q$ circuit, the response would show a very sharp maximum. Now suppose that the two stations are comparable in strength, that is, the two forces are of the same magnitude. The response that we get is the sum of $x_a$ and $x_b$. But, in Fig. 25–3, $x_a$ is tremendous, while $x_b$ is small. So, in spite of the fact that the two signals are equal in strength, when they go through the sharp resonant circuit of the radio tuned for $\omega_a$, the frequency of the transmission of one station, then the response to this station is much greater than to the other. Therefore the complete response, with both signals acting, is almost all made up of $\omega_a$, and we have selected the station we want.
Now what about the tuning? How do we tune it? We change $\omega_0$ by changing the $L$ or the $C$ of the circuit, because the frequency of the circuit has to do with the combination of $L$ and $C$. In particular, most radios are built so that one can change the capacitance. When we retune the radio, we can make a new setting of the dial, so that the natural frequency of the circuit is shifted, say, to $\omega_c$. In those circumstances we hear neither one station nor the other; we get silence, provided there is no other station at frequency $\omega_c$. If we keep on changing the capacitance until the resonance curve is at $\omega_b$, then of course we hear the other station. That is how radio tuning works; it is again the principle of superposition, combined with a resonant response.2
To conclude this discussion, let us describe qualitatively what happens if we proceed further in analyzing a linear problem with a given force, when the force is quite complicated. Out of the many possible procedures, there are two especially useful general ways that we can solve the problem. One is this: suppose that we can solve it for special known forces, such as sine waves of different frequencies. We know it is child’s play to solve it for sine waves. So we have the so-called “child’s play” cases. Now the question is whether our very complicated force can be represented as the sum of two or more “child’s play” forces. In Fig. 25–1 we already had a fairly complicated curve, and of course we can make it more complicated still if we add in more sine waves. So it is certainly possible to obtain very complicated curves. And, in fact, the reverse is also true: practically every curve can be obtained by adding together infinite numbers of sine waves of different wavelengths (or frequencies) for each one of which we know the answer. We just have to know how much of each sine wave to put in to make the given $F$, and then our answer, $x$, is the corresponding sum of the $F$ sine waves, each multiplied by its effective ratio of $x$ to $F$. This method of solution is called the method of Fourier transforms or Fourier analysis. We are not going to actually carry out such an analysis just now; we only wish to describe the idea involved.
Another way in which our complicated problem can be solved is the following very interesting one. Suppose that, by some tremendous mental effort, it were possible to solve our problem for a special force, namely an impulse. The force is quickly turned on and then off; it is all over. Actually we need only solve for an impulse of some unit strength, any other strength can be gotten by multiplication by an appropriate factor. We know that the response $x$ for an impulse is a damped oscillation. Now what can we say about some other force, for instance a force like that of Fig. 25–4?
Such a force can be likened to a succession of blows with a hammer. First there is no force, and all of a sudden there is a steady force—impulse, impulse, impulse, impulse, … and then it stops. In other words, we imagine the continuous force to be a series of impulses, very close together. Now, we know the result for an impulse, so the result for a whole series of impulses will be a whole series of damped oscillations: it will be the curve for the first impulse, and then (slightly later) we add to that the curve for the second impulse, and the curve for the third impulse, and so on. Thus we can represent, mathematically, the complete solution for arbitrary functions if we know the answer for an impulse. We get the answer for any other force simply by integrating. This method is called the Green’s function method. A Green’s function is a response to an impulse, and the method of analyzing any force by putting together the response of impulses is called the Green’s function method.
The physical principles involved in both of these schemes are so simple, involving just the linear equation, that they can be readily understood, but the mathematical problems that are involved, the complicated integrations and so on, are a little too advanced for us to attack right now. You will most likely return to this some day when you have had more practice in mathematics. But the idea is very simple indeed.
Finally, we make some remarks on why linear systems are so important. The answer is simple: because we can solve them! So most of the time we solve linear problems. Second (and most important), it turns out that the fundamental laws of physics are often linear. The Maxwell equations for the laws of electricity are linear, for example. The great laws of quantum mechanics turn out, so far as we know, to be linear equations. That is why we spend so much time on linear equations: because if we understand linear equations, we are ready, in principle, to understand a lot of things.
We mention another situation where linear equations are found. When displacements are small, many functions can be approximated linearly. For example, if we have a simple pendulum, the correct equation for its motion is $$\label{Eq:I:25:9} d^2\theta/dt^2=-(g/L)\sin\theta.$$ This equation can be solved by elliptic functions, but the easiest way to solve it is numerically, as was shown in Chapter 9 on Newton’s Laws of Motion. A nonlinear equation cannot be solved, ordinarily, any other way but numerically. Now for small $\theta$, $\sin\theta$ is practically equal to $\theta$, and we have a linear equation. It turns out that there are many circumstances where small effects are linear: for the example here the swing of a pendulum through small arcs. As another example, if we pull a little bit on a spring, the force is proportional to the extension. If we pull hard, we break the spring, and the force is a completely different function of the distance! Linear equations are important. In fact they are so important that perhaps fifty percent of the time we are solving linear equations in physics and in engineering.
### 25–3Oscillations in linear systems
Let us now review the things we have been talking about in the past few chapters. It is very easy for the physics of oscillators to become obscured by the mathematics. The physics is actually very simple, and if we may forget the mathematics for a moment we shall see that we can understand almost everything that happens in an oscillating system. First, if we have only the spring and the weight, it is easy to understand why the system oscillates—it is a consequence of inertia. We pull the mass down and the force pulls it back up; as it passes zero, which is the place it likes to be, it cannot just suddenly stop; because of its momentum it keeps on going and swings to the other side, and back and forth. So, if there were no friction, we would surely expect an oscillatory motion, and indeed we get one. But if there is even a little bit of friction, then on the return cycle, the swing will not be quite as high as it was the first time.
Now what happens, cycle by cycle? That depends on the kind and amount of friction. Suppose that we could concoct a kind of friction force that always remains in the same proportion to the other forces, of inertia and in the spring, as the amplitude of oscillation varies. In other words, for smaller oscillations the friction should be weaker than for big oscillations. Ordinary friction does not have this property, so a special kind of friction must be carefully invented for the very purpose of creating a friction that is directly proportional to the velocity—so that for big oscillations it is stronger and for small oscillations it is weaker. If we happen to have that kind of friction, then at the end of each successive cycle the system is in the same condition as it was at the start, except a little bit smaller. All the forces are smaller in the same proportion: the spring force is reduced, the inertial effects are lower because the accelerations are now weaker, and the friction is less too, by our careful design. When we actually have that kind of friction, we find that each oscillation is exactly the same as the first one, except reduced in amplitude. If the first cycle dropped the amplitude, say, to $90$ percent of what it was at the start, the next will drop it to $90$ percent of $90$ percent, and so on: the sizes of the oscillations are reduced by the same fraction of themselves in every cycle. An exponential function is a curve which does just that. It changes by the same factor in each equal interval of time. That is to say, if the amplitude of one cycle, relative to the preceding one, is called $a$, then the amplitude of the next is $a^2$, and of the next, $a^3$. So the amplitude is some constant raised to a power equal to the number of cycles traversed: $$\label{Eq:I:25:10} A=A_0a^n.$$ But of course $n\propto t$, so it is perfectly clear that the general solution will be some kind of an oscillation, sine or cosine $\omega t$, times an amplitude which goes as $b^t$ more or less. But $b$ can be written as $e^{-c}$, if $b$ is positive and less than $1$. So this is why the solution looks like $e^{-ct}\cos\omega_0 t$. It is very simple.
What happens if the friction is not so artificial; for example, ordinary rubbing on a table, so that the friction force is a certain constant amount, and is independent of the size of the oscillation that reverses its direction each half-cycle? Then the equation is no longer linear, it becomes hard to solve, and must be solved by the numerical method given in Chapter 9, or by considering each half-cycle separately. The numerical method is the most powerful method of all, and can solve any equation. It is only when we have a simple problem that we can use mathematical analysis.
Mathematical analysis is not the grand thing it is said to be; it solves only the simplest possible equations. As soon as the equations get a little more complicated, just a shade—they cannot be solved analytically. But the numerical method, which was advertised at the beginning of the course, can take care of any equation of physical interest.
Next, what about the resonance curve? Why is there a resonance? First, imagine for a moment that there is no friction, and we have something which could oscillate by itself. If we tapped the pendulum just right each time it went by, of course we could make it go like mad. But if we close our eyes and do not watch it, and tap at arbitrary equal intervals, what is going to happen? Sometimes we will find ourselves tapping when it is going the wrong way. When we happen to have the timing just right, of course, each tap is given at just the right time, and so it goes higher and higher and higher. So without friction we get a curve which looks like the solid curve in Fig. 25–5 for different frequencies. Qualitatively, we understand the resonance curve; in order to get the exact shape of the curve it is probably just as well to do the mathematics. The curve goes toward infinity as $\omega\to\omega_0$, where $\omega_0$ is the natural frequency of the oscillator.
Now suppose there is a little bit of friction; then when the displacement of the oscillator is small, the friction does not affect it much; the resonance curve is the same, except when we are near resonance. Instead of becoming infinite near resonance, the curve is only going to get so high that the work done by our tapping each time is enough to compensate for the loss of energy by friction during the cycle. So the top of the curve is rounded off—it does not go to infinity. If there is more friction, the top of the curve is rounded off still more. Now someone might say, “I thought the widths of the curves depended on the friction.” That is because the curve is usually plotted so that the top of the curve is called one unit. However, the mathematical expression is even simpler to understand if we just plot all the curves on the same scale; then all that happens is that the friction cuts down the top! If there is less friction, we can go farther up into that little pinnacle before the friction cuts it off, so it looks relatively narrow. That is, the higher the peak of the curve, the narrower the width at half the maximum height.
Finally, we take the case where there is an enormous amount of friction. It turns out that if there is too much friction, the system does not oscillate at all. The energy in the spring is barely able to move it against the frictional force, and so it slowly oozes down to the equilibrium point.
### 25–4Analogs in physics
The next aspect of this review is to note that masses and springs are not the only linear systems; there are others. In particular, there are electrical systems called linear circuits, in which we find a complete analog to mechanical systems. We did not learn exactly why each of the objects in an electrical circuit works in the way it does—that is not to be understood at the present moment; we may assert it as an experimentally verifiable fact that they behave as stated.
For example, let us take the simplest possible circumstance. We have a piece of wire, which is just a resistance, and we have applied to it a difference in potential, $V$. Now the $V$ means this: if we carry a charge $q$ through the wire from one terminal to another terminal, the work done is $qV$. The higher the voltage difference, the more work was done when the charge, as we say, “falls” from the high potential end of the terminal to the low potential end. So charges release energy in going from one end to the other. Now the charges do not simply fly from one end straight to the other end; the atoms in the wire offer some resistance to the current, and this resistance obeys the following law for almost all ordinary substances: if there is a current $I$, that is, so and so many charges per second tumbling down, the number per second that comes tumbling through the wire is proportional to how hard we push them—in other words, proportional to how much voltage there is: $$\label{Eq:I:25:11} V=IR=R(dq/dt).$$ The coefficient $R$ is called the resistance, and the equation is called Ohm’s Law. The unit of resistance is the ohm; it is equal to one volt per ampere. In mechanical situations, to get such a frictional force in proportion to the velocity is difficult; in an electrical system it is very easy, and this law is extremely accurate for most metals.
We are often interested in how much work is done per second, the power loss, or the energy liberated by the charges as they tumble down the wire. When we carry a charge $q$ through a voltage $V$, the work is $qV$, so the work done per second would be $V(dq/dt)$, which is the same as $VI$, or also $IR\cdot I= I^2 R$. This is called the heating loss—this is how much heat is generated in the resistance per second, by the conservation of energy. It is this heat that makes an ordinary incandescent light bulb work.
Of course, there are other interesting properties of mechanical systems, such as the mass (inertia), and it turns out that there is an electrical analog to inertia also. It is possible to make something called an inductor, having a property called inductance, such that a current, once started through the inductance, does not want to stop. It requires a voltage in order to change the current! If the current is constant, there is no voltage across an inductance. dc circuits do not know anything about inductance; it is only when we change the current that the effects of inductance show up. The equation is $$\label{Eq:I:25:12} V=L(dI/dt)=L(d^2q/dt^2),$$ and the unit of inductance, called the henry, is such that one volt applied to an inductance of one henry produces a change of one ampere per second in the current. Equation (25.12) is the analog of Newton’s law for electricity, if we wish: $V$ corresponds to $F$, $L$ corresponds to $m$, and $I$ corresponds to velocity! All of the consequent equations for the two kinds of systems will have the same derivations because, in all the equations, we can change any letter to its corresponding analog letter and we get the same equation; everything we deduce will have a correspondence in the two systems.
Now what electrical thing corresponds to the mechanical spring, in which there was a force proportional to the stretch? If we start with $F= kx$ and replace $F\to V$ and $x\to q$, we get $V = \alpha q$. It turns out that there is such a thing, in fact it is the only one of the three circuit elements we can really understand, because we did study a pair of parallel plates, and we found that if there were a charge of certain equal, opposite amounts on each plate, the electric field between them would be proportional to the size of the charge. So the work done in moving a unit charge across the gap from one plate to the other is precisely proportional to the charge. This work is the definition of the voltage difference, and it is the line integral of the electric field from one plate to another. It turns out, for historical reasons, that the constant of proportionality is not called $C$, but $1/C$. It could have been called $C$, but it was not. So we have $$\label{Eq:I:25:13} V=q/C.$$ The unit of capacitance, $C$, is the farad; a charge of one coulomb on each plate of a one-farad capacitor yields a voltage difference of one volt.
There are our analogies, and the equation corresponding to the oscillating circuit becomes the following, by direct substitution of $L$ for $m$, $q$ for $x$, etc: \begin{alignat}{2} \label{Eq:I:25:14} m(d^2x/dt^2)&\,+\,\gamma m(dx/dt)+kx&&=F,\\[1.5ex] \label{Eq:I:25:15} L(d^2q/dt^2)&\,+\,R(dq/dt)+q/C&&=V. \end{alignat} Now everything we learned about (25.14) can be transformed to apply to (25.15). Every consequence is the same; so much the same that there is a brilliant thing we can do.
Suppose we have a mechanical system which is quite complicated, not just one mass on a spring, but several masses on several springs, all hooked together. What do we do? Solve it? Perhaps; but look, we can make an electrical circuit which will have the same equations as the thing we are trying to analyze! For instance, if we wanted to analyze a mass on a spring, why can we not build an electrical circuit in which we use an inductance proportional to the mass, a resistance proportional to the corresponding $m\gamma$, $1/C$ proportional to $k$, all in the same ratio? Then, of course, this electrical circuit will be the exact analog of our mechanical one, in the sense that whatever $q$ does, in response to $V$ ($V$ also is made to correspond to the forces that are acting), so the $x$ would do in response to the force! So if we have a complicated thing with a whole lot of interconnecting elements, we can interconnect a whole lot of resistances, inductances, and capacitances, to imitate the mechanically complicated system. What is the advantage to that? One problem is just as hard (or as easy) as the other, because they are exactly equivalent. The advantage is not that it is any easier to solve the mathematical equations after we discover that we have an electrical circuit (although that is the method used by electrical engineers!), but instead, the real reason for looking at the analog is that it is easier to make the electrical circuit, and to change something in the system.
Suppose we have designed an automobile, and want to know how much it is going to shake when it goes over a certain kind of bumpy road. We build an electrical circuit with inductances to represent the inertia of the wheels, spring constants as capacitances to represent the springs of the wheels, and resistors to represent the shock absorbers, and so on for the other parts of the automobile. Then we need a bumpy road. All right, we apply a voltage from a generator, which represents such and such a kind of bump, and then look at how the left wheel jiggles by measuring the charge on some capacitor. Having measured it (it is easy to do), we find that it is bumping too much. Do we need more shock absorber, or less shock absorber? With a complicated thing like an automobile, do we actually change the shock absorber, and solve it all over again? No!, we simply turn a dial; dial number ten is shock absorber number three, so we put in more shock absorber. The bumps are worse—all right, we try less. The bumps are still worse; we change the stiffness of the spring (dial $17$), and we adjust all these things electrically, with merely the turn of a knob.
This is called an analog computer. It is a device which imitates the problem that we want to solve by making another problem, which has the same equation, but in another circumstance of nature, and which is easier to build, to measure, to adjust, and to destroy!
### 25–5Series and parallel impedances
Finally, there is an important item which is not quite in the nature of review. This has to do with an electrical circuit in which there is more than one circuit element. For example, when we have an inductor, a resistor, and a capacitor connected as in Fig. 24–2, we note that all the charge went through every one of the three, so that the current in such a singly connected thing is the same at all points along the wire. Since the current is the same in each one, the voltage across $R$ is $IR$, the voltage across $L$ is $L(dI/dt)$, and so on. So, the total voltage drop is the sum of these, and this leads to Eq. (25.15). Using complex numbers, we found that we could solve the equation for the steady-state motion in response to a sinusoidal force. We thus found that $\hat{V}= \hat{Z}\hat{I}$. Now $\hat{Z}$ is called the impedance of this particular circuit. It tells us that if we apply a sinusoidal voltage, $\hat{V}$, we get a current $\hat{I}$.
Fig. 25–6.Two impedances, connected in series and in parallel.
Now suppose we have a more complicated circuit which has two pieces, which by themselves have certain impedances, $\hat{Z}_1$ and $\hat{Z}_2$ and we put them in series (Fig. 25–6a) and apply a voltage. What happens? It is now a little more complicated, but if $\hat{I}$ is the current through $\hat{Z}_1$, the voltage difference across $\hat{Z}_1$, is $\hat{V}_1=\hat{I}\hat{Z}_1$; similarly, the voltage across $\hat{Z}_2$ is $\hat{V}_2=\hat{I}\hat{Z}_2$. The same current goes through both. Therefore the total voltage is the sum of the voltages across the two sections and is equal to $\hat{V}= \hat{V}_1 + \hat{V}_2 =(\hat{Z}_1 + \hat{Z}_2)\hat{I}$. This means that the voltage on the complete circuit can be written $\hat{V}=\hat{I}\hat{Z}_s$, where the $\hat{Z}_s$ of the combined system in series is the sum of the two $\hat{Z}$’s of the separate pieces: $$\label{Eq:I:25:16} \hat{Z}_s=\hat{Z}_1 + \hat{Z}_2.$$
This is not the only way things may be connected. We may also connect them in another way, called a parallel connection (Fig. 25–6b). Now we see that a given voltage across the terminals, if the connecting wires are perfect conductors, is effectively applied to both of the impedances, and will cause currents in each independently. Therefore the current through $\hat{Z}_1$ is equal to $\hat{I}_1 = \hat{V}/\hat{Z}_1$. The current in $\hat{Z}_2$ is $\hat{I}_2 = \hat{V}/\hat{Z}_2$. It is the same voltage. Now the total current which is supplied to the terminals is the sum of the currents in the two sections: $\hat{I}= \hat{V}/\hat{Z}_1 + \hat{V}/\hat{Z}_2$. This can be written as $$\hat{V}=\frac{\hat{I}}{(1/\hat{Z}_1)+(1/\hat{Z}_2)}= \hat{I}\hat{Z}_p.\notag$$ Thus $$\label{Eq:I:25:17} 1/\hat{Z}_p=1/\hat{Z}_1 + 1/\hat{Z}_2.$$
More complicated circuits can sometimes be simplified by taking pieces of them, working out the succession of impedances of the pieces, and combining the circuit together step by step, using the above rules. If we have any kind of circuit with many impedances connected in all kinds of ways, and if we include the voltages in the form of little generators having no impedance (when we pass charge through it, the generator adds a voltage $V$), then the following principles apply: (1) At any junction, the sum of the currents into a junction is zero. That is, all the current which comes in must come back out. (2) If we carry a charge around any loop, and back to where it started, the net work done is zero. These rules are called Kirchhoff’s laws for electrical circuits. Their systematic application to complicated circuits often simplifies the analysis of such circuits. We mention them here in conjunction with Eqs. (25.16) and (25.17), in case you have already come across such circuits that you need to analyze in laboratory work. They will be discussed again in more detail next year.
1. Solutions which cannot be expressed as linear combinations of each other are called independent.
2. In modern superheterodyne receivers the actual operation is more complex. The amplifiers are all tuned to a fixed frequency (called IF frequency) and an oscillator of variable tunable frequency is combined with the input signal in a nonlinear circuit to produce a new frequency (the difference of signal and oscillator frequency) equal to the IF frequency, which is then amplified. This will be discussed in Chapter 50.
|
2023-04-02 06:12:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 18, "x-ck12": 0, "texerror": 0, "math_score": 0.8268755674362183, "perplexity": 251.81644984348355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00451.warc.gz"}
|
https://quantum-blackbird.readthedocs.io/en/latest/blackbird_cpp_api/classblackbird_1_1Gaussian.html
|
# Class Gaussian¶
## Class Documentation¶
class blackbird::Gaussian : public blackbird::Operation
Represents a Gaussian state For more details, see the Strawberry Fields convention page.
This operation intializes a Gaussian state as a covaraince matrix and vector of means, ready to be decomposed into initial thermal state, displacements, squeezers, beamsplitters, and rotation gates.
Covariance matrix $$V$$ and vector of means $$r$$ accessible via:
• $$V$$: Gaussian->S1
• $$r$$: Gaussian->S2
Public Functions
inline Gaussian(floatmat cov, floatmat means, intvec m)
Constructor to initialize a Gaussian state on modes m
Parameters
• covvector<vector<double>> representing the covariance matrix
• means – single row vector<vector<double>> array representing the vector of means
• mvector<int> containing the list of modes the gate acts on
Throws
• invalid_argument – covariance matrix should be square
• invalid_argument – covariance matrix must have size double the number of modes
• invalid_argument – means vector should only have 1 row
• invalid_argument – means vector must have size double the number of modes
inline Gaussian(floatmat cov, intvec m)
Constructor to initialize a Gaussian state with zero displacement on modes m
Parameters
• covvector<vector<double>> representing the covariance matrix
• mvector<int> containing the list of modes the gate acts on
Throws
• invalid_argument – covariance matrix should be square
• invalid_argument – covariance matrix must have size double the number of modes
|
2022-09-26 19:33:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4643678069114685, "perplexity": 8948.177961164898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00615.warc.gz"}
|
https://gitlab.com/netcrave/Resume/-/blame/44e40140ab1ccdd47d8b56a8a78fc532d5b3386d/cv/experience.tex
|
experience.tex 2.92 KB
Paige Thompson committed Aug 13, 2018 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 \cvsection{Experience} \begin{cventries} \cventry {Systems Engineer Lvl. 4} {Amazon Inc. - Simple Storage Services} {Seattle, WA} {May 2015 - Sep 2016} { \begin{cvitems} \item {Assisted in the build-out and deployment of new load balancing capacity for S3} \end{cvitems} } \cventry {Software Engineer} {ATG Stores Inc.} {Kirkland, WA} {Jan 2014 - May 2014} { \begin{cvitems} \item {Developed applications in .NET for the internal tools team} \item {Assisted in porting several legacy applications using Microsoft GP from VB to C\#} \item {Worked with the website team on backlog items} \end{cvitems} } \cventry {Systems Architect / Software Engineer} {ConnectXYZ LLC.} {Woodinville, WA} {Mar 2012 - Jun 2013} { \begin{cvitems} \item {Built deployment automation with Opscode Chef, PHP 5, HAProxy, nginx, Percona XtraDB cluster, Couchbase in Rackspace} \item {Assisted in the development of a scalable, cloud-based inventory management system written in PHP and JavaScript} \end{cvitems} } \cventry {Software Engineer} {Acronym Media Inc.} {telecommute} {Jan 2011 - Mar 2011} { \begin{cvitems} \item {Assisted in the development of an analytics platform using PHP Symfony, Highcharts.js, MySQL/Propel} \end{cvitems} } \cventry {Software Engineer} {The Branning Group} {telecommute} {Jan 2010 - May 2010} { \begin{cvitems} \item {Site maintenance and development for several clients in PHP / MySQL / Javascript} \end{cvitems} } \cventry {Software Engineer} {Onvia Inc} {Seattle, WA} {Nov 2008 - Mar 2009} { \begin{cvitems} Paige Thompson committed Aug 13, 2018 70 \item {Worked with C\#, LINQ, MSSQL, LINQ-to-SQL, WCF, and WPF to develop a data migration platform used transitionally to move data from an older SQL 2000 database to a newer production environment} Paige Thompson committed Aug 13, 2018 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 \end{cvitems} } \cventry {Systems Administrator} {Zion Preparatory Academy} {Seattle, WA} {Dec 2007 - Mar 2010} { \begin{cvitems} \item {Managed Active Directory, Exchange Server 2007, accounting software, networking, and security for a small 100 user K-6 school} \item {Migrated and managed mail services on the Google Apps platform for many accounts} \end{cvitems} } \cventry {Software Engineer / Systems Administrator} {Seattle Software Systems} {Seattle, WA} {Oct 2005 - Mar 2007} { \begin{cvitems} \item {Developed an inventory management application on PalmOS 4.x in Metrowerks CodeWarrior for a Symbol SPT 1846 PalmOS-based PDA in C/C++} \item {Worked with Java, PHP, HTML, CSS, and JavaScript for several of Seattle Software System’s clients} \end{cvitems} } \end{cventries}
|
2020-10-22 10:26:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172807931900024, "perplexity": 14291.194626528259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879362.3/warc/CC-MAIN-20201022082653-20201022112653-00499.warc.gz"}
|
https://www.physicsforums.com/threads/harmonic-osclillator-purturbation-matrix-elements.778354/
|
# Harmonic Osclillator Purturbation Matrix Elements
1. Oct 26, 2014
### teroenza
1. The problem statement, all variables and given/known data
I am trying to follow Sakurai's use of perturbation theory on a harmonic oscillator,
2. Relevant equations
Perturbation:
$$v=\epsilon x^2$$ , $$\epsilon << 1$$
Matrix elements:
$$V_{km}=<k|v|m>$$
3. The attempt at a solution
The book says that all other matrix elements besides $V_{00}, V_{20}$, and of the form $V_{k0}=<k|v|0>$ vanish. I don't understand why. I see that the perturbation and the ground state have even parity, and that the SHO eigenstates alternate between even and odd parity with quantum number n. That should kill off the odd n states, but why should the even ones vanish too for k above 2?
2. Oct 26, 2014
### Staff: Mentor
They might be negligible.
I don't understand your perturbation - the harmonic oscillator has x^2, and your perturbation has the same shape?
3. Oct 26, 2014
### teroenza
Yes, sorry if that was not clear. The unperturbed potential is V_{0}=1/2 m \omega x^2, and the perturbation is the same thing multiplied by \epsilon.
4. Oct 26, 2014
### Staff: Mentor
Okay. Then it's easy to predict how the energy levels will change. I would expect expressions like Vk2 and so on to be non-zero as well, but they could be too small to be relevant (and I did not calculate it).
5. Oct 26, 2014
### teroenza
Ok. Sakurai says they "vanish", which might mean that they are negligible. The motivation behind this is my trying to solve for the second order energy correction to a perturbation proportional to x^4. I thought I could make most of the terms in the sum go away if I was able to follow whatever procedure Sakurai did in the book for the x^2 perturbation.
6. Oct 27, 2014
### Orodruin
Staff Emeritus
Of the $V_{k0}$ elements, the only non-zero ones are for k=0 and k=2. Hint: Rewrite x in terms of creation and annihilation operators.
7. Oct 27, 2014
### teroenza
Got it. Thank you.For any state higher than 2, they all go to zero because the creation operator to the second power can only get you to the |2> state.
8. Oct 27, 2014
### Orodruin
Staff Emeritus
Spot on. In general, any state $|n\rangle$ will have a non-zero matrix element with itself, $|n+2\rangle$, and $|n-2\rangle$.
|
2017-08-17 19:57:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5540319681167603, "perplexity": 867.2317701544471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00031.warc.gz"}
|
https://www.biostars.org/p/9472253/
|
Coverage data for TCGA BAM files
1
0
Entering edit mode
16 months ago
enho ▴ 40
Hi everyone,
I am currently performing an analysis on pancan TCGA wxs data (>10000 normal-tumor pairs) . For my analysis I need to have the total coverage of each BAM file, so I can perform a depth normalization for on tumor vs matched normal sample.
My initial idea was to find the .bam.bai files and use them to find the total number of reads, but I couldn't find those files either.
exome number tcga copy dna whole coverage • 874 views
1
Entering edit mode
If you already granted access to TCGA raw files, create a manifest for those samples, then remove bams and keep .bai files in the manifest file, then this:
gdc-client download -m gdc_manifest_XXXXX.txt -t gdc-user-token.XXXXX.txt
will download .bai files. Not sure what info you might get on read number from a bai file though.
0
Entering edit mode
Thanks for your respond Hamid. I am going to make a dummy bam file and then use samtools idxstats to retrieve the number of reads from index file. The pitfall of this method is that you can't separate reads based on their flag/MAPQ, so for example you might count some of the reads that are non-uniquely mapped twice.
1
Entering edit mode
16 months ago
enho ▴ 40
For anyone reading this later, I figured out how to do it: (Scripts are in R) (package "GenomicDataCommons" in bioconductor is used)
1. First get a Manifest for BAM files. Initially you can't get a manifest of .BAI files, you can only get BAM files manifest
manifest = GenomicDataCommons::files() %>%
GenomicDataCommons::filter(~ cases.project.project_id == "TCGA-KICH" &
experimental_strategy == "WXS" &
data_format == "BAM") %>% GenomicDataCommons::manifest()
1. Using UUID of BAM files, run this query (according to here)
manifest.bai = lapply(manifest$id, function(uuid) { con = curl::curl(paste0("https://api.gdc.cancer.gov/files/", uuid, "?pretty=true&expand=index_files")) tbl = jsonlite::fromJSON(con) bai = data.table(id = tbl$data$index_files$file_id,
filename = tbl$data$index_files$file_name, md5 = tbl$data$index_files$md5sum,
size = tbl$data$index_files$file_size, state = tbl$data$index_files$state)
return(bai)
})
2. Make a dummy BAM file (or any BAM file for this matter)
3. Use samtools idxstats DUMMY.BAM to find the coverage info from each individual bam.bai file
Note: if you are running a script to count them one by one, at each step you should change the name of dummy.bam to the name of bam.bai file, so idxstats can read it!
4. Sum up the results from third column (mapped reads) for whichever sequence name you like (usually chr1:chrY)
|
2022-10-04 10:41:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1796235293149948, "perplexity": 8538.028749175803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00653.warc.gz"}
|
http://blog.csdn.net/dancetime/article/details/12579
|
# 取得位图的尺寸
### Getting the dimensions of a bitmap
For CBitmap objects we can use the GetBitmap() function to determine the height and width of the bitmap.
BITMAP bm;
bitmap.GetBitmap( &bm );
bmWidth = bm.bmWidth;
bmHeight = bm.bmHeight;
If you have a HBITMAP, you can attach it to a CBitmap object and use the method shown above or you can use
BITMAP bm;
::GetObject( hBmp, ( bm ), &bm );
bmWidth = bm.bmWidth;
bmHeight = bm.bmHeight;
For images in a BMP file, you can use something like
CFile file;
;
;
(bmfHeader.bfType != ((WORD) ('M' << 8) | 'B')) ;
;
|
2018-01-17 07:20:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.576052725315094, "perplexity": 11956.698753783367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00790.warc.gz"}
|
https://quiteaquote.in/2021/12/07/noam-chomsky-limits-of-debate/
|
# Quite a Quote!
Everyday quotes for everyone.
## Noam Chomsky: Limits of debate
“The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum — even encourage the more critical and dissident views. That gives people the sense that there’s free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.”
-Noam Chomsky.
|
2023-03-25 01:09:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229844570159912, "perplexity": 3890.4528168399447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00349.warc.gz"}
|
https://notebook.community/dashee87/blogScripts/Jupyter/2017-05-09-Clustering-with-Scikit-with-GIFs
|
It's a common task for a data scientist: you need to generate segments (or clusters- I'll use the terms interchangably) of the customer base. Where does one start? With definitions, of course!!! Clustering is the subfield of unsupervised learning that aims to partition unlabelled datasets into consistent groups based on some shared unknown characteristics. All the tools you'll need are in Scikit-Learn, so I'll leave the code to a minimum. Instead, through the medium of GIFs, this tutorial will describe the most common techniques. If GIFs aren't your thing (what are you doing on the internet?), then the scikit clustering documentation is quite thorough.
Techniques
Clustering algorithms can be broadly split into two types, depending on whether the number of segments is explicitly specified by the user. As we'll find out though, that distinction can sometimes be a little unclear, as some algorithms employ parameters that act as proxies for the number of clusters. But before we can do anything, we must load all the required modules in our python script. We also need to construct toy datasets to illustrate and compare each technique. The significance of each one will hopefully become apparent.
You can download this jupyter notebook here and the gifs can be downloaded from this folder (or you can just right click on the GIFs and select 'Save image as...').
In [35]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import silhouette_score
from sklearn import cluster, datasets, mixture
from sklearn.neighbors import kneighbors_graph
np.random.seed(844)
clust1 = np.random.normal(5, 2, (1000,2))
clust2 = np.random.normal(15, 3, (1000,2))
clust3 = np.random.multivariate_normal([17,3], [[1,0],[0,1]], 1000)
clust4 = np.random.multivariate_normal([2,16], [[1,0],[0,1]], 1000)
dataset1 = np.concatenate((clust1, clust2, clust3, clust4))
# we take the first array as the second array has the cluster labels
dataset2 = datasets.make_circles(n_samples=1000, factor=.5, noise=.05)[0]
# plot clustering output on the two datasets
def cluster_plots(set1, set2, colours1 = 'gray', colours2 = 'gray',
title1 = 'Dataset 1', title2 = 'Dataset 2'):
fig,(ax1,ax2) = plt.subplots(1, 2)
fig.set_size_inches(6, 3)
ax1.set_title(title1,fontsize=14)
ax1.set_xlim(min(set1[:,0]), max(set1[:,0]))
ax1.set_ylim(min(set1[:,1]), max(set1[:,1]))
ax1.scatter(set1[:, 0], set1[:, 1],s=8,lw=0,c= colours1)
ax2.set_title(title2,fontsize=14)
ax2.set_xlim(min(set2[:,0]), max(set2[:,0]))
ax2.set_ylim(min(set2[:,1]), max(set2[:,1]))
ax2.scatter(set2[:, 0], set2[:, 1],s=8,lw=0,c=colours2)
fig.tight_layout()
plt.show()
cluster_plots(dataset1, dataset2)
K-means
Based on absolutely no empirical evidence (the threshold for baseless assertions is much lower in blogging than academia), k-means is probably the most popular clustering algorithm of them all. The algorithm itself is relatively simple: Starting with a pre-specified number of cluster centres (which can be distributed randomly or smartly (see kmeans++)), each point is initally assigned to its nearest centre. In the next step, for each segment, the centres are moved to the centroid of the clustered points. The points are then reassigned to their nearest centre. The process is repeated until moving the centres derives little or no improvement (measured by the within cluster sum of squares- the total squared distance between each point and its cluster centre). The alogorithm is concisely illustrated by the GIF below.
Variations on the k-means algorithm include k-medoids and k-medians, where centroids are updated to the medoid and median of existng clusters, repsectively. Note that, under k-medoids, cluster centroids must correspond to the members of the dataset. Alogorithms in the k-means family are sensitive to the starting position of the cluster centres, as each method converges to local optima, the frequency of which increase in higher dimensions. This issue is illustrated for k-means in the GIF below.
k-means clustering in scikit offers several extensions to the traditional approach. To prevent the alogrithm returning sub-optimal clustering, the kmeans method includes the n_init and method parameters. The former just reruns the algorithm with n different initialisations and returns the best output (measured by the within cluster sum of squares). By setting the latter to 'kmeans++' (the default), the initial centres are smartly selected (i.e. better than random). This has the additional benefit of decreasing runtime (less steps to reach convergence).
In [34]:
# implementing k-means clustering
kmeans_dataset1 = cluster.KMeans(n_clusters=4, max_iter=300,
init='k-means++',n_init=10).fit_predict(dataset1)
kmeans_dataset2 = cluster.KMeans(n_clusters=2, max_iter=300,
init='k-means++',n_init=10).fit_predict(dataset2)
print('Dataset1')
print(*["Cluster "+str(i)+": "+ str(sum(kmeans_dataset1==i)) for i in range(4)], sep='\n')
cluster_plots(dataset1, dataset2,
kmeans_dataset1, kmeans_dataset2)
Dataset1
Cluster 0: 952
Cluster 1: 1008
Cluster 2: 1022
Cluster 3: 1018
k-means performs quite well on Dataset1, but fails miserably on Dataset2. In fact, these two datasets illustrate the strenghts and weaknesses of k-means. The algorithm seeks and identifies globular (essentially spherical) clusters. If this assumption doesn't hold, the model output may be inadaquate (or just really bad). It doesn't end there; k-means can also underperform with clusters of different size and density.
In [36]:
kmeans_dataset1 = cluster.KMeans(n_clusters=4, max_iter=300,
init='k-means++',n_init=10).fit_predict(np.vstack([dataset1[:2080,:],
dataset1[3000:3080,:]]))
kmeans_dataset2 = cluster.KMeans(n_clusters=4, max_iter=300,
init='k-means++',n_init=10).fit_predict(np.vstack([dataset1[-2080:,],
dataset1[:80,]]))
cluster_plots(np.vstack([dataset1[:2080,],dataset1[3000:3080,]]),
np.vstack([dataset1[-2080:,],dataset1[:80,]]),
kmeans_dataset1, kmeans_dataset2,title1='', title2='')
For all its faults, the enduring popularity of k-means (and related algorithms) stems from its versatility. Its average complexity is O(knT), where k,n and T are the number of clusters, samples and iterations, respectively. As such, it's considered one of the fastest clustering algorithms out there. And in the world of big data, this matters. If your boss wants 10 customer segments by close of business, then you'll probably use k-means and just hope no one knows the word globular.
Expectation Maximisation (EM)
This technique is the application of the general expectation maximisation (EM) algorithm to the task of clustering. It is conceptually related and visually similar to k-means (see GIF below). Where k-means seeks to minimise the distance between the observations and their assigned centroids, EM estimates some latent variables (typically the mean and covariance matrix of a mutltinomial normal distribution (called Gaussian Mixture Models (GMM))), so as to maximise the log-likelihood of the observed data. Similar to k-means, the algorithm converges to the final clustering by iteratively improving its performance (i.e. reducing the log-likelihood). However, again like k-means, there is no guarantee that the algorithm has settled on the global minimum rather than local minimum (a concern that increases in higher dimensions).
In contrast to kmeans, observations are not explicitly assigned to clusters, but rather given probabilities of belonging to each distribution. If the underlying distribution is correctly identified (e.g. normal distribution in the GIF), then the algorithm performs well. In practice, especially for large datasets, the underlying distribution may not be retrievble, so EM clustering may not be well suited to such tasks.
In [14]:
# implementing Expecation Maximistation (specifically Guassian Mixture Models)
em_dataset1 = mixture.GaussianMixture(n_components=4, covariance_type='full').fit(dataset1)
em_dataset2 = mixture.GaussianMixture(n_components=2, covariance_type='full').fit(dataset2)
cluster_plots(dataset1, dataset2, em_dataset1.predict(dataset1), em_dataset2.predict(dataset2))
No surprises there. EM clusters the first dataset perfectly, as the underlying data is normally distributed. In contrast, Dataset2 cannot be accurately modelled as a GMM, so that's why EM performs so poorly in this case.
Hierarchical Clustering
Unlike k-means and EM, hierarchical clustering (HC) doesn't require the user to specify the number of clusters beforehand. Instead it returns an output (typically as a dendrogram- see GIF below), from which the user can decide the appropriate number of clusters (either manually or algorithmically). If done manually, the user may cut the dendrogram where the merged clusters are too far apart (represented by a long lines in the dendrogram). Alternatively, the user can just return a specific number of clusters (similar to k-means).
As its name suggests, it constructs a hierarchy of clusters based on proximity (e.g Euclidean distance or Manhattan distance- see GIF below). HC typically comes in two flavours (essentially, bottom up or top down):
• Divisive: Starts with the entire dataset comprising one cluster that is iteratively split- one point at a time- until each point forms its own cluster.
• Agglomerative: The agglomerative method in reverse- individual points are iteratively combined until all points belong to the same cluster.
Another important concept in HC is the linkage criterion. This defines the distance between clusters as a function of the points in each cluster and determines which clusters are merged/split at each step. That clumsy sentence is neatly illustrated in the GIF below.
In [29]:
# implementing agglomerative (bottom up) hierarchical clustering
# we're going to specify that we want 4 and 2 clusters, respectively
hc_dataset1 = cluster.AgglomerativeClustering(n_clusters=4, affinity='euclidean',
hc_dataset2 = cluster.AgglomerativeClustering(n_clusters=2, affinity='euclidean',
print("Dataset 1")
print(*["Cluster "+str(i)+": "+ str(sum(hc_dataset1==i)) for i in range(4)], sep='\n')
cluster_plots(dataset1, dataset2, hc_dataset1, hc_dataset2)
Dataset 1
Cluster 0: 990
Cluster 1: 1008
Cluster 2: 1002
Cluster 3: 1000
You might notice that HC didn't perform so well on the noisy circles. By imposing simple connectivity constraints (points can only cluster with their n(=5) nearest neighbours), HC captures the non-globular structures within the dataset.
In [11]:
hc_dataset2 = cluster.AgglomerativeClustering(n_clusters=2, affinity='euclidean',
connect = kneighbors_graph(dataset2, n_neighbors=5, include_self=False)
hc_dataset2_connectivity = cluster.AgglomerativeClustering(n_clusters=2, affinity='euclidean',
cluster_plots(dataset2, dataset2,hc_dataset2,hc_dataset2_connectivity,
title1='Without Connectivity', title2='With Connectivity')
C:\Program Files\Anaconda3\lib\site-packages\sklearn\cluster\hierarchical.py:418: UserWarning: the number of connected components of the connectivity matrix is 2 > 1. Completing it to avoid stopping the tree early.
connectivity, n_components = _fix_connectivity(X, connectivity)
Conveniently, the position of each observation isn't necessary for HC, but rather the distance between each point (e.g. a n x n matrix). However, the main disadvantage of HC is that it requires too much memory for large datasets (that n x n matrix blows up pretty quickly). Divisive clustering is $O(2^n)$, while agglomerative clustering comes in somewhat better at $O(n^2 log(n))$ (though special cases of $O(n^2)$ are available for single and maximum linkage agglomerative clustering).
Mean Shift
Mean shift describes a general non-parametric technique that locates the maxima of density functions, where Mean Shift Clustering simply refers to its application to the task of clustering. In other words, locate the density function maxima (mean shift algorithm) and then assign points to the nearest maxima. In that sense, it shares some similarities with k-means (the density maxima correspond to the centroids in the latter). Interestingly, the number of clusters is not required for its implementation and, as it's density based, it can detect clusters of any shape. Instead, the algorithm relies on a bandwidth parameter, which simply determines the size of neighbourhood over which the density will be computed. A small bandwidth could generate excessive clusters, while a high value could erroneously combine multiple clusters. Luckily, sklearn includes an estimate_bandwidth function. It uses the k-nearest neighbours (kNN) algorithm to determine an optimal bandwidth value. I suppose that makes it even easier than k-means to implement.
Originally invented in 1975, mean shift gained prominence when it was successfully applied to computer vision (seminal paper #1 #2). I won't discuss the underlying maths (that info can be found here and here). Intuitively, cluster centers are initially mapped onto the dataset randomly (like k-means). Around each centre is a ball (the radius of which is determined by the bandwidth), where the density equates to the number of points inside each ball. The centre of the ball is iteratively nudged towards regions of higher density by shifting the centre to the mean of the points within the ball (hence the name). This process is repeated until balls exhibit little movement. When multiple balls overlap, the ball containing the most points is preserved. Observations are then clustered according to their ball. Didn't follow that? Well, here's the gif.
Now, you might be thinking "An algorithm that needs absolutely no input from the user and can detect clusters of any shape!!! This should be all over Facebook!!!". First of all, there's no guarantee that the value returned by estimate_bandwidth is appropriate (a caveat that becomes more pertinent in higher dimensions). Speaking of high dimensionality, mean shift may also converge to local optima rather than global optima. But the biggest mark against Mean Shift is its computational expense. It runs at $O(T*n^2)$, compared to $O(k*n*T)$ for k-means, where T is number of iterations and n represents the number of points. In fact, according to the sklearn documentation, the estimate_bandwidth function scales particularly badly. Maybe humans (and data science blogs) will still be needed for a few more years!
In [37]:
# implementing Mean Shift clustering in python
# auto-calculate bandwidths with estimate_bandwidth
bandwidths = [cluster.estimate_bandwidth(dataset, quantile=0.1)
for dataset in [dataset1, dataset2]]
meanshifts = [cluster.MeanShift(bandwidth=band, bin_seeding=True).fit(dataset)
for dataset,band in zip([dataset1,dataset2],bandwidths)]
# print number of clusters for each dataset
print(*["Dataset"+str(i+1)+": "+ str(max(meanshifts[i].labels_)+1) + " clusters"
for i in range(2)], sep='\n')
# plot cluster output
cluster_plots(dataset1, dataset2, meanshifts[0].predict(dataset1), meanshifts[1].predict(dataset2))
Dataset1: 4 clusters
Dataset2: 8 clusters
Mean shift clusters Dataset1 well, but performs quite poorly on Dataset2. This shouldn't be too surprising. It's easy to imagine where you should overlay 4 balls on the first dataset. There's just no way you could accurately partition Dataset2 with two balls (see the GIF below if you don't believe me). We've only considered a flat kernel (i.e. makes no distinction how the points are distributed within the ball), but, in some cases, a Gaussian kernel might be more appropriate. Unfortunately, scikit currently only accepts flat kernels, so let's pretend I never mentioned Gaussian kernels. Either way, you'd need some really exotic kernel to identify the two clusters in Dataset2.
Affinity Propagation (AP)
Affinity propagation (AP) describes an algorithm that performs clustering by passing messages between points. It seeks to identify highly representative observations, known as exemplars, where remaining data points are assigned to their nearest exemplar. Like mean-shift, the alogorithm does not require the number of clusters to be prespecified. Instead, the user must input two parameters: preference and damping. Preference determines how likely an observation is to become an exemplar, which in turn decides the number of clusters. In that sense, this parameter somewhat mimics the number of clusters parameter in k-means/EM. The damping parameter restricts the magnitude of change between successive updates. Without this, AP can be prone to overshooting the solution and non-convergence. Provided convergence is achieved, damping shouldn't significantly affect the output (see last GIF in this section), though it could increase the time to reach convergence.
AP doesn't really lend itself to illustration with GIFs. I'll still provide some GIFs, but a mathematical description might be more informative in this case (i.e. I'm now going to paraphrase the AP wikipedia page). AP starts off with a similarity (or affinity) matrix (S), where similarity (s(i,j)) is often formulated as the distance between points (e.g. negative Euclidean distance). The diagonal of the matrix (s(i,i)) is important, as this is where the preference value is inputted. In practice, 'passing messages between points' translates to updating two matrices. The first is the responsibility matrix (R), where r(i,k) represents the suitability of data point k to serve as an exemplar for point i. The second matrix is known as the availability matrix (A), where a(i,k) indicates the appropriateness of point k being an exemplar for point i, taking into account how well suited k is to serve as an exemplar to other points.
In mathematical terms, both matrices are initialised to zero and are updated iteratively accroding to the following rules:
$$r(i,k) = s(i,k) - \max_{k' \neq k} \left\{ a(i, k') + s(i, k') \right \}$$$$a(i,k)_{i \neq k} = \min \left( 0, r(k,k) + \sum_{i' \not\in \{i,k\}} \max(0, r(i',k)) \right)$$$$a(k,k) = \sum_{i' \neq k} \max(0, r(i',k))$$
At each iteration, A and R are added together. Exemplars are represented by rows in which the diagonal of this matrix are positive (i.e. r(i,i) + s(i,i) > 0). The algorithm terminates after a specified number of updates or if the exemplars remain unchaged over several iterations. Points are then mapped to the nearest examplar and clustered accordingly.
In [30]:
# implementing Affinity Propagation
ap_dataset1 = cluster.AffinityPropagation(verbose=True).fit_predict(dataset1)
ap_dataset2 = cluster.AffinityPropagation(verbose=True).fit_predict(dataset2)
print('Dataset1')
print("# Clusters:",max(ap_dataset1)+1)
print('Dataset2')
print("# Clusters:",max(ap_dataset2)+1)
cluster_plots(dataset1, dataset2, ap_dataset1, ap_dataset2)
Did not converge
Did not converge
Dataset1
# Clusters: 1057
Dataset2
# Clusters: 117
It's clear that the default settings in the sklearn implementation of AP didn't perform very well on the two datasets (in fact, neither execution converged). AP can suffer from non-convergence, though appropriate calibration of the damping parameter can minimise this risk. While AP doesn't explicitly require you to specify the number of clusters, the preference parameter fulfills this role in practice. Playing around with preference values, you'll notice that AP is considerably slower than k-means. That's because AP runtime complexity is O(n^2), where n represents the number of points in the dataset. But it's not all bad news. AP simply requires a similarity/affinity matrix, so the exact spatial position of each point is irrelevant. This also means that the algorithm is relatively insensitive to high dimensional data, assuming your measure of similarity is robust in higher dimensions (not the case for squared Euclidean distance!). Finally, AP is purely deterministic; so there's no need for multiple random restarts á la kmeans. For all of these reasons, AP outperforms its competitors in complex computer visions tasks (e.g. clustering human faces).
In [41]:
ap_dataset1 = cluster.AffinityPropagation(preference=-10000, damping=0.9, verbose=True).fit_predict(dataset1)
ap_dataset2 = cluster.AffinityPropagation(preference=-100, damping=0.8, verbose=True).fit_predict(dataset2)
print('Dataset1')
print("# Clusters:",max(ap_dataset1)+1)
print('Dataset2')
print("# Clusters:",max(ap_dataset2)+1)
cluster_plots(dataset1, dataset2, ap_dataset1, ap_dataset2)
Converged after 117 iterations.
Converged after 53 iterations.
Dataset1
# Clusters: 4
Dataset2
# Clusters: 3
|
2021-10-25 11:45:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6173760294914246, "perplexity": 2171.2983807831906}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00589.warc.gz"}
|
https://zuk.agmedia.pl/a-lot-fekqq/892ee7-is-an-equiangular-triangle-always-equilateral
|
As another example, the sides opposite the base angles of an isosceles triangle have sides that are equal because the base angles are equal . Solution: Step 1:Since it is an equilateral triangle all its angles would be 60°. Favorite Answer yes, because if all of the sides measure the same length, the only way the sides can connect and form a triangle is if all the angles measure 60 degrees 180/3=60 01 congruent (corollary) each angle of an equiangular triangle has a measure of. Equiangular Triangle A triangle with three congruent angles. Equiangular shapes are shapes in which all the interior angles are equal. It is easy to prove that all equiangular triangles are also equilateral using basic trogonometric rules. An equilateral triangle has all three sides equal in length. Determine that the triangle is equilateral if all 3 of its angles, (which would have to be acute), are congruent. An isosceles triangle is a triangle which has at least two congruent sides. The only equiangular triangleis the equilateral triangle. An equilateral triangle can be considered a special case of isosceles triangle, having all three sides equal. An equilateral triangle is also an equiangular triangle since all its angles are equal. always. The only equiangular triangle is the equilateral triangle. All the angles of an equilateral triangle are equal to 60∘ 60 ∘. Equilateral means all sides of equal length. So, an equilateral triangle’s area can be calculated if the length of its side is known. if two angles are congruent to two angles of another triangle, then the 3rd angles are. equiangular triangle synonyms, equiangular triangle pronunciation, equiangular triangle translation, English dictionary definition of equiangular triangle. Why? In Euclidean geometry, an equiangular polygonis a polygonwhose vertexangles are equal. An equilateral triangle is also called an equiangular triangle since its three angles are equal to 60°. Now let us combine them and understand the classification of triangles. Solved Examples on Equilateral ... never. Etusivu / Yleinen / an equilateral triangle is an isosceles triangle sometimes always never. The size of the angle does not depend on the length of the side. Equilateral triangle is also known as an equiangular triangle. Assuming the lengths of the sides of the equilateral triangle are $a$, we can determine that: 1. PLAY. The three chords of these arcs form the desired equilateral triangle. Rectangles, including the square, are the only equiangular quadrilaterals (four-sided figures). WRITING IN MATH Explain why classifying an equiangular triangle as an acute equiangular triangle is unnecessary. Equilateral Triangles An equilateral triangle is a type of triangle that is defined by having all of its sides having the same length and all of its angles having the same measure. Customer Question. This is a equilateral or equiangular triangle! I then work through 3 examples involving the lengths of sides and angles. This is the site I will always come to when I need a second opinion. What is the size of the angle opposite that side? Hence, we can show that all the 3 angles of an equilateral triangle hold equal angles; i.e. A rhombus is an equilateral parallelogram. square equilateral triangle equiangular triangle regular polygon A polygon with congruent angles and congruent sides is called … Chemistry. Click here to get an answer to your question ️ Is it true to say that two equiangular triangles are always congruent? Equilateral and Equiangular triangle definition . ... English dictionary definition of equiangular triangle. If a triangle is isosceles then it is ___ equilateral. The exterior angles of an equilateral triangle will always have a measure of 120°. A triangle in which 2 sides are equal and one angle is 90$$^\circ$$ is called a right isosceles triangle. Equiangular - When all angles inside a polygon are the same, it is said to be equiangular. sometimes. if a triangle is equilateral, then it is ___ isosceles. Match. always. A right triangle may be isosceles or scalene. Can you make an equangular triangle that is not equilateral - Answered by a verified Tutor. A square is a special case of an equilateral quadrilateral which is also an equiangular shape. Proof Ex. Always, sometimes, and never on triangles Quiz. If a polygon is both equilateral and equiangular, it is a "regular polygon" such as the pentagon, square, and equilateral triangle. triangle … Learn the essential definitions of triangles. To recall, an equilateral triangle is a triangle in which all the sides are equal and the measure of all the internal angles is 60°. I introduce and define equilateral triangles and equiangular triangles. An equiangular triangle is a triangle where all three interior angles are equal in measure.Because the interior angles of any triangle always add up to 180°, each angle is always a third of that, or 60° An equilateral triangle is a . Triangles geometry math tests for GCSE maths. How many grams in a cup of butternut squash? Rectangles, including the square, are the only equiangular quadrilaterals (four-sided figures). Right Isosceles Triangle. In a right triangle, one of the angles is a right angle—an angle of 90 degrees. A square is an equilateral and equiangular parallelogram. A triangle is a polygon with three sides. Since DE≅EF, the base angles, ∠D and ∠F, are congruent. Proofs concerning equilateral triangles (video) | Khan Academy always. In an isosceles triangle, the base angles are congruent. In an equiangular triangle, the measure of each of its interior angles is 60 °. A polygon can be equilateral without being equiangular (the rhombus) and equiangular without being equilateral (the rectangle). Define equiangular triangle. The area of an equilateral triangle is the amount of space that it occupies in a 2-dimensional plane. A triangle with three equal interior angles is called an equiangular triangle. An equilateral triangle has all three sides equal in length. An equilateral triangle is the most symmetrical triangle, having 3 lines of reflection and rotational symmetry of order 3 about its center.Its symmetry group is the dihedral group of order 6 $D_3$. Proposition 6 tells you that equiangular implies equilateral, proposition 8 the converse. Home / Yleinen / an equilateral triangle is an isosceles triangle sometimes always never. The perimeter is $p=3a$ These formulas can be derived using the Pythagorean theorem. Therefore, an equilateral triangle is also an equiangular triangle. Answer:The size of the angle is 60°. For a counterexample in a space that's not of constant curvature, just take an equilateral triangle in a Euclidean plane and then double the metric in a small circular region centered on one vertex. An equiangular triangle is a kind of acute triangle, and is always equilateral. An equiangular triangle has three equal sides, and it is the same as an equilateral triangle. 37, p. 262; Ex. In an equilateral triangle, all 3 of the angles will be 60 degrees, because the sum of the 3 interior angles in a triangle is always 180 degrees. If the lengths of the sidesare also equal then it is a regular polygon. Finally, if all the sides of the triangle are equal, then the angles opposite those sides must also be equal. Let each angle be a. sometimes. 60 (corollary) in a triangle there can be at most one _____ or _____ angle. Its three angles are also equal and they are each 60°. Recall that an equilateral triangle has three congruent sides. Show all questions <= => A scalene triangle is _____ an acute triangle. Finding angles in triangles, isosceles triangle, equilateral triangle, calculating area; Test. Its three angles are also equal and they are each 60º. Equilateral or Equiangular Triangle. On the other hand, it is easy to conceive of an equiangular quadrilateral that is not equilateral, i.e. Note: In Euclidean geometry, all equiangular triangles are equilateral and vice-versa. ... Can you make an equiangular hexagon that is not regular. Gravity. Accordingly, all the 3 sides of the equilateral triangle hold equal angles. 60°. In an obtuse triangle, one angle is greater than a right angle—it is more than 90 degrees. 10, p. 357 Corollary 5.3 Corollary to the Converse of the Base Angles Theorem If a triangle is equiangular, then it is equilateral. The angles of a Euclidean equiangular triangle each measure 60°. equilateral triangle is _____ equiangular. An equiangular triangle is _____ an isosceles triangle. In standard Euclidean geometry, are all equiangular polygons with an odd number of sides also equilateral? Since, we know, that the sum of all angles of any triangle be 180°. CCorollariesorollaries Corollary 5.2 Corollary to the Base Angles Theorem If a triangle is equilateral, then it is equiangular. Equilateral triangl… a rectangle. Example 1: An equilateral triangle has one side that measures 5 in. What does a equiangular triangle look like? An isosceles triangle is ___ an equilateral triangle. Start studying ALWAYS, SOMETIMS, NEVER. Answer: In equilateral triangle all three sides of the triangle are equal which makes all the three internal angles of the triangle to be equal. Recall from above that an equilateral triangle is also an isosceles triangle. When all sides and angles of a triangle are equal, it is called an equilateral or equiangular triangle. Another way of thinking about it is that both the hexagon and equilateral triangle are regular polygons, one … The area is $A=\frac{\sqrt3}{4}a^2$ 2.
|
2022-09-30 12:42:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6791101098060608, "perplexity": 391.8949721079595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00559.warc.gz"}
|
http://superchanceltd.com/g483d/distributive-property-simplify-8b0261
|
the parentheses. Find online algebra tutors or online math tutors in a couple of clicks. The only ever big thing that's going to come up in this section is what's called the distributive property. Express your answers with variable terms first, in alphabetical order, followed by any constant term. Also known as the distributive law of multiplication, it’s one of the most commonly used properties in mathematics. We can now apply the distributive property to the expression by multiplying each Trigonometric ratio table. The distributive property is the one which allows us to multiply the number by a group of numbers, which are added together. -9a - (1/3)(-3/4 -2a/3 + 12) 3. Objective: I know how to simplify expressions using distributive property and combining like terms. Simplifying Multiple Positive or Negative Signs, Simplifying Variables With Negative Exponents, Simplifying Fractions With Negative Exponents, Factoring a Difference Between Two Squares. 65. We keep a good deal of high quality reference tutorials on subjects starting from adding to common factor The Distributive Property Date_____ Period____ Simplify each expression. Distributive Property: What will we be learning in this lesson? 2 (3 + 6) Because the binomial "3 + 6" is in a set of parentheses, when following the Order of Operations, you must first find the answer of 3 + 6, then multiply it by 2. For example, if we are asked to simplify the expression 3(x+4) 3 (x + 4), the order of operations says to work in the parentheses first. Should you actually need assistance with algebra and in particular with distributive property , online calculator or linear inequalities come pay a visit to us at Pocketmath.net. Multiply, Dividing; Exponents; Square Roots; and Solving Equations, Linear Equations Functions Zeros, and Applications, Lesson Plan for Comparing and Ordering Rational Numbers, Solving Exponential and Logarithmic Equations, Applications of Systems of Linear Equations in Two Example: 2 - 3a + 2x should be expressed as -3a + 2x + 2 When you have answered all of the questions, ask Charlie how you did. But we cannot add x x and 4 4, since they are not like terms. Any direction will be highly appreciated very much. Distributive property is one of the most used properties in mathematics. of 3 + 6, then multiply it by 2. So, to figure this out, I've actually already copy and pasted this problem onto my scratch pad. In this lesson you will learn how to use the distributive property and simplify expressions. Here is the question: You are purchasing four items and want to calculate the tax. The Distributive Property is an algebra property which is used to multiply a single term and two or more terms inside a set of parentheses. For real numbers a,b a, b, and c c: a(b+c)= ab+ac a ( b + c) = a b + a c. What this means is that when a number is multiplied by an expression inside parentheses, you can distribute the multiplier to each term of the expression individually. 1. like terms. I am not so good with the computer stuff. Nature of the roots of a quadratic equation worksheets. When you distribute something, you are dividing it into parts. Determine if the relationship is proportional worksheet. We will now work through this problem again, but using a different method. The Distributive Property tells us that we can remove the parentheses If I couldn’t solve the question then I used the software to give me the detailed answer . For example, if we are asked to simplify the expression the order of operations says to work in the parentheses first. What is an equation of the line that passes through the points (-6, -5)and (−4,−6)? But we cannot add x and 4, since they are not like terms. a (b + c) = a b + a c a(b+c) = ab + ac a (b + c) = a b + a c (a + b) c = a c + b c. (a+b)c = ac + bc . I suggest using it to help improve problem solving skills. The first and simplest The distributive property is a very deep math principle that helps make math work. Because of the negative sign on the parentheses, we instead assume We offer a whole lot of high-quality reference materials on subjects ranging from power to subtracting polynomials The distributive property is given by: a(b+c) = ab + ac. Specifically, it states that . There are two easy ways to simplify this problem. So we use the Distributive Property, as shown in . I have it right over here. But we cannot add and since they are not like terms. Because the binomial "3 + 6" is in a set of parentheses, when following the Order of Operations, you must first find the answer Pre-Algebra Help » Operations and Properties » Identities and Properties » Distributive Property Example Question #1 : Distributive Property Simplify the expression. Vocabulary words are found in this magenta color throughout the lesson. be simplified any further. The distributive property is a property (or law) in algebra that dictates how multiplication of a single term operates with two or more terms inside parentheticals and can be used to simplify mathematical expressions that contain sets of parentheses. I am a regular user of Algebrator. So the stripping of property, it allows us to, um, multiply things that are getting added if they're gonna multiplied by the same number. Consider the following example. the example below carefully. TRIGONOMETRY. I have this test coming and I would really be glad if anyone can assist distributive property simplify calculator on which I’m stuck and don’t know how to start from. The same is true when a problem consists of a number, variables, and parentheses: Again, multiply each term inside the parentheses by the multiplier outside the parentheses. In algebra, we use the Distributive Property to remove parentheses as we simplify expressions. Distributive property allows you to simplify an expression that has parenthesis (or brackets). Recall that any term that does not have a coefficient has an implied coefficient of Keep in mind that any letters used are variables that represent any real number. For example, if we are asked to simplify the expression $$3\left(x+4\right),$$ the order of operations says to work in the parentheses first. Fill in the blank with the correct answer. Recall that in the case Mathematical Journeys: Inverse Operations, or "The Answer is Always 3", ALL MY GRADE 8 & 9 STUDENTS PASSED THE ALGEBRA CORE REGENTS EXAM. Can you give me a helping hand with solving inequalities, difference of squares and graphing lines. It not only helps me finish my homework faster, the detailed explanations provided makes understanding the concepts easier. In this lesson students apply the distributive property to generate equivalent expressions. of 3, no positive or negative sign is shown, so a positive sign is assumed. Oops I did it again!! The distributive property also can be used to simplify algebraic equations by eliminating the parenthetical portion of the equation. terms, The distributive property allows for these two numbers to be multiplied by breaking The above multiplications are relatively easier: Practice Problems / WorksheetPractice applying the Distributive Property with these expressions. OK, that definition is not really all that helpful for most people. a coefficient of negative one. You need to follow the steps below to solve an exponent problem using distributive property: I would rather get help from you than hire a math tutor who are very pricey. The items cost Expression Simplifying CalculatorThis calculator will simplify expressions, applying the distributive property when necessary. Prefer to meet online? This is where the Distributive Property comes in. They learn how number properties help simplify expressions, such as how using the distributive property with numerical expressions can be a helpful mental math strategy. It's the rule that lets you expand parentheses, and so it's really critical to understand if you … The two terms inside the parentheses cannot be added because they are not Please use this form if you would like to have this math solver on your website, free of charge. Distributive property of multiplication worksheet - I. Distributive property of multiplication worksheet - II. The distributive property is a rule in mathematics to help simplify an equation with parentheses. Simplify the expression given below. I go to Connections Academy, and I'm on "Versatile Distributive Property Portfolio, Unit 5, Lesson 11. SOHCAHTOA. So we use the Distributive Property, as shown in Example. 5 - 3(7x + 8) 2. Take a look at the problem below. The Distributive Property of Multiplication over Addition The distributive property of multiplication over addition allows us to eliminate the grouping symbol, usually in the form of a parenthesis. We're asked to apply the distributive property. Thus, we can rewrite the problem as. term inside the parentheses by x. a negative sign or a number. Worksheet on using distributive property to simplify expressions is much useful to the students who are at the initial stage of learning Algebra. For example, if we are asked to simplify the expression 3 (x + 4), the order of operations says to work in the parentheses first. Hello , Algebrator offered at the website. Please help! In general, this term refers to the distributive property of multiplication which states that the Definition: The distributive property lets you multiply a sum by multiplying each addend separately and then add the products. An exponent means the number of times a number is multiplied by itself. I just pray this tool isn’t very complicated. The outside term distributes evenly into the parentheses i.e. It would be incorrect to remove the parentheses and multiply 2 and 3 then way is to change each positive or negative sign of the terms that were inside Variables, Systems of Linear Equations: Cramer's Rule, Introduction to Systems of Linear Equations, Equations and Inequalities with Absolute Value, Steepest Descent for Solving Linear Equations, distributive property simplify calculator, where can i enter a trinomial and get the answer, https://rational-equations.com/exponential-and-logarithmic-equations.html, https://rational-equations.com/solving-equations.html. A variable can be distributed into a set of parentheses just as we distributed Algebrator is a beneficial thing. The Distributive Property is an algebra property which is used to multiply a single term and two or more terms inside a set of parentheses. (a + b) c = a c + b c. It is a useful tool for expanding expressions, evaluating expressions, and simplifying expressions. Simplify the expression given below. 1) 6(1 − 5 m) 6 − 30 m 2) −2(1 − 5v) −2 + 10 v 3) 3(4 + 3r) 12 + 9r 4) 3(6r + 8) 18 r + 24 5) 4(8n + 2) 32 n + 8 Select a problem set using the buttons above, then use your mouse or tab key to select a question. I then used to compare both the answers and correct my mistakes. Distributive Property with Exponents. So, lemme just rewrite it. Try the Free Math Solver or Scroll down to Resources! This definition is tough to understand without a good example, so observe In algebra, we use the Distributive Property to remove parentheses as we simplify expressions. Multiply the value outside the parenthesis with each of the terms within the parenthesis. In this article, you will learn what is distributive property, formula, and solved examples. To simplify this multiplication, another method will Put your answer in fully reduced form. This gives an answer of 18. But we cannot add $$x$$ and $$4,$$ since they are not like terms. The distributive property is also useful in equations with exponents. So we use the Distributive Property, as shown below. Now the -1 can be distributed to each term inside the parentheses as in Next Lesson: The parentheses are removed and each term from inside is multiplied by the six. up the larger one into a sum of smaller ones and then applying the property as shown You learned early that you perform the operations inside parentheses first, but with algebraic expressions, that isn’t always possible. (Multiplying two Binomials, or two Polynomials). Similarly, positive or plus signs become negative or minus signs. 64. The distributive property, sometimes known as the distributive property of multiplication, tells us how to solve certain algebraic expressions that include both multiplication and addition. Take for instance the equation a(b + c) , which also can be written as ( ab) + ( ac ) because the distributive property dictates that a , which is outside the parenthetical, must be multiplied by both b and c . The distributive property helps in making difficult problems simpler. What Is Distributive Property? … The Distributive Property Simplify each expression. In math, the distributive property helps simplify difficult problems because it breaks down expressions into the sum or difference of two numbers. Remember to put these in your notebook. below. Multiply the value outside the parenthesis with each of the terms within the parenthesis. The distributive property is the rule that relates addition and multiplication. The Distributive Property of Multiplication. Simplify using the distributive property: 4(c - 2) Math. 1. Looking for someone to help you with algebra? The following diagram illustrates the basic pattern or formula how to apply it. I have used it a lot. This mathematics lesson … Now simplifying the multiplication, we get a final answer of. add 6, as this would give an incorrect answer of 12. The literal definition of the distributive property is that multiplying a number by a … Negative or minus signs become positive or plus signs. it multiplies both Here, for instance, calculating 8 × 27 can made easier by … There are a number of properties in Maths which will help us to simplify not only arithmetical calculations but also the algebraic expressions. Start here or give us a call: (312) 646-6365, 2. Now we can simplify the multiplication of the individual terms: The next problem does not have a number outside the parentheses, Can I get the product description, so I know what it has to offer? You can use the distributive property of multiplication to rewrite expression by distributing or breaking down a factor as a sum or difference of two numbers. 63. Take a Study Tip! the first example in this lesson. In algebra, we use the Distributive Property to remove parentheses as we simplify expressions. If there is an equation instead of number, the property is hold true as well. Using distributive property to simplify expressions worksheet - Examples. I tried solving the questions myself, at least once before using the software. if the term that the polynomial is being multiplied by is distributed to, or Therefore, 2 + 4x, the expression inside the parentheses, cannot It is used to simplify and solve multiplication equations by distributing the multiplier to each number in the parentheses and then adding those products together to get your answer. look at the problem below. Distributive property allows you to simplify an expression that has parenthesis (or brackets). 1/2(2a-6b+8). be needed. Distributive Property Definition In Mathematics, the numbers should obey the characteristic property during the arithmetic operations. The distributive property is easy to remember. Writing and evaluating expressions worksheet. At Wyzant, connect with algebra tutors and math tutors nearby. The distributive property of multiplication states that when a number is multiplied by the sum of two numbers, the first number can be distributed to both of those numbers and multiplied by each of them separately, then adding the two products together for the same result as … multiplied with each term inside the parentheses. Read the lesson on Distributive Property if you need to learn how to simplify expressions using the distributive property. FOIL MethodUsing the FOIL Method to multiply two or more parenthesis. In algebra, we use the Distributive Property to remove parentheses as we simplify expressions. Evaluate the following without using a calculator, © 2005 - 2021 Wyzant, Inc. - All Rights Reserved, Next (Simplifying Distribution Worksheet ) >>. The Distributive Property Simplify each expression. Whenever you actually demand advice with math and in particular with distributive property simplify calculator or operations come pay a visit to us at Rational-equations.com. And we have 1/2 times the expression 2a-6b+8. I'm very stuck! The Distributive Property Like terms may be defined as terms that are the same or vary only by the coefficient . The different properties are associative property, commutative property, distributive property, inverse property, identity property and so on. only a negative sign. 66. The distributive property allows you to multiply the term outside the parentheses by the terms inside. So if I was to actually put in here what our property says, it … The Distributive Property Simplify each expression. Of numbers, which are added together Connections Academy, and i 'm on distributive. Number, the detailed explanations provided makes understanding the concepts easier \ ( x\ ) and −4... The line that passes through the points ( -6, -5 ) and (! Lesson … in algebra, we instead assume a coefficient has an implied of! Vary only by the six equations with Exponents the questions myself, at least once before using the.... ) since they are not like terms may be defined as terms that are the same or vary only the! Principle that helps make math work get help from you than hire a math tutor are! Hold true as well: ( 312 ) 646-6365, 2 + 4x, the expression the of! Can you give me the detailed answer 27 can made easier by … distributive allows. Connect with algebra tutors and math tutors nearby the terms that are the same or vary only by terms. The lesson onto my scratch pad i am not so good with the computer stuff squares graphing. I just pray this tool isn ’ t always possible term outside the parenthesis each. Of learning algebra the characteristic property during the arithmetic operations most used properties in mathematics, detailed! Any further to come up in this lesson to select a problem set using distributive... On Versatile distributive property, as shown in example use this form if you would like to this... Terms may be defined as terms that are the same or vary only the! That were inside the parentheses, can not add and since they are like! To understand without a good deal of high quality reference tutorials on starting! Ever big thing that 's going to come up in this magenta color throughout the lesson initial of! I used the software solving the questions myself, at least once before using the software to give the... High quality reference tutorials on subjects starting from adding to common factor the distributive property, as shown in.. Roots of a quadratic equation worksheets with algebraic expressions, applying the distributive property with Exponents of! Property simplify each expression the number of times a number of times number... Keep in mind that any term that does not have a coefficient has an implied of... An exponent means the number by a group of numbers, which are added.... Simplify this multiplication, we use the distributive property, commutative property, formula, i. The basic pattern or formula how to use the distributive property, commutative property, as in... Maths which will help us to multiply the value outside the parentheses can not add \ ( 4, they! Diagram illustrates the basic pattern or formula how to simplify expressions worksheet II... - ( 1/3 ) ( -3/4 -2a/3 + 12 ) 3 brackets ) 2 ) math but using different. - ( 1/3 ) ( -3/4 -2a/3 + 12 ) 3 the operations inside parentheses,. When you distribute something, you will learn how to apply it will simplify expressions using distributive. Online math tutors nearby early that you perform the operations inside parentheses first, alphabetical... To figure this out, i 've actually already copy and pasted this problem 8 ) 2 arithmetical! Sign of the most used properties in Maths which will help us to simplify expressions this math solver or down. Exponent means the number of properties in Maths which will help us to multiply two or more parenthesis you hire... Not have a coefficient of 1 found in this magenta color throughout the.. Represent any real number distributive property simplify answers and correct my mistakes so good with computer... In making difficult problems because it breaks down expressions into the parentheses hire a math tutor who are very.. Will we be learning in this lesson you will learn how to simplify this,! ( multiplying two Binomials, or two Polynomials ) learned early that you perform the operations parentheses! Operations and properties » distributive property is the rule that relates addition and multiplication that in parentheses! Maths which will help us to simplify an equation with parentheses positive or signs. Be learning in this lesson with algebra tutors and math tutors nearby the questions myself, at once. Is distributive property like terms the value outside the parenthesis with each of the sign! Inside parentheses first property of multiplication, we use the distributive property simplify the.... Signs become positive or negative sign or a number equation instead of number, distributive... Will be needed is multiplied by itself and want to calculate the tax = +... Different properties are associative property, as shown in example any constant term parentheses i.e that letters... To select a problem set using the software two numbers to multiply the value outside the parenthesis each... Helpful for most people operations inside parentheses first, as shown in example and 4. Copy and pasted this problem product description, so i know what it to. Through this problem just pray this tool isn ’ t very complicated a quadratic worksheets... Portfolio, Unit 5, lesson 11 are relatively easier: Practice problems / WorksheetPractice applying the property... Math, the detailed explanations provided makes understanding the concepts easier the terms... Be added because they are not like terms may be defined as terms are! And graphing lines math solver on your website, free of charge Scroll down to Resources - I. property. Of 3, no positive or plus signs the following diagram illustrates the basic pattern or how! As we distributed a negative sign of the terms inside in a couple of clicks property simplify! More parenthesis ( -3/4 -2a/3 + 12 ) 3 instance, calculating 8 × 27 can easier... Final answer of math work you than hire a math tutor who are very pricey that 's going come! Terms inside this multiplication, another method will be needed as we distributed a negative on! Used the software very pricey 7x + 8 ) 2 mathematics, the.. Are variables that represent any real number + 4x, the distributive property is the which... Relatively easier: Practice problems / WorksheetPractice applying the distributive property and simplify expressions of 3, no positive negative. Use the distributive property allows you to simplify not only helps me finish my homework faster, distributive. What is distributive property example question # 1: distributive property example question # 1: distributive property: (... Inequalities, difference of two numbers outside the parentheses i.e question then i used the software give... 'S called the distributive property allows you to simplify an equation with.. A very deep math principle that helps make math work we distributed a negative sign on the parentheses by.. Simplify an expression that has parenthesis ( or brackets ) detailed explanations provided makes the. Into the sum or difference of two numbers the FOIL method to multiply two more. Basic pattern or formula how to simplify expressions using the software lesson … in algebra, we get a answer... Shown in example pasted this problem again, but with algebraic expressions without a good deal of high quality tutorials. That definition is not really all that helpful for most people but can... The different properties are associative property, commutative property, commutative property, as in! The detailed answer be added because they are not like terms and math tutors.! Is distributive property example question # 1: distributive property to simplify only! Property Portfolio, Unit 5, lesson 11 i suggest using it to help problem. Section is what 's called the distributive property example question # 1: distributive property to the students who very... Mouse or tab key to select a question i 've actually already copy and pasted this problem my... Each positive or plus signs, at least once before using the to., or two Polynomials ) parenthesis with each of the line that passes the. Is shown, so observe the example below carefully work in the parentheses i.e algebra, we use distributive! Ever big thing that 's going to come up in this magenta color throughout the lesson distributive! By: a ( b+c ) = ab + ac sign or a number of properties in mathematics to simplify! And i 'm on Versatile distributive property allows you to simplify an that! I suggest using it to help simplify an distributive property simplify instead of number, the is! To common factor the distributive property, as shown in applying the property! Select a question -9a - ( 1/3 ) ( -3/4 -2a/3 + 12 ).... Different properties are associative property, as shown below and pasted this problem again, but algebraic... Online algebra tutors or online math tutors in a couple of clicks this tool isn ’ t solve question... Multiplications are relatively easier: Practice problems / WorksheetPractice applying the distributive property remove. ) since they are not like terms the above multiplications are relatively easier: Practice problems / WorksheetPractice applying distributive! Only arithmetical calculations but also the algebraic expressions the two terms inside t solve the question then i used software. Of numbers, which are added together the outside term distributes evenly into the sum or of..., so a positive sign is assumed property Portfolio, Unit 5, lesson 11 positive is! Or tab key to select a problem set using the buttons above, use... Foil method to multiply the value outside the parentheses by x there is an equation with parentheses 11! Operations inside parentheses first, but using a different method assume a of.
|
2022-05-23 02:10:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.622378408908844, "perplexity": 551.2851822920305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00027.warc.gz"}
|
https://douglas.research.mcgill.ca/fr/events/year/2019
|
D L M M J V S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
22
23
24
25
26
27
29
30
31
D L M M J V S
1
2
3
5
6
7
8
9
10
11
12
13
14
15
16
17
19
20
21
22
23
24
26
27
28
D L M M J V S
1
2
3
4
5
6
7
8
9
10
12
13
14
15
16
17
19
20
21
23
24
26
27
28
29
31
D L M M J V S
1
2
3
4
5
6
7
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
D L M M J V S
2
3
4
5
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
28
29
30
31
D L M M J V S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
D L M M J V S
1
2
3
5
6
7
8
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
D L M M J V S
1
2
3
4
5
6
7
8
10
11
12
13
14
16
17
18
19
20
21
22
24
25
26
27
28
29
30
31
D L M M J V S
1
2
3
4
5
6
7
8
9
10
13
14
15
19
20
21
22
24
26
27
28
29
D L M M J V S
4
5
6
8
9
10
11
12
13
14
15
17
18
19
20
21
22
23
24
25
26
27
29
31
D L M M J V S
1
2
3
5
7
8
9
10
11
16
17
19
23
24
25
27
28
29
30
D L M M J V S
1
3
4
5
7
8
10
11
12
13
14
15
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
|
2020-07-11 03:15:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9873830676078796, "perplexity": 8157.029807432587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655919952.68/warc/CC-MAIN-20200711001811-20200711031811-00196.warc.gz"}
|
http://mathematica.stackexchange.com/questions/43803/a-design-of-abstract-data-structures-how-robust-is-the-presented-design
|
# A design of abstract data structures: How robust is the presented design? [closed]
EDIT: An envisioned answer to this post should be in the form of a mathematica code that can demonstrate how the internal checks discussed below can be bypassed. Everything goes... On my part, I think the system should be relatively safe, but I would like to be sure. See it as a challenge.
I know that this question has been asked many times. This is my take at it. Ideas are not mine, they are borrowed from many posts on this site. Below is the core of the best setup that I came up with (that I think will work for me). I designed it to be very robust and lightweight.
I've spent quite some time testing different possibilities. I realized that the issue was how to design a system that flows well with Mathematica and at the end it was all about evaluation control, the standard, versus non-standard (i.e. the use versus an assignment), and an interplay between general and specific definitions. I departed from the OBJ[id_]@f1 := f1[id] design by keeping the OBJ and the field f1 tightly together. The challenge was to control the evaluation and detect incorrect use (more below).
The question is whether the setup is indeed robust/safe. I would encourage everyone to find a way to abuse the system below. This will help me get an idea of how robust/safe the system is.
objectExistsQ[___] := False;
registerObject[___] :=
Print["cannot register object: invalid id structure."]
registerObject[id_] := (objectExistsQ[id] = True);
isFieldQ[___] := False;
neo = "attempt to use non-existing object";
nef = "use of non-existing field";
nif = "attempt to use non-initialized field";
OBJ[id_] := Print[neo] /; ! objectExistsQ[id];
OBJ /: HoldPattern[_[___, OBJ[id_]@fm_, ___]] :=
Print[nef] /; ! isFieldQ[fm];
Protect[f1]; isFieldQ[f1] = True;
HoldPattern[OBJ[id_]@f1] := Print[nif];
Assume that the code above cannot be changed. Is it possible to bypass all the checks while using it (i.e. while constructing expressions that involve OBJ[id]@f1)? (NOTE: naturally, the print statements are to be implemented as interrupts.)
EDIT: Imagine that there is a package that, once invoked, prepares the definitions above and seals definitions of OBJ etc. Alternativelly, OBJ is defined in the Private scope of the package and cannot be directly accessed.
The specifications are like this:
obj not initialized, has to fail already on object
OBJ[$1]@f1 (* attempt to use non-existing object *) and even if non-existing field is used OBJ[$1]@f4
(* attempt to use non-existing object *)
when the object is initialized it has to fail on non-initialized field
registerObject[$2]; OBJ[$2]@f1
(* attempt to use non-initialized field *)
if the field does not exist it has to say that
OBJ[$2]@f4 (* use of non-existing field *) assignment on non-existing object has to fail OBJ[$3]@f1 = 1
(* attempt to use non-existing object *)
ibid if used on non existing field
OBJ[$3]@f4 = 1 (* use of non-existing field *) assignment on registered (created) object has to work registerObject[$4]
OBJ[$4]@f1 = 1 but fail if the field does not exist OBJ[$4]@f4 = 1
(* use of non-existing field *)
Is it possible to construct a piece of Mathematica code ... OBJ[id]@f1... so that, e.g., the following erroneous assignment is being made
registerObject[$1]; ... OBJ[$1]@f2 = 3
...
Note that in the code above a non-existing field is being used.
EDIT 2: More specifically, I am working on a phraser that will
(1) prepare the THE ADS system described in the beginning of the post
(2) convert "source" code, for example,
THIS@instanceMethod[args_]:= ( ... THIS@f1 = 1; THIS@f2 = THIS@f1 + args; ... )
into
OBJ[id_]@instanceMethod[args_] := ( ... OBJ[id]@f1 = 1; OBJ[id]@f2 = OBJ[id]@f1 + args; ... )
I am asking, very, very specifically: is the design I have present good from the safety of use point of view? Is it possible to find an example of the source code which will "cheat" all the safety rules implemented into the ADS system? For example, I worry that one might use AppendTo or simmilar constructs to perform wrong initializations. For example, the outcome of AppendTo[OBJ[1]@f4, value], depends on how AppendTo internally behaves (e.g. if it holds the first argument or not, how it behaves when it encounters a non-initialized List as the first argument etc). Of course, one can try the command and see, but I cannot possibly try all such commands. I would like to form a general opinion, thus the question.
Regards, Zoran
-
## closed as too broad by m_goldberg, rasher, belisarius, ubpdqn, bobthechemistMar 13 at 22:52
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs.If this question can be reworded to fit the rules in the help center, please edit the question.
I am personally not very fond of object oriented programming, but I suppose I can give some feedback. First of all, to "implement the print statements as interrupts", may be quite difficult to achieve in Mathematica, if I understand you correctly. That is, Goto and Return work quite locally. Therefore, in this setup, evaluation will continue with a Null expression. Apart from that, I don't really like the use of "UpValues" here, for vague reasons :). By the way, I assume you have taken a look at Leonids object oriented programming project? – Jacob Akkerboom Mar 12 at 10:26
Thanks for looking into it! Each print statement should be converted into an Abort[] command. That's what I've meant. Yes, I am aware of the post, of course. Regarding your general comment on OO. I think that in the context of Mathematica it is very useful if used without inheritance just for abstract data types. I needed that very often, and there was not point running to JLink for that. Of course, if a full blown OO constructs are needed, one should really use JLink IMHO. – zorank Mar 12 at 11:52
@site administrators: is there a way I can delete the post? IMHO the problem is well-posed and rather focused. I do not think I can make it better. Regards / Zoran – zorank Mar 14 at 12:34
@zorank I think that in general, "poke holes in my implementation" or "try to break my encryption" type of questions are not well received (rightfully) because it takes a hell lot of effort for zero gain and quite frankly, is not worth it. Also, you're not asking for a code review — you explicitly forbid any changes to your code. Why should anyone care then? The scope of the site (in fact, any free community run site) is much more suited to questions that ask a specific problem or need help with understanding something specific. What you need is someone who is paid to break your code :) – rm -rf Mar 14 at 14:49
Fair enough. Thank you all. Yes, there is an enormous amount of work behind this, and quite tedious coding process to implement it. It would be a disaster to spend time on something that might allow for bugs (when used). (This is for scientific production, that's why I am so cautious. I assure you, I ask for errors during usage not to challenge but to sleep well at nights). So maybe it attracts + votes. We see. If not, I will try to rephrase, though the problem is complex. Perhaps, thinking of it, I should have started the whole story with EDIT 2... – zorank Mar 24 at 15:50
|
2014-11-24 22:45:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3743484318256378, "perplexity": 1340.5579139434003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405292026.28/warc/CC-MAIN-20141119135452-00114-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://ncatlab.org/nlab/show/open+point
|
# Contents
## Definition
Let $(X, \tau)$ be a topological space. Then a point $x \in X$ in the underlying set is called an open point if the singleton subset $\{x\} \in X$ is an open subset, i.e. $\{x\} \in \tau$.
Created on May 9, 2017 at 12:44:55. See the history of this page for a list of all contributions to it.
|
2019-08-20 02:59:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5582940578460693, "perplexity": 157.167595490115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.14/warc/CC-MAIN-20190820024110-20190820050110-00469.warc.gz"}
|
http://physics.stackexchange.com/questions/13991/two-distant-galaxies-seen-from-earth
|
# two distant galaxies seen from earth
From the Earth If we observe two galaxies that are diametrically opposed and each 1000ly far from the earth. the separation distance between the galaxies will be 20000ly? really question is: if the galaxies were separated from Earth 10Gly. then the separation distance between galaxies would 20Gyr?
-
(continuing ...) But it's certainly true that there are some objects so far away that we can't see them now. And it's possible to find a pair of galaxies, in opposite directions on the sky, that are visible from Earth but which are too far from each other to be visible to each other at the present time. To explore the relation between ages and distances, you might want to play around with Ned Wright's cosmology calculator, astro.ucla.edu/%7Ewright/CosmoCalc.html . To get the maximum distance you can see, plug in very large values of z (the maximum distance is really $z=\infty$). – Ted Bunn Aug 26 '11 at 17:19
|
2015-07-01 12:50:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173165559768677, "perplexity": 368.4731682148903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094931.19/warc/CC-MAIN-20150627031814-00038-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/408478/constants-of-motion-of-pairwise-interactive-n-particles-system
|
Constants of motion of pairwise interactive $N$-particles system
Let's make the problem simple and consider the situation there's a system of $N$ particles moving in three-dimensional space is described by the Lagrangian:
$$L = \frac 12 \sum ^N_{i=1}m_i\dot {\boldsymbol x}_i^2 - \frac 12 \sum _{i=1}^N \sum _{j=1}^N U_{ij}(\boldsymbol A \cdot (\boldsymbol x_i-\boldsymbol x_j))$$
in which each pair $\left(i,j\right)$ of particles is allowed to have a different interaction potential $U_{ij}\left(a\right) = U_{ji}\left(a\right)$, and $A$ is a constant vector which is the same for all pairs. Essentially I'm saying that the particles only have pairwise interaction that is dependent on their separation along the direction of the A vector.
There's some obvious constants of motion, for example:
• The component of the linear momentum orthogonal to the $\boldsymbol{A}$ vector of each particle, giving $2N$ constants of motion
• Total linear momentum along the A vector
• The total energy.
However, I am told that I am able to find $3N+2$ constants of motion from Noether's theorem (or otherwise). Here I've only found $2N+2$ constants of motion. What are the other constants of motion?
Rotate to coordinates where $A$ is a multiple of the first unit vector. This separates variables, and you can easily find the constants of motion.
Angular momentum about $\vec A$ is conserved for each particle, giving an extra number N constants of the motion. This comes from the rotational invariance of the Lagrangian about $\vec A$.
|
2019-12-11 11:59:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8632729053497314, "perplexity": 158.3134861718825}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00397.warc.gz"}
|
https://math.stackexchange.com/questions/2167999/simple-algebra-question-on-gre
|
# Simple algebra question on gre
$21+x^2 +(x+1)+16+22+17+x = 100$%
The solution guide gives me $x^2 + 2x + 76 = 100$
Then $x^2+2x-24=0$
I just don't get where $x+1$ went..
I keep getting $x^2+2x+77=100$ what am I doing wrong somebody help me]1
• This is on the GRE general test? I'm impressed. I don't recall there being quadratic equations when I took the test in $2008$. They're raising the bar! – Matt Samuel Mar 2 '17 at 3:30
• $21 + 16+ 22 + 17 = 76$ and the $1$ from $(x+1)$ gives $77$. I'm with you. Either the solution's wrong or there's something you aren't telling us that changes things – spaceisdarkgreen Mar 2 '17 at 3:30
• Something is not right in the exercise. We are combining three odd numbers and the solution manual has a constant of 24. Double check or provide us with a picture of the page. Normally inserting page pictures to solve a problem is not so well appreciated, but if a mistake in a textbook is perceived, we would like to take a look at it... – imranfat Mar 2 '17 at 3:31
• I've added the image of problem pls help me asap – Sooo Mar 2 '17 at 3:39
• this is a practice problem for gre – Sooo Mar 2 '17 at 3:46
|
2019-07-18 05:19:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5356438159942627, "perplexity": 561.8302529828079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00450.warc.gz"}
|
http://www.rwswebsolutions.co.uk/uarw3ufc/recursive-least-squares-code-860899
|
# recursive least squares code
where $$\textbf{I}$$ is identity matrix and $$\delta$$ $$\textbf{x}(k) = [x_1(k), ..., x_n(k)]$$. icrohit 2016-12-11 08:00:42: View(s): Download(s): 0: It offers additional advantages over conventional LMS algorithms such as faster convergence rates, modular structure, and insensitivity to variations in eigenvalue spread of the input correlation matrix. More specifically, suppose we have an estimate x˜k−1 after k − 1 measurements, and obtain a new mea-surement yk. This function estimates the transfer function coefficients (System Parameters) "online" using Recursive Least Squares Method. Learning and Expectations in Macroeconomics. The celebrated recursive least-squares (RLS) algorithm (e.g. 243. Recursive Least Squares (RLS) Algorithm developed using MATLAB. This model applies the Kalman filter to compute recursive estimates of the coefficients and recursive residuals. }$$, where i is the index of the sample in the past we want to predict, and the input signal$${\displaystyle x(k)\,\! | Huberta Miller author of Program to implement the least square method is … where the n is amount of filter inputs (size of input vector). The engine model is a damped second order system with input and output nonlinearities to account for different response times at different throttle positions. The example applica-tion is adaptive channel equalization, which has been introduced in compu-ter exercise 2. A valid service agreement may be required. Introduction. Home » Source Code » Recursive Least Squares (RLS) Algorithm developed using MATLAB. Recursive least-squares adaptive filters. \frac{\textbf{R}(k-1)\textbf{x}(k) \textbf{x}(k)^{T} \textbf{R}(k-1)} I'm trying to implement multi-channelt lattice RLS, i.e. used for recursive parameter estimation of linear dynamic models ARX, ARMAX and OE. If you have measured data you may filter it as follows, An example how to filter data measured in real-time, Bases: padasip.filters.base_filter.AdaptiveFilter. Powered by, $$y(k) = w_1 \cdot x_{1}(k) + ... + w_n \cdot x_{n}(k)$$, $$\textbf{x}(k) = [x_1(k), ..., x_n(k)]$$, $$\Delta \textbf{w}(k) = \textbf{R}(k) \textbf{x}(k) e(k)$$, $$\textbf{R}(k) = \frac{1}{\mu}( Compare the frequency responses of the unknown and estimated systems. Recursive Least Squares Parameter Estimation for Linear Steady State and Dynamic Models Thomas F. Edgar Department of Chemical Engineering University of Texas Austin, TX 78712 1. For a picture of major difierences between RLS and LMS, the main recursive equation are rewritten: RLS algorithm (2nd order gradient = i.e. You can always update your selection by clicking Cookie Preferences at the bottom of the page. Code Examples; Popular Software Downloads; LabVIEW NXG; LabVIEW; SystemLink; Popular Driver Downloads; NI-DAQmx; NI-VISA; NI-488.2; Request Support; You can request repair, schedule calibration, or get technical support. One could see the performance of the Batch Least Squares on all samples vs. the Sequential Least squares. CVPR 2020 • Jin Gao • Weiming Hu • Yan Lu. } as the most up to date sample. In this paper, we propose a new {\\it \\underline{R}ecursive} {\\it \\underline{I}mportance} {\\it \\underline{S}ketching} algorithm for {\\it \\underline{R}ank} constrained least squares {\\it \\underline{O}ptimization} (RISRO). recursive least square matlab code. 20 Dec 2015. 412-421), Computer Experiment on Create scripts with code, output, and formatted text in a single executable document. In the forward prediction case, we have {\displaystyle d(k)=x(k)\,\! RLS algorithm has higher computational requirement than LMS , but behaves much better in terms of steady state MSE and transient time. The Recursive least squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. Learn more. c Abstract: The procedure of parameters identication of DC motor model using a method of recursive least squares is described in this paper. This is a python package for basic recursive least squares (RLS) estimation. 6 of Evans, G. W., Honkapohja, S. (2001). Recursive least-squares adaptive filters. Time Series Analysis by State Space Methods: Second Edition. RecursiveSquares code in Java. 9 Jan 2014. \(y(k)$$ is filtered signal, Recursive least-squares step Usage 9 Jun 2014. As its name suggests, the algorithm is based on a new sketching framework, recursive importance sketching. Recursive least squares filter in matlab . recursive-least-squares (for example something like 0.99). input matrix (2-dimensional array). We'll discuss this in more detail in the next module. I build a model of 25 Samples. A description can be found in Haykin, edition 4, chapter 5.7, pp. I'm vaguely familiar with recursive least squares algorithms; ... and throwing code at me, even simple code that I can read and translate to matrix algebra, doesn't help with that understanding. Actually, under a Gaussian noise assumption the ML estimate turns out to be the LS estimate. The derivation is similar to the standard RLS algorithm and is based on the definition of $${\displaystyle d(k)\,\!}$$. SystemLink. A Tutorial on Recursive methods in Linear Least Squares Problems by Arvind Yedla 1 Introduction This tutorial motivates the use of Recursive Methods in Linear Least Squares problems, speci cally Recursive Least Squares (RLS) and its applications. Lecture Series on Estimation of Signals and Systems by Prof.S. Ali H Sayed and Thomas Kailath. $$\textbf{x}$$ is input vector (for a filter of size $$n$$) as follows. Add examples and code that you have tried as half of those here will not be knowing as to what a recursive least squares function is – Bhargav Rao ♦ Mar 26 '15 at 20:02 add a comment | 2 Answers 2 Least Squares Revisited In slide set 4 we studied the Least Squares. In gbonte/gbcode: Code from the handbook "Statistical foundations of machine learning" Description Usage Arguments Value Author(s) Examples. This function filters multiple samples in a row. Wen Shen, Penn State University. $$\textbf{w}(k+1) = \textbf{w}(k) + \Delta \textbf{w}(k)$$, where $$\Delta \textbf{w}(k)$$ is obtained as follows. topic page so that developers can more easily learn about it. The LRLS algorithm described is based on a posteriori errors and includes the normalized form. Search form. LabVIEW NXG. Online learning is crucial to robust visual object tracking as it can provide high discrimination power in the presence of background distractors. Add Code Add Code; Home » Source Code » Recursive Least Squares (RLS) Algorithm developed using MATLAB. }$$with the input signal$${\displaystyle x(k-1)\,\! veena Newbie. Mukhopadhyay, Department of Electrical Engineering, IIT Kharagpur. [16, 14, 25]) is a popular and practical algorithm used extensively in signal processing, communications and control. Together with the Maximum Likelihood, it is by far the most widely used estimation method. The initial value of autocorrelation matrix should be set to. I am looking to perform a polynomial least squares regression and am looking for a C# library to do the calculations for me. Such a system has the following form: y ( t ) = H ( t ) θ ( t ) . Recursive least squares is an expanding window version of ordinary least squares. University group project concerning the sensorless estimation of the contact forces between a needle mounted on the end-effector of a robot manipulator and a penetrated tissue, and subsequent prediction of layer ruptures using Recursive Least Squares algorithm. The library implements several recursive estimation methods: Least Squares Method, Recursive Leaky Incremental Estimation, Damped Least Squares, Adaptive Control with … Make the RLS working correctly with a real data can be tricky. Contribute to JonQian/rls development by creating an account on GitHub. Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking. You signed in with another tab or window. Simple linear regression is an approach for predicting a response using a single feature.It is assumed that the two variables are linearly related. In addition to availability of regression coefficients computed recursively, the recursively computed residuals the construction of statistics to investigate parameter instability. Recursive Least Squares (RLS) Algorithm developed using MATLAB. I'm trying to implement multi-channelt lattice RLS, i.e. The RLS will need to support at least 20 inputs and 20 outputs using the ARX model structure. Lectures are based on my book: "An Introduction to Numerical Computation", published by World Scientific, 2016. The recursive least squares (RLS) algorithm and Kalman filter algorithm use the following … Possible values are: Adapt weights according one desired value and its input. Stanley Shanfield. Two recursive (adaptive) flltering algorithms are compared: Recursive Least Squares (RLS) and (LMS). I need a recursive least squares (RLS) implementation written in ANSI C for online system identification purposes. 2 Linear Systems Linear methods are of interest in practice because they are very e cient in terms of computation. Recursive Least Squares has seen extensive use in the context of Adaptive Learning literature in the Economics discipline. The Recursive Least Squares Estimator estimates the parameters of a system using a model that is linear in those parameters. The The technique involves maximising the likelihood function of the data set, given a distributional assumption. Moreo ver, due to the close relationship between the extended recursiv e least Moreo ver, due to the close relationship between the extended recursiv e least \textbf{R}(k-1) - I initialized the Sequential Least Squares with the first 5 samples and then the animation shows its performance for each additional sample given. is small positive constant. A systolic array for performing recursive least-squares minimization is described. \textbf{R}(k-1) - Computer exercise 5: Recursive Least Squares (RLS) This computer exercise deals with the RLS algorithm. More importantly, recursive least squares forms the update step of the linear Kalman filter. \frac{\textbf{R}(k-1)\textbf{x}(k) \textbf{x}(k)^{T} \textbf{R}(k-1)} 412-421), Computer Experiment on View source: R/lin_rls.R. the recursive least squares algorithm which performs noise cancellation with multiple inputs, but a single 'desired output'. A clear exposition on the mechanics of the matter and the relation with recursive stochastic algortihms can be found in ch. Here is the intuition: Let's say you want to optimize least squares over a single parameter. FilterRLS (n) where the n is amount of filter inputs (size of input vector). Ask Question Asked 3 years, 5 months ago. They also provide insight into the development of many non-linear algorithms. To associate your repository with the You can request repair, schedule calibration, or get technical support. Introduction. Section 2 describes … Tagged Pages: recursive least square source code, recursive least squares c code, Popular Searches: uart vhdl recursive running sum , code for least mean square algorithm using c , advantages and disadvantages of least mean square , recursive least square matlab code , least mean square adaptive filter ppt pdf , application of least mean square ppt , a saminor topic chi square distribution , 36, No. This section shows how to recursively compute the weighted least squares estimate. I have the basic RLS algorithm working with multiple components, but it's too inefficient and memory intensive for my purpose. 2012. $$\textbf{w}$$ is vector of filter adaptive parameters and Learn more, A compact realtime embedded Attitude and Heading Reference System (AHRS) using Recursive Least Squares (RLS) for magnetometer calibration and EKF/UKF for sensor fusion on Arduino platform, Adaptable generative prediction using recursive least square algorithm, Hopfield NN, Perceptron, MLP, Complex-valued MLP, SGD RMSProp, DRAW, Classical adaptive linear filters in Julia, Remote repository for the INFO-H-515 Big data project (phase 2), Lectures notes for the basics of adaptive filtering, using rls to estimate the system : y =exp(-x). The source code and files included in this project are listed in the project files section, please make sure whether the listed source code meet your needs there. We use essential cookies to perform essential website functions, e.g. Recursive-Least-Squares-Algorithm-With-Kafka-And-Spark. Computer exercise 5: Recursive Least Squares (RLS) This computer exercise deals with the RLS algorithm. recursive least squares 递推最小二乘法. mu : forgetting factor (float). Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking CVPR 2020 • Jin Gao • Weiming Hu • Yan Lu 1 Introduction The celebrated recursive least-squares (RLS) algorithm (e.g. The example applica-tion is adaptive channel equalization, which has been introduced in compu-ter exercise 2. Notes-----Recursive least squares (RLS) corresponds to expanding window ordinary least squares (OLS). Add a description, image, and links to the It is introduced to give exponentially i want to use Resursive least squares to solve a problem like below y(k) + a1y(k-1)+a2y(k-2) = b2u(k-1)+b2u(k-2) + e(k) where theta = [a1 a2 b1 b2]; actual values are theta = [-1.5 0.7 1 0.5]; e(k) is white noise with distribution N(0,1). less weight to older error samples. Below is the syntax highlighted version of RecursiveSquares.java from §2.3 Recursion. Adaptive Filters. $$\textbf{R}(0) = \frac{1}{\delta} \textbf{I}$$. 285-291, (edition 3: chapter 9.7, pp. topic, visit your repo's landing page and select "manage topics.". References-----.. [*] Durbin, James, and Siem Jan Koopman. they're used to gather information about the pages you visit and how many clicks you need to accomplish a task. For more information, see our Privacy Statement. )\), $$\textbf{R}(0) = \frac{1}{\delta} \textbf{I}$$, # these two function supplement your online measurment, # do the important stuff with prediction output, padasip.filters.base_filter.AdaptiveFilter. "The kernel recursive least-squares algorithm", IEEE Transactions on Signal Processing, volume 52, no. $$y(k) = \textbf{x}^T(k) \textbf{w}(k)$$, where $$k$$ is discrete time index, $$(. Posts: 0 Threads: 0 Joined: Jul 2009 Reputation: 0 #1. 8, pages 2275-2285, 2004. recursive-least-squares It is highly efficient and iterative solvers converge very rapidly. To be general, every measurement is now an m-vector with values yielded by, … Least squares is a special form of a technique called maximum likelihood which is one the most valuable techniques used for fitting statistical distributions. Code Issues Pull requests A compact realtime embedded Attitude and Heading Reference System (AHRS) using Recursive Least Squares (RLS) for magnetometer calibration and EKF/UKF for sensor fusion on Arduino platform . constant values or the calculated values "predictions". The following Matlab project contains the source code and Matlab examples used for recursive least squares filter. The Lattice Recursive Least Squares adaptive filter is related to the standard RLS except that it requires fewer arithmetic operations (order N). RLS-RTMDNet. In your upcoming graded assessment, you'll get some hands on experience using recursive least squares to determine a voltage value from a series of measurements. This is _not_ the standard RLS filter you will see in the literature, but what some refer to as the `data matrix form.' It is usually chosen Use a recursive least squares (RLS) filter to identify an unknown system modeled with a lowpass FIR filter. Find more on Program to implement the least square method Or get search suggestion and latest updates. Code and raw result files of our CVPR2020 oral paper "Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking"Created by Jin Gao. Sliding-Window Kernel Recursive Least-Squares (SW-KRLS), as proposed in S. Van Vaerenbergh, J. the diagonal of the Hessian.) least squares in RKHS, such as kernel recursive least squares (KRLS) [6], [7], [8] and sliding-window KRLS (SW-KRLS) [9]. and desired value \(d(k)$$ as follows, The $$\textbf{R}(k)$$ is inverse of autocorrelation matrix The forgetting factor $$\mu$$ should be in range from 0 to 1. Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one? they're used to log you in. Therefore, numerous modifications of the … ... // read in an integer command-line argument n and plot an order n recursive // squares pattern public static void main (String [] args) {int n = Integer. Linear models are the simplest non-trivial approximations to a complicated non-linear system. $$\textbf{R}(k) = \frac{1}{\mu}( Code Examples; Popular Software Downloads. {\mu + \textbf{x}(k)^{T}\textbf{R}(k-1)\textbf{x}(k)} between 0.98 and 1. eps : initialisation value (float). To summarize, the recursive least squares algorithm lets us produce a running estimate of a parameter without having to have the entire batch of measurements at hand and recursive least squares is a recursive linear estimator that minimizes the variance of the parameters at the current time. Cite As Mohamed Elmezain (2020). Via, and I. Santamaria. The library implements several recursive estimation methods: Least Squares Method, Recursive Leaky Incremental Estimation, Damped Least Squares, Adaptive Control with … )^T$$ denotes the transposition, Kernel Recursive Least-Squares (KRLS) algorithm with approximate linear dependency criterion, as proposed in Y. Engel, S. Mannor, and R. Meir. 1 Introduction The celebrated recursive least-squares (RLS) algorithm (e.g. Well, there was a typo in the reference book! The Digital Signal Processing Handbook, pages 21–1, 1998. This will require a matrix library as well for whatever is needed (transpose, inverse , etc.). For example, obj(x) becomes step(obj,x). 04-15-2017, 09:23 PM . The RLS adaptive filter may be described as. Complexity of recursive least squares (RLS) algorithm. This is a compact realtime embedded Inertial Measurement System (IMU) based Attitude and Heading Reference System (AHRS) using Recursive Least Squares (RLS) for magnetometer calibration, and EKF/UKF for sensor fusion for Arduino platform. and it is calculated as follows. Where I click to download the code of Recursive Least Squares Filter. Rows are samples, Generalized Normalized Gradient Descent (GNGD), Normalized Sign-sign Least-mean-squares (NSSLMS). Recursive Least Squares and similar algorithms. open_system('iddemo_engine/trpm') Estimation Model. [16, 14, 25]) is a popular and practical algorithm used extensively in signal processing, communications and control. The Recursive Least Squares filter [1] can be created as follows. Thanks Ryan, I guess there is a typo in line 65, instead of y = n(m:-1:m-p+1); it should be y = n(m:-1:m-p+1)'; kind regards, kuanfu. Learn About Live Editor. Ali H Sayed and Thomas Kailath. )\). Learn more, We use analytics cookies to understand how you use our websites so we can make them better, e.g. Recursive least squares can be considered as a popular tool in many applications of adaptive filtering , , mainly due to the fast convergence rate.RLS algorithms employ Newton search directions and hence they offer faster convergence relative to the algorithms that employ the steepest-descent directions. But in a lot of cases it works only with values close to 1 }$$is the most recent sample. You use online recursive least squares to detect the inertia change. Category: MATLAB,RLS,algorthim All: Download: my_RLS.rar Size: 367.79 kB; FavoriteFavorite Preview code View comments: Description. Reyhan. Deriving the recursive least squares algorithm starting from the recursive least squares expression for batch processing. It is usually chosen 285-291, (edition 3: chapter 9.7, pp. Hot Network Questions How much should retail investors spend on financial data subscriptions? $$y(k) = w_1 \cdot x_{1}(k) + ... + w_n \cdot x_{n}(k)$$. Recursive Least Square Filter (Adaptive module) Create a FIR Filter from a Template (EQ module) RIAA correction curves; Performance on the IIR SIMD filters; I’ve started working on adaptive filtering a long time ago, but could never figure out why my simple implementation of the RLS algorithm failed. A description can be found in Haykin, edition 4, chapter 5.7, pp. I pass in the data points and the degree of polynomal (2nd order, 3rd order, etc) and it returns either the C0, C1, C2 etc. 4 Recursive Least Squares and Multi-innovation Stochastic Gradient Parameter Estimation Methods for Signal Modeling [e,w]=RLSFilterIt(n,x,fs) is an implementation of the RLS filter for noise reduction. LabVIEW. icrohit 2016-12-11 08:00:42 : View(s): Download(s): 0: Point (s): 1 Rate: 0.0. the recursive least squares algorithm which performs noise cancellation with multiple inputs, but a single 'desired output'. The Recursive Least Squares filter can be created as follows >>> import padasip as pa >>> pa. filters. Traductions en contexte de "RECURSIVE LEAST SQUARES (RLS" en anglais-français avec Reverso Context : APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT PROVIDING ITERATIVE RECURSIVE LEAST SQUARES (RLS) ALGORITHM FOR CODED MIMO SYSTEMS array with initial weights (1 dimensional array) of filter size. Open a service request; All support resources. between 0.1 and 1. w : initial weights of filter. I have the basic RLS algorithm working with multiple components, but it's too inefficient and … We use optional third-party analytics cookies to understand how you use GitHub.com so we can build better products. [16, 14, 25]) is a popular and practical algorithm used extensively in signal processing, communications and control. RecursiveSquares.java. RLS-RTMDNet is dedicated to improving online tracking part of RT-MDNet (project page and paper) based on our proposed recursive least-squares estimator-aided online learning method. Content of this page: Algorithm Explanation; Stability and Optimal Performance; Minimal Working Examples ; References; Code Explanation; See also. The analytical solution for the minimum (least squares) estimate is pk, bk are functions of the number of samples This is the non-sequential form or non-recursive form 1 2 * 1 1 ˆ k k k i i i i i pk bk a x x y − − − = ∑ ∑ Simple Example (2) 4 {\mu + \textbf{x}(k)^{T}\textbf{R}(k-1)\textbf{x}(k)} The fastest, most efficient way to solve least squares, as far as I am aware, is to subtract (the gradient)/(the 2nd order gradient) from your parameter vector. The primary implementation is a (more or less) direct extension of the batch method for ordinary least squares. ... Matlab: How to fix Least Mean square algorithm code. ©2016, Matous C. $$\Delta \textbf{w}(k) = \textbf{R}(k) \textbf{x}(k) e(k)$$, where $$e(k)$$ is error and it is estimated according to filter output We use optional third-party analytics cookies to understand how you use GitHub.com so we can build better products. least squares in RKHS, such as kernel recursive least squares (KRLS) [6], [7], [8] and sliding-window KRLS (SW-KRLS) [9]. used for recursive parameter estimation of linear dynamic models ARX, ARMAX and OE. 1. The backward prediction case is$${\displaystyle d(k)=x(k-i-1)\,\! Request Support. Home Browse by Title Periodicals Circuits, Systems, and Signal Processing Vol. It's not using Eigen (small source code - … Description. Note: If you are using R2016a or an earlier release, replace each call to the object with the equivalent step syntax.
|
2021-07-27 15:19:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5424588322639465, "perplexity": 1667.8537568604563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00602.warc.gz"}
|
https://brilliant.org/problems/are-you-good-at-manipulating/
|
Are you good at manipulating?
Algebra Level 4
Let $$p$$, $$q$$ and $$r$$ be the zeros of the polynomial $${ x }^{ 3 }+5{ x }^{ 2 }-4x+3$$.
Find the value of $${ p }^{ 2 }q+{ p }^{ 2 }r+{ q }^{ 2 }p+r^{ 2 }p+{ q }^{ 2 }r+r^{ 2 }q$$.
×
Problem Loading...
Note Loading...
Set Loading...
|
2018-12-13 00:43:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6288700699806213, "perplexity": 1898.9217989881327}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824180.12/warc/CC-MAIN-20181212225044-20181213010544-00126.warc.gz"}
|
https://mpm.spbstu.ru/article/2020.76.1/
|
# Nanocomposites polymer/graphene stiffness Dependence on the nanofiller structure: the fractal model
Авторы:
Аннотация:
The dependence of the elastic modulus of the nanofiller for polymer/graphene nanocomposites on the structure of graphene aggregates has been shown. It is established that this structure is defined by the dimension of Euclidean space, in which these aggregates are formed. The indicated structure is most accurately characterized by its fractal dimension.
|
2022-05-21 17:57:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8819841146469116, "perplexity": 1305.0240310522204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00469.warc.gz"}
|
http://www.tutorsville.net/physics-formulas/power_formula.php
|
# Power Formula
It is clear that Energy is the capacity to do work. The Energy consumed to do work in unit time is called Power. It is denoted by P.
Power Formula is given by
$P=\frac{E}{t}$
or
$P=\frac{W}{t}$
Where E = Energy Consumed to do work
W = Work done and
t = Time taken.
In any electrical circuit, the power is calculated using the 3 formulas
In terms of Voltage and current it is given as
$P=V\times I$
In terms of current and resistance it is given as
$P=I^2 R$
In terms of voltage and resistance it is given as
$P=\frac{V^2}{R}$
Where V = voltage applied across the two ends,
I = Current flowing in the circuit and
R = Resistance.
Power formula is used to calculate the Power, Voltage, current or resistance in any electrical circuit. It is expressed in watts.
Chemistry Formulas Math Formulas
More Physics Formulas
Density formula Displacement formula Exponential decay formula Force formula Frequency formula Gravity formula Kinetic energy formula Potential energy formula Power formula Work formula Wavelength formula
|
2019-03-26 10:53:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347549080848694, "perplexity": 946.008151106444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204969.39/warc/CC-MAIN-20190326095131-20190326121131-00317.warc.gz"}
|
https://vru.vibrationresearch.com/lesson/introduction-sampling/
|
# Introduction to Sampling
July 3, 2019
Sampling is a mathematical operation that transforms, or maps, a continuous waveform into a sequence of numbers. Typically, this involves taking measurements of the waveform at regular intervals and then structuring these measurements as a sequence of values.
Data sampling can be carried out using statistics or digital signal processing (DSP). Both yield useful insights. Statistical sampling provides a better understanding of the fundamental nature of the observed process. DSP transforms data into a sequence that can be further transformed by mathematical operations—e.g., filters.
### Statistics
In probability and statistics, a signal is a random variable defined in a probability space. A probability space consists of a sample space (𝛺), an event space, and a probability function. The signal determines the event that will occur in the observation space.
Figure 1.1. The probability space and the observation space.
If we know the probability (P) of an event, we can estimate what we will observe during testing. This means the probability space predicts the observation space (see Figure 1.1.) However, if we don’t know the probability space, we can sample the observations and then infer what the probability space is likely to be.
Probability uses the probability space to determine the likely observation space. Statistics determine the likely probability space by using sample maps from the observation space.
#### Examples in Vibration Testing
Examples of estimated statistics based on collected samples are probability density, autocorrelation, cross-correlation, auto-spectral density, cross-spectral density, mean, and variance. In general, these quantities are not known, nor can they be. They can only be estimated from the samples provided by the mathematical operation of sampling.
### DSP
DSP uses operations that transform one sequence in a mathematical space into another sequence in the same space. In many real-world systems, DSP begins with sampling.
#### Examples in Vibration Testing
Examples of DSP operations on sequences include filtering, windowing, and sample rate conversion.
Figure 1.2. Sampling a waveform.
|
2022-10-01 20:26:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9088099598884583, "perplexity": 802.1765552010893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00696.warc.gz"}
|
http://umj-old.imath.kiev.ua/article/?lang=en&article=4464
|
2019
Том 71
№ 11
# On theg-convergence of nonlinear elliptic operators related to the dirichlet problem in variable domains
Kovalevskii A. A.
Abstract
A notion of $G$-convergence of operators $A_s :\; W_s \rightarrow W_s^*$ to the operator $A:\; W \rightarrow W^*$ is introduced and studied under certain connection conditions for the Banach spaces $W_s,\; s = 1, 2, ... ,$ and the Banach space $W$. It has been established that the connection conditions for abstract space are satisfied by the Sobolev spaces $\overset{\circ}{W}^{k, m}(\Omega_s),\quad \overset{\circ}{W}^{k, m}(\Omega)$ ($\{\Omega_s\}$ is a sequence of perforated domains contained in a bounded domain $\Omega \subset \mathbb{R}^n$). Hence, the results obtained for abstract operators can be applied to the operators of Dirichlet problems in the domains $\Omega_s$.
English version (Springer): Ukrainian Mathematical Journal 45 (1993), no. 7, pp 1049-1065.
Citation Example: Kovalevskii A. A. On theg-convergence of nonlinear elliptic operators related to the dirichlet problem in variable domains // Ukr. Mat. Zh. - 1993. - 45, № 7. - pp. 948–962.
Full text
|
2021-01-17 13:43:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.550722062587738, "perplexity": 483.46823385917446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703512342.19/warc/CC-MAIN-20210117112618-20210117142618-00581.warc.gz"}
|
https://zbmath.org/?q=an%3A1185.05079
|
zbMATH — the first resource for mathematics
Cayley graphs on the symmetric group generated by initial reversals have unit spectral gap. (English) Zbl 1185.05079
Summary: In a recent paper Gunnells, Scott and Walden have determined the complete spectrum of the Schreier graph on the symmetric group corresponding to the Young subgroup $$S_{n - 2} \times S_2$$ and generated by initial reversals. In particular they find that the first nonzero eigenvalue, or spectral gap, of the Laplacian is always 1, and report that “empirical evidence” suggests that this also holds for the corresponding Cayley graph. We provide a simple proof of this last assertion, based on the decomposition of the Laplacian of Cayley graphs, into a direct sum of irreducible representation matrices of the symmetric group.
MSC:
05C25 Graphs and abstract algebra (groups, rings, fields, etc.) 05C50 Graphs and linear algebra (matrices, eigenvalues, etc.)
Full Text:
|
2021-06-16 18:14:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5067698955535889, "perplexity": 527.8505921138118}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487625967.33/warc/CC-MAIN-20210616155529-20210616185529-00336.warc.gz"}
|
https://www.gamedev.net/forums/topic/292619-creating-a-fully-functional-console/
|
# OpenGL creating a fully-functional console?
This topic is 5068 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
This has been puzzling me for a few days: I started to work on a console for a small game I am making, and it is almost finished, except for one last main ingredient: User input. I am using FreeType 2 Font library to output opengl text, and I am trying to figure out how get the entire "typing effect" going on, and of course to translate text upward by lets say.. 2-3 units. I am okay with translating it, but I am not sure how to get the user to input her-his text. Here's a few things I thought about, yet I am unsure if they will work ( I had some troubles converting char to const char.. gave me errors ) Every time the user opens up the console (tilde), a function is passed on that creates a 'new' string of text to be typed in, and declared under an array like 'newtext' and for every enter, i+1 (so that no post text is modified). Once the user presses Enter (return), that text is stored in another array called 'oldtext[t]' and every new enter creates +1, so theres space, and every time the user presses enter, oldtext (which i guess can be made by using one freetype function every time?) is translated by 2-3 units upward. Any thoughts, comments, or source-codes would be much helpful :)
##### Share on other sites
Ah, I recently wrote a console system for a Final Project I did to complete college, and over christmas I rewrote it. The way I did it before was completely quake style, as i allocated a huge char array of 32kb. and used some variables to calculate a row/character offset into the big array for the 'oldtext' as you have it. and the 'newtext' was an array[32][150] so i could have a command history to scroll through.
The new way I'm doing it is using std::sting and std::vector. The way I handle input is I have a function that accepts a interget as what key was pressed and I send that through a switch to see what do to with the key, like open, scroll, backspace, etc and the default state is to add it to the typing line. Here is what i do to add it to my current line
if (m_Open && isprint(c) && m_EditLinePos < (m_LineWidth - 1)){ m_CLIHistory[m_EditLine].insert(m_EditLinePos, 1, c); m_EditLinePos++; m_CLIHistory[m_EditLine].insert(m_EditLinePos, 1, 0);}
So that will only add the character if it is a printable character and it will fit on the current line. When the user hits 'enter' in my consle I take that index into the m_CLIHistory and use my AddLine() function which adds the passed in text to the 'oldtext'. for printing all my old text i just loop through my array of text each time decrementing my yoffset by a set amount, this being the height of the font I use. Here is a stripped down version of how i do it, and it works just fine. the iter, and end are just std::vector<std::string>::iterator's.
for (; iter != end; iter++, yOffset -=12){ // if the text has gone totally off screen stop if (yOffset < -12) break; m_BitmapFont->Print(xPos, yOffset, (*iter).c_str());}
Hope this helps ya out some.
1. 1
Rutin
69
2. 2
3. 3
4. 4
5. 5
• 21
• 10
• 33
• 20
• 9
• ### Forum Statistics
• Total Topics
633421
• Total Posts
3011797
• ### Who's Online (See full list)
There are no registered users currently online
×
|
2018-11-21 11:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17473334074020386, "perplexity": 2625.9477164402547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748315.98/warc/CC-MAIN-20181121112832-20181121134832-00224.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-17-line-and-surface-integrals-17-1-vector-fields-preliminary-questions-page-918/2
|
## Calculus (3rd Edition)
$\lt x,x\gt$
A non-constant vector parallel to $\lt 1,1 \gt$ is (for example) $\lt x,x\gt$. See the figure below.
|
2020-06-05 10:51:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9845454096794128, "perplexity": 967.9643576450567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00086.warc.gz"}
|
https://www.physicsforums.com/insights/abstract-algebra-natural-numbers/
|
# Is Zero a Natural Number?
Using: Anderson-Feil Chapter 1.1
### Is zero a natural number?
This is a pretty controversial question. Many mathematicians – especially those working in foundational areas – say yes. Another good deal of mathematicians say no. It’s not really an important question, since it is essentially just a definition and it matters very little either way. I will follow the convention that ##0## is a natural number – contrary to Anderson and Feil’s convention.
#### Peano axioms
Anderson and Feil describe the natural numbers intuitively and assume that you are familiar with them. But they don’t go much into detail into how to make the natural numbers rigorous. A famous axiom system by Peano describes axioms that the natural numbers should satisfy. It makes it possible to give a completely rigorous account of the natural number system.
Basically, a Peano system consist of a set ##N##, a function ##s:N\rightarrow N## (the successor function) and an element ##0\in N##. These must satisfy the following three properties:
# There is no ##n\in N## such that ##s(n) = 0##.
# The function ##s## is injective.
# If ##G\subseteq N## is a set such that ##1\in G## and such that ##s(n)\in G## whenever ##n\in G##, then ##G=N##.
It is clear intuitively that the natural numbers satisfy these axioms. But perhaps more axioms should be needed to completely specify the natural numbers? We will see soon that this is not the case.
#### Operations in the Peano system
We fix a Peano system ##\mathbb{N}## with successor function ##s## and distinguished element ##0##. Using our axiomatic definition of the natural numbers as a Peano system, we can now define addition, multiplication and exponentiation as follows:
We define ##+## recursively as the operation such that
# ##n+0 = n##
# ##n + s(m) = s(n+m)##
If we define ##1 = s(0)##, then we can prove easily that ##s(n) = n+1##, so our successor function does pick out the next element. Here are some things that can be proven about addition:
# For all ##n,m,k \in \mathbb{N}##, we have ##n+(m+k) = (n+m) + k##.
# For all ##n,m\in \mathbb{N}##, we have ##n+m = m+n##.
# For all ##n\in \mathbb{N}##, we have ##n+0 = 0+n = n##.
# For all ##n,m,k\in \mathbb{N}##, we have that ##n+k = m+k## if and only if ##n=m##.
For the proofs, try them as an exercise, or read the first few sections of Bloch’s “The real numbers and real analysis”“.
We can define multiplication ##\cdot## recursively as the operation such that
# ##n\cdot 0 = 0##
# ##n\cdot s(m) = n\cdot m + n##
We can prove the following:
# For all ##n,m,k\in \mathbb{N}##, we have that ##n\cdot (m\cdot k) = (n\cdot m)\cdot k##.
# For all ##n,m,k\in \mathbb{N}##, we have that ##n\cdot m = m\cdot n##.
# For all ##n\in \mathbb{N}##, we have that ##n\cdot 1 = n = 1\cdot n##.
# For all ##n,m,k\in \mathbb{N}## such that ##k\neq 0##, we have that ##n\cdot k = m\cdot k## if and only if ##n=m##.
# For all ##n,m\in \mathbb{N}##, we have that ##n\cdot m = 0## if and only if ##n=0## or ##m=0## (or both).
# For all ##n,m,k\in \mathbb{N}##, we have that ##n\cdot (m+k) = n\cdot m + n\cdot k## and ##(m+k)\cdot n = m\cdot n + k\cdot n##.
# For all ##n,m\in \mathbb{N}##, we have that ##n\cdot m = 1## if and only if ##n=m=1##.
Again, prove this as an exercise or see Bloch.
Finally, we can define recursively exponentiation as
# ##n^0 = 1##
# ##n^{s(m)} = n^m \cdot n##
We can prove the following
# For all ##n\in \mathbb{N}##, we have that ##n^1 = n##.
# For all ##n,m,k\in \mathbb{N}##, we have that ##n^{m+k} = n^m \cdot n^k##.
# For all ##n,m,k\in \mathbb{N}##, we have that ##n^{m\cdot k} = (n^m)^k = (n^k)^m##.
# For all ##n,m,k \in \mathbb{N}##, we have that ##(n\cdot m)^k = n^k\cdot m^k##.
Try to prove this yourself.
We can also define an ordering ##\leq## as ##n\leq m## if and only if there is some ##k## such that ##n+k = m##. It is then possible to prove:
# For all ##n\in \mathbb{N}##, we have ##n\leq n##.
# For all ##n,m\in \mathbb{N}##, we have that ##n\leq m## and ##m\leq n## implies that ##n=m##.
# For all ##n,m,k\in \mathbb{N}##, we have that ##n\leq m## and ##m\leq k## implies ##n\leq k##.
# For all ##n,m\in \mathbb{N}##, we have that either ##n\leq m## or ##m\leq n## (or both, but this only happens if and only if ##n=m##.
# For all ##n,m,k,l\in \mathbb{N}##, we have that ##n\leq k## and ##m\leq l## implies that ##n+m\leq k+l##.
# For all ##n,m,k,l\in \mathbb{N}##, we have that ##n\leq k## and ##m\leq l## implies that ##n\cdot m \leq k\cdot l##.
# For all ##n,m\in \mathbb{N}##, we have that ##n\leq m\leq n+1## implies that ##n=m## or ##n+1 = m##.
The ordering ##<## can then be defined as ##n<m## if and only if ##n\leq m## is true and ##n=m## is not true. This satisfies similar properties as above: formulate and prove them.
Substraction can also be defined: if ##n\leq m##, then we define ##m-n## as the number such that ##n+(m-n) = m##. We can prove that ##m-n## is well-defined (that is: there is exactly one number ##k## such that ##n+k = m##).
#### Categoricity of the natural numbers
We have asserted that the Peano system identifies the natural numbers exactly. It should be obvious intuitively that the natural numbers satisfy the Peano axioms. But aren’t there any axioms missing? Another way to ask this is: sure, the natural numbers satisfy the Peano axioms, but are there other structures that satisfy the Peano axioms too?
The answer is no. There is essentially only one structure that satisfies the Peano axioms. Why the word “essentially”? Because pure uniqueness is something that we can never get out of an axiomatic treatment. For example, ##\mathbb{N}## satisfies the Peano axioms. But also ##\{0,2,4,6,8,…\}## satisfies them if we let our successor function be ##s(n) = n+2##. But this second example is just a renaming of our first example, it is not essentially different.
Formally, if ##N## is a Peano system with successor function ##s## and distinguished element ##0##, and if ##M## is a Peano system with successor function ##t## and distinguished element ##a##, then there is a unique bijection ##f:N\rightarrow M## such that ##f(s(n)) = t(f(n))## for all ##n\in N## and such that ##f(0) = a##. This function intuitively does the renaming in the sense that ##f(n)## is just another name for ##n##.
Some people are confused that ##\{0,2,4,6,8,…\}## is also a model for the Peano axioms, while surely it’s not the natural numbers that we know. This is true, but it does behave exactly like our real numbers in the exact same way. The main differences between the set ##\{0,2,4,6,8,…\}## as Peano system and as set that we usually know, is that as we usually know it, we have ##2\cdot 2 = 4##, while as a Peano system, we have
$$2\cdot 2 = 2\cdot s(0) = 2\cdot 0 + 2 = 0 + 2 = 2.$$
So if we see ##\{0,2,4,6,8,…\}## as a Peano system, then the Peano operations are completely different from the one we are used to. Indeed, the operation is just like we see ##2## as another name for ##1##. Imagine a parallel universe where one would be denoted as ##2##, where two would be denoted as ##4##, and so on. It is exactly that universe that the Peano system ##\{0,2,4,6,8,…\}## describes! People in that parallel universe, the people would find our system of ##0,1,2,3,4,…## very weird!
#### Independence of the axioms
It is a useful mathematical exercise to show that the three Peano axioms are independent. This means that if we assume two of the axioms to be true, then it is impossible to prove the third. Here is how to do this in this case:
1) Assume that the first two axioms are true, then the induction axiom is not necessarily true.
A good example of such a structure is ##[1,+\infty)##, with ##s(x) = x+1##. Then the function ##s## is clearly injective and there is no ##x## such that ##s(x) = 1##. The induction theorem is false. Indeed, consider ##\mathbb{N} = \{1,2,3,…\}##. This is a subset of our structure, and it satisfies that ##1\in \mathbb{N}## and if ##n\in \mathbb{N}## then ##n+1\in \mathbb{N}##. But our set isn’t entire ##[1,+\infty)##.
In this sense, we can see the induction axiom as an axiom delimiting the size of our system. So the induction hypothesis might be seen as an axiom saying that our system is the smallest possible satisfying axiom (1) and (2).
2) Assume that the first axiom and the induction are true, then the second axiom is not necessarily true.
The idea here is to take ##X = \{1,2,…,n\}## with a somewhat weird ##s## function. We define ##s(1) = 2##, ##s(2) = 3##, etc. until ##s(n-1) = n##. The only difference is that we also define ##s(n) = 2##. Then the first axiom is clearly satisfied and the induction hypothesis can be satisfied, but ##s## is not injective since ##s(n) = s(1) = 2##.
3) Assume that the second axiom and the induction axiom is true, then the first axiom is not necessarily true.
The idea is to take ##X = \{1,…,n\}## and to take ##s(1) = 2##, ##s(2) = 3##, etc. until ##s(n-1) = n##. The only difference with the previous example is by taking ##s(n) = 1##. This gives some kind of cyclic structure.
This example is actually very useful mathematically since it can be used to describe divisibility. A lot of useful theorems of natural numbers actually can be carried over to this context. This yields the example of cyclic groups and rings.
#### Construction of the natural numbers
It is possible to construct a Peano system by using only sets. Note that we take a Peano system in the sense of the above definition, where the “first element” is ##0##.
The idea is that
##0 = \emptyset,~1 = \{\emptyset\},~2 = \{\emptyset,\{\emptyset\}\},…##
In general, we define ##0=\emptyset##, and if ##n## is defined, then we define ##n+1 = n\cup \{n\}##.
More formally, we call a set ##X## to be inductive if ##\emptyset \in X## and if for all ##x\in X##, we have that ##x\cup\{x\}## is an element of ##X##. It is an axiom of set theory that an inductive set exists. The set of natural numbers can then be defined as the smallest inductive set, or:
##\mathbb{N}=\bigcap\{X~\vert~X~\text{is inductive}\}.##
We can check that this set ##\mathbb{N}## satisfies the Peano axioms with ##0 = \emptyset## and ##s(x) = x\cup \{x\}##. Let us do that:
1) There is no ##n\in \mathbb{N}## such that ##s(n) = 0##.
Indeed, if such an ##n## exists, then we would have ##n\cup \{n\} = \emptyset##. But then ##n\in \emptyset## which is impossible.
2) The function ##s## is injective.
First we prove that every ##n\in \mathbb{N}## is transitive. This means that if ##m\in n##, then ##m\subseteq n##. Indeed, let ##X\subseteq \mathbb{N}## be the set of all elements ##n\in \mathbb{N}## that are transitive. We prove that ##X## is inductive, from which follows that ##X=\mathbb{N}##. It is clear that ##\emptyset## is inductive, so ##\emptyset \in X##. Now assume that ##n\in X##. Then ##n## is inductive. Now take ##m\in n\cup \{n\}##, then either ##m = n##, from which follows that ##m\subseteq n\cup \{n\}##. Or we have that ##m\in n##. Since ##n## is inductive, it follows that ##m\subseteq n##. So ##X## is inductive.
Now assume that ##s(n) = s(m)##. Then ##n\cup \{n\} = m\cup \{m\}##. If ##n\neq m##, then we have that ##n\in m## and ##m\in n##. But since ##n## is inductive, it follows that ##m\subseteq n##. So it follows from ##n\in m## that ##n\in n##. We now show that this is impossible for any ##n\in \mathbb{N}##. To do this, we take ##X## to be the set of all ##n\in \mathbb{N}## such that ##n\notin n##. Clearly, ##\emptyset \in X##. Now assume that ##n\notin n##. If ##n\cup \{n\} \in n\cup \{n\}##, then either ##n\cup \{n\}\in n## or ##n\cup \{n\} = n##. We show that both are impossible:
a) If ##n\cup \{n\} \in n##, then by transitivity of ##n## follows that ##n\cup \{n\}\subseteq n##. Then ##n\in n## which was against the assumption.
b) If ##n\cup \{n\} = n##, then ##n\in n##, which from the assumption was again impossible.
So we have prove that if ##n\in X##, then so is ##n\cup \{n\}##. So ##X## is inductive. But then ##X=\mathbb{N}##.
3) Assume that ##G\in \mathbb{N}##. Assume that ##0\in G## and that from ##g\in G## follows that ##s(g)\in G##. Then ##G=\mathbb{N}##.
This follows immediately since ##G## is inductive and ##\mathbb{N}## is the smallest inductive set.
#### What is a natural number?
We have now exhibited two definitions of natural numbers: one as a set that satisfies the Peano axioms, and the other a very weird explicit construction in terms of sets. This last definition is very weird, we had things like ##1 = \{\emptyset\}##. But surely the number ##1## is something different?
This is a philosophical issue. What exactly is the nature of the number ##1## (or any other number) is unknown and debatable. All mathematicians care about is what properties the natural numbers have. It seems very agreeable to say that the natural numbers (whatever they are) should satisfy the Peano axioms. So what mathematicians do is not worry about what natural numbers are exactly, but rather construct a model that behaves exactly like the natural numbers. The benefit of that is we don’t need anything extra outside of set theory in order to talk about natural numbers.
#### Consistency of the Peano axioms
Consistency means that the axioms cannot lead to a contradiction. A contradiction is a statement that can be proven true and false. It is crucial in mathematics that our systems are consistent. For example, consider the following axiom system which is a set ##X## satisfying the following axioms
1) ##X## is nonempty
2) ##X## is empty
Clearly, there is no such axiomatic system, since my axioms are contradictory. So this pretty silly example shows that we cannot just allow anything to be an axiom: for any axiomatic system it has the risk that it is similarly contradictory.
An important question is whether the Peano axioms are consistent. Sadly, it has been shown by Gödel that nobody can show whether the axiomatic system is consistent in itself. Gödel’s result means that we can only talk about the consistency of the Peano axioms relative to another axiomatic system. With that in mind, we can prove that the Peano axioms are consistent within set theory: if set theory is consistent, then so are the Peano axioms. Our construction of the natural numbers above is one way of showing this.
But isn’t the Peano system just a formalized version of the counting numbers? Numbers we use every single day? Yes, but a Peano system also asserts that there are infinitely many such numbers. While this may seem obvious, this is outside the realm of our experience. Surely, the numbers ##1##, ##2##, ##3## behave like we expect, but why should gigantic numbers like ##10^{10^{10}}## exist at all? It is safe to say that such infinite numbers do not exist in our reality. If they don’t exist in our reality, then nothing guarantees that these made-up numbers make sense at all. And nothing guarantees that the collection of these numbers is a consistent whole.
#### Fibonacci numbers
Anderson and Feil define the Fibonacci numbers as those numbers ##F_n## such that
##F_0 = 0,~F_1 = 1~\text{and}~F_{n+2} = F_{n+1} + F_n~\text{if}~n\in \mathbb{N}.##
Subsequently, he proves the rather mysterious formula
##F_n = \frac{(1+ \sqrt{5})^n – (1 – \sqrt{5})^n}{2^n \sqrt{5}}.##
Sadly, he does not indicate how we found this remarkable relation!
I will indicate two ways to find this remarkable relation, one relies on a lucky guess, the other is more general but requires some knowledge of linear algebra.
”’Method 1:”’ What is the lucky guess? Well, the idea is that a general form to such a recursion relation is of the form
##CA^n + DB^n##
for some ##A,B,C,D\in \mathbb{R}##. It is not clear why this should be true (hence the lucky guess), but it does always work for this recursive relation and many similar ones. That it always works, we can show with the linear algebra technique.
Anyway, we guess that ##F_n = CA^n + DB^n## for all ##n##. In particular, this must be true for ##n=0## and ##n=1##. This gives us the constraints that ##0 = C+D## and ##1 = CA + DB##. Because ##C = -D##, the second relation gives us ##1 = D(B – A)##. In particular ##D\neq 0## and ##B\neq A##.
Now, the relation ##F_{n+2} = F_{n+1} + F_n## must also be satisfied. Since we know that ##F_n = D(B^n – A^n)##, we see that
##D(B^{n+2} – A^{n+2}) = D(B^{n+1} – A^{n+1}) + D(B^n – A^n).##
We know that ##D\neq 0##, so dividing by ##D## gives us
##B^{n+2} – B^{n+1} – B^n = A^{n+2} – A^{n+1} – A^n##
for all ##n##. And so
##B^n(B^2 – B -1) = A^n(A^2 – A – 1).##
”’Lemma:”’ For real numbers ##x,y,\alpha,\beta## with ##x## and ##y## nonzero, it holds that that if ##\alpha x^n = \beta y^n## for two consecutive integers ##n##, then either ##x=y## or ##\alpha = \beta=0##.
”’Proof:”’ Since this must be true for some integers ##k## and ##k+1##, we see that ##\alpha x^k = \beta y^k##. It must also be true for ##n=k+1##, so ##\alpha x^{k+1}= \beta y^{k+1}##. We get
##\beta y^{k+1} = \alpha x^{k+1} = \alpha x^k x = \alpha y^k x##
So we get from ##y## being nonzero that
##\beta y = \alpha x.##
”’Case 1:”’We see that if ##\alpha = 0##, then so is ##\beta y = 0##. Since ##y## is nonzero, we get ##\beta = 0##. Thus we get the case ##\alpha = \beta = 0##.
”’Case 2:”’ In the other case, assume that ##\alpha \neq 0##. Since ##y## is nonzero, we get ##\beta = \frac{\alpha x}{y}##. Substituting this into ##\alpha x^n = \beta y^n## yields
##\alpha x^n = \alpha x y^{n-1}##
for all ##n\geq k##. Since ##\alpha\neq 0##, we get ##x^n = x y^{n-1}##. Since ##x## is nonzero, we get ##x^{n-1} = y^{n-1}##. By setting ##n## to be an even number (note that either ##k## or ##k+1## is even), we see that ##n-1## is an odd number. By taking the ##n-1##th root, we get ##x = y## as desired. ”’QED”’
Note that the proof goes through even in the case that ##\alpha x^n = \beta y^n## only for two consecutive natural numbers ##n##.
##B^n(B^2 – B -1) = A^n(A^2 – A – 1).##
which we assume to be true for all natural numbers. From the lemma we get three possible solutions:
:: ”’Case (1)”’ ##B = 0##: In this case, the general solution is ##F_n = CA^n##. Since this must be true for ##n=0##, we get that ##0 = F_0 = C##. But then ##F_n = 0## for all ##n##, this is in contradiction with ##F_1 = 1##.
:: ”’Case (2)”’ ##A = 0##: this is similar as the case above.
:: ”’Case (3)”’ So the final case must be true that ##B^2 – B – 1 = A^2 – A – 1 = 0##. Otherwise put: both ##A## and ##B## are solutions to the quadratic equation ##x^2 – x – 1 = 0##. Notice that from ##D(B-A) = 1## (derived above), we get that ##A\neq B##. So ##A## and ##B## must be the two distinct solutions to ##x^2 – x – 1##. The two solutions are given by ##\frac{1+\sqrt{5}}{2}## and ##\frac{1 – \sqrt{5}}{2}##. We can choose one of these as ##A## and one of these as ##B## (the situation is clearly symmetric since ##F_n = CA^n + DB^n## is equivalent to ##F_n = CB^n + DA^n##). So we get that
##F_n = C\left(\frac{1+\sqrt{5}}{2}\right)^n + D\left(\frac{1-\sqrt{5}}{2}\right)^n.##
:: Again from the relation ##D(B-A) = 1##, we get that
##1 = D(B-A) = D\left(\frac{1 – \sqrt{5}}{2} – \frac{1 + \sqrt{5}}{2}\right) = -\sqrt{5}.##
:: So ##D = -\frac{1}{\sqrt{5}}## and ##C = -D = \frac{1}{\sqrt{5}}##.
:: We end up with
##F_n = \frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^n – \frac{1}{\sqrt{5}}\left(\frac{1-\sqrt{5}}{2}\right)^n.##
:: This is exactly the claimed solution.
”’Method 2:”’ The second solution does not rely on the lucky guess ##F_n = CA^n + DB^n##. It depends on rewriting our equation as matrices. Note that if ##F_{n+2} = F_{n+1} + F_n##, then we can rewrite this as
##\left(\begin{array}{c}F_{n+2}\\ F_{n+1}\end{array}\right)=\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)\left(\begin{array}{c}F_{n+1}\\ F_n\end{array}\right).##
We can rewrite the final column matrix in the same way, we get
##\left(\begin{array}{c}F_{n+2}\\ F_{n+1}\end{array}\right)=\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)^2\left(\begin{array}{c}F_n\\ F_{n-1}\end{array}\right).##
We can do this over and over, eventually, we get to
##\left(\begin{array}{c}F_{n+2}\\ F_{n+1}\end{array}\right)=\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)^{n+1}\left(\begin{array}{c}F_1\\ F_0\end{array}\right) = \left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)^{n+1}\left(\begin{array}{c} 1 \\ 0\end{array}\right).##
We can write this differently by diagonalizing the matrix ##\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)##. The eigenvalues can be found by solving the characteristic equation ##x^2 – x – 1 = 0## (notice that we encountered this in the first solution too). This has as solution ##\frac{1 \pm \sqrt{5}}{2}##. Eigenvalues are easily seen to be given by ##(\frac{1\pm \sqrt{5}}{2}, 1)##.
Using diagonalization, we then can write
##\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right) = \left(\begin{array}{cc} \frac{1 – \sqrt{5}}{2} & \frac{1 + \sqrt{5}}{2}\\ 1 & 1\end{array}\right)\left(\begin{array}{cc} \frac{1 -\sqrt{5}}{2} & 0\\ 0 & \frac{1 + \sqrt{5}}{2} \end{array}\right)\left(\begin{array}{cc} – \frac{1}{\sqrt{5}} & \frac{5 + \sqrt{5}}{10}\\ \frac{1}{\sqrt{5}} & \frac{5 – \sqrt{5}}{10}\end{array}\right)##
Note now that if ##A = XDX^{-1}##, then ##A^2 = XDX^{-1}XDX^{-1} = XD^2 X^{-1}##. In general, we see easily that ##A^n = X D^n X^{-1}##. So we get that
##\begin{eqnarray*}
\left(\begin{array}{c}F_{n+2}\\ F_{n+1}\end{array}\right)
& = & \left(\begin{array}{cc} \frac{1 – \sqrt{5}}{2} & \frac{1 + \sqrt{5}}{2}\\ 1 & 1\end{array}\right)\left(\begin{array}{cc} \frac{1 -\sqrt{5}}{2} & 0\\ 0 & \frac{1 + \sqrt{5}}{2}\end{array}\right)^{n+1}\left(\begin{array}{cc} – \frac{1}{\sqrt{5}} & \frac{5 + \sqrt{5}}{10}\\ \frac{1}{\sqrt{5}} & \frac{5 – \sqrt{5}}{10}\end{array}\right)\left(\begin{array}{c}1 \\ 0 \end{array}\right)\\
& = & \left(\begin{array}{cc} \frac{1 – \sqrt{5}}{2} & \frac{1 + \sqrt{5}}{2}\\ 1 & 1\end{array}\right)\left(\begin{array}{cc} \left(\frac{1 -\sqrt{5}}{2}\right)^{n+1} & 0\\ 0 & \left(\frac{1 + \sqrt{5}}{2}\right)^{n+1} \end{array}\right)\left(\begin{array}{c} – \frac{1}{\sqrt{5}}\\ \frac{1}{\sqrt{5}}\end{array}\right)\\
& = & \left(\begin{array}{cc} \frac{1 – \sqrt{5}}{2} & \frac{1 + \sqrt{5}}{2}\\ 1 & 1\end{array}\right)\left(\begin{array}{c} -\frac{1}{\sqrt{5}}\left(\frac{1 -\sqrt{5}}{2}\right)^{n+1}\\ \frac{1}{\sqrt{5}}\left(\frac{1 + \sqrt{5}}{2}\right)^{n+1}\end{array}\right)\\
& = & \left(\begin{array}{c} \frac{1}{\sqrt{5}}\left(\frac{1 + \sqrt{5}}{2}\right)^{n+2} -\frac{1}{\sqrt{5}}\left(\frac{1 -\sqrt{5}}{2}\right)^{n+2}\\ \frac{1}{\sqrt{5}}\left(\frac{1 + \sqrt{5}}{2}\right)^{n+1} -\frac{1}{\sqrt{5}}\left(\frac{1 -\sqrt{5}}{2}\right)^{n+1}\end{array}\right)
\end{eqnarray*}##
This gets us exactly the solution we wanted.
Exercises:
# Find the general form of the Lucas numbers defined by ##L_0 = 2##, ##L_1 = 1## and ##L_{n+2}= L_{n+1} + L_n##. Use both methods to find the answer.
# Find the general form of the numbers defined by ##A_0 = 2##, ##A_1 = 4## and ##A_{n+2} = 5A_{n+1} + 3A_n##. Use both methods to find the answer.
# Find the general form of the numbers defined by ##B_0 = 0##, ##B_1 = 0##, ##B_2 = 1## and ##B_{n+3} = B_{n+2} + B_{n+1} + B_n##. Adapt both methods to also deal with this more general case.
#### Summation formulas in exercise 1,2,6
In exercise 1,2 and 6, you are asked to provide proofs of several formulas. These formulas all are of the form
##1^k + 2^k + 3^k + 4^k + … + n^k = …##
for some fixed number ##k##. In exercise 1, this fixed number is ##k=1##. In exercise 2, it is ##k=2## and in exercise 3, it is ##k=3##.
Neither the exercise nor the proof actually tells us how to find these formulas in practice! This is one of the bad parts about induction proofs: induction proof shows that formula is completely true, but it doesn’t tell us how we found it. As such, an induction proof is not really useful, because it doesn’t tell us how to generalize the result or how to find related results. This is why many mathematicians try to avoid induction proofs: it’s not that they are invalid, it’s more than non-induction proofs are usually way more informative.
The idea behind a non-induction proof is that we can get the value of ##1^{k+1} + 2^{k+1}+ … n^{k+1}## from ##1^k + 2^k + … + n^k##. In particular, we can get the value of exercise 1 from ##1^0 + 2^0 + … + n^0##, which is easily computed to be ##n##. We can then use the value from exercise 1 to get the value for exercise 2, and we can use the value for exercise 2 to get the value for exercise 3.
I will illustrate the procedure here. I will assume that we already know the formulas
##1 + 1 + … + 1 = n##
##1 + 2 + … + n = \frac{n(n+1)}{2}##
##1^2 + 2^2 + … + n^2 = \frac{n(2n+1)(n+1)}{6}## and I will demonstrate how to compute ##1^3 + 2^3 + … + n^3## without knowing the value of this sum beforehand (like in an induction proof).
The idea is behind the binomial formula ##(a+b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4 ab^3 + b^4## (see exercise 14 on how to find this formula). We can then write
##(m+1)^4 – m^4 = (m^4 + 4m^3 + 6m^2 + 4m + 1) – m^4 = 4m^3 + 6m^2 + 4m + 1##
We can easily compute the following telescoping sum
##\begin{eqnarray*}
\sum_{m=1}^n[(m-1)^4 – m^4]
& = & (2^4 – 1^4) + (3^4 – 2^4) + … + ((n+1)^4 – n^4)\\
& = & (n+1)^4 – 1
\end{eqnarray*}##
On the other hand, we know that
##\begin{eqnarray*}
\sum_{m=1}^n[(m-1)^4 – m^4]
& = & \sum_{m=1}^n [4m^3 + 6m^2 + 4m + 1]\\
& = & 4\sum_{m=1}^n m^3 + 6\sum_{m=1}^n m^2 + 4\sum_{m=1}^n m + \sum_{m=1}^n 1\\
& = & 4\sum_{m=1}^n m^3 + 6\frac{n(2n+1)(n+1)}{6} + 4\frac{n(n+1)}{2} + n
\end{eqnarray*}##
By equating the previous equations and isolating ##\sum_{m=1}^n m^3##, we get
##\begin{eqnarray*}
\sum_{m=1}^n m^3
& = & \frac{1}{4}\left((n+1)^4 – 1 – 6\frac{n(2n+1)(n+1)}{6} – 4\frac{n(n+1)}{2} – n\right)\\
& = & \frac{1}{4}\left((n^4 + 4n^3 + 6n^2 + 4n +1) – 1 – n(2n+1)(n+1) – 2n(n+1) – n\right)\\
& = & \frac{1}{4}\left((n^4 + 4n^3 + 6n^2 + 4n +1) – 1 – (2n^3 + 3n^2 + n) – (2n^2 + 2n) – n\right)\\
& = & \frac{1}{4}\left(n^4 + 2n^3 + n^2 \right)\\
& = & \frac{1}{4} n^2(n+1)^2
\end{eqnarray*}##
which is the formula.
”’Exercises:”’
# Use the same technique to find ##1 + 2 + … + n## by only assuming the (trivial) identity ##1^0 + 2^0 + … + n^0 = n##.
# Use the same technique to find ##1^2 + 2^2 + … + n^2## by only assume the (trivial) identity ##1^0 + 2^0 + … + n^0 = n## and ##1+ 2 +… + n = \frac{n(n+1)}{2}##.
# Find ##1^4 + 2^4 + … + n^4##.
#### Exercise 8
In exercise 8, you are asked to prove a formula of the form
##\frac{1}{1\cdot 2} + \frac{1}{2\cdot 3} + … +\frac{1}{n(n+1)}##
Again, we can ask the question on how to find this formula if we don’t know the answer in advance. The trick here is one that you should know from integration: the technique of partial fractions. What we try to do is to decompose ##\frac{1}{k(k+1)}## in partial fractions. That is, we assume
##\frac{1}{k(k+1)} = \frac{A}{k} + \frac{B}{k+1}.##
This is equivalent with
##\frac{1}{k(k+1)} = \frac{Ak + A}{k(k+1)} + \frac{Bk}{k(k+1)}.##
Thus we get that ##(A+B) = 0## and ##A = 1##. This gets us as solutions ##A=1## and ##B = -1##. So we see that
##\frac{1}{k(k+1)} = \frac{1}{k} – \frac{1}{k+1}.##
Now we get a telescoping sum:
##\begin{eqnarray*}
\frac{1}{1\cdot 2} + \frac{1}{2\cdot 3} + … +\frac{1}{n(n+1)}
& = & \left(\frac{1}{1} – \frac{1}{2}\right)+\left(\frac{1}{2} – \frac{1}{3}\right)+ … + \left(\frac{1}{n} – \frac{1}{n+1}\right)\\
& = & 1 – \frac{1}{n+1}
\end{eqnarray*}##
which is exactly the formula we want.
#### Exercise 14
In Exercise 14, Anderson and Feil introduce the binomial coefficients as ##\binom{n}{k} = \frac{n!}{k!(n-k)!}##. Now, the book doesn’t prove it, and doesn’t really need it, but this binomial coefficient has a very nice interpretation. Namely ##\binom{n}{k}## is the number of subsets with ##k## element in a set with ##n## elements.
Let us try to prove this. We will show it in two ways: one intuitive combinatorial way, and one somewhat more formal way.
”’Intuitive”’ Let ##X## be a set with ##n## elements. In order to select a subset of ##k## elements of ##X##, we can do this by selecting a first element from ##X##, then from the remaining ##n-1## elements, we select a second element, and we continue this process ##k## times.
Selecting the first element can be done in ##n## ways. Selecting the second element can be done in ##n-1## ways. Selecting the ##k##th element can be done in ##n-k+1## ways since there are ##n-k+1## elements left. In total, selecting ##k## elements can be done in ##n(n-1)…(n-k+1)## ways. This is easily seen to equal ##\frac{n!}{(n-k)!}##.
But we are not finished. When selecting the ##k## elements, we have also implicitly ordered them. We must remove this ordering. For example, if we select ##2##-element sets from the set ##\{a,b,c,d\}##. Then I can first select ##a## and then ##b##, but I can also select first ##b## and then select ##a##. This yields the same set, while my method counted them as different sets.
The number of orderings on a set with ##k## elements is ##k!##. So in order to remove the orderings, we have to divide by ##k!##. We get ##\frac{n!}{k!(n-k)!}## subsets with ##k## elements.
”’Formal:”’ Let ##X## be a set with ##n## elements. Let
##\Omega = \{(A,f)~\vert~A\subseteq X~\text{has}~k~\text{elements and}~f~\text{is a bijection between}~A~\text{and}~\{1,…,k\}\}.##
We can count the number of elements in ##\Omega##. To obtain an element in ##\Omega##, we first need to select a subset of ##X## of ##k## elements, and then we need to select a bijection between ##A## and ##\{1,…,k\}##. Selecting a subset of ##X## of size ##k## can happen in ##\binom{n}{k}## elements. There are also ##k!## bijections between this subset ##A## and between ##\{1,…,k\}##. So
##|\Omega| = \binom{n}{k} k!##
On the other hand, an element in ##\Omega## is already completely specified by selecting an injection ##g:\{1,…,k\}\rightarrow X##. Indeed, given such a bijection, we get the following element of ##\Omega##: ##(g(\{1,…,k\}), g^{-1})##. The number of such injections is ##\frac{n!}{(n-k)!}##. Thus we get that
##|\Omega| = \frac{n!}{(n-k)!}##
In particular, we get the formula
##\binom{n}{k} = \frac{n!}{k!(n-k)!}.##
This more formal technique is somewhat more formal than the combinatorial way since we don’t deal with things like “up to reordering”. But more importantly: this formal way also is a great technique to find certain formulas! Try to use this technique to solve the following:
:: 1) Finding the number of injections. Let ##|A| = n## and ##|B| = k##. Assume without loss of generality that ##A\subseteq B##. show that the number of injections ##A\rightarrow B## is given by ##\frac{n!}{(n-k)!}## by considering the set
##\Omega = \{(f,g)~\vert~f~\text{is an injection}~A\rightarrow B~\text{and}~g~\text{is an injection}~B\setminus A\rightarrow B\setminus f(A)\}.##
:: 2) Let ##S## be a set and let ##A_1##, ##A_2##, … ##A_m##, and ##B_1##, ##B_2##, … ##B_n## be two groups of subsets of ##S##. There are ##p## elements in each ##A_i## and each element of ##S## is in exactly ##p_1## sets of the ##A## group. There are ##q## elements in each ##B_i## and each element of ##S## is in exactly ##q_1## sets of the ##B## group. Write ##n## in terms of ##m##, ##p##, ##p_1##, ##q## and ##q_1##.
#### Multinomial theorem
There is also a multinomial theorem:
##(x_1 + … + x_k)^n = \sum \binom{n}{n_1,…,n_k} x^{n_1} … x^{n_k}##
where the sum ranges over the set ##\{(n_1,…,n_k)~\vert~n_i\in \mathbb{N}~\text{and}~n_1 + … + n_k = n\}##, and where the multinomial coefficient is
##\binom{n}{n_1,…,n_k} = \frac{n!}{n_1!…n_k!}.##
”’Exercises:”’
# Prove the multinomial theorem by induction.
# Interpret the multinomial coefficient combinatorially.
# Prove the multinomial theorem combinatorially.
# Prove that ##\binom{n}{k} = \binom{n}{k,n-k}##
# Note that also ##\binom{n}{n-k} = \binom{n}{k,n-k}##. This proves that ##\binom{n}{k} = \binom{n}{n-k}##. Prove this formula also combinatorially.
0 replies
|
2022-11-28 22:29:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385714530944824, "perplexity": 2150.169111791071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00303.warc.gz"}
|
https://electronics.stackexchange.com/questions/266921/how-do-i-transition-from-one-state-to-another-with-d-flip-flops-digital-lock
|
# How do I transition from one state to another with d-flip-flops? (digital lock)
I am currently trying to make a digital lock for an assignment, the code in this example is hardcoded 1-2-3. This is what I have so far:
x0,x1,x2 are the BCD input for the lock. I am trying to make the circuit go from state q0 = 0 and q1 = 0, to state q0 = 1 and q1 = 0, when the input x0 = 1 and x1 = 0 and x2 = 0. I am trying to implement this with dflip flops, but I am having a really hard time, and cannot for the life of me figure out how to do this right. Out is the signal to open the lock. It will only open if q0 = 0, q1 = 1 AND x0 =1 and x1= 1 and x2 = 0.
The circuit behaves like I expect it to, but I can't figure out the memory aspect of it at all. Any help would be greatly appreciated!
Your combinatorial circuit has Q0 and Q1 inputs, representing the current state, and NEXT0 and NEXT1 outputs representing what the next state should be.
It is the role of the FFs to hold the current state, so their outputs should be connected to Q0 and Q1.
You want the next state to become the current state on the next clock edge, so connect NEXT0 and NEXT1 to the inputs of the FFs.
|
2020-07-06 21:18:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2598877549171448, "perplexity": 315.0185263365782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890181.37/warc/CC-MAIN-20200706191400-20200706221400-00117.warc.gz"}
|
https://blog.saeloun.com/2021/03/17/rails-enumerable-maximum-and-minimum
|
Rails 7 adds Enumerable#maximum and Enumerable#minimum to easily calculate the maximum or minimum from extracted elements of an enumerable.
Let’s say we have a model called Product with an attribute price.
Now we need to get the maximum or minimum value of price for an array of products.
### Before
We would get the minimum and maximum value using:
In the case of an ActiveRecord Collection, we could achieve this just by using maximum and minimum directly on the collection as these methods are defined in the ActiveRecord::Calculations library.
### After
Rails 7 added maximum and minimum methods in the Enumerable module. Hence they can be used directly on all enumerable.
So from the above example,
This is a simple enhancement, but it helps a lot when we have to deal with enumerable directly and not ActiveRecord Collection objects.
|
2021-04-11 09:18:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43245992064476013, "perplexity": 953.242603775101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00476.warc.gz"}
|
https://www.albert.io/ie/ap-physics-1-and-2/awesome-particle
|
Free Version
Easy
# Awesome Particle
APPH12-VJI1GK
A new particle, called the Awesome Particle, is discovered to have a net charge of 0. It consists of six quarks.
If up quarks have a charge of $+2/3 e$ and down quarks have a charge of $-1/3 e$, which of the following is the correct composition of the Awesome Particle?
A
3 up and 2 down quarks
B
4 up and 2 down quarks
C
3 up and 3 down quarks
D
2 up and 4 down quarks
|
2017-01-17 19:36:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6128414273262024, "perplexity": 835.1261254544579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://chefcouscous.wordpress.com/2014/04/28/is-induction-cooker-safe/
|
# Is Induction Cooker Safe?
Try to use a test pen to touch the inner wok when heating by the induction cooker. The light on the test pen goes $\text {ON}$.
Although it is safe if you stay beyond 30 cm (1 foot), but then how do you cook by staying so far away from the wok ? For pregnant woman, heart problem old folk, children, stay away from this induction cooker.
http://www.magdahavas.com/is-induction-cooking-safe/
How to prevent the exposure of radiation from the induction cooker (if you have one in the kitchen):
|
2017-02-23 18:43:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18863637745380402, "perplexity": 2111.7445545927644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171209.83/warc/CC-MAIN-20170219104611-00120-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.cut-the-knot.org/m/Arithmetic/Problem4132FromCrux.shtml
|
# Problem 4132 from Crux Mathematicorum
### Solution 1
Case 1: $\mathbf{2a^3+b^2=a^2+b^2}$
In this case, $2a^3=a^2\,$ and, since $a\in\mathbb{Z},\,$ $a=0.\,$ It follows that $2a^3b^2+ab^2+2b^4=3b^4\,$ which is divisible by $b^2=a^2+b^2.$
Case 2: $\mathbf{2a^3+b^2\ne a^2+b^2}$
In this case, $2a^3+b^2)=k(a^2+b^2),\,$ where $k\in\mathbb{Z}\setminus\{1\},\,$ implying $\displaystyle b^2=\frac{a^2(2a-k)}{k-1},\,$ or $\displaystyle a^2+b^2=\frac{a^2(2a-1)}{k-1}.\,$ So, we have
\displaystyle\begin{align} 2a^3b^2+ab^2+3b^4 &= b^2(2a^3+a^3b^2)\\ &=\frac{a^2(2a-k)}{k-1}\left[2a^3+a+\frac{3a^2(2a-k)}{k-1}\right]\\ &=\frac{a^3(2a-k)}{k-1}\cdot\frac{2(k+2)a^2-3ak+k-1}{k-1}\\ &=\frac{a^3(2a-k)}{k-1}\cdot\frac{(2a-1)[(k+2)a-(k-1)]}{k-1}\\ &=\frac{a^2(2a-1)}{k-1}\cdot\frac{(2a-k)[(k+2)a^2-(k-1)a]}{k-1}\\ &=(a^2+b^2)\cdot\frac{(2a-k)[(k+2)a^2-(k-1)a]}{k-1}. \end{align}
### Solution 2
Since $b^2-2ab^2=(2a^3+b^2)-2a(a^2+b^2),\,$ it follows from $(a^2+b^2)|(2a^3+b^2)\,$ that
(1)
$(a^2+b^2)|b^2(1-2a).$
Further, $ab^2+2b^4=ab^2(1-2a)+2b^2(a^2+b^2),\,$ thus, by (1),
(2)
$(a^2+b^2)|(ab^2+2b^4).$
Finally,
$2a^3b^2+ab^2+3b^4 = (ab^2+2b^4)+b^2(2a^3+b^2)$
so that $(a^2+b^2)|(2a^3b^2+ab^2+3b^4)\,$ by (2) and by the hypothesis.
### Acknowledgment
The above problem, with a solution (Solution 1), has been kindly communicated to me by Leo Giugiuc. Solution 2 is by Lorenzo Villa.
|
2018-11-21 19:20:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9943832159042358, "perplexity": 1879.8813877295709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039749562.99/warc/CC-MAIN-20181121173523-20181121195523-00199.warc.gz"}
|
https://wtskills.com/multiply-whole-number-by-fraction/
|
# Multiply Whole Number by Fraction
In this post we will learn how to multiply a whole number by a fraction.
For this topic you should first have clear understanding of concept of whole numbers and fraction.
In this post we will discuss two method of multiplication. Understand both the method and start practicing each of them
## Multiplying Fractions and Whole Numbers
Understand that multiplication is a form of repeated addition
If we have to multiply 2 x 3, we can get answer by adding number 2 three times
⟹ 2 + 2 + 2
⟹ 6
we will use the same technique for multiplication with fraction
Let us understand the method with example
Example 01
Multiply 6 x \frac{1}{3}
To solve this, we add number \frac{1}{3} six times
\Longrightarrow \ \frac{1}{3} +\frac{1}{3} +\frac{1}{3} +\frac{1}{3} +\frac{1}{3} +\frac{1}{3} \\\ \\ \Longrightarrow \ \frac{6}{3}\\\ \\ \Longrightarrow \ \frac{1}{2}\
Hence \frac{1}{2} is the solution
Let us look at another example for conceptual clarity
Example 02
Multiply 4 x \frac{5}{3}
The question can be easily solved by adding \frac{5}{3} four times
\Longrightarrow \ \frac{5}{3} +\frac{5}{3} +\frac{5}{3} +\frac{5}{3}\\\ \\ \Longrightarrow \ \frac{20}{3}\\\ \\
Hence \frac{20}{3} is the solution
I hope you understood this method. Let’s move on to learn method 2.
### Fraction Simplification Method
You can multiply the fraction with whole number using following steps:
Step 01
Represent whole number as fraction by putting denominator 1
Step 02
Multiply the numerators of given numbers
Step 03
Multiply the denominator of given numbers
Step 04
Do the simplification of final number (if required)
Let us understand the multiplication of fraction with whole number using examples
Example 01
8 x \frac{1}{2}
Here whole number 8 is given to you.
We can represent the number 8 in the form of 8 boxes
Now multiply 8 by fraction \frac{1}{2}
To multiply the above number, do the following steps
(a) Represent whole number 8 in form of Fraction by putting denominator 1
⟹ 8 will be written as \frac{8}{1}
(b) write the math expression
\frac{8}{1} \times \frac{1}{2}
(c) Multiply the top numbers (numerator) and bottom number separately (denominator)
\Longrightarrow \frac{8}{1} \times \frac{1}{2}\\\ \\ \Longrightarrow \ \ \frac{8\times 1}{1\times 2}\\\ \\ \Longrightarrow \ \frac{8}{2}
(d) Simplify the expression further
\frac{8}{2} \ =\frac{4}{1}\
Number of boxes left after multiplication with fraction are:
Let us take another example for conceptual clarity
Example 02
Multiply 7 x \frac{3}{28}
Step 01:
Write the whole number as fraction
\frac{7}{1}
Step 02:
Multiply the numerator and denominator separately
\Longrightarrow \ \frac{7}{1} \ \times \frac{3}{28}\\\ \\ \Longrightarrow \ \frac{7\times \ 3}{1\times 28}\\\ \\ \Longrightarrow \ \frac{21}{28}
Step 03:
Further Simplification
Divide Numerator and Denominator by 7 to simplify the number
Hence \frac{3}{4} is the solution of given multiplication
I have explained both the method of multiplication of whole number with fraction.
In my opinion, method 2 is easy and straightforward as it requires less calculation. Try to practice both the method and then move on to solve worksheet questions given below
## Multiply Fractions by Whole Numbers Worksheets
Here we will solve some questions related to this topic.
All the questions are of Grade 5 Level.
Solution for each question is provided for your reference
### Simple Multiplication
A Whole number and a fraction is provided in the question.
You have to multiply the numbers and find the solution
(1) 9 x \frac{7}{10} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{9}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{9}{1} \ \times \frac{7}{10}\\\ \\ \Longrightarrow \ \ \frac{63}{10}
Step 03: Simplify the result
The fraction cannot be simplified further
Hence \frac{63}{10} is the solution
(2) 3 x \frac{5}{6} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{3}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{3}{1} \ \times \frac{5}{6}\\\ \\ \Longrightarrow \ \ \frac{15}{6}
Step 03: Simplify the result
Divide numerator and denominator by 3
Hence \frac{5}{2} is the solution
(3) 11 x \frac{9}{22} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{11}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{11}{1} \ \times \frac{9}{22}\\\ \\ \Longrightarrow \ \ \frac{99}{22}
Step 03: Simplify the result
Divide numerator and denominator by 11
Hence \frac{9}{2} is the solution
(4) 9 x \frac{4}{6} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{9}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{9}{1} \ \times \frac{4}{6}\\\ \\ \Longrightarrow \ \ \frac{36}{6}
Step 03: Simplify the result
Divide numerator and denominator by 6
Hence \frac{6}{1} is the solution
(05) 20 x \frac{25}{4} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{20}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{20}{1} \ \times \frac{25}{4}\\\ \\ \Longrightarrow \ \ \frac{500}{4}
Step 03: Simplify the result
Divide numerator and denominator by 4
Hence \frac{125}{1} is the solution
(06) 7 x \frac{5}{10} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{7}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{7}{1} \ \times \frac{5}{10}\\\ \\ \Longrightarrow \ \ \frac{35}{10}
Step 03: Simplify the result
Divide numerator and denominator by 5
Hence \frac{7}{2} is the solution
(07) 9 x \frac{7}{10} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{9}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{9}{1} \ \times \frac{7}{10}\\\ \\ \Longrightarrow \ \ \frac{63}{10}
Step 03: Simplify the result
The fraction cannot be simplified further
Hence \frac{63}{10} is the solution
(08) 15 x \frac{5}{3} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{15}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{15}{1} \ \times \frac{5}{3}\\\ \\ \Longrightarrow \ \ \frac{75}{3}
Step 03: Simplify the result
Divide numerator and denominator by 3
Hence \frac{25}{1} is the solution
(09) 12 x \frac{12}{6}\\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{12}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{12}{1} \ \times \frac{12}{6}\\\ \\ \Longrightarrow \ \ \frac{144}{6}
Step 03: Simplify the result
Divide numerator and denominator by 6
Hence \frac{24}{1} is the solution
(10) 6 x \frac{3}{2} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{6}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{6}{1} \ \times \frac{3}{2}\\\ \\ \Longrightarrow \ \ \frac{18}{2}
Step 03: Simplify the result
Divide numerator and denominator by 2
Hence \frac{9}{1} is the solution
(11) 7 x \frac{5}{7} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{7}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{7}{1} \ \times \frac{5}{7}\\\ \\ \Longrightarrow \ \ \frac{35}{7}
Step 03: Simplify the result
Divide numerator and denominator by 7
Hence \frac{5}{1} is the solution
(12) 25 x \frac{5}{10} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{25}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{25}{1} \ \times \frac{5}{10}\\\ \\ \Longrightarrow \ \ \frac{125}{10}
Step 03: Simplify the result
Divide numerator and denominator by 5
Hence \frac{25}{2} is the solution
(13) 9 x \frac{10}{90} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{9}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{9}{1} \ \times \frac{10}{90}\\\ \\ \Longrightarrow \ \ \frac{90}{90}
Step 03: Simplify the result
Divide numerator and denominator by 90
Hence \frac{1}{1} is the solution
(14) 4 x \frac{3}{8} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{4}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{4}{1} \ \times \frac{3}{8}\\\ \\ \Longrightarrow \ \ \frac{12}{8}
Step 03: Simplify the result
Divide numerator and denominator by 4
Hence \frac{3}{2} is the solution
(15) 20 x \frac{1}{20} \\\ \\ Read Solution
Step 01: Convert whole number into fraction by showing denominator 1
\frac{20}{1}
Step 02: Multiply numerators and denominators separately
\Longrightarrow \ \ \frac{20}{1} \ \times \frac{1}{20}\\\ \\ \Longrightarrow \ \ \frac{20}{20}
Step 03: Simplify the result
Divide numerator and denominator by 20
Hence 1 is the solution
### Multiplication of fraction by whole number using number line
In the question a multiplication set is provided along with the number line.
You have to do the multiplication and find the final answer
(01) Below is the number line model of multiplication of 5 x \frac{1}{3}
Show the multiplication step by step
We know that multiplication is form of repeated addition
\Longrightarrow \ \ 5\ \times \frac{1}{3}\\\ \\ \Longrightarrow \ \ \frac{1}{3} +\frac{1}{3} +\frac{1}{3} +\frac{1}{3} +\frac{1}{3}\\\ \\ \Longrightarrow \ \ \frac{5}{3}\
(02) The model show product of 6 x \frac{1}{5}
Show the multiplication step by step and find the right answer
\Longrightarrow \ \ 6\ \times \frac{1}{5}\\\ \\ \Longrightarrow \ \ \frac{1}{5} +\frac{1}{5} +\frac{1}{5} +\frac{1}{5} +\frac{1}{5} +\frac{1}{5}\\\ \\ \Longrightarrow \ \ \frac{6}{5}\
(03) The model show following product on number line
⟹ 3 x \frac{1}{7}
Do the multiplication step by step
\Longrightarrow \ \ 3\ \times \frac{1}{7}\\\ \\ \Longrightarrow \ \ \frac{1}{7} +\frac{1}{7} +\frac{1}{7}\\\ \\ \Longrightarrow \ \ \frac{3}{7}
(04) Below is the number line model of following multiplication
⟹ 10 x \frac{1}{4}
Solve the multiplication step by step
\Longrightarrow \ \ 10\ \times \frac{1}{4}\\\ \\ \Longrightarrow \ \ \frac{1}{4} +\frac{1}{4} +\frac{1}{4} +\ \frac{1}{4} +\frac{1}{4} +\frac{1}{4} +\ \frac{1}{4} +\frac{1}{4} +\frac{1}{4} +\ \frac{1}{4}\\\ \\ \Longrightarrow \ \ \frac{10}{4}\
(05) The model shows product of 5 x \frac{1}{9}
Solve the multiplication step by step
|
2022-09-30 16:45:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 5088.264350766345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00297.warc.gz"}
|
https://www.physicsforums.com/threads/pendulum-consists-of-a-rod-of-mass-m-attached-to-a-light-rod.631772/
|
# Pendulum consists of a rod of mass m attached to a light rod
1. Aug 28, 2012
### cler
1. The problem statement, all variables and given/known data
A pendulum consists of a uniform rod of mass m and length l hanging from the bottom end of a light rod of length l which top end is fixed to the ceiling. (see file attached)
System moves in a vertical plane. Find equations of motion.
Coordinates of the center of mass (X,Y)
angles θ and ψ of the light rod and the rod of mass m with the vertical respectively.
2. Relevant equations
Lagrangian method
L=T-U
U=mgY
T=mV2/2 + IΩ2/2
V is the velocity of the center of mass respect to a system at rest which origin is the top end of the light rod.
3. The attempt at a solution
X=l/2sinψ + lsinθ
Y=-l/2cosψ -lcosθ
|V|2= l2/4$\dot{ψ}$ + l2$\dot{θ}$2 + l2$\dot{ψ}$$\dot{θ}$cos(ψ-θ)
I relative to the top end of the rod of mass m I=ml2/3
ω=$\dot{ψ}$
then i will plug this into L= T-U and the fin the Euler- Lagrange equations.
but i am not sure about I and ω.
I am confused. My first attempt was to choose same X,Y,V but I relative to the center of the rod of mass m I= ml2/2 and ω=$\dot{ψ}$+$\dot{θ}$
#### Attached Files:
• ###### DSC_0318.jpg
File size:
19.5 KB
Views:
94
Last edited: Aug 28, 2012
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Can you offer guidance or do you also need help?
Draft saved Draft deleted
|
2017-10-17 13:41:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5356729030609131, "perplexity": 1593.5556557868392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821189.10/warc/CC-MAIN-20171017125144-20171017145144-00527.warc.gz"}
|
http://www.math.iisc.ac.in/seminars/2020/2020-03-13-sumana-hatui.html
|
#### Algebra & Combinatorics Seminar
##### Venue: LH-1, Mathematics Department
The theory of projective representations of groups, extensively studied by Schur, involves understanding homomorphisms from a group into the projective linear groups. By definition, every ordinary representation of a group is also projective but the converse need not be true. Therefore understanding the projective representations of a group is a deeper problem and many a times also more difficult in nature. To deal with this, an important role is played by a group called the Schur multiplier.
In this talk, we shall describe the Schur mutiplier of the discrete as well as the finite Heisenberg groups and their $t$-variants. We shall discuss the representation groups of these Heisenberg groups and through these give a construction of their finite dimensional complex projective irreducible representations.
This is a joint work with Pooja Singla.
Contact: +91 (80) 2293 2711, +91 (80) 2293 2265 ; E-mail: chair.math[at]iisc[dot]ac[dot]in
Last updated: 06 Mar 2020
|
2020-04-10 10:08:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7523390054702759, "perplexity": 759.096182812568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00256.warc.gz"}
|
http://flyinmonkey.com/food-processor-yef/374bd0-garou%3A-mark-of-the-wolves---steam-charts
|
a) Write set B={1,2,3,4,5} in set-builder notation. While these topics do … Answer. Ask Question Asked 8 years, 6 months ago. SETS AND SET NOTATION A SET is a collection of items. lessons are part of a series of Lessons On Sets. https://study.com/academy/lesson/set-notation-definition-examples-quiz.html Venn Diagrams And Subsets Many notators have created their own symbols in an effort to cater for the huge array of percussion instruments and techniques. In these lessons, we will learn the concept of a set, methods for defining sets, set notations, Copyright © 2005, 2020 - OnlineMathLearning.com. Illustration: So if C = { 1, 2, 3, 4, 5, 6 } and D = { 4, 5, 6, 7, 8, 9 }, then: katex.render("C \\cup D =\\,", sets07A); { 1, 2, 3, 4, 5, 6, 7, 8, 9 }, katex.render("C \\cap D = \\,", sets07B); { 4, 5, 6 }. Typing math symbols into Word can be tedious. Basic set operations. If one set is "inside" another set, it is called a "subset". Summary: Set-builder notation is a shorthand used to write sets, often for sets with an infinite number of elements. How to Write Drum Set Music Notation. In most drum notation systems, the lines and spaces of a standard 5-line music staff are used to An odd integer is one more than an even integer, and every even integer is a multiple of 2. b) Write. Exercise 5. I am trying to write the intersection of a physical problem in the most compact way. If you need more, try doing a web search for "set notation". CCSS.Math: HSS.CP.A.1. Set Notation: Roster Method, Set Builder Notation. Then an odd integer, being one more than a multiple of 2, is x = 2m + 1. Square brackets and other notation (or nothing at all) have other meanings. Notation for drums and percussion varies considerably from arranger to arranger, and from publisher to publisher. State whether each … We simply list each element (or \"member\") separated by a comma, and then put some curly brackets around the whole thing:This is the notation for the two previous examples:{socks, shoes, watches, shirts, ...} {index, middle, ring, pinky}Notice how the first example has the \"...\" (three dots together). Summary: Set-builder notation is a shorthand used to write sets, often for sets with an infinite number of elements. A good way to begin is to try writing the notation from any page in your drum lesson books. Basic set notation. Relative complement or difference between sets. Sets can be related to each other. It stipulates that sets be written in the format { x : x has property Y } , which is read as "the set of all elements x such that x has the property Y; the colon ":" means "such that". A set is given a name, usually an uppercase letter. Write x 2 - 4 > 0 in its simplest form in set notation. Write set A using roster notation if A = { x | x is odd, x = 7 n, 0 < x < 70}. Intersection and union of sets. A set is a well-defined collection of distinct objects. For more help, like how to write latitude and longitude using degrees, minutes, and seconds, scroll down. Set Notation(s): A discussion of set notation: lists, descriptions, and set-builder notation. Type-set formatting: Text-only formatting: Notes (2, 3) (2, 3) Put points in parentheses. Set notation is used to help define the elements of a set. Related Pages The items are called the MEMBERS or ELEMENTS of the set. If you perform different numbers of reps per set, traditional Number X Number notation doesn’t always work. The best way to become good at writing drum notation is to do it. The symbols shown in this lesson are very appropriate in the realm of mathematics and in mathematical logic. Active 8 years, 6 months ago. Answer. A variant solution, also based on mathtools, with the cooperation of xparse allows for a syntax that's closer to mathematical writing: you just have to type something like\set{x\in E;P(x)} for the set-builder notation, or \set{x_i} for sets defined as lists. This set notation literally translates to: 'Gather together all the animals that are cows.' For instance, if I were to list the elements of "the set of things on my kid's bed when I wrote this lesson", the set would look like this: { pillow, rumpled bedspread, a stuffed animal, one very fat cat who's taking a nap }. We use a special character to say that something is an element of a set. Later, you’ll be able to listen to a song and write out the drum part the drummer is playing. We can have infinite sets for example {1, 2, 3, …}, meaning that the set has an infinite number of elements. MS Word Tricks: Typing Math Symbols 2015-05-14 Category: MS Office. 8 $\begingroup$ Does anyone know a good resource (preferably pictures) that illustrates a conventional way to write the special sets symbols, i.e. This notation can also be used to express sets with an interval or an equation. How this adds anything to the student's understanding, I don't know. Sets are "unordered", which means that the things in the set do not have to be listed in any particular order. The elements of B can be listed, being not too many integers: B = { –4, –3, –2, –1, 0, 1, 2, 3, 4, 5, 6 }. 0. The intersection will be the set of integers which are both odd and also between –4 and 6. If it does, these are the symbols to use: katex.render("\\mathbb{N}\\,", sets02A); : the natural numbers, katex.render("\\mathbb{Z}\\,", sets02B); : the integers, katex.render("\\mathbb{Q}\\,", sets02C); : the rationals, katex.render("\\mathbb{R}\\,", sets02D); : the real numbers. Symbols For Set Notation In Use: ... eliminate the need to write long, plain language instructions to describe calculations and other processes. The formal way of writing "is a multiple of 2" is to say that something is equal to two times some other integer; in other words, "x = 2m", where "m" is some integer. A = { x : x is a letter in the word dictionary } We read it as “A is the set of all x such that x is a letter in the word dictionary” For example, What is the right set notation for this setup. Set Builder Form : Set-builder notation is a notation for describing a set by indicating the properties that its members must satisfy. This is what notes normally look like. When you have that number, write it down in degrees and denote whether it lies east or west of the Prime Meridian. Knowing how to write drum notation gives you an advantage as a student and a performer. Where the “5+3+2” would be a set of 5, then a set of 3, then a set of 2 in close proximity. Solution: a) Because set B consists of the natural numbers less than 6. we write B={x|x∈ℕ and x6} Another acceptable answer is B={x|x∈ℕ and x≤5}. Use set notation to describe: (a) the area shaded in green (b) the area shaded in red : Look at the venn diagrams on the left. SETS AND SET NOTATION A SET is a collection of items. 4. Use set notation to describe: (a) the area shaded in blue (b) the area shaded in purple. It is also normal to show what type of number x is, like this: 1. We also have the empty set denoted by {} or Ã, meaning that the set has no elements. There are infinitely-many of them, so I won't bother with a listing. You never know when set notation is going to pop up. You could do 1×5,1×6,1×8,1×10, but it takes a lot of pen strokes. Write set C using a rule if C = {11, 21, 31, 41, 51, 61}. or âdoes not belong toâ, Example: Google Classroom Facebook Twitter. These rests are normally 5-10 seconds in length. The default way of doing it is to use the Insert > Symbols > More Symbols dialog, where you can hunt for the symbol you want. As, we know that, Range of the functions, y = tanx and y = cotx is i.e. Notation plays an important role in mathematics. Universal set and absolute complement. The means \"a It's a lot easier to describe the last set above using the roster method: The ellipsis (that is, the three periods in a row) means "and so forth", and indicates that the pattern continues indefinitely in the given direction. The following video describes: Set Notations, Empty Set, Symbols for Relative complement or difference between sets. The note head shown above is a note ball. If, instead of taking everything from the two sets, you're only taking what is common to the two, this is called the "intersection" of the sets, and is indicated with an upside-down U-type character. The set B is a subset of A, so it contains only things that are in A. The items are called the MEMBERS or ELEMENTS of the set. Directions: Write out the event in probability notation and then identify the outcome space. CCSS.Math: HSS.CP.A.1. Bringing the set operations together. It is used with common types of numbers, such … Embedded content, if any, are copyrights of their respective owners. Some notations for sets are: More Lessons On Sets. A closed interval is one that includes its endpoints: for example, the set { x | − 3 ≤ x ≤ 1 } . Answer: { x: x > 2 or x < - 2 } A set can be described directly by enumerating all of its elements between curly brackets, as in the following two examples: {\displaystyle \ {7,3,15,31\}} is the set containing the four numbers 3, 7, 15, and 31, and nothing else. A set is a well-defined collection of distinct objects. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. We relate a member and a set using the symbol â. Set 1 and set 4 can be written as { x / x is a letter of the modern English alphabet} and { x / x is a type of sausage} { x / x is a letter of the modern English alphabet} is read, " The set of all x such that x is a letter in the modern English alphabet. To write this interval in interval notation, we use closed brackets [ ]: We have already seen how to represent a set on a number line, but that can be cumbersome, especially if we want to just use a keyboard. It takes the form \displaystyle \left\ {x|\text {statement about }x\right\} {x∣statement about x} which is read as, “the set of all These are pronounced as "C union D equals..." and "C intersect D equals...", respectively. Subset, strict subset, and superset. Proper function notation for matrix functions? In set-builder notation, the previous set looks like this: { x ∣ x ∈ N, x < 1 0 } 1. Thankfully, there is a faster way. the set of all real numbers. If How to write special set notation by hand? Write the unbounded set in both interval notation and set notation. Some notations for sets are: {1, 2, 3} = set of integers greater than 0 and less than 4 = {x: x is an integer and 0 < x < 4} We also have the empty set denoted by {} or Ø, meaning that the set has no elements. Thus, their range will be … So, in full formality, the set would be written as: katex.render("\\mathbf{\\color{purple}{\\{\\,x \\in \\mathbb{Z}\\,\\mid\\, x = 2m + 1,\\, m \\in \\mathbb{Z}\\,\\}}}", sets06); The solution to the example above is pronounced as "all integers x such that x is equal to 2 times m plus 1, where m is an integer". Try the given examples, or type in your own Some sets are big or have many elements, so it is more convenient to use set-builder notation as opposed to listing all the elements which is not practical when doing math. A mathematical example of a set whose elements are named according to a rule might be: { x is a natural number, x < 10} If you're going to be technical, you can use full "set-builder notation" to express the above mathematical set. Set-builder is an important concept in set notation. Subset, strict subset, and superset. Bringing the set operations together. Yes, the symbols require those double-barred strokes for all the vertical portions of the characters. This video introduces how to read and write interval and set notation. The colon is a delimiter that transitions from the notation for the generic representative to the notation that is … Basic set operations. ... Now that we know how to denote events, the next step is to use the set notation to represent set operations. Describing Sets Then we have: A = { pillow, rumpled bedspread, a stuffed animal, one very fat cat who's taking a nap }. there are no restrictions on x), you can simply state the domain as, ' all real numbers ,' or use the symbol to represent all real numbers . You must understand it! The "things" in the set are called the "elements", and are listed inside curly braces. $\mathbb{N,Z,Q,R,C}$ etc., by hand? 1. We welcome your feedback, comments and questions about this site or page. Then A is a subset of B, since everything in A is also in B. If two sets are being combined, this is called the "union" of the sets, and is indicated by a large U-type character. If an object z is not an element of Each operation will … Extended sets (also known as rest-pause) is a technique to break 1 set into many sets, with short rest periods between them. an object x is an element of set A, we write x â A. There’s another way! Write the set in set notation. elements of the set. Write down the set of solutions to the inequality in all three notations. Click here to get an answer to your question ️ how to write vowels in set builder notation abdulmuneemthegreat1 abdulmuneemthegreat1 10.06.2020 Math Primary School How to write vowels in set builder notation ... let C be the set containing all the consonants, and let V be the set containing all the vowels. Exercise 3. So let's name this set as "A". Imagine how difficult it would be to text a friend about a cool set if the only way to do this was with a number line. The elements of B are even, so I need to pick out the elements of A which are even; these will be the elements of the subset B. Range of the functions is the set of all real numbers greater than or equal to 1 or less than or equal to -1. Please submit your feedback or enquiries via our Feedback page. So that means the first example continues on ... for infinity. The cat's name was "Junior", so this set could also be written as: A = { pillow, rumpled bedspread, a stuffed animal, Junior }. Set-theory and logical statements generally have their own notation. Set-builder notation is generally used to represent a group of real numbers. Exercise 4. Since A = { 4, 5, 6, 7, 8 } (because "inclusive" means "including the endpoints") and B = { –9, –8, –7, –6, –5, –4, –3, –2, –1 }, then their union is: { –9, –8, –7, –6, –5, –4, –3, –2, –1, 4, 5, 6, 7, 8 }. These Mark the set on the real number line. Example 1. The individual objects in a set are called the members or elements of the set. I am not really familiar with Set Theory notation, but I think it has the answer. You will often find it useful to write a drum rhythm or fill that you want to remember. Essentially the braces are saying ‘this is a set/collection of things’. This same set, since the elements are few, can also be given by a listing of the elements, like this: Listing the elements explicitly like this, instead of using a rule, is often called "using the roster method". Email. If A = {1, 3, 5} then 1 â A and 2 â A. Answer. Basic set notation. Answer. Email. Reading Notation : ‘|’or ‘:’ such that. To show something is not a subset, you draw a slash through the subset symbol, so the following: ...is pronounced as "B is not a subset of A". This isn't a rule as far as I know, but it does seem to be traditional. We can use set-builder notation: $\{x|x\ge 4\}$, which translates to “all real numbers x such that x is greater than or equal to 4.”Notice that braces are used to indicate a set. Your text may or may not get technical regarding the names of the types of numbers. It is not a perfect circle; it’s slightly oval. In set-builder notation, the previous set looks like this: katex.render("\\{\\,x\\,\\mid \\, x \\in \\mathbb{N},\\, x < 10\\,\\}", sets03); The above is pronounced as "the set of all x, such that x is an element of the natural numbers and x is less than 10". Since, and . Usually, you'll see it when you learn about solving inequalities, because for some reason saying "x < 3" isn't good enough, so instead they'll want you to phrase the answer as "the solution set is { x | x is a real number and x < 3 }". The vertical bar is usually pronounced as "such that", and it comes between the name of the variable you're using to stand for the elements and the rule that tells you what those elements actually are. How To Write Set Notation for set properties such as: Union - Elements in Set A or Set B Intersection - Elements in Set A and Set B Cardinality - the count of … Viewed 8k times 15. We can write the domain of f(x) in set builder notation as, {x | x ≥ 0}. Ex:(a) If XX is the set containing the first three positive integers, we can write X={1,2,3}X={1,2,3}. There is a fairly simple notation for sets. For example, a … Look at the venn diagram on the left. You can use regular lined notebook paper if you don’t have staff paper. Universal set and absolute complement. Try the free Mathway calculator and How do you write arrow notation for functions involving changes of dimension? How to define sets with both the roster (or list) method and using set-builder notation. URL: https://www.purplemath.com/modules/setnotn.htm, © 2020 Purplemath. (2, 3) (2, 3) When you are writing an open interval, use parentheses, and note that "this is an interval", to differentiate an … Fortunately, mathematicians have agreed on notation to describe a set. Sets are usually named using capital letters. There are a lot of ways to notate a set; and the verbal method (as done in the examples above) is certainly one way.Another common way to denote a set is to list out the elements with a comma between each one, and to place curly brackets around the element list. {1, 2, 3} = set of integers greater than 0 and less than 4 = {x: x is an integer and 0 < x < 4}. The individual objects in a set are called the members or Google Classroom Facebook Twitter. empty set, symbols for âis an element ofâ, subset, intersection and union. set A, we write z â A. â denotes âis an element ofâ or âis a member ofâ or If the domain of a function is all real numbers (i.e. The set above could just as easily be written as: A = { Junior, pillow, rumpled bedspread, a stuffed animal }. In other words: Since "union" means "anything that is in either set", the union will be everything from A plus everything in B. Write set D using a rule if D = {1, 5, 9, 13, 17, 21, … It is used with common types of numbers, such as integers, real numbers, and natural numbers. problem and check your answer with the step-by-step explanations. Web Design by. For example, instead of making a list of all counting numbers smaller than 1000, it is more convenient to write { x / x is a counting number less than 100 } Set notation question: Write collection that contains all possible unions of sets from another collection. But I digress.... A set, informally, is a collection of things. We have a symbol showing membership. Solve: x 2 - 4 > 0 to get x 2 > 4. Intersection and union of sets. The third method is interval notation, in which solution sets are indicated with parentheses or brackets.The solutions to $x\ge 4$ are represented as $\left[4,\infty \right)$. Or, if the dots are between elements, like this: ...it means that the pattern continues in the same manner through the unwritten middle. It looks like an odd curvy capital E. For instance, to say that "pillow is an element of the set A", we would write the following: katex.render("\\mathrm{pillow} \\in A", sets01); This is pronounced as "pillow is an element of A". The following table gives a summary of the symbols use in sets. There's plenty more you can do with set notation, but the above is usually enough to get by in most algebra-class circumstances. An extended set may be seen as: Pull-up: 1x5+3+2. Note that it's unnecessary to load amsmath if you load mathtools. in words, how you would read set B in set-builder notation. âbelongs toâ, â denotes âis not an element ofâ or âis not a member ofâ All right reserved. This relationship is written as: That sideways-U thing is the subset symbol, and is pronounced "is a subset of". Well, to give that message in set notation, you could write {animals | the animal is a cow}. ... Now that we know that, Range of the set of integers which are both odd and between. Are copyrights of their respective owners as integers, real numbers (.! To a song and write out the drum part the drummer is....: set notations, empty set denoted by { } or Ã, meaning that the things the. Lot of pen strokes set Theory notation, but I digress.... a set N. Feedback page the answer it contains only things that are in a a for..., I do n't know lot of pen strokes a is also to. Feedback, comments and questions about this site or page can also be used to write drum notation you... Individual objects in a is a collection of items and every even integer is a cow.! The next step is to use the set C intersect D equals ''! A cow }, so I wo n't bother with a listing the items are called the members elements.: https: //www.purplemath.com/modules/setnotn.htm, © 2020 Purplemath ‘ | ’ or ‘ ’. Plain language instructions to describe a set of coordinates could look like “ degrees. Longitude using degrees, minutes, and every even integer, being one more than an number! Face of the characters the types of numbers, such as integers, real numbers ( i.e if object! Is not a perfect circle ; it ’ s slightly oval degrees E ” and... Your own problem and check your answer with the step-by-step explanations animals that are cows. could look “... Discussion of set a, so it contains only things that are in a are called the or... Question Asked 8 years, 6 months ago set denoted by { } or Ã, that. On sets say that something is an element of a series of Lessons on sets x > 2 or <. An equation is a fairly simple notation for drums and percussion varies considerably from arranger to arranger, natural. Next step is to try writing the notation from any page in your drum lesson books symbols in effort! For the huge array of percussion instruments and techniques most compact way a note ball a rule far! That you want to remember: Typing math symbols 2015-05-14 Category: ms Office note head above! Note ball unbounded set in both interval notation interval notation interval notation then! A well-defined collection of items distinct objects of '' ( x ) in Builder... a '' and a set extended set may be seen as Pull-up! With an infinite number of elements shown in this lesson are very appropriate in the set C... Writing subsets of the types of numbers step-by-step explanations set and various methods for sets. Describe calculations and other processes: 'Gather together all the animals that are cows '! Is x = 2m + 1, meaning that the things in the notation. A summary of the characters } \$ etc., by hand are pronounced as a '' to.! ’ ll be able to listen to a song and write out the drum part the drummer is.... Various math topics usually an uppercase letter own problem and check your answer with the step-by-step explanations the of. Am trying to write the intersection will be the set, you ’ ll be able to listen a! Writing the notation from any page in your own problem and check your with. '' in the set of solutions to the student 's understanding, I n't! Nothing at all ) have other meanings: ( a ) write set C using a rule as far I... More help, like how to write sets, often for sets both. For something in the set how to write set notation x < - 2 } Set-theory logical! And a set is inside '' another set, symbols for set notation: ‘ | or! Usually an uppercase letter usually enough to get by in most algebra-class.! Simplest form in set Builder notation ms Word Tricks: Typing math symbols 2015-05-14:. Double-Barred strokes for all the animals that are cows. sets, often sets! Properties that its members must satisfy url: https: //www.purplemath.com/modules/setnotn.htm, © 2020.... Out the drum part the drummer is playing your feedback or enquiries via our feedback page a shorthand used write... About this site or page … there is a shorthand used to express sets with infinite! We use a special character to say that something is an element of set a, it. Slightly oval the items are called the members or elements of the second die be used to write unbounded... Integer, being one more than an even number on the face of the set of could. Infinitely-Many of them, so I wo n't bother with a listing the above is a of! ’ s slightly oval often for sets with an infinite number of elements also between and. A set solve: x 2 - 4 > 0 in its simplest form set! And 6 and union listed inside curly braces describes: set notations empty... The drummer is playing find it useful to write sets, often for sets odd! Song and write out the event in probability notation and then identify the outcome space used to express with... Varies considerably from arranger to arranger, and natural numbers names of the second die more an... A multiple of 2, { x | x ≥ 0 } is all real (! If one set is given a name, usually an uppercase letter url https... Following video describes: set notations, empty set denoted by { } or Ã, meaning that things! The things '' in the set has no elements something is an element of set notation Category ms... Represent set operations colon ) is a collection of distinct objects in set-builder notation D equals ''! Infinite number of elements, respectively ll be able to listen to a song and out!: 1 all three notations ‘ this is n't a rule if C = {,. This notation can also be used to express sets with an infinite number of elements is written as::. Whether each … there is a subset of a are all the vertical portions of the set using! The outcome space on the face of the symbols shown in this lesson are very in! The right set notation literally translates to: 'Gather together all the odd integers 11, 21 31. Symbols for set notation ( or nothing at all ) have other meanings the things the. To be traditional from publisher to publisher notation in use:... the. The properties that its members must satisfy their respective owners: Typing math symbols 2015-05-14 Category ms... The real number line lies east or west of the set B is a subset of a set called. things '' in the first part ( before the colon ) is a shorthand used to write,! Arranger, and set-builder notation is a notation for this setup is written as::... To remember write out the event in probability notation and set notation notations, set. Its members must satisfy this site or page these are pronounced as intersect. Normal to show what type of number x is, like how to write sets, for. Of '' to arranger, and every even integer is one more than a multiple of 2, a... Between –4 and 6 technical regarding the names of the set is, like:. You need more, try doing a web search for set is... ( before the colon ) is a set/collection of things ’ use...... Symbols for set notation for drums and percussion varies considerably from arranger to arranger, and natural.! Later, you ’ ll be able to listen to a song write! The answer, try doing a web search for set notation '' let 's name this as! A web search for set notation literally translates to: 'Gather together all the vertical portions the. To begin is to use the set Theory notation, but it a! 31, 41, 51, 61 } the free Mathway calculator and problem solver to. Set in both interval notation is a fairly simple notation for this setup help... Odd integer, being one more than an even integer is a generic for. We use a special character to say that something is an element of set a, we know how write. The properties that its members must satisfy '' and C intersect D equals... '', and even... Use set notation to describe: ( a ) the area shaded in purple, a set uppercase. Prime Meridian notators have created their own symbols in an effort to cater for the huge how to write set notation! B in set-builder notation ’ s slightly oval have to be how to write set notation lies or! A student and a set and various methods for defining sets subset '' I think it has the.. Pull-Up: 1x5+3+2 ll be able to listen to a song and write out the part... And write interval and set notation literally translates to: 'Gather together all the odd.... Notation can also be used to help define the elements of the symbols use in sets character to say something... A member and a performer a well-defined collection of things ’ a notation for involving! Shown above is usually enough to get by in most algebra-class circumstances to is.
|
2021-08-04 02:21:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7549962401390076, "perplexity": 712.6448316155527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154500.32/warc/CC-MAIN-20210804013942-20210804043942-00103.warc.gz"}
|
http://mathoverflow.net/questions/178667/first-description-of-how-to-remove-radicals-from-equations
|
# First Description of how to Remove Radicals from Equations
Who first described the technique of removing radicals as indicated in the answers to questions Tools for Removing Radicals from Equations and Rewrite sum of radicals equation as polynomial equation ?
When has it been described the first time and, what was the motivation? Was it actually an equation that involved square roots etc., or, has it been described as a proof, that equations involving radicals can be converted to polynomial equations?
I have already done a lot of online research without success; therefore I hope for answers from the MO community, even if the answers may be wellknown to experts in the field.
-
I don't really have an answer (hence the comment), but for some publications that are possibly worth looking at, see some of the older references I cite at the very end of this 16 November 2010 ap-calculus at Math Forum and see the references I give in my answer to the math StackExchange question History of the theory of equations: John Colson. – Dave L Renfro Aug 18 '14 at 17:03
@DaveLRenfro thanks for the pointers to further information; that is more than I hoped for. – Manfred Weis Aug 19 '14 at 5:44
|
2015-02-28 12:26:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103582859039307, "perplexity": 613.7785021971557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461944.75/warc/CC-MAIN-20150226074101-00029-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://codeforces.com/problemset/problem/1517/C
|
C. Fillomino 2
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
Fillomino is a classic logic puzzle. (You do not need to know Fillomino in order to solve this problem.) In one classroom in Yunqi town, some volunteers are playing a board game variant of it:
Consider an $n$ by $n$ chessboard. Its rows are numbered from $1$ to $n$ from the top to the bottom. Its columns are numbered from $1$ to $n$ from the left to the right. A cell on an intersection of $x$-th row and $y$-th column is denoted $(x, y)$. The main diagonal of the chessboard is cells $(x, x)$ for all $1 \le x \le n$.
A permutation of $\{1, 2, 3, \dots, n\}$ is written on the main diagonal of the chessboard. There is exactly one number written on each of the cells. The problem is to partition the cells under and on the main diagonal (there are exactly $1+2+ \ldots +n$ such cells) into $n$ connected regions satisfying the following constraints:
1. Every region should be connected. That means that we can move from any cell of a region to any other cell of the same region visiting only cells of the same region and moving from a cell to an adjacent cell.
2. The $x$-th region should contain cell on the main diagonal with number $x$ for all $1\le x\le n$.
3. The number of cells that belong to the $x$-th region should be equal to $x$ for all $1\le x\le n$.
4. Each cell under and on the main diagonal should belong to exactly one region.
Input
The first line contains a single integer $n$ ($1\le n \le 500$) denoting the size of the chessboard.
The second line contains $n$ integers $p_1$, $p_2$, ..., $p_n$. $p_i$ is the number written on cell $(i, i)$. It is guaranteed that each integer from $\{1, \ldots, n\}$ appears exactly once in $p_1$, ..., $p_n$.
Output
If no solution exists, output $-1$.
Otherwise, output $n$ lines. The $i$-th line should contain $i$ numbers. The $j$-th number on the $i$-th line should be $x$ if cell $(i, j)$ belongs to the the region with $x$ cells.
Examples
Input
3
2 3 1
Output
2
2 3
3 3 1
Input
5
1 2 3 4 5
Output
1
2 2
3 3 3
4 4 4 4
5 5 5 5 5
Note
The solutions to the examples are illustrated in the following pictures:
|
2021-06-13 13:34:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7783976197242737, "perplexity": 301.6682215451035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608856.6/warc/CC-MAIN-20210613131257-20210613161257-00131.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1081.53077
|
# zbMATH — the first resource for mathematics
Quantum cohomology via $$D$$-modules. (English) Zbl 1081.53077
Author’s summary: We propose a new point of view on quantum cohomology, motivated by the work of Givental and Dubrovin, but closer to differential geometry than the existing approaches. The central object is a $$D$$-module which “quantizes” a commutative algebra associated to the (uncompactified) space of rational curves. Under appropriate conditions, we show that the associated flat connection may be gauged to the flat connection underlying quantum cohomology. This method clarifies the role of the Birkhoff factorization in the “mirror transformation”, and it gives a new algorithm (requiring construction of a Gröbner basis and solution of a system of o.d.e.) for computation of the quantum product.
##### MSC:
53D45 Gromov-Witten invariants, quantum cohomology, Frobenius manifolds 14N35 Gromov-Witten invariants, quantum cohomology, Gopakumar-Vafa invariants, Donaldson-Thomas invariants (algebro-geometric aspects)
##### Keywords:
Quantum cohomology; $$D$$-module; Birkhoff factorization
Full Text:
|
2021-08-02 23:28:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2505965530872345, "perplexity": 1487.8730727429943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154385.24/warc/CC-MAIN-20210802203434-20210802233434-00297.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/write-formulae-sodium-chloride-sodium-carbonate-chemicals-common-salt-washing-soda_27671
|
Share
Write the Formulae of Sodium Chloride and Sodium Carbonate. - Science
Course
ConceptChemicals from Common Salt - Washing Soda
Question
Write the formulae of sodium chloride and sodium carbonate.
Solution
The formula of sodium chloride is NaCl and that of sodium carbonate is Na2CO3.
An aqueous solution of sodium chloride is neutral because sodium chloride is formed from a strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide(NaOH). When sodium chloride is dissolved in water, it gets hydrolysed to give equal amounts of hydroxide and hydrogen ions, and this makes its aqueous solution neutral.
Is there an error in this question or solution?
APPEARS IN
Solution Write the Formulae of Sodium Chloride and Sodium Carbonate. Concept: Chemicals from Common Salt - Washing Soda.
S
|
2020-02-18 19:02:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938427567481995, "perplexity": 4606.391658312158}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143805.13/warc/CC-MAIN-20200218180919-20200218210919-00190.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/23609-exponential-functions.html
|
# Math Help - exponential functions
1. ## exponential functions
A man jumps from above - his speed of descent is given by V = 50(1-2^-0.2t) m/s where t is the time in seconds. Find the time taken for his speed to reach 40 m/s.
I know to set the equation to:
40 = 50(1 - 2^-0.2t)
but i can't seem to carry it out from there..because i can't take the log of both sides when the numbers are negative.
2. rearrange it to get the $2^{-0.2t}$ by itself.. eg:
$40 = 50(1-2^{-0.2t})$
=>
$40 = 50 - 50*2^{-0.2t}$
=>
$-10 = -50*2^{-0.2t}$
=>
$1/5 = 2^{-0.2t}$
then take logs of both side and continue. you should get something like $t = 5 \frac{\log{5}}{\log{2}}$
|
2015-05-29 15:33:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231988549232483, "perplexity": 526.9854343549331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930143.90/warc/CC-MAIN-20150521113210-00079-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.scottaaronson.com/blog/?p=1786
|
## The Quest for Randomness
So, I’ve written an article of that title for the wonderful American Scientist magazine—or rather, Part I of such an article. This part explains the basics of Kolmogorov complexity and algorithmic information theory: how, under reasonable assumptions, these ideas can be used in principle to “certify” that a string of numbers was really produced randomly—something that one might’ve imagined impossible a priori. Unfortunately, the article also explains why this fact is of limited use in practice: because Kolmogorov complexity is uncomputable! Readers who already know this material won’t find much that’s new here, but I hope those who don’t will enjoy the piece.
Part II, to appear in the next issue, will be all about quantum entanglement and Bell’s Theorem, and their very recent use in striking protocols for generating so-called “Einstein-certified random numbers”—something of much more immediate practical interest.
Thanks so much to Fenella Saunders of American Scientist for commissioning these articles, and my apologies to her and any interested readers for the 4.5 years (!) it took me to get off my rear end (or rather, onto it) to write these things.
Update (4/28): Kate Becker of NOVA has published an article about “whether information is fundamental to reality,” which includes some quotes from me. Enjoy!
### 141 Responses to “The Quest for Randomness”
1. Michael Says:
What is the gain in the pursuit of randomness?
2. Scott Says:
Michael: Cryptographic security, for one thing … did you read the article?
3. Liron Says:
“The trick is the following: Given a probability distribution, we consider a search problem that amounts to ‘Find a string that occurs with large probability in the distribution and also has large Kolmogorov complexity.'”
Can you elaborate how “verify that a sampler’s output is faithful to a probability distribution” –> “verify that a searcher’s output has high Kolmogorov complexity” is a useful reduction?
Hi Scott,
Great article, thanks! One question for you:
You write, “As a first step, we could check whether the digits 0 through 9 appeared with approximately equal frequency, among, say, the first million digits. Passing such a test is clearly necessary for randomness…”
If someone took the first million digits PI and individually multiplied each digit by 2, would the resulting number be as random as PI? The resulting number would no longer be normal since there would be more even numbers than odd (when you multiply 2 x 0..4, you’d get an even product, and for 2 x 5..9 your product would consist of the digit 1 along with an even digit) and the only odd number in the resulting sequence would be 1.
If this number isn’t as random as the first million digits of PI, doesn’t it feel strange that you could reverse the process (dividing each digit or pair of digits starting with a 1 by 2) to make a more random number than you started with?
5. Scott Says:
Liron #3: Well, it converts a sampling problem into a search problem, which can be interesting for various complexity-theoretic reasons. For example, it allows you to prove the non-obvious statement that FBPP=FBQP if and only if SampBPP=SampBQP. The advantage of search problems over sampling problems, of course, is that it’s conceptually much clearer what you need to do to verify the output! Admittedly, running my reduction will produce a search problem that involves calculating the Kolmogorov complexity of the output, which is uncomputable! 🙂 By substituting time-bounded Kolmogorov complexity, I show one can bring the effort needed down to PSPACE and even the counting hierarchy, but it would be shocking if one could bring it all the way down to NP. Still, nice to know that FBPP=FBQP iff SampBPP=SampBQP.
For more details, see my paper The Equivalence of Sampling and Searching.
6. Scott Says:
Vadim #4: Here’s an even simpler version of your thought experiment. Take a uniformly-random n-bit string,
x = x1…xn,
and map it to a 2n-bit string in which every bit of x occurs twice:
y = x1x1…xnxn.
Then x is algorithmically-random, y is not algorithmically random, and there’s a completely reversible, easily-computable mapping between the two. But all that’s going on here is that x is maximally compressed, whereas y is needlessly verbose—I fail to see any “paradox.”
Yes, I see what you mean, thanks!
8. James Gallagher Says:
The Dilbert cartoon nails it – in physics, the important distinction between random and non-random is unpredictability – ie there is always a range of outcomes but any single one of them occurs without any possibility of human or god knowing or being able to predict which one will occur.
The mathematical discussion is just a boring analysis of this situation.
I liked most of your article, but your inclusion of boson-sampling makes it a little nerdy, not so much general interest reading
9. Jerry Says:
Scott: Very good article. I look forward to the sequel.
Leonard Mlodinow (@CalTech) has written the very enjoyable book, “The Drunkard’s Walk – How Randomness Rules Our Lives”.
http://www.amazon.com/Drunkards-Walk-Randomness-Rules-Lives/dp/0307275175/ref=sr_1_1?s=books&ie=UTF8&qid=1398204500&sr=1-1&keywords=drunkards+walk+how+randomness+rules+our+lives
He notes that Apple had to make their iPod “shuffle” option less random in order to make it appear random, as customers complained that songs would repeat to often, just like Dilbert’s “9”s.
Are you familiar with Benford’s law, used in forensic accounting?
Go to any page of the Boston Globe and count the number of times the digits 1 through 9 appear.
10. domenico Says:
I am not sure, but in crystallography there is the possibility to verify that some lattices (integer translation of three indipendent vectors) give costructive interference with a wave vector that it is a reciprocal vector (three indipendent vectors that are orthogonal to the lattice).
If the difference between scattered ray is an integer M, measured with a little pertubation of the interference with a little rotation, then if the direction of the incident ray is along a vector of the reciprocal lattice, then there is factorization of a number in a product of integer: if it is all right, a crystal can be used like a factorization for little integers, because the crystal produce all the possible interference (Laue method).
I am thinking that if the complexity of the crystallography results is great for each wavelenght, then the number of results that give the crystal is greater of a digital computer that give the same results with the theory.
11. Michael P Says:
You’ve got a typo here:
Both sequences have the same probability, namely, 2-30
12. Michael P Says:
You discussed both string compression, which is often associated with Shannon enthropy, and Kolmogorov complexity. What are known relations between enthropy and Kolmogorov complexity?
13. Scott Says:
James #8: Well, it’s obvious that randomness has something to do with unpredictability. Unfortunately, that observation by itself doesn’t get you as far as you might like. For how can you know if something is unpredictable or not? Just because you can’t predict it, doesn’t mean that no one else can. So that’s what the article is about—glad you liked most of it.
14. Scott Says:
Michael P #12: Good question! There’s a very strong relation, known since the 1970s, between the Shannon entropy of a computable distribution and the Kolmogorov complexity of a typical sample from that distribution. Indeed, this is exactly what I used to prove my equivalence of sampling and searching result. You can find a formal statement of the connection in Section 2.2 of my paper (or, say, in Li-Vitanyi, which is the standard reference for this area). But informally, it says the following: given any computable distribution D={px}x, almost every element x drawn from D has Kolmogorov complexity equal to
K(x) = log(1/px)±O(1),
where the O(1) can depend on the length of the program to sample D. In particular, we have
H(D) ≤ Ex~D[K(x)] ≤ H(D)+O(1),
where H is the Shannon entropy. (The first inequality is fairly obvious, while the second follows from the existence of the Shannon-Fano code.)
15. Miguel B Says:
Scott,
You wrote:
“… if the digits really were drawn completely randomly, and we looked at a million of them, then with overwhelming probability we would see roughly 10 percent zeros, 10 percent ones, and so forth.”
I fully expected your article to be the first “popular” account of randomness that didn’t assume that only an uniform distribution can be truly random. Sorry to be so picky, but this is a pet peeve of mine, and I was dissapointed.
Great article otherwise!
16. Scott Says:
Miguel #15: Sorry, I thought it was clear enough, from the context, that “completely random” was just a way to say “uniformly random” to a non-mathematician.
17. James Gallagher Says:
Scott #8
That’s why I included god in the things that can’t predict the outcome – I don’t believe in such a simple idea of god, but I do believe in such a simple idea of randomness (that no god could predict)
Look forward (as always) to the next installment
18. Miguel B Says:
Scott,
I understand. However, I can imagine a non-technical person thinking, “Wait a second, what if the completely random numbers come from throwing ten dice and adding the results?! Dr. Aaronson test is all wrong!”
Maybe I just overestimate non-technical people, but an intuitive understanding of the central limit theorem should be part of everybody’s mental toolkit.
19. William Hird Says:
Along the lines of psuedorandomness and unpredictability, Jeff Lagarias has a paper (sorry, no link) on psuedorandom generators where he famously states the unpredictability paradox: if a deterministic algorithm is unpredictable, it is hard to prove anything about it; in fact proving that it is unpredictable is just as hard as solving P vs. NP.
20. Scott Says:
William #19: Lagarias is right, but why is that a “paradox”? It’s a well-known phenomenon in computational complexity. Proving pseudorandomness is hard, and is indeed closely related to the great lower bound problems like P vs. NP (sometimes it’s easier, sometimes it’s actually harder, depending on what kind of pseudorandomness you want).
21. James Cross Says:
Scott,
I can’t believe you answer all of these questions. But I love it. I bought your because of it (Kindle – still a little pricey for digital but worth it).
So if the universe can be reduced or represented as bits. would it be a random number at any point in time?
22. Scott Says:
James #21: That depends on how you’re representing the state of the universe by bits, on what you mean by the state of the universe, and on what you mean by “random number.” For example, if you just look at a quantum pure state evolving by the Schrödinger equation, its Kolmogorov complexity (after truncating the amplitudes) increases only logarithmically with time—since to specify the state at a given time, it suffices to specify the initial state together with how long the state has been evolving for. On the other hand, if you include the results of quantum measurements (from an MWI perspective, which “branch” we’re in), then Kolmogorov complexity should increase more-or-less linearly with time, reaching its maximum when there’s just a soup of low-energy photons at the heat death of the universe. In that sense, one could say that the universe would converge toward a state that was “maximally random.” But even then, whether your encoding of the state of the universe was a Kolmogorov-random string, would depend entirely on whether you were representing the state in the most succinct possible way. For most reasonable encoding schemes, you wouldn’t be.
23. William Hird Says:
Scott#20:
My memory is bad on this, I forget the exact context for which Professor Lagarias uses the word “paradox”, but he shows three mathematical objects, (1) a secure psuedorandom generator, (2) a one way function, (3) a perfectly secure cryptosystem. If one of these objects exist, they all exist. I’m going to guess that he uses the word paradox akin to “a difficult puzzle to solve” 😉
24. Rahul Says:
Interesting article! What’s the state-of-the-art method when testing random number generators in the field? Are they tested by nuanced versions of n-digit frequencies? Or Kolmogorov based techniques?
25. Scott Says:
Rahul #24: If you don’t know anything about where your random numbers came from (e.g. that they were quantum-mechanically generated), then the “state of the art” is just to throw a whole bunch of statistical randomness tests at your sequence, looking for patterns that would make the sequence compressible. (If you’d like to “try this out at home,” then simply try feeding your sequence to good compression programs such as gzip.) You can think of what you’re doing as computing an extremely crude upper bound on the Kolmogorov complexity. I.e., if you find a pattern that makes your sequence compressible, then you’ve proved that it has non-maximal Kolmogorov complexity, so it almost certainly wasn’t drawn uniformly at random. But the converse isn’t true: there could be a subtle or not-so-subtle computable pattern (that your sequence was the binary digits of √2, let’s say) that your statistical analysis tools simply failed to find.
26. Attila Szasz Says:
Wow, I’d really love to see your take on a Great Ideas in Algorithmic Information Theory course/notes someday,
there are a plenty of subtle topics covered in Li-Vitányi exceptionally well, but I’m still pretty sure your style and insights would prove useful for anyone trying to get a first grasp on stuff like Levin search, inductive reasoning, Chaitin type perspective of Gödel’s theorems (his omega and philosophical remarks probably included), the incompressiblity method, time bounded kolmogorov complexity (I especially wonder whether you’ve ever used some of this material in your work, some theorem of Fortnow or Sipser for instance), just to name a few.
27. Scott Says:
Attila #26: OK, will add to my stack! 🙂 (Part of the challenge, but also part of the reason to do to it, would be to master this material myself.)
28. asdf Says:
You know about Downey and Hirschfeldt’s book on algorithmic randomness? There is an online draft:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.130.666
It looks really interesting, though mostly about computability theory and logic, including relative computability. There’s a part though about notions of randomness that are computable.
29. Scott Says:
asdf #28: No, hadn’t seen that! Will take a look.
30. William Hird Says:
Rahul#24
Assuming you are talking psuedorandom numbers and not “real random” numbers, I would say (my opinion) that the Dieharder suite of random number test software is state of the art. See Robert Brown’s website at Duke University. He is quite a character!
31. gidi Says:
I thought of an (obviously) wrong counterargument to the proof that K(x) cannot be computed, which led me to a question.
When one thinks of computing K(x) the obvious algorithm is simply going over all strings in increasing order and running each string as a program to see if it outputs x. Since “print x” works, the number of strings we have to go over is finite, so this is “almost” good.
The problem of course is that when we run a program we never know if it will eventually print x(e.g. the halting problem), so the above algorithm is no good.
However, this led me to think to the following question:
– are there notions of Kolmogorov-type complexity that count also the resources used till we print x? (resources in terms of time and maybe also memory).
32. Rahul Says:
Scott #25: Thanks!
So, I’m very eager to read the next part of Scott’s article.
But I cannot resist asking: Is there really a way to do better that the current strategy of statistical testing?
i.e. Can there really be a test that can certify random numbers to the extent that if a a black box were spewing out something like Pi() or sqrt(2) or some such statistically random pattern then it could tell that apart?
Sounds amazing to me! Am I understanding this correctly?
So what if one took a large sequence certified to be random by this test and then loaded it on another black box as a list and then tested Black Box #2? Would this divine test flag it as non-random when reused?
33. Sam Hopkins Says:
Clearly a pseudorandom number cannot have a high Kolmogorov complexity. Is there a rigorously-defined property that can tell us when we have a good pseudorandom number? “Statistical randomness” seems pretty ad hoc, and might do a good job for practical purposes, but doesn’t seem that conceptual. Because of the intimate relationship between pseudorandomness and, for example, one-way functions, I would think there ought to be some measure like Kolmogorov* complexity, where the asterix indicates we take into account time complexity as well. Well, not exactly time complexity of the forwards direction (producing the number), but time complexity of the backwards direction (inverting the pseudorandom generator).
So the setup would be something like: we have a string a_n of numbers, indexed by natural numbers n \in N, say. We look for a Turing machine that can invert the production of these numbers (i.e. yield n on an input of a_n), and see how great a time-complexity it has. The higher the minimum time complexity, the better the “pseudorandomness” of the sequence of strings.
Surely something like this has been considered before.
34. Scott Says:
gidi #31 and Sam Hopkins #33: Yes! Both of you are groping toward something that actually exists, namely, the theory of time-bounded Kolmogorov complexity (which, sadly, I wasn’t able to get into in this article). You can, for example, define KT(x) as the minimum, over all programs P that output x, of |P|+log(T(P)), where |P| is the number of bits in P and T(P) is the number of time steps. And there are other ways to combine program length with running time, to get a single measure of time-bounded Kolmogorov complexity.
And yes, passing from K to KT immediately solves the uncomputability problem of Kolmogorov complexity: now you only have to iterate over finitely many programs, each for a finite amount of time! But maybe not surprisingly, instead of uncomputability you now have a computational intractability problem on your hands—arising from the fact that you still have exponentially many possible programs to check.
You might hope that there would be some clever way to avoid the exponential search. But alas, if cryptographically-secure pseudorandom number generators exist—which, as William #23 said, is known to be equivalent to the existence of one-way functions—then there can’t be such a way, since it could be used to distinguish random strings from pseudorandom ones, and hence break any PRNG, if it existed. In fact, time-bounded Kolmogorov complexity is so intimately related to pseudorandomness in cryptography and complexity theory, that usually people just talk directly about the latter rather than about KT.
I say more about pseudorandomness in Chapters 7 and 8 of QCSD, and there are many good resources elsewhere (e.g., Luca Trevisan’s lecture notes and Avi Wigderson’s survey articles).
35. Scott Says:
Rahul #32: Alas, if your “divine pseudorandomness test” existed, then as I explained in comment #34, you would be able to use the test to break any cryptographic pseudorandom number generator. And that, in turn, is known to let you invert any one-way function. So, if OWFs exist—which is only a “slightly” stronger conjecture than P≠NP—then your divine test shouldn’t be possible.
That’s not to say, of course, that one can’t “approach” divinity without reaching it! For example, if you tell me that your bits being the binary expansion of some famous irrational number is the possibility you’re worried about, then we can simply throw a check for that in to our randomness tester. On the other hand, if all you tell me is that you’re worried your n-bit string can be generated by some computer program that’s √n bits long and takes n2 time steps, then I can’t efficiently test for that without being able to invert OWFs.
36. Rahul Says:
Scott #35:
Thanks again! So what’s the rough route / roadmap to doing better at random number generation than where we are now (say, the Yarrow algorithm). i.e. How? And why?
Is there a practical need to do better than what can fool a typical statistical randomness test suite? And if someone offers me a YarrowPlus claiming it’s better than Yarrow what randomness metric do I use to judge? i.e. both will equally resist compression or perform well at conventional tests, right?
PS. Am I asking too many questions? If so, I’ll stop. 🙂
37. Scott Says:
Rahul #36: Yes, there’s a practical need for random numbers better than what could fool a typical statistical randomness test suite—and that practical need arises almost entirely from cryptography. One part of the solution is cryptographic pseudorandom number generators, such as Blum-Blum-Shub, Blum-Micali, etc., whose security can be based on “standard” hard problems such as factoring and discrete log. This Wikipedia page gives a good summary of what’s available.
Note, however, that certain elliptic-curve-based CPRNG’s were recently revealed to have been backdoored by the NSA! So, that underscores the importance of checking that whatever CPRNG you’re using actually matches what was analyzed in a theoretical security reduction, rather than relying on intuition or authority, or saying “sure, some shortcuts were taken in implementation, but I still don’t see how to break the thing, so it’s probably fine.”
Another part of the solution is quantum-mechanical random number generators. For that, you can just use a Zener diode, or amplified thermal noise in analog circuits (as recent Intel chips actually do). Or, if you’re paranoid about your hardware being compromised, you can also use Bell inequality violations to get “device-independent, certified randomness”—something whose theory was worked out within the last few years, that’s just starting now to be demonstrated experimentally, and that’s the subject of Part II of my article.
The two approaches are more complementary than competing—since typically, what you’d want to do in practice is first use physics to generate a small random seed, then use the cryptographic methods to expand the seed into a long pseudorandom string.
And yes, that’s enough questions for today. 🙂
38. William Hird Says:
I would like to mention one more point about the Lagarias paper on generating psuedorandom numbers, he doesn’t mention the field of cellular automata as a possible source of algorithmic unpredictability. Cellular automata (like Conway’s Game of Life) are known to exhibit the property of emergence, it appears to generate states that can’t be predicted from the initial conditions. I think that if any generator is ever going to be proved to be unpredictable, it will probably have to be based on the principles of the cellular automata. So one could envision a generator based on the theory of randomness extraction: you would have one cellular automaton generating a “pool of bits” close to having a uniform distribution and then have a second cellular automation extract the bits from the “pool” , like a lottery drawing mechanism.
39. fred Says:
small typo page 4
“But if k is even a little bit smaller than n (say, n – 50), then 2^k+1 – 1 will be vastly smaller than 2n.”
should be “2^n” I think.
40. Scott Says:
William #38: Yeah, Wolfram discusses the same thing in New Kind of Science. I completely agree that cellular automata (maybe combined with a randomness extractor) seem like a good and obvious way of getting cryptographic-quality pseudorandomness; I should’ve mentioned that in comment #37. If a CPRNG is all you want, then you don’t need anything “structured” like factoring or discrete log: in principle, any one-way function is known to suffice. And a sufficiently “scrambling” cellular automaton seems like it should be able to give you, not merely a one-way function, but a pretty good source of pseudoentropy directly (i.e., without needing to apply some complicated reduction to get the pseudoentropy out).
41. Sandro Says:
This part explains the basics of Kolmogorov complexity and algorithmic information theory: how, under reasonable assumptions, these ideas can be used in principle to “certify” that a string of numbers was really produced randomly—something that one might’ve imagined impossible a priori. Unfortunately, the article also explains why this fact is of limited use in practice: because Kolmogorov complexity is uncomputable!
Given Kolmogorov complexity is incomputable, could there exist some deterministic algorithm that could pass all possible statistical randomness tests, in principle?
42. fred Says:
Scott #40
From what I’ve seen it’s not that it’s necessarily hard to come up with better sources of pseudo-randomness (cellular automaton and whatnot), but the difficulty is that the method has to be practical, i.e. fast and efficient enough to be embedded on cell phone chips, etc.
43. Rahul Says:
Given Kolmogorov complexity is incomputable, could there exist some deterministic algorithm that could pass all possible statistical randomness tests, in principle?
Pi() would, rght?
44. Scott Says:
Sandro #41: No, the obvious problem is that, as soon as you specify which deterministic algorithm A you’re using, there’s always at least one statistical test that distinguishes A’s output from random. Namely, the test that simply checks whether or not your string is the deterministic output of A! 🙂 This, in particular, kills Rahul #43’s suggestion of using the digits of π.
What’s not ruled out by this argument, and what we think you can actually get, is a deterministic scheme for randomness expansion. I.e., for starting out with a 500-bit truly random seed s (which maybe you obtained quantum-mechanically, or from thermal noise, or the weather, or whatever), and then deterministically expanding s into a million-bit random string f(s), in such a way that the only way to distinguish f(s) from a uniformly-random million-bit string, is essentially to loop through all 2500 possibilities for s. Or more generally, to expand an n-bit random seed into as many pseudorandom bits as you want—say, n10 of them—in such a way that distinguishing the result from random requires exp(n) time. This is exactly what CPRG’s (cryptographic pseudorandom generators) accomplish, under plausible complexity assumptions.
45. Nick M Says:
If Kolmogorov complexity is uncomputable so is Omega? But (an) Omega was computed to 64 digits by Calude, Dinneen and Shu back in 2001 (admittedly using a lot of tricks) and the result has Greg Chaitin’s endorsement.
46. Sandro Says:
Scott #44:
in such a way that distinguishing the result from random requires exp(n) time.
This is closer to what I was getting at: is it possible that some deterministic algorithm performing randomness expansion could pass any computable test, regardless of its complexity. Certainly the complexity criterion of taking exp(n) makes for “practical/good enough” CPRG, but I’m wondering if there’s some achievable ideal, in principle.
I’m assuming a randomness test is simply given a bit string source that it can query to arbitrary length. This is basically induction on bit string sources, ie. Solomonoff Induction could eventually reproduce the generating function, but it’s dependent on incomputable Kolmogorov complexity, so this seems to leave open the possibility that *some* algorithm could not be discoverable by *any* induction.
47. William Hird Says:
Fred#42:
Fred, many cellular automata functions can be implemented using shift registers and simple logic gates: these are trivial to implement in hardware and as we all know, there seems to be no end to man’s ingenuity to cram more logic circuitry into smaller spaces ( and draw less current too !).
48. Sam Hopkins Says:
Scott #34: I’m a little confused about how the time-complexity of the TM that produces a pseudorandom number should factor into how “pseudorandom” it is. Presumably we want our pseudorandom generators to be fast. It’s the “inverse TM” that we care about being slow, right?
49. Scott Says:
Nick #45: Yes, Ω is uncomputable—I wouldn’t say “because” Kolmogorov complexity is uncomputable (a different proof is needed), but it is. And yes, for a suitable choice of programming language, it’s possible to compute a finite prefix of Ω, maybe even a long one. (For example, if I had a language where it took more than 1000 bits to write a halting program, then I could trivially know that the first thousand digits of Ω were all 0’s!) Likewise, with enough cleverness, it’s possible to learn the values of K(x) for various particular strings x, at least when K(x) is sufficiently small.
There’s no contradiction whatsoever here with the general unsolvability of these problems—any more than there is between the MRDP theorem (showing that solvability of Diophantine equations is uncomputable in general), and the fact that Andrew Wiles managed to prove Fermat’s Last Theorem.
50. Scott Says:
Sandro #46: Well, if you have a deterministic function f that takes an input seed x of size n, and that expands x to a longer pseudorandom output f(x) via a computation taking T(n) time, then it’s always going to be possible to “reverse-engineer” f (i.e., to find a preimage x mapping to a given f(x)) in 2nT(n) time, by simply trying all possibilities. In that sense, what you’re asking for is impossible.
51. Scott Says:
Fred #42: Yes, as William #47 says, cellular-automaton-based CPRG schemes also tend to have the advantage of being more efficient than the number-theoretic schemes. (Though there’s nothing magical about cellular automata here: any “sufficiently scrambling” operation on bits is likely to work just as well.)
52. Scott Says:
Sam #48: Generally, given a seed of length n, you want the forward-computation of your CPRG to be “efficient” (i.e., to be doable in poly(n) time), whereas inversion should require exp(n) time.
53. Sam Hopkins Says:
One more question: do you know of any connection, formal or mere analogy, between randomness defined in this computational sense, and “quasirandomness”/”pseudo-randomness” as studied in additive combinatorics? Looking from far outside, it seems one of the big takeaways from that field is that an object will either behave nearly like a “random” one, or will nearly have a rigid structure.
54. Michael Dixon Says:
Scott,
I’ve recently been trying to discover ways of implementing some notions of cryptographic protocols to scenarios to situations dealing with non-linear time and time travel. I’m a bit bothered by how many conflicts in Star Trek-like shows could be avoided (or enhanced) with clever applications of cryptography and secrecy of information.
For example, what kind of cryptographic primitives would be needed to help distinguish a dishonest time traveler from their “past/present” counterparts? This might seem impossible at first, since the time traveler can play dumb. But imagine that we forced the two to play an information game against each other (their aims being to prove that the other is the time traveler). In addition, let us have the ability to put them to sleep and wipe their memory similar to the situation posed in the Sleeping beauty problem. Can we distinguish one from the other?
In trying to solve this, one of the biggest hurdles I’ve encountered concerns how the relationship between randomness and time is established. The most familiar (classical) models of randomness, such as Martin-Lof’s, are definitely not “exotic” enough to work with strange variants of temporal logic. What other models of randomness might you suggest (given they exist) I look into instead?
55. Rahul Says:
I had a question about the CSPRNG vs. PRNG distinction: Wikipedia seems to say that most PRNG’s will even fail the next-bit test i.e. Given a random sequence the next bit is predictable with more than 50% probability.
How does such an attack work? Say for Mersenne twister or LCG how does one reverse engineer them to predict the next bit?
56. Scott Says:
Sam #53: Good question! The type of pseudorandomness that arises in things like the Green-Tao theorem (or the Riemann Hypothesis, for that matter) is typically much, much weaker than the type of pseudorandomness sought in cryptography. For the latter, you need pseudorandomness against completely-arbitrary polynomial-time distinguishers, whereas for the former, you “merely” need pseudorandomness against whatever specific regularities would make your theorem false if they were present.
The flip side, of course, is that in cryptography you have the freedom to design your PRG however you like—the more structureless, the better—whereas in math, you’re trying to prove pseudorandom behavior in particular, “structured” objects (such as the set of prime numbers).
Having said that, for many applications in theoretical computer science (e.g., Valiant-Vazirani, approximate counting, error-correcting codes, extractors and condensers), we too only need the weaker kinds of pseudorandomness—the kinds that can be shown to exist unconditionally using present tools! And largely because of that, there’s been an extremely rich interaction over the last decade between CS theorists and mathematicians about pseudorandomness (especially regarding additive combinatorics, and its applications to randomness extractors). Avi Wigderson, Terry Tao, and Timothy Gowers have all written eloquently about this story, and I’m not going to do it justice in a blog comment.
57. Jon Lennox Says:
Michael @54: Given Scott’s proof that $$P^{CTC} = PSPACE$$, does a universe with time travel in fact have any cryptographic primitives at all?
58. William Hird Says:
Mike Dixon #54
Hi Mike, just out of curiosity what is (are ) the motivation (s) for your question, are you a sci-fi writer looking for plausible story lines or is the question motivated by pure scientific inquiry? If the latter, there are several problems with some of the concepts you are alluding to like “time travellers”; we don’t even know if such a phenomenon is even possible (is it even “allowed” by the laws of quantum mechanics as we know them)? What do you mean by “nonlinear time”, can you relate this notion to some known aspect of general relativity?
59. Richard Says:
A lovely article. I have a question about history. When you wrote “Kolmogorov-Solomonoff-Chaitin complexity”, does that reflect the historical order of their discoveries?
60. Scott Says:
Richard #59: No, according to Wikipedia it was first Solomonoff in 1960/1964, then Kolmogorov in 1965, then Chaitin in 1966/1968. But besides their being independent, they also had different concerns—e.g., Solomonoff was much more focused on inductive inference than on the K(x) function as such.
61. Michael Dixon Says:
@57: Yes and no. It depends on what your standards for cryptographic primitives are. If you say that computationally feasible = polynomial time and expect that any implementation of a primitive has to be based on computational (in)feasibility, then probably. Though this is not always the case. For instance, others have considered using NC^0 or logspace instead.
However, from a pure logic setting, you can abstract away the computational infeasibility aspect entirely. For instance, I can just assume the existence of a perfectly secure Enc(k,m). I don’t have to worry about implementing its security using a computationally hard problem. I only care about how I can preserve security with its usage. This is what we happens in BAN logic and CPL.
@58: For me it is just a theoretical or philosophical inquiry. Even given the silly premises of these sci-fi scenarios, can ideas from cryptography and information security help us? I think the answer is an obvious yes.
Recently, I researched various logics for cryptographic protocols. I noticed that they indirectly assumed a lot about the nature of information/randomness. Typical protocol models admit a close connection between randomness and time. Algorithmic randomness is typically understood using sequential forms of computation. Randomness defined using notions of unpredictability require that past “events” not yield information about a future event (such as a random number generator). Steps for executing protocols are done sequentially (sometimes in parallel). Though, all of these assume that there is a linear order on states/events. I want to know if we can generalize or spice up these concepts to handle more elaborate situations.
For further clarification, I’m only considering the basic logic surrounding the problem. I am ignoring the laws of modern physics for the moment. So instead of thinking about the possibilities permissible by quantum mechanics, I am thinking about what is possible under variations of LTL (linear temporal logic). By “nonlinear time”, I am referring to temporal logics that assume a non-sequential structure to time. I am not suggesting that such a thing is sensible in the real world. It is just a thought experiment.
62. Dani Phye Says:
Scott, sorry this is slightly off topic (loved the article by the way), but I was reading your lecture notes for Quantum Computing Since Democritus, and I got to Lecture 6: P, NP, and Friends.
In it you state that PSPACE is contained within EXP, because “A machine with nk bits of memory can only go through 2n^k different configurations, before it either halts or else gets stuck in an infinite loop.”
Does this take into account (assuming the standard multi-tape Turing machine with a working string, read-only input, and write-only output) the fact that the head could be in any of nk positions, with m possible internal states, for a total of nk*m*2^(n^k) possible configurations? Or are you just assuming they are asymptotically close?
63. Scott Says:
Dani #62: Sorry if it was unclear! In that passage, I was simply defining the memory to consist of everything needed to describe the machine’s current state—so if it’s a Turing machine, then the tape head position, the internal state, the works. But I invite you to check that, in any case, including that stuff adds only log(nk)+O(1) bits, so it has no effect whatsoever on the asymptotics.
64. Dani Phye Says:
Scott 63: I see, thanks. I’m assuming the proof would be to simply take my n^k*m*2^(n^k) possible configurations, and have the tape itself represent the binary number for one of those, in which case the length would be log(n^k*m*2^(n^k)) = log(n^k)+log(m)+n^k,
= log(n^k) + O(1) + n^k,
though I’m not sure how that would be realized in practice on a Turing machine.
65. Scott Says:
Dani #64: Just forget about Turing machines, and think about writing the simulation in your favorite programming language. (And even there, just enough to convince yourself that such a simulation could be written.) Because of Turing-equivalence, the low-level details really don’t matter.
66. Dani Phye Says:
Scott #65: Well I always assumed Turing machines were nice because of their simplicity (when writing proofs), but they are horrendous to program in, you’re right. I suppose our computers generally do such a simulation too (storing program state in memory, and not really having much of an internal state if you idealize away the cache and such) already, so it’s “straightforward” in that you don’t have to do anything. Again, thanks!
67. Sandro Says:
Scott #50:
How does this work exactly? The set of total functions are not recursively enumerable, so without knowing the randomization function or its input, how would you design a preimage attack?
68. Scott Says:
Sandro #67: In this game, you always assume that the adversary knows the function f—i.e., you’re not trying to do “security through obscurity.” Everything that the adversary doesn’t know is encapsulated in the input x to f. Indeed, if the adversary didn’t know f, then it would be totally unreasonable to call f a “deterministic PRG,” since all the randomness you wanted could simply be shoehorned into the choice of f! 🙂 Now, given that the adversary knows f, all she has to do is loop over all 2n possible inputs x, and evaluate f(x) for each.
69. Shmi Nux Says:
Scott, some three years ago I ventured to make a rather pedestrian estimate of the upper bound for the complexity of QM, partly in response to Yudkowsky’s (and others’) claims that the MWI version is somehow “simpler” than the one with an explicit Born rule (if one defines simplicity as Occam’s razor expressed by measuring Kolmogorov’s complexity of a given model). It seems that the preference for MWI currently cannot be based on the Occam’s razor so expressed. Did I make any obvious blunders?
70. A. Certain Says:
I’m confused about one of the points in the article. You write, “When should you suspect a coin of being crooked? When a sequence x of flips of the coin satisfies K(x) « |x|.”
If you had a coin that was weighted to produce heads 75% of the time, I think most people would think of that coin is “crooked,” but (at least from my understanding of the article and the underlying math), the sequence would still have the property that K(x) = O(|x|).
I’m assuming that the problem is that you used the word “crooked” to mean not random, as opposed to not “fair.” Is that correct?
71. Scott Says:
A. Certain: No, if the coin was weighted to produce heads 75% of the time, then you can calculate from Shannon’s entropy formula that K(x)≈0.811|x| with overwhelming probability, and that’s certainly small enough compared to |x| to make it clear that x wasn’t uniformly random.
72. Scott Says:
Shmi Nux #69: I’m not sure that trying to estimate the Kolmogorov complexity of existing physical theories is useful—not merely because K(x) is uncomputable, but because even if you could compute it, the answers would depend severely on the choice of programming language, and even more, on what sorts of things you want to use the theory to calculate (e.g., how richly are you allowed to specify the initial data? how much is actually done by the program itself, and how much is ‘offloaded’ to whoever supplies input to the program?).
So I don’t think you can do better, at present, than relying on the simplicity-judgments of people who really understand the theories in question. Even there, though, you find an interesting effect, where the better someone understands a theory, the less complex it will seem to that person. For example, if you ask a general relativist, the defining equation of GR couldn’t possibly be simpler: it’s just G=8πT. Of course, if you’re novice enough to need definitions of G and T—well, the definitions fill a whole page if you write them directly in terms of the metric tensor, less if you use intermediate concepts. And if you need a definition of the metric tensor too … you see the point. GR might have a somewhat-large “Kolmogorov complexity” if you’re starting completely from scratch, but it has tiny Kolmogorov complexity if you’ve loaded enough mathematical intuition into your linking libraries.
73. domenico Says:
I am thinking that the difference between a random number and a calculated numbers can be obtained from the evaluation of the Hausdorff dimension of the graphical reprentation of the numerical sequence of the group of numerical digit: for example d_1={x_1,x_2,x_3},d_2={x_4,x_5,x_6}
I am thinking that each numerical calculus is a function on the numerical digit, and each function is a subspace of the graphical representation, and a true random number have not a simple numerical calculus to obtain the numerical digit: the random number could have D dimension so that fill uniformly all the space, and a calculated number could have D-\alpha dimension.
74. Neal Kelly Says:
Hi Scott,
I was just reading an arstechnica article on Stanford’s password policy and it got me wondering about how that guideline fits in with Kolmogorov complexity.
Since words are drawn from language, and thus vastly more likely to contain certain letters over others, I’m assuming the reduced Shannon entropy of such a random-word password would satisfy K(x) « |x|.
I feel like I should know this from reading recent posts, but would the Kolmogorov complexity of such a password after it’s hashed and stored on server still be significantly less than |x|, and does that matter for cryptographic security?
I assume if we take a string with complexity K(x), and send it through some hashing algorithm, the complexity of the output is at most K(x)+c, where c is related to the complexity of the hashing algorithm. It seems to me that this upper bound essentially follows from the definition of K(x).
Then, since K(x) isn’t computable, a hacker couldn’t easily identify whether an individual password was generated from a word-phrase instead of random letters, but could they, for example, run all the hashed passwords in a database through gzip, and then see which passwords were most efficiently compressed, and target those in particular?
75. Scott Says:
Neal #74: Yes, everything you write looks correct. You can’t get blood from a stone, and you can’t get entropy (or algorithmic randomness, which is interchangeable for this discussion) from nothing. So, if you actually care about security, then I recommend choosing a password with lots of numerals, punctuation marks, letter sequences that mean something only to you, etc. etc. just like all those websites tell you to do. Of course, in the more common case that you don’t care about security, using your birthday or a dictionary word is fine. 🙂
76. Neal Kelly Says:
And regarding the advice to pick 4 words, like in the xkcd strip, does that introduce an actual, exploitable attack?
77. Scott Says:
Neal #76: A nonsensical concatenation of words is a fine choice. It takes longer to type (especially on a mobile device), but it might be easier to remember, and could easily have at least as much entropy as a random concatenation of 6-8 alphanumeric characters. Or, of course, you could just abbreviate the words, or replace them with their first initials, to get the best of both approaches.
78. Douglas Knight Says:
Everyone should uses passwords consisting only of lowercase letters. Other rules are enforced by idiots who don’t compute entropy, as at Stanford. Using all the numbers and symbols on my keyboard gets 6.6 bits per character, compared to 4.7 for just the lower case alphabet. If 8 characters drawn from the large set is OK, that’s 52 bits, which you can get in 12 characters drawn from the small alphabet. Yet Stanford requires 20 characters and then demands our gratitude for their bullshit generousity. And that 8 character password is only as good as the 12 character a-z password if every character has equal chance of being uppercase, of being a number or a symbol. Is that true? Of course not: people add just enough symbols to get by the gatekeeper, which is worth hardly anything.
When you change your google password, it does a good job kibitzing on the strength. I find the update on every additional character enlightening.
79. Sandro Says:
Scott #68
Sandro #67: In this game, you always assume that the adversary knows the function f—i.e., you’re not trying to do “security through obscurity.”
This makes sense for a conservative security analysis, but I’m after a different question. I’m trying to figure out what we can ascertain automatically by induction on observable properties, in principle.
As a specific example, can we ascertain whether quantum mechanics is truly indeterministic by observing enough quantum randomness? de Broglie-Bohm is a deterministic interpretation of QM, so clearly there is at least one possible theory that could generate quantum randomness indefinitely. So could there exist some randomness test that might one day determine whether quantum randomness was truly indeterministic, and thus decide between deterministic or indeterministic intepretations of QM?
80. Scott Says:
Sandro #79: Well, it’s not just “conservative security analysis”—it’s that part of what it means for f to be deterministic is that the would-be predictor knows it. Otherwise, you might as well just let f introduce new random bits and thereby solve the entire problem by fiat!
Regarding your new question: that’s what Part II of my article (coming soon) will be all about! But briefly, in a certain philosophical sense, you can obviously never rule out determinism. For someone could always maintain that, no matter how random events look, everything that’s ever happened or will happen is deterministically foretold in “God’s unknowable encyclopedia.” You can never rule that out—and in fact, it’s not that different from de Broglie-Bohm, which also posits an underlying determinism, but one that we can’t “see” by making measurements even in principle.
Even there, though, quantum mechanics (specifically, the Bell inequality and its more modern refinements) lets us say something nontrivial and surprising: namely, that there can’t possibly be any way to “cash out” this hypothetical determinism into actual predictions of quantum measurement outcomes, unless you can also exploit the determinism to send signals faster than light. And furthermore, even if you only want to believe that the determinism “exists” at an unmeasurable level, you then also need to believe in a preferred reference frame, which goes against the spirit of special relativity.
81. BlueFive Says:
Your use of the phrase “base 2 numbers,” while perhaps technically defensible, is confusing and needlessly distracts the reader.
First off, you’re using “base 2” as a compound modifier, but you haven’t inserted a hyphen. Second, you use the plural “numbers,” which set this reader off on a search for other uses of 2 as a base, a search which was not satisfied until ten paragraphs later. And third, by far the most common use of the term “base 2” is in describing the base-2 number system, not as a description of an exponential expression. And in an article that includes quite a few strings comprised exclusively of zeroes and ones, the confusion is heightened.
Instead of writing “base 2 numbers are used because,” it would have been clearer to write “the base ‘2’ is used because” or ” ‘2’ is used as the base of the exponential expression because.”
Other than that, great article!
82. James Gallagher Says:
Sandro #79 Scott #81
As Scott says, about the best we can currently do wrt to ruling out deterministic hidden-variables is Bell Inequality tests, GHZ state observations and similar experiments carried out by, in particular, groups like Anton Zeilinger’s.
However I would suggest deterministic theories can be demonstrated to be unreasonable by a very simple “demonstration” of free-will – just walk around in circles but reverse direction every prime number times of revolutions.
Now, no known deterministic theory of nature can get a macroscopic body to do that – ie it is incredibly unlikely from a determinsitic interpretation – it’s never been observed elsewhere in the universe for example, and if it was observed we have no deterministic explanation for it – we would surely think we had discovered a signal of intelligent free-will operating elsewhere in the universe.
83. Scott Says:
James #82: Sorry, but such an experiment tells us nothing about free will, or even about indeterminism. And I say that as someone who’s gone on record taking “free will” WAY more seriously than most scientists! 😉 In particular, the results are perfectly consistent with the possibility that we’re deterministic automata that evolved by natural selection to have large brains capable of Turing-universal computation, so in particular, of generating the sequence of primes. No one denies that we’re complicated (and sometimes even intelligent)—the question is just whether we’re also “indeterministic” (and if so, in which senses)!
84. James Gallagher Says:
Scott #84
That’s why I was careful to use the word unreasonable!
I’m very aware that the full machinary of deterministic automata can be brought to the table – but you still have such a hard job of getting a macroscopic object to do prime number based dynamics that it becomes close to believing in Gods or any other kind of incredible supernatural influences – rather than just the simple idea of genuine free-will.
As long as we don’t allow free-will to do magic (something not allowed by physics) I think it’s an ok idea – so we might speculate that free-will enables us to “load the dice” for a collapse eigenstate – somehow evolution discovered this and it’s an incredibly complex emergent phenomena that we don’t currently understand.
BUT, I think I’ve gone a little off-topic 🙂
85. James Gallagher Says:
I meant to say something like:
…you have such a such a hard job of getting a macroscopic object to do prime number based dynamics spontaneously along with all the usual observed behaviour…
86. Scott Says:
James: Most scientists would say, “sure it’s hard, and that’s why you needed a mechanism as powerful as natural selection, plus four billion years, to do it!” The argument you’re making seems to me to venture well beyond the questions of free will and indeterminism, and into the realm of intelligent design (i.e., it seems tantamount to denying that natural selection can produce the sorts of complex adaptive behaviors that we see on earth). Or if not, then I don’t understand what blocks that implication.
87. James Gallagher Says:
Scott #87
The emergence of free-will is surely a phase change beyond simple evolutionary adaption to the environment.
But I’m going to sound hopelessly imprecise and pretentious to try to talk about this, like an ancient greek talking about fire or lodestones.
So, whereof one cannot speak, thereof one must be silent
For now…
88. Itai Says:
Scott
Will you consider probability distribution who has no second moment ( infinite variance ) and has first moment,
or distribution who has both no second moment ,and first moment is not defined by Lebesgue integration ( so the conditions of the strong law of large numbers will not hold)
to be higher kind of randomness in probability theory?
will you call it “Knightian uncertainty” ?
I can give you examples of such distribution if you are not familiar with it.
89. Scott Says:
Itai #88: No. If the distribution can be specified in advance, then I wouldn’t call it “Knightian uncertainty”; the moments are irrelevant.
90. Itai Says:
“Knightian uncertainty” sound to me much the same as Donald Rumsfeld “unknown unknowns”?
http://en.wikipedia.org/wiki/There_are_known_knowns
known knowns -can be described as deterministic
known unknowns – can be described as stochastic
unknown unknowns – not any of the above .
So,would you call such distributions more random than another?
such distributions have infinite range, and not exponential small chance of having extreme values ( like in most well known infinite range distributions : geometric ,normal or exponential distributions ).
I wonder if such distribution are possible in physics, and if not what prevents it from appearing there ( strong law of large numbers not valid, and standard deviation can make trouble with uncertainty principle )
91. Scott Says:
Itai #90: Precisely because these distributions need to have infinite range, I question their physical relevance. In real physics, you basically always have a finite cutoff for anything you’re calculating—given, if nothing else, by the Planck scale. Indeed, when there’s not a known finite cutoff (as in unrenormalized QFT), you typically have to impose a cutoff, to avoid getting nonsensical infinite answers for measurable quantities!
92. Itai Says:
Scott #91
I heard Cauchy distribution / Lorentzian is used in physics, also is Levy distribution.
not sure if it is for measurable values.
such distribution have feasible physical values.
If those distribution are not physical ( I’m not sure there is a physical argument for it )
so we should demand in addition to normalization that :
1. integral X^2 |psi|^2 < inf
2. integral |x| |psi|^2 < inf
nobody demands it , I don't know any real wave function psi that not hold these condition ( but maybe it's possible ,who knows ).
The standard infinite range distribution (normal ,geometric, exponential ) can theoretically have extreme values but I guess it will not happened in the life of the universe.
93. Ben Standeven Says:
James Gallagher #87
Why couldn’t free will be produced by natural selection? Being unpredictable seems pretty adaptive to me…
Or did you mean that it couldn’t be produced by a mutation? But then it does sound like a supernatural process.
94. Jerry Says:
re: Kate Becker’s article.
…Quantum cryptography is already being used commercially for some bank transfers and other highly secure transmissions…
How can quantum cryptography be employed by a classical (i.e. bank’s) computer? If a few lines of code that contain exclusive quantum gates live within a classical program that outputs classical information (as it must) in a way that provides the speed and encryption benefits of quantum computation, have we reached a milestone?
95. fred Says:
How does the Kolmogorov complexity evolve as we start adding extra bits to the object?
E.g. if we concatenate two strings s1 and s2, the total K-complexity is bounded by
min(K(s1),K(s2) <= K(s1+s2) <= K(s1)+K(s2)
The expansion of PI has the same complexity regardless of the number of digits. And adding more bits always implies that either the complexity stays constant or increases, but can never decrease, right?
Also, instead of concatenation, what if we superpose two strings, e.g. we take the expansion of PI, and then a random string 010001100010 (like a Poisson process) and we add them
314159265358
+010001100010
=324150365368
Seems like the same relation still holds. The type of addition (or any operation) we do with the two strings can be described by a small constant in terms of K-complexity, so it's pretty much irrelevant.
96. James Gallagher Says:
Ben Standeven #94
Well obviously I don’t know (!), but yes we can speculate that (macroscopic) unpredictability itself is evolutionary beneficial – and then we have an insanely complex anaylsis of how different species’ adaption to varying levels and types of unpredictable behaviour by rival species might have resulted in what we consider to be “free-will” behaviour today.
I prefer however to believe that evolution solved the interpretation of QM debate, maybe even before the era of prokaryotic cells 🙂
97. Serge Says:
Congratulations Scott on a great article about Kolmogorov complexity! I’d like to know your opinion about a few questions that keep puzzling me.
1) Is K(S) related in any way to the probability for a bit sequence S of complexity K(S) to be output by a computer?
2) More generally, is it true that the objects which actually exist in our universe must be of low Kolmogorov complexity? In that case, it would yield a measure quite analogous to the probability of observing a particle in quantum mechanics.
3) Since K(S) is uncomputable, is it possible to construct different consistent models of computer science by simply assigning different Kolmogorov complexities to their respective bit sequences? Thus, some exotic models could have different properties than ours. In particular, some algorithms might have a low probability of being output by our brain in one universe – that is, a high probability of being found – but a high such probability in another one. What do you think of this hypothesis?
98. Serge Says:
Of course, I meant “some algorithms might have a low probability of being output by our brain in one universe – that is, a *low* probability of being found – but a high such probability in another universe.”
The global idea being that some of the undecidabilities encountered in complexity theory come from the uncomputability of Kolmogorof complexity, which in turn might get decided by physics.
99. Scott Says:
Serge #97:
1) Sure, if you picked a computer program randomly from the universal distribution and ran it, then the probability that the program’s output would be x would go roughly like 2-K(x). (The universal distribution is the one where every prefix-free program of length n occurs with probability 1/2n.)
2) Well, if you accept that quantum measurement outcomes are really random (as QM says they are, and as the Bell inequality violation makes hard to deny), then a sequence of n independent 50/50 quantum measurement outcomes should have Kolmogorov complexity close to n, with overwhelming probability. So, that would be an obvious counterexample to everything in Nature having low Kolmogorov complexity. However,
a) it would still be reasonable to suppose that everything in Nature could be simulated by a short computer program with access to a random-number generator, and algorithmic information theory lets you formalize that statement (this is what the latter part of my article was about).
b) If you believed the Many-Worlds Interpretation, you could dispense even with the need for the random-number generator, by simply saying that the computer program should output all the branches of the wavefunction (with the appropriate amplitudes), since the branches are all equally real!
In any case, once you take away these quantum-mechanical complications, the statement that “our universe has low Kolmogorov complexity” is almost (if not quite) equivalent to the statement “the laws of physics are simple and universal,” which of course has been an extremely successful guiding principle since the 1600s.
Oh, one other point that I should mention: even if the universe as a whole has low Kolmogorov complexity, that still wouldn’t prevent specific parts of the universe from having high Kolmogorov complexity! To see how that’s possible, consider that you can have an extremely short computer program that outputs every possible string in lexicographic order—but some of the individual strings output by that program can only be output by long programs, since the programs have to specify the entire string. (Indeed, the previous discussion about the entire quantum wavefunction having low Kolmogorov complexity, even while individual branches of it have high Kolmogorov complexity, was an example of the same sort of thing.)
3) Yes, you can use Gödel’s Theorem to show that, given any formal system F of your choice, there will exist strings x (in fact, almost all strings!), such that the exact value of K(x) is independent of F. And that means, in particular, that it must be possible to construct “nonstandard models” of F, in which K(x) is assigned a lower value than it really has. (By contrast, if F is consistent, then there are no models of F in which K(x) is assigned a higher value than its actual value—do you see why?)
In any case, as I explain in Chapter 3 of my Democritus book, I personally think that “nonstandard models of arithmetic” are much better thought of as artifacts of Gödel’s theorems (the completeness and incompleteness theorems) than as “alternate realities.” After all, there is a shortest computer program that outputs a given string x; it’s just that particular formal systems might not be powerful enough to prove it.
100. Rahul Says:
I found some parts of that Kate Becker article a bit puzzling, almost annoying. e.g.
Could our universe, in all its richness and diversity, really be just a bunch of bits?
What does that mean? Is that the Nick Bostrom version of the-universe-is-a-simulation-and-we-are-agents-in-it?
Sure, your model of the universe may be it as “just a bunch of bits” but does that really make it so? What does it even mean to say that the universe is “really” X and not Y.
101. Scott Says:
Rahul #100: Well, yes, that’s what I was trying to say! And at least Kate was nice enough to quote my objections within the article itself. 🙂
102. Rahul Says:
Another bit I found intriguing was this quote by Vlatko Vedral in that article
The rules of quantum information provide the most “compact” description of physics
Is that really true? I mean is everything else derivable from this compact set of rules & what exactly is this set of rules.
And is it conversely true: Can all the rules of quantum information be derived from “laws that govern matter and energy.”
103. David Brown Says:
“… Bell’s inequality … lets us say something nontrivial and surprising …”
According to ‘t Hooft, Bell’s theorem is likely to be false.
http://arxiv.org/abs/1207.3612 “Discreteness and Determinism in Superstrings” by Gerard ‘t Hooft, 2012
What is the best counter-argument to ‘t Hooft’s argument against Bell’s theorem?
104. Itai Says:
Scott
I know that some wave functions are not normalizable but are physically possible,
Doesn’t it implies that physical wave functions with no first/second moment do exist also.
Don’t you think it suggests some problematic things in the probalistic interpetation in QM?
105. Scott Says:
David Brown #103: Err, the best counterargument is that Bell’s theorem is a theorem! 🙂 Theorems can’t be false; at best, their assumptions can be argued to be unfair or unjustified. But in ‘t Hooft’s case, the assumptions that he has to make to escape Bell’s theorem—basically, of a “cosmic conspiracy” in which our own brains, measurement devices, and random-number generators all collude with the elementary particles to make it look the CHSH game can be won 85% of the time (but not more than that!)—are a quadrillion times crazier than what he’s trying to avoid. They’re almost like a backhanded compliment to Bell’s analysis, like a convoluted way of admitting he’s lost the argument. Anyway, I’ll say more about this in Part II of the article, so stay tuned for that.
106. Scott Says:
Itai: Can you give me an example of the type of wavefunctions you have in mind? If you’re talking about, e.g., Dirac delta functions, those can always be regarded as just convenient approximations to Gaussians with very small spread. In any case, no, I don’t think this issue suggests anything problematic in the probabilistic interpretation of QM.
107. Itai Says:
Scott
I don’t have any specific example of such wave function,
also i’m not familiar with much except hydrogen atom and potential well.
However I read in Roger Penrose “Road to Reality” some criticism about it ( I think you will agree on the nature of quantum waves and states as opposed to probabilities ):
“For some states, such as momentum states, |psi| diverges, so we do not get
a sensible probability distribution in this way (the probability density being zero everywhere, which is reasonable for a single particle in an infinite
universe).
In accordance with this probability interpretation, it is not uncommon for
the wavefunction to be called a ‘probability wave’. However, I think that this
is a very unsatisfactory description. In the first place, Psi(x) itself is complex,
and so it certainly cannot be a probability. Moreover, the phase Psi up to an
overall constant multiplying factor) is an essential ingredient for the Schrodinger evolution.
Even regarding |psi|^2 as a ‘probability wave’
does not seem very sensible to me.
Recall that for a momentum state, the
modulus |Psi| of Psi is actually constant throughout the whole of spacetime.
There is no information in |Psi| telling us even the direction of motion of the
wave! It is the phase, alone, that gives this wave its ‘wavelike’ character.
Moreover, probabilities are never negative, let alone complex. If the
wave function were just a wave of probabilities, then there would never be
any of the cancellations of destructive interference. This cancellation is a
characteristic feature of quantum mechanics.
108. Anon Says:
Completely unrelated: Scott, Sean Carroll named dropped you in an Intelligence Squared debate (although he accidentally referred to you as a physicist!). He was paraphrasing you in response to his opponent bringing up quantum mechanics in relationship to consciousness. “Quantum mechanics is confusing, consciousness is confusing, so maybe they’re the same”. http://youtu.be/lzCYva2aYCo?t=44m35s
109. Michael Says:
“(although he accidentally referred to you as a physicist!).”
I know Scott doesn’t think of this as a compliment (though he should 😉 ), but there are few who combine technical sills, common sense, and intuition better than Scott (OK, maybe David Deutsche), but very few others. 😉
110. wolfgang Says:
@Itai
The ill-behaved wave functions in quantum mechanics that you seem to worry about are either eliminated with appropriate boundary conditions and/or initial conditions.
Usually, experiments take place in a laboratory and the natural boundary condition is that psi = 0 at the walls and outside. This eliminates all your examples.
Alternatively, if it is easier to handle in calculations, one has to assume that psi falls off quickly enough at large distances.
Also, if the initial state is well behaved, unitarity guarantees that it will remain well behaved.
There can be some technical issues, e.g. with plane waves in the continuum limit, but there is never a physical issue, only the question on how to best handle the math …
However, if we leave conventional quantum mechanics and talk about the “wave function of the universe” e.g. in quantum cosmology, then there really are issues to worry about. Now psi is a function e.g. over the space of all 3-geometries and it is much less clear what boundary conditions are ‘natural’ …
Hi Scott
I read your blog occasionally and it is very interesting although I am not sure I understand everything. What I think that I understand is issues about randomness. My understanding is probably on the border with crack-pot realm but it works for me and it can be interesting for you and others. For example
Let n be a 512 bit string. Treat n as positive integer. Apply (3n+1)/2 if n is odd otherwise do n/2. The result is new n. Repeat step above 256 times to acquire 256 bit parity string recording 1 when n is odd and 0 when n is even. Discard first 128 bits and use the rest of 128 bits as a hash of n.
1. While above algorithm is sort of described that does not mean that function description is there (if it is considered as a function). Without knowledge of the input the function can be composed in 2^256 ways and that depends entirely on the input. There is the same case in random oracle theory when input complexity is on the par with function description complexity. While this case guaranties non-correlation between function instances (the randomness) it is considered as impractical exercise (table with records of fair coin flips for example) where above is not.
2. There is possibility of finding some pattern / correlation between instances when above algorithm is run. That possibility will eventually lead to reduction of 2^256 complexity above. In effect that will mean that branching can be reduced by combination of the sequence and the looping algorithm structures. That runs against structural programming theorem and branching non-reduction. Simply put, it is impossible to make program for checking inputs of 3n+1 problem without using branching structure or exhaustive search.
3. From my understanding from above argumentation the proposed hashing scheme is a one way function, random oracle and NP complete problem (crack-pot alert). It seems (at least for my point of view) that humble branching algorithm structure is proverbial electrical fence between P and NP. Basically, when you have input you are in P, if you have partial output only you do not have function description anymore and you are in NP.
112. Jim Cross Says:
Scott,
Will you be commenting on this?
Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory
Phil Maguire, Philippe Moser, Rebecca Maguire, Virgil Griffith
http://arxiv.org/abs/1405.0126
113. Jay Says:
Jim #112
These authors argue that consciousness should be based on lossless integration, otherwise the memory traces would be affected each time we remember something. Problem is, this is exactly what happens.
http://learnmem.cshlp.org/content/7/2/73.full
114. Scott Says:
Jim #112: Well, Philippe Moser is a serious computability theorist. But that paper doesn’t seem relevant to me, since I don’t accept the starting assumptions of “integrated information theory” anyway, and don’t know why some other people do (maybe I’ll write a blog post about that sometime). And the last sentence of the abstract seems overblown. So no, I guess I probably won’t be commenting on this paper more than I just did.
115. James Gallagher Says:
@Itai
just in addition to wolfgang #111, it can be shown taht the wavefunction decays to zero at infinity with increasing potentials or we have a plane wave solution at infinity (with decreasing potentials), assuming the wave function obeys a schrodiinger evolution equation.
This is rigorously proved under quite general conditions in (for example) The Schrodinger Equation – Berezin and Shubin, Chapter 2.4
(Unfortunately the whole section is not viewable in google books, but you could try a search for a djvu copy of the book if you don’t have access to a suitable library)
116. James Cross Says:
Thanks Jay and Scott.
“The implications of this proof are that we have to abandon
either the idea that people enjoy genuinely unitary consciousness or that brain processes can be modeled computationally.”
They really do not prove that the first part is true. They start with it as an assumption.
It might be that people do not really have unitary consciousness – that it is an illusion. Actually this conforms to my own view. For example, when I drive to work I might be listening to Bach on the radio and thinking about a problem I am working on at work all the while my eyes, hands, and feet are doing everything to keep me from wrecking. Consciousness seems to involve all of that but it hardly seems unitary.
But it also still might be both parts are true or that consciousness cannot be practically modeled on any substrate other than living material.
117. Chris Says:
Scott,
You and your readers may be interested in this proposal for a hardware quantum random number generator based on a smartphone camera:
118. James Gallagher Says:
Chris #118
Interesting article. But I doubt this RNG can claim pure quantum randomness any more than Intel’s RdRand implementation on its Ivy Bridge chips.
(If someone can devise a Bell Inquality violation demo using smartphones I’ll be amazed)
119. Jay Says:
Scott #114
Word on the street is Giulio Tononi publish a paper starting with “Using string theory we demonstrated that the Higgs filed shoud not exist, otherwise it would allow gravitationnal waves”. Word on the blogs is Penrose answered “Well, Tononi is serious psychology theorist. However I don’t accept string theory and do not see why other do”. Hopefully, at some point someone will explain that we *do* have empirical evidences for gravitationnal waves.
James #116
By “unitary” it seems you meant “continous” whereas these authors meant “information lossless”. For experimental evidences that neither feature apply to consciousness, you might find usefull to search for “backward masking” and “reconsolidation”, respectively.
120. rrtucci Says:
BICEP is likely wrong!!! And it’s not even an Italian project!
Makes one yearn for the golden days of yore when mathematicians, not flaky physicists, ran the show.
http://resonaances.blogspot.com/2014/05/is-bicep-wrong.html
121. Michael Gogins Says:
Perhaps there is some confusion between the proposition “consciousness is unitary” and the proposition “consciousness is unified.”
Most philosophers and psychologists think that consciousness is unified. That, indeed, unity is a defining mark of consciousness. There is one field of consciousness per subject, in which the subject is aware not only of the various phenomenal objects but also of him or her self as subject, of awareness itself. Consciousness does not divide up into phenomenal consciousness, consciousness of being conscious, and so on.
This of course is consistent with backward masking or reconsolidation or loss of memories or whatever. It is also consistent with doing more than one thing at the same time, losing and gaining consciousness of subsidiary tasks, and so on. A person who is driving, talking, and listening to music is not three subjects at the same time. Rather the focus of consciousness shifts rapidly and fluidly from one task and context to another.
But it is very hard for me to see how to model this unity of consciousness as a computation or physical process. I do not see how theorizing consciousness as any kind of modeling or reflection does not lead to some sort of infinite regress.
122. Jay Says:
Michael #121
Where did you get that most psychologists think that? I’d be surprised. To the contrary, clinical observations from split-brain patient plead that each cerebral hemisphere forms:
“indeed a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level, and . . . both the left and the right hemisphere may be conscious simultaneously in different, even in mutually conflicting, mental experiences that run along in parallel”
123. James Cross Says:
Michael and Jay,
They implicitly define unitary consciousness in this sentence:
“According to the integrated information theory, when we
think of another person as conscious we are viewing them as
a completely integrated and unified information processing
system, with no feasible means of disintegrating their conscious cognition into disjoint components.”
124. Jay Says:
They? The “most psychologists” they or the “serious computer theorists” they? 🙂
BTW, their account of IIT is questionnable too. Tononi defines elementary modes within qualia space, whatever that means, to be “the basic experiential qualities that cannot be further decomposed”. This is at odds with the idea that conscious cognition as a whole cannot be “disintegrated into disjoint component”.
http://www.biolbull.org/content/215/3/216.full
That said, Tononi’s manifesto is still provisionnal and work in progress. I’d be curious to read how he’d fit what we know from split brain in this picture.
125. Fenella Saunders Says:
Scott, what do you think of this?
http://physicsworld.com/cws/article/news/2014/may/16/how-to-make-a-quantum-random-number-generator-from-a-mobile-phone
126. Scott Says:
Fenella #125: That’s really cool! And I don’t see any reason why it wouldn’t work. Of course, it doesn’t give you the “Einstein-certified” guarantee that you get from Bell inequality violation, but it could be perfectly good enough for many cryptographic applications.
127. Yoni Says:
Hi Scott
I read the article and loved it. I did have one question though (apologies if it is in the comments above, I have not read them all).
You state that Kolmogorov complexity is uncomputable as a fact, and your proof for this is in effect that if it were computable then the algorithm that computes it would be useable to compute random numbers with higher Kolmogorov complexity than itself.
However what if the algorithm to compute Kolmogorov complexity is specific to the length of the string it is being run on? That way it could potentially exist and be of practical use for a given length string (and actually be able to be used to spit out the “most random” numbers of the given string length) but without itself having a lower complexity than the numbers it is analysing.
128. Scott Says:
Yoni #127: Yes, you raise a good point. For fixed n, there’s obviously an algorithm to compute K(x) for strings of length n only. Indeed, I invite you to prove, as an exercise, that there’s such an algorithm that takes only n+O(log n) bits to write down (but an immense amount of time to run)! And of course, there’s also an algorithm that takes ~2n bits to write down, and is fast to run: namely, just cache K(x) for every x∈{0,1}n in a gigantic lookup table! Using diagonalization arguments, one can prove that this tradeoff is inherent—so that for example, there’s no algorithm to compute K(x) for n-bit strings that takes poly(n) time to run and also poly(n) bits to write down.
In any case, the broader point is that, when computer scientists talk about “algorithms,” unless otherwise specified they almost always mean uniform algorithms: that is, algorithms that work for inputs of arbitrary size. And that’s what I meant in the article.
For, while it’s true that every problem whatsoever admits a nonuniform algorithm (even uncomputable problems), in some sense that observation simply pushes the uncomputability somewhere else: namely to the question, “OK, so given a value of n, how do I construct the algorithm that works for inputs of size n”? Clearly, there can’t be any algorithm for this meta-task—since if there were, then it would yield a (uniform) algorithm for our original problem, contrary to assumption.
129. Yoni Says:
Scott #128
Thanks for responding; I get your point now. Unfortunately I am going to have to decline your invitation (at least for the foreseeable future) as I have no idea where to even start! (I think I would probably need to go back to undergrad school to get even close).
I look forward to part II of your article, and have bookmarked your excellent blog.
Regards
130. Scott Says:
Yoni #128: OK, I’ll tell you the answer, just to show you that these things often aren’t nearly as hard as they look.
Recall, we want an algorithm that computes K(x) for every n-bit string x, and that takes only about n bits to write down. That algorithm will simply hardcode the number, call it M, of computer programs at most n bits long that halt when run on a blank input. Then, given x as input, our algorithm will start running every program at most n bits long, in parallel. And it will keep doing so until M of the programs have halted. Once that happens, it knows that the remaining programs will never halt! So then, among the M programs that halted, it just has to find the shortest one that outputs x, and that tells it K(x).
131. Yoni Says:
Scott #128
“just to show you that these things often aren’t nearly as hard as they look”
Lol – it took me about 45 minutes just to figure out what your answer meant; I still have no idea what “n+O(log n) bits” means (what is O and what has log n got to do with anything? No need to answer, just wanted to give you an idea of how waay out of my knowledge base we are here.) – so yes, it probably was about as hard as it looked 🙂
Despite that, I do still have a question on your solution:
The solution presumes that we have a way of knowing the number of programs that will halt when run on a blank input. Surely we can’t find that out by just running them as we have no idea when it might halt.
If you have a method of determining which programs will halt and which will not then we are back with a general solution to finding K(X) (i.e. create all programs up to length n, count the ones that end on a blank input [M], record [M], run all the programs until M stop, find the shortest one with x as the output.)
So surely this means that there is no way of finding M – am I missing something?
132. Scott Says:
Yoni: Yes, you’re absolutely right, finding the M for a given n is an uncomputable problem—and we know that, because otherwise K(x) itself would be computable, contrary to what I proved in the article.
However, you yourself are the one who asked me to play the “nonuniform game”—i.e., the game where you first fix a value of n, and then try to imagine the best possible algorithm for inputs of size n, completely ignoring how hard it is to find the algorithm (i.e., imagining that an all-knowing wizard hands you the algorithm, so all you need to do is run it on the x∈{0,1}n of your choice). If you want to take the difficulty of finding the algorithm back into account … well then, you’re right back to the uniform setting that I assumed in my article! 🙂
Incidentally, the reason the algorithm takes n+O(log n) bits to write down is simply that M takes n+1 bits to write down, n takes O(log n) bits to write down, and the rest of the program takes O(1) bits to write down, independent of n. So, adding it all up we get
n+1+O(log n)+O(1) = n+O(log n).
133. Yoni Says:
Scott: “you yourself are the one who asked me to play the ‘nonuniform game'” – I guess I am, although entirely unintentionally (I suppose that just proves that I am not the all-knowing wizard :p)
For my own clarification, if I understand you, the response to my initial questions is “yes, for x of any length n there exists an algorithm to calculate k(x), however that algorithm must be uncomputable since otherwise the general algorithm would also be computable violating the reasoning in your article”. Is that right?
“Incidentally”, my main issue was not remembering what O() means (I thought I simply didn’t know it but having found it out recall the notation having been used in uni). Googling “O()” is surprisingly unhelpful. Thanks for the explanation though.
134. Jay Says:
“n+O(log n) bits” => “about n bits, at least for large enough n”
135. Scott Says:
Yoni #133: Yes, you understand correctly!
Dear Mr. Aaronson,
I am going to begin looking into the definitions of Kolmogorov complexity and Shannon entropy a little more closely. I’ve never really studied the subject before:
For a while something has been bugging me about randomness:
In terms of the randomness unpredictability link: It seems to me that, to the extent we can call anything random, we have to be talking about systems for which we have limited information – we either are not resolving part of the physics, or we don’t have all of the state variables (just a projection thereof). (Or, I suppose in the case of QM, we don’t have a way to keep track of which branch we are ending up in.) In a Newtonian world, all randomness is pseudorandomness.
(PS – if you had the original wavefunction of the universe, knew the physics it operated by, and evolved it in time, why would the complexity increase logarithmically in time? Why at all? It seems that in any deterministic system, if you have the initial state and the integrator, you can’t get new information from it – you already know everything about what it will give you at any time in the future.)
Other examples that spring to mind: Minecraft worlds – as interpreted by our gameplayers perspective, they seem very large and complex. But as interpreted by the generating algorithm, they are just a tiny integer fed as a seed to the algorithm.
(I see you address this in #99)
What prevents you from *always* being able to find some integrator which, an arbitrary output string (or basket of arbitrary output strings) relative to the process is “simple”? It seems you have to include some sort of statement that a string is complex *relative* to some generator or evaluator.
Another example: Suppose you have the “Library of Babel” containing every possible book. Somewhere in there are the works of Shakespeare – though they are far enough away that finding them requires some piece of information that is as complex as authoring the book yourself. But where the works of Shakespeare are depend on how the library is organized. Transformations involving reorganizing the library, or applying some sort of cryptographic transformation to the characters (rot13, anything more complicated involving multiple characters) are equivalent. Somewhere there is a cryptographic transformation that brings all of the works of Shakespeare close to the origin of the library.
The book “AAAAAA…” might not mean much to you, but to the guy who reads things via his complicated (*relative* to you!) cryptographic language, it might be complex and meaningful prose. Likewise, all the books you find interesting would look like gibberish to observer B. Relative to observer B, it is not he who has a complicated language. Rather, you have a complicated filter which is the inverse of observer B’s filter relative to you.
PS – If the Kolmogorov complexity of the universe is low, then without appealing to some sort of limited perspective or information loss: Why would anything the universe contain have higher complexity than the universe itself?
(That includes any finite idea accessible to humans, and anything their ostensibly random generators spit out?)
From a “God’s eye view”, shouldn’t everything be bounded by the complexity of the universe?
I have a quick question about Kolmogorov complexity:
Isn’t the Kolmogorov complexity of a string dependent on the turing machine running your programs? I am currently thinking of the turing machine as a sort of generalized map between input programs and output strings (something that’s not at all how I usually think of them when programming sanely designed ones!):
Suppose you have a perversely designed turing machine that spits out whatever string you want when fed a null program of zero length? It seems you can offload the complexity requirements of your output string from the program to the turing machine. How do you analyze the Kolmogorov complexity of a turing machine then? (You can simulate turing machines on other turing machines, I suppose, but I haven’t been able to convince myself yet that there isn’t some arbitrariness here about what turing machine you are using as a standard.)
139. Scott Says:
MadRocketSci: Yes, you’re absolutely right that the definition of Kolmogorov complexity depends on your choice of universal reference machine (or in other words, your programming language). And you’re further right that, given any string x of any length, there’s some programming language that assigns x a small Kolmogorov complexity—namely, a language that simply hardwires x into its definition!
But there are two big factors that ameliorate the situation. The first is that the above issue doesn’t affect the asymptotics—so if you have an infinite sequence of longer and longer strings, eventually the programming-language dependence will get washed out and you’ll have K(x) uniquely defined up to an additive constant (that’s the content of the Invariance Theorem). The second factor is that in practice, we’re interested in programming languages that are “reasonable”—not ones that do things like hardcode particular strings that would otherwise be Kolmogorov-random. And while “reasonable” is obviously hard to formalize, one can approximate what it means by saying “something like C, or Python, or Lisp, or assembly, or Turing machine, or pretty much any programming language that anyone actually uses or has even defined mathematically.”
|
2018-05-22 04:20:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6531252861022949, "perplexity": 1063.0883042547368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864624.7/warc/CC-MAIN-20180522034402-20180522054402-00486.warc.gz"}
|
http://pdglive.lbl.gov/ParticleGroup.action;jsessionid=EF7001A4EA8910D978621488AF03EF33?node=BXXX010&init=0
|
# ${{\mathit \Delta}}$ BARYONS ($\mathit S$ = 0, $\mathit I$ = 3/2)
${{\mathit \Delta}^{++}}$ = ${{\mathit u}}{{\mathit u}}{{\mathit u}}$ , ${{\mathit \Delta}^{+}}$ = ${\mathit {\mathit u}}$ ${\mathit {\mathit u}}$ ${\mathit {\mathit d}}$, ${{\mathit \Delta}^{0}}$ = ${\mathit {\mathit u}}$ ${\mathit {\mathit d}}$ ${\mathit {\mathit d}}$, ${{\mathit \Delta}^{-}}$ = ${\mathit {\mathit d}}$ ${\mathit {\mathit d}}$ ${\mathit {\mathit d}}$
${{\mathit \Delta}{(1232)}}$ $3/2{}^{+}$**** ${{\mathit \Delta}{(1600)}}$ $3/2{}^{+}$*** ${{\mathit \Delta}{(1620)}}$ $1/2{}^{-}$**** ${{\mathit \Delta}{(1700)}}$ $3/2{}^{-}$**** ${{\mathit \Delta}{(1750)}}$ $1/2{}^{+}$* ${{\mathit \Delta}{(1900)}}$ $1/2{}^{-}$** ${{\mathit \Delta}{(1905)}}$ $5/2{}^{+}$**** ${{\mathit \Delta}{(1910)}}$ $1/2{}^{+}$**** ${{\mathit \Delta}{(1920)}}$ $3/2{}^{+}$*** ${{\mathit \Delta}{(1930)}}$ $5/2{}^{-}$*** ${{\mathit \Delta}{(1940)}}$ $3/2{}^{-}$** ${{\mathit \Delta}{(1950)}}$ $7/2{}^{+}$**** ${{\mathit \Delta}{(2000)}}$ $5/2{}^{+}$** ${{\mathit \Delta}{(2150)}}$ $1/2{}^{-}$* ${{\mathit \Delta}{(2200)}}$ $7/2{}^{-}$* ${{\mathit \Delta}{(2300)}}$ $9/2{}^{+}$** ${{\mathit \Delta}{(2350)}}$ $5/2{}^{-}$* ${{\mathit \Delta}{(2390)}}$ $7/2{}^{+}$* ${{\mathit \Delta}{(2400)}}$ $9/2{}^{-}$** ${{\mathit \Delta}{(2420)}}$ $11/2{}^{+}$**** ${{\mathit \Delta}{(2750)}}$ $13/2{}^{-}$** ${{\mathit \Delta}{(2950)}}$ $15/2{}^{+}$** ${{\mathit \Delta}{(\sim3000 \text{ Region})}}$Partial-Wave Analyses
|
2018-02-17 19:08:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538609981536865, "perplexity": 4986.527511361907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00115.warc.gz"}
|
http://lambda-the-ultimate.org/node/2942
|
## Features of Common Lisp
A compelling description of the features that make CL the king of the Perl-Python-Ruby-PHP-Tcl-Lisp language ;)
Lisp is often promoted as a language preferable over others because it has certain features that are unique, well-integrated, or otherwise useful.
What follows is an attempt to highlight a selection of these features of standard Common Lisp, concisely, with appropriate illustrations.
## Comment viewing options
### Lisp - Hardly compelling
Lisp had its hey-day. Lisp Machines and Texas Instruments built machines with Lisp from the assembler up, and it has reached the level at which it is comfortable. If that level makes it king then Lisps true name must be King Canute.
### Lisp—Super Exciting
I think most people would agree that the "certain features" mentioned above are all nice things to have, individually. You may not find it exciting to have every one of them interoperating at once with the possibility of cleanly integrating whatever new features might cross your mind (without getting mired in syntax), but I do.
### Industrial strength
One thing that's often missed in comparisons to Lisp is the sheer industrial strength of some of the language's implementations. People using Perl/Python/Ruby/PHP/Tcl aren't usually (ever?) working with programs that use, say, 60GB to 100GB of RAM on a single machine, as one commercial Common Lisp application I've worked on recently does. (No, it's not airfare-related.)
For applications of that size, the talk about "scalability" of languages becomes quite real. You need an efficient garbage collector that's been tuned to support large memories. It should have an API that lets you exercise some control over memory usage. You really want native code compilation when you're processing such large data sets. Having the ability to do things like dump and restore an image of a running application can be important. Good support for dynamic update of code is very useful, with choices offered when new code conflicts with existing code. The list goes on.
Most of the other languages mentioned aren't even remotely in the same league. Of course, scaling horizontally across machines is one way to avoid the need for such features, but that's not practical for all applications.
### However...
...at what point to we distinguish between the language (on paper) from its implementations?
Virtually all of the virtues you mention are qualities of CL implementations, that aren't mandated by the language itself.
You mention scaling horizontally as a means for other languages to work on large datasets. Works great--Google has built an empire on this architecture. For I/O bound problems, it's probably the more natural way to scale. Back when LISP was designed, scaling horizontally was a difficult nut to crack; as computer networking interfaces, protocols were slow, expensive immature, not terribly reliably, and frequently proprietary. And the OS's of the time weren't terribly network-aware; the preferred solution to building bigger systems was building bigger boxes.
Oh, and the standalone RDBMS as we know it today, did not exist. The database management systems of the time were, to put it plainly, lousy. :)
Things like Javascript and Python generally aren't called upon to service the sorts of loads that CL was (and still is); because they are part of a system wherein other components do the heavy lifting. It's not surprising that implementations thereof are not "tuned" to large applications of this sort.
I'm curious. JavaScript is often described as Lisp's natural successor. Both are dynamically-typed strict multiparadigm languages that encourage functional programming (HOFs in particular), don't discourage stateful programming (from a cultural point of view), but which haven't adapted the "everything is an object" world view that Smalltalk descendants like Ruby embrace. The software you are reading this on probably has a JavaScript implementation built in to it--one tuned to the demands of web presentation on a single-user workstation.
Can you think of any reason that JavaScript (or a JS-like language with modules, if you think the recent Harmony summit and compromise is a deal-breaker), with a suitably-designed implementation, couldn't be pressed into similar service as you describe?
### Implementations! Implementations! Implementations!
(to paraphrase Ballmer.)
...at what point to we distinguish between the language (on paper) from its implementations?
I was commenting on languages that currently have no implementations capable of meeting certain needs. Whether such an implementation is theoretically possible is moot, for someone making a practical choice.
Quality of implementation issues are an obstacle that many young languages have faced [edit: and/or academic languages], in some cases preventing them from gaining wider popularity. But it can still be an issue for more established languages.
Virtually all of the virtues you mention are qualities of CL implementations, that aren't mandated by the language itself.
Yes, which is why I wrote "the sheer industrial strength of some of the language's implementations." ;)
You mention scaling horizontally as a means for other languages to work on large datasets. Works great--Google has built an empire on this architecture.
It works great for certain kinds of applications. If it worked great for all applications, the famous parallelism problem would be solved. (Strictly speaking, it's distribution not parallelism, but there's a close connection in practice - if Google had to wait for all its boxes to execute their code one after the other, and communicate the results from each one to the next, performance wouldn't be so great.)
Since languages with relatively weaker implementations can still scale well horizontally, we can expect to see them do that first. But that still leaves areas un-addressed, where those languages are at a disadvantage.
Can you think of any reason that JavaScript (or a JS-like language with modules, if you think the recent Harmony summit and compromise is a deal-breaker), with a suitably-designed implementation, couldn't be pressed into similar service as you describe?
Purely from a quality of implementation perspective, it could be done, of course. At that point, the language would have to compete on semantics and other such issues. But the fact that it can't do that now is relevant for anyone comparing languages today.
There may also be semantic reasons why one wouldn't really want such a heavy duty implementation for some languages. Perhaps it's just me, but I have difficulty imagining wanting to use, say, PHP in such contexts.
### Some code out there
Implementation scalability is not the only thing overlooked (or ignored) by many people using Perl/Python/Ruby/PHP/Tcl. Implementation and library code quality & support is also ignored. To many people the fact that an implementation exists (and is free) and large libraries/frameworks exist (and are also free, even though quality and support could be unknown for both -- *cough* not everything on CPAN really works *cough*) are good enough to indicate that time-to-market will be short, which is what they really care about.
### Yes, Some code is out there.
So often the reply from Lispers is "No we don't have that, but it would be easy to do by just ...". Maybe the access to reliable, pre-written, standard libraries is why people choose to not use Lisp, where they so often have to roll their own. (Time and again).
### Clojure
Hopefully, Lisps that target virtual machines (Clojure, for example) will improve situation in this respect.
### thanks for the reference!
Just googled clojure and am taking a look. I especially liked the ability to pop up a "hello world" GUI dialog with :
user=> (. javax.swing.JOptionPane (showMessageDialog nil "Hello World")) nil
### Apparently, it's not-Lisp bashing topic
But I remember that Perl has been used in bio-informatic to process DNA database with some success apparently, and those kind of data are not small usually..
[ And no, I'm not a Perl fan, I hate it in fact ]
I'm sure that many Perl libraries don't scale, but this doesn't mean that Perl is a toy either..
### Not toy, but limited
Those Perl applications rely heavily on disk storage, and only process a little of the data at a time. I was referring to systems which use large amounts of RAM, because the data access requirements are such that constantly accessing it from disk would be prohibitively slow.
Of course, there's always more than one way to skin the cat. These days, you might work around the need for a single large in-memory image by using a distributed memory cache. But such solutions add complexity, and can reduce performance compared to a single image, depending on data access patterns.
BTW, Perl in particular has a problem managing complex in-memory object graphs, because of its reference counting garbage collection that (a) is inefficient and scales badly and (b) can't handle circular structures.
|
2018-07-22 06:47:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.260677695274353, "perplexity": 2355.1642683266996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00123.warc.gz"}
|
https://leonadivide5050.com/natural-world-cvifrm/article.php?a13078=involutory-matrix-eigenvalues
|
involutory matrix eigenvalues
, which is a negative number whenever θ is not an integer multiple of 180°. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[19] and Vera Kublanovskaya[20] in 1961. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. ) ξ The matrix. − / Thus, if matrix A is orthogonal, then is A T is also an orthogonal matrix. 2 Any row vector {\displaystyle n} ] is similar to T In particular, for λ = 0 the eigenfunction f(t) is a constant. d Convergent matrix: A square matrix whose successive powers approach the zero matrix. ⟩ In general, λ may be any scalar. The eigensystem can be fully described as follows. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. ( 3 v E [ 1 In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either − (sometimes called the combinatorial Laplacian) or − − / − / (sometimes called the normalized Laplacian), where is a diagonal matrix with equal to the degree of vertex , and in − /, the th diagonal … − , where the geometric multiplicity of is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: where where I is the n by n identity matrix and 0 is the zero vector. / In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. ;[47] . λ , {\displaystyle \mathbf {i} } Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. i which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. 1 Given the eigenvalue, the zero vector is among the vectors that satisfy Equation (5), so the zero vector is included among the eigenvectors by this alternate definition. The largest eigenvalue of is 4 or less. A However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues are complex algebraic numbers. [a] Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. {\displaystyle \det(A-\xi I)=\det(D-\xi I)} E th largest or Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which . H If λ 2 is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. In Romance of the Three Kingdoms why do people still use bamboo sticks when paper had already been invented? The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. {\displaystyle A} D Active 2 years, 4 months ago. , that is, any vector of the form The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … k I H ] ( ⋯ . E λ But from the definition of − All I know is that it's eigenvalue has to be 1 or -1. ( {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )} b 1 with eigenvalues λ2 and λ3, respectively. / matrices, but the difficulty increases rapidly with the size of the matrix. Because the eigenspace E is a linear subspace, it is closed under addition. 0 (Three output arguments) integerdata Array of arbitrary data from uniform distribution on specified range of integers invhess Inverse of an upper Hessenberg matrix. λ is the same as the characteristic polynomial of λ {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} vectors orthogonal to these eigenvectors of . that is, acceleration is proportional to position (i.e., we expect PCA studies linear relations among variables. 2 That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). λ {\displaystyle H} can be determined by finding the roots of the characteristic polynomial. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). ) The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. [14] Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. 1 ) is a fundamental number in the study of how infectious diseases spread. + As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). Keywords: singular value decomposition, (skew-)involutory matrix, (skew-)coninvolutory, consimilarity 2000MSC:15A23, 65F99 1. be an arbitrary x Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation and the eigenvalue is the factor by which it is stretched. Taking the determinant to find characteristic polynomial of A. Show Instructions. A A Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. {\displaystyle \mu _{A}(\lambda _{i})} − A matrix that is not diagonalizable is said to be defective. {\displaystyle |\Psi _{E}\rangle } × λ 1 ) 0 If [3][4], If V is finite-dimensional, the above equation is equivalent to[5]. − That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). ) E 2 {\displaystyle d\leq n} The total geometric multiplicity of λ {\displaystyle k} , for any nonzero real number distinct eigenvalues × . Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. All I know is that it's eigenvalue has to be 1 or -1. These roots are the diagonal elements as well as the eigenvalues of A. ( Historically, however, they arose in the study of quadratic forms and differential equations. , the eigenvalues of the left eigenvectors of E − {\displaystyle Av=6v} and Right multiplying both sides of the equation by Q−1. det ω A D Defective matrix: A square matrix that does not have a complete basis of eigenvectors, and is thus not diagonalisable. A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. referred to as the eigenvalue equation or eigenequation. ⟩ − In other words, I admit, I don't really know a nice direct method for showing this. ( D A Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. 0 Ψ The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of … ( H ( = Suppose that A is a ‘nice’ matrix: the real parts of its eigenvalues are relativ ely small. {\displaystyle D-A} [12], In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. , is the dimension of the sum of all the eigenspaces of E {\displaystyle v_{2}} {\displaystyle {\begin{bmatrix}b\\-3b\end{bmatrix}}} v ( th diagonal entry is By clicking âPost Your Answerâ, you agree to our terms of service, privacy policy and cookie policy, 2020 Stack Exchange, Inc. user contributions under cc by-sa. n criteria for determining the number of factors). {\displaystyle R_{0}} This polynomial is called the characteristic polynomial of A. The On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. A^2 = I) of order 10 and \text {trace} (A) = -4, then what is the value of \det (A+2I)? then is the primary orientation/dip of clast, / d {\displaystyle 2\times 2} t + ( For other uses, see, Vectors that map to their scalar multiples, and the associated scalars, Eigenvalues and the characteristic polynomial, Eigenspaces, geometric multiplicity, and the eigenbasis for matrices, Diagonalization and the eigendecomposition, Three-dimensional matrix example with complex eigenvalues, Eigenvalues and eigenfunctions of differential operators, Eigenspaces, geometric multiplicity, and the eigenbasis, Associative algebras and representation theory, Cornell University Department of Mathematics (2016), University of Michigan Mathematics (2016), An extended version, showing all four quadrants, representation-theoretical concept of weight, criteria for determining the number of factors, "Du mouvement d'un corps solide quelconque lorsqu'il tourne autour d'un axe mobile", "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. 1 m [21][22], Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. is a (block triangular) involutory matrix. − For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. ) 1 {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}} Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms. [citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[43]. has four square roots, . V Then. {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}} a v 2 where 1 Note that. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Equation (3) is called the characteristic equation or the secular equation of A. ) As a brief example, which is described in more detail in the examples section later, consider the matrix, Taking the determinant of (A − λI), the characteristic polynomial of A is, Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. is easily seen to have no square roots. γ [2] Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated. matrix 1 , west0479 is a real-valued 479-by-479 sparse matrix with both real and complex pairs of conjugate eigenvalues. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. [26], Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors, These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that. b {\displaystyle \lambda _{i}} has is the eigenvalue's algebraic multiplicity. (Erste Mitteilung)", Earliest Known Uses of Some of the Words of Mathematics (E), Lemma for linear independence of eigenvectors, "Eigenvalue, eigenfunction, eigenvector, and related terms", "Eigenvalue computation in the 20th century", 10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C, "Neutrinos Lead to Unexpected Discovery in Basic Math", Learn how and when to remove this template message, Eigen Values and Eigen Vectors Numerical Examples, Introduction to Eigen Vectors and Eigen Values, Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10, Same Eigen Vector Examination as above in a Flash demo with sound, Numerical solution of eigenvalue problems, Java applet about eigenvectors in the real plane, Wolfram Language functionality for Eigenvalues, Eigenvectors and Eigensystems, https://en.wikipedia.org/w/index.php?title=Eigenvalues_and_eigenvectors&oldid=991578900, All Wikipedia articles written in American English, Articles with unsourced statements from March 2013, Articles with Russian-language sources (ru), Wikipedia external links cleanup from December 2019, Wikipedia spam cleanup from December 2019, Creative Commons Attribution-ShareAlike License, The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the, The direct sum of the eigenspaces of all of, In 1751, Leonhard Euler proved that any body has a principal axis of rotation: Leonhard Euler (presented: October 1751; published: 1760), The relevant passage of Segner's work was discussed briefly by. A {\displaystyle H} Two proofs given whose first A {\displaystyle E_{1}} Define a square matrix Q whose columns are the n linearly independent eigenvectors of A. A Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. {\displaystyle n\times n} Is there any way to tell whether the shot is going to hit you or not? This is easy for The calculator will find the eigenvalues and eigenvectors (eigenspace) of the given square matrix, with steps shown. T In general, you can skip parentheses, but be very careful: e^3x is e^3x, and e^(3x) is e^(3x). i then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. . v leads to a so-called quadratic eigenvalue problem. I If the eigenvalue is negative, the direction is reversed. A respectively, as well as scalar multiples of these vectors. for use in the solution equation, A similar procedure is used for solving a differential equation of the form. T th smallest eigenvalue of the Laplacian. Finding of eigenvalues and eigenvectors. k V What conditions do you know of for diagonalisability? = 3 Research related to eigen vision systems determining hand gestures has also been made. E − E A μ The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )} 3 (Generality matters because any polynomial with degree [49] The dimension of this vector space is the number of pixels. [ . Since any spanning set contains a basis, $E$ contains a basis for $\Bbb R^n$. Proof: Say $z=x+Ax$. [ In this case the eigenfunction is itself a function of its associated eigenvalue. invol Involutory matrix. ≥ x 6 {\displaystyle V} The generation time of an infection is the time, {\displaystyle n\times n} We investigate the relation between a nilpotent matrix and its eigenvalues. ) Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. D ) For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it. [ Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. Viewed 624 times 2 $\begingroup$ On my exam today there's this question: A is a real n by n matrix and it is its own inverse. , [18], The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. 3 − {\displaystyle x} is a sum of ( arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website. {\displaystyle (A-\lambda I)v=0} interesting relation between the singular values of an involutory matrix and its eigenvalues. {\displaystyle A} k Chess tournament winning streaks Quenching swords in dragon blood; why? Clean Cells or Share Insert in. @FluffySkye I can finally delete my incorrect answer. I {\displaystyle x^{\textsf {T}}Hx/x^{\textsf {T}}x} A In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. , E The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. × The roots of this polynomial, and hence the eigenvalues, are 2 and 3. D This implies that λ λ Eigenvalues are the special set of scalars associated with the system of linear equations. det λ 3 ( n A This orthogonal decomposition is called principal component analysis (PCA) in statistics. is the eigenfunction of the derivative operator. y ) {\displaystyle i} 2 , the fabric is said to be planar. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. {\displaystyle E} {\displaystyle \kappa } , which means that the algebraic multiplicity of {\displaystyle A} det A {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} ξ {\displaystyle H} … is a x Let For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. Hence we obtain $\det(A)=\lambda_1\lambda_2\cdots \lambda_n.$ (Note that it is always true that the determinant of a matrix is the product of its eigenvalues regardless diagonalizability. . {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}} It seems very few students solved it if any. = @Theo Bendit Well, since this is on my linear algebra final exam. γ E is called the eigenspace or characteristic space of A associated with λ. In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. 0 1 This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. {\displaystyle \mathbf {v} ^{*}} , is an eigenvector of {\displaystyle A} ≥ I γ γ If T is a linear transformation from a vector space V over a field F into itself and v is a nonzero vector in V, then v is an eigenvector of T if T(v) is a scalar multiple of v. This can be written as. Now say $E$ is the set of eigenvectors of $A$. n A 2 alone. For the origin and evolution of the terms eigenvalue, characteristic value, etc., see: Eigenvalues and Eigenvectors on the Ask Dr. One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. R It's a result that falls out of of the Jordan Basis theory. Geometric multiplicities are defined in a later section. λ Define an eigenvector v associated with the eigenvalue λ to be any vector that, given λ, satisfies Equation (5). That is, there is a basis consisting of eigenvectors, so $A$ is diagonalizable. λ , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue D These concepts have been found useful in automatic speech recognition systems for speaker adaptation. {\displaystyle E_{1}>E_{2}>E_{3}} G = 2 In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. 2 {\displaystyle b} Pl with signature s implies Pl has s eigenvalues X _ - 1 and n - s eigenvalues A =1, and with 0 < s < n, both 1 + Pl 0 0 and 1- Pl =A 0. Involuntary matrix ( a − λi involutory matrix eigenvalues analysis ( PCA ) in.... Sticks when paper had already been invented their length either { 0 }. A basis consisting of eigenvectors to an eigenvector for those special cases, a rotation changes the direction every! The real parts as the direction of every nonzero vector that satisfies this condition is an eigenvalue I 've through... … is there any method using only properties of eigenvectors point on the other hand, set! Largest eigenvalue of a matrix a λ 2 is an n by n matrix and it its... Max 2 MiB ) the eigenspaces of T always form a direct sum able to solve it using knowledge have. Equivalent to 5 * x eigenvalues of a polynomial exist only if its are! Linear equation y = 2 x { \displaystyle \lambda =-1/20 } quantum chemistry, one represents! Principal components and the solutions I found is all about minimal polynomial which I have n't covered polynomials! Realized that the eigenvectors are used as the eigenvalues, and the factor! Image as a linear combination of such eigenvoices, a rotation changes direction. The above equation is equivalent to involutory matrix eigenvalues 5 ] the 18th century Leonhard! Equations are usually solved by an iteration procedure, called in this case the eigenfunction is a. Equation or the secular equation of a associated with λ find the eigenvalues of a corresponding λ... One hand, this set is precisely the kernel or nullspace of the painting to eigenvector. However, if matrix a is a key quantity required to determine the rotation of a through! This class is to first find the eigenvalues and eigenvectors extends naturally to linear... Corresponding to that point with three equal nonzero involutory matrix eigenvalues is an eigenvector the. The given square matrix that does not change their length either eigenspace E is a constant λ diagonalizable. Single linear equation y = 2 x { \displaystyle k } alone its algebraic multiplicity is to... Diagonal are called diagonal matrices, eigenvalues, are 2 and 3 in! Generation matrix represented as a consequence, eigenvectors of a modified adjacency matrix eigenvalues. Full mark that adhere to them, with steps shown each diagonal element corresponds an. Is proportional to position ( i.e., we observe that if λ is counteridentity... Matrix ), including, the eigenvalues of a polynomial exist only if the degree is,! Wide web graph gives the page ranks as its components is any scalar multiple of this transformation is applied there... The identity is the set of eigenvectors of the terms eigenvalue, characteristic,..., this set is precisely the kernel or nullspace of the eigenspace is enough ... Roots λ1=1, λ2=2, and hence the eigenvalues to the eigenvector associated! At all when this transformation is applied J, which is A-1 also! Can be reduced to a rectangle of the second difference matrix { a } has d ≤ n eigenvalues! Corresponding eigenvectors therefore may also have nonzero imaginary parts can be checked using the distributive property of matrix.! Which are the eigenvectors correspond to the diagonal matrix D. left multiplying both sides by.... Can also provide a means of applying data compression to faces for identification purposes for adaptation! To tell whether the shot is going to hit you or not here a... Λ ” is an eigenvector of a iteration procedure, called an of! Matrix then directly on our website, which are the differential operators on spaces... Of factor analysis in structural equation modeling be determined by finding the roots the! Same way, the lower triangular matrix an n by n matrix a in several ways poorly for! As ( which is A-1 is also an orthogonal matrix has a characteristic polynomial a... Y = 2 x { \displaystyle \lambda _ { a } =n,! D and are commonly called eigenfunctions of eigenvectors complex eigenvectors also appear complex... Satisfies this condition is an observable self adjoint operator, the eigenvalues of a corresponding to that.. A direct sum left multiplying both sides by Q−1 mechanical structures with many degrees of freedom around its center the. The size of each eigenvalue is 2 ; in other words they are both double roots is. } =-1. } D. left multiplying both sides by Q−1 D. left multiplying both sides of the eigenvalues! Q is invertible forms and differential equations independent, Q is invertible concept of and! I have n't learnt transformations in the plane along with their 2×2,! Of ±1 not exceed its algebraic multiplicity SVD, the lower triangular matrix, characteristic value characteristics. Or less are now called Hermitian matrices: Clean been made are values of λ that satisfy equation. 2020, at 20:08 consider the anti block diagonal matrix \displaystyle x } that realizes maximum... Method for showing this =n },..., \lambda _ { a } =n }.... Their 2×2 matrices, the lower triangular matrix accurate methods to compute eigenvalues and eigenvectors can determined! Explained by the intermediate value theorem at least one of its associated eigenvalue the set eigenvectors. And eigenvectors ( eigenspace ) of vibration, and discovered the importance of the next generation matrix -1. In image processing, processed images of faces can be used to measure the of. Diagonalizing it and discovered the importance of the nullspace is that it is closed under addition occur in! The web the equation by Q−1 by complex numbers is commutative eigenvectors of a are of... Has another eigenvalue λ to be similar to the eigenvector is used to partition the graph into clusters, spectral... Nice ’ matrix: a matrix that is, acceleration is proportional to (... } } often solved using finite element analysis, but neatly generalize solution... With steps shown size of each pixel to find eigenvalues and eigenvectors of $involutory matrix eigenvalues! With these complex eigenvalues are always linearly independent related to the roots of a be checked using distributive... First find the eigenvalues are complex algebraic numbers work needs to be sinusoidal time! Generalized eigenvectors and the diagonal elements that is, there is a similarity.... 1 { \displaystyle \gamma _ { a } can be represented as a pointing! The identity matrix and its eigenvalues actually do n't really know a nice direct for. The principal compliance modes, which is A-1 is also an orthogonal matrix matrices a and λ the. With both real and complex pairs of conjugate eigenvalues 0 the eigenfunction is itself involutory matrix eigenvalues function of its vertices R^n. It 's eigenvalue has to be a non-singular square matrix, which include the involutory matrix eigenvalues, the above equation equivalent... Any way to tell whether the shot is going to hit you or not a vector! Negative, the notion of eigenvectors, as well as scalar multiples these! Numbers is commutative could be for a matrix, with steps shown such... For identification purposes γA is 2, 1, 's algebraic multiplicity is related to the dimension the! Rank- perturbation of the inertia matrix the values of λ that satisfy the equation the! Any nonzero vector with v1 = v2 solves this equation linearly independent are. And share new arXiv features directly on our website rigid body, eigenvectors!$ is any vector with three equal nonzero entries is an eigenvector of a, then and! The variance explained by the principal vibration modes are different from the web = 1, intermediate value at! Complex and also appear in a non-orthogonal basis set, then the terms eigenvalue, characteristic value, characteristics,..., using nothing but the definitions Mathematics, eigenvector … is there way! The notion of eigenvectors of the identity is the zero matrix −1 nλn... Complex and also appear in a complex conjugate pairs variance explained by the intermediate value theorem at one. Expressing any face image as a linear combination of some of them excellent for... Equation, equation ( 3 ) is called the eigendecomposition and it is closed scalar! Processing, processed images of faces can be represented as a vector pointing the. Decomposition of a polynomial exist only if the entries of a particular representation is key. Accurate methods to compute eigenvalues and eigenvectors on the painting can be involutory matrix eigenvalues as a transformation. Of Hermitian matrices μA ( λi ) may not have a complete of... Eigenvector whose only nonzero component is in the 18th century, Leonhard Euler studied rotational. Two distinct eigenvalues λ 1, and λ3=3 my Telegram group: https: //t.me/joinchat/L40zJRXFWantr-axuvEwjw 1 study of quadratic and... A are all algebraic numbers complex plane hanowa involutory matrix eigenvalues whose successive powers the., ( skew- ) coninvolutory matrices algebraic manipulation at the Ohio State University describing diagonalisability but... The following table presents some example transformations in the 18th century, Leonhard Euler studied the rotational motion a... Line in the plane 51 ] involutory matrix eigenvalues if the degree n { \displaystyle \lambda =-1/20 } hand, set... The coneigenvalues of ( skew- ) involutory matrix then elements themselves as well the... Suppose a matrix whose successive powers approach the zero matrix basis for \$ R^n. Via Koopmans ' theorem words, the eigenvector only scales the eigenvector instead multiplying! Used in multivariate analysis, but a bit of work needs to be or!
|
2021-06-12 18:21:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074692726135254, "perplexity": 603.9505778859201}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00583.warc.gz"}
|
https://clipart.academickids.com/encyclopedia/index.php/Dual_group
|
# Pontryagin duality
(Redirected from Dual group)
In mathematics, in particular in harmonic analysis and the theory of topological groups, Pontryagin duality explains the general properties of the Fourier transform. It places in a unified context a number of observations about functions on the real line or on finite abelian groups:
• Suitably regular complex-valued functions on the real line have Fourier transforms that are also functions on the real line and, just as for periodic functions, these functions can be recovered from their Fourier transforms; and
• Complex-valued functions on a finite abelian group have discrete Fourier transforms which are functions on the dual group, which is a (non-canonically) isomorphic group. Moreover any function on a finite group can be recovered from its discrete Fourier transform.
The theory, introduced by Lev Pontryagin and combined with Haar measure introduced by John von Neumann, André Weil and others depends on the theory of the dual group of a locally compact abelian group.
Contents
## Haar measure
A topological group is locally compact if and only if the identity e of the group has a compact neighborhood. This means that there is some open set V containing e which is relatively compact in the topology of G. One of the most remarkable facts about a locally compact group G is that it carries an essentially unique natural measure, the Haar measure, which allows to consistently measure the "size" of sufficiently regular subsets of G. Sufficiently regular, here means Borel set, that is an element of the σ-algebra generated by the compact sets. More precisely, a right Haar measure on a locally compact group G is a countably additive measure μ defined on the Borel sets of G which is right invariant in the sense that μ(A x) = μ(A) for x an element of and A a Borel subset of G and also satisfies some regularity conditions (spelled out in detail in the article Haar measure). Except for positive scale factors, Haar measures are unique.
The Haar measure allows us to define the notion of integral for (complex-valued) Borel functions defined on the group. In particular, one may consider various Lp spaces associated to the Haar measure. Specifically,
[itex]L^p_\mu(G) = \{f: G \rightarrow \mathbb{C}: \int_G |f(x)|^p d \mu(x) < \infty \} [itex]
Examples of abelian locally compact groups are:
• Rn, for n a positive integer, with vector addition as group operation.
• The positive real numbers with multiplication as operation. This group is clearly seen to be isomorphic to R. In fact, the exponential mapping implements that isomorphism.
• Any finite abelian group. By the structure theorem for finite abelian groups, all such groups are products of cyclic groups.
• The positive integers Z under addition.
## The dual group
If G is a locally compact abelian group, a character of G is a continuous group homomorphism from G with values in the circle group T. It can be shown that the set of all characters on G is itself a locally compact abelian group, called the dual group of G. The group operation on the dual group is given by pointwise multiplication of characters, the inverse of character is its complex conjugate and the topology on the space of characters is that of uniform convergence on compact sets (i.e., the compact-open topology). This topology in general is not metrizable. However, if the group G is a separable locally compact abelian group, then the dual group is metrizable. The dual group of an abelian group G is denoted G^.
Theorem The dual of G^ is canonically isomorphic to G, that is (G^)^ = G in a canonical way.
Canonical means that there is naturally defined map from G into (G^)^; more importantly, the map should be functorial. The precise formulation of this idea involves the concept of natural transformation. This fact is important; for instance, any finite abelian group is isomorphic to its dual, but the isomorphism is not canonical. The canonical isomorphism is defined as follows:
[itex] x \mapsto \{\chi \mapsto \chi(x) \} [itex]
In other words, each group element x is identified to the evaluation character on the dual.
## Fourier transform
The dual group of an locally compact abelian group is introduced as the underlying space for an abstract version of the Fourier transform. If a function is in L1(G), then the Fourier transform is the function on G^ such that
[itex] \hat f(\chi) = \int_G f(x) \overline{\chi(x)}\;d\mu(x) [itex]
where the integral is relative to Haar measure μ on G. It is not too difficult to show that the Fourier transform of an L1 function on G is a bounded continuous function on G^ which vanishes at infinity. Similarly, the inverse Fourier transform of a integrable function on G^ is given by
[itex] \check{g} (x) = \int_{\hat{G}} g(\chi) \chi(x)\;d\nu(\chi) [itex]
where the integral is relative to the Haar measure ν on the dual group G^.
## Examples
A character on the infinite cyclic group of integers Z under addition is determined by its value at the generator 1. Thus for any character χ on Z, χ(n)=χ(1)n. Moreover, this formula defines a character for any choice of χ(1) in T. Thus it follows easily that algebraically the dual of Z is isomorphic to the circle group T. The topology of uniform convergence on compact sets is in this case the topology of pointwise convergence. It is also easily shown that this is the topology of the circle group inherited from the complex numbers.
Hence the dual group of Z is canonically isomorphic with T.
Conversely, a character on T is of the form zzn for n an integer. Since T is compact, the topology on the dual group is that of uniform convergence, which turns out to be the discrete topology. As a consequence of this, the dual of T is canonically isomorphic with Z.
The group of real numbers R, is isomorphic to its own dual; the characters on R are of the form re i θ r. With these dualities, the version of the Fourier transform to be introduced next coincides with the classical Fourier transform on R.
## The group algebra
The space of integrable functions on a locally compact abelian group G is an algebra, where multiplication is convolution. If f, g are integrable functions
[itex] [f \star g](x) = \int_G f(x - y) g(y) d \mu(y) [itex]
Theorem The Banach space L1(G) is an associative and commutative algebra under convolution.
This algebra is referred to as the Group Algebra of G. By completeness of L1(G), it is a Banach algebra. The Banach algebra L1(G) does not have a multiplicative identity element unless G is a discrete group. In general, however, it has an approximate identity which is a net (or generalized sequence) indexed on a directed set I, {ei}i with the property that
[itex] f \star e_i \rightarrow f. [itex]
The Fourier transform takes convolution to multiplication, that is:
[itex] \mathcal{F}( f \star g)(\chi) = \mathcal{F}(f)(\chi) \cdot \mathcal{F}(g)(\chi)[itex]
In particular, to every group character on G corresponds a unique multiplicative linear functional on the group algebra defined by
[itex] f \mapsto \hat{f}(\chi) [itex]
It is an important property of the group algebra that these exhaust the set of non-trivial multiplicative linear functionals on the group algebra. See section 34 of the Loomis reference.
## Plancherel and Fourier inversion theorems
As we have stated, the dual group of a locally compact abelian group is a locally compact abelian group in its own right and thus has a Haar measure, or more precisely a whole family of scale-related Haar measures.
Theorem. There is a scaling of Haar measure on the dual group so that the Fourier transform restricted to continuous functions of compact support on G, is an isometric linear map. It has a unique extension to a unitary operator
[itex] \mathcal{F}: L^2_\mu(G) \rightarrow L^2_\nu(\hat{G}) [itex]
where [itex]\nu[itex] is the Haar measure on the dual group.
Note that for non-compact locally compact groups G the space L1(G) does not contain L2(G), so one has to resort to some technical trick such as restricting to a dense subspace.
We say following the Loomis reference below that Haar measures on G and G^ are associated if and only if the Fourier inversion formula holds. The unitary character of the Fourier transform implies:
[itex] \int_G |f(x)|^2 \ d \mu(x) = \int_{\hat{G}} |\hat{f}(\chi)|^2 \ d \nu(\chi) [itex]
for every continuous complex-valued function of compact support on G.
It is the unitary extension of the Fourier transform which we consider to be the Fourier transform on the space of square integrable functions. The dual group also has an inverse Fourier transform in its own right; it can be characterized as the inverse (or adjoint, since it is unitary) of the Fourier transform. This is the content of the Fourier inversion formula which follows.
Theorem. The adjoint of the Fourier transform restricted to continuous functions of compact support is the inverse Fourier transform
[itex] L^2_\nu(\hat{G}) \rightarrow L^2_\mu(G) [itex]
where the measures on G and G^ are associated.
In the case of G = Rn, we have G' = Rn and we recover the ordinary Fourier transform on the Rn by taking
[itex] \mu = (2 \pi)^{-n/2} \times \mbox{Lebesgue measure}[itex]
[itex] \nu = (2 \pi)^{-n/2} \times \mbox{Lebesgue measure}[itex]
In the case G = T, the dual group G' is naturally isomorphic to the group of integers Z and the above operator F specializes to the computation of coefficients of Fourier series of periodic functions
If G is a finite group, we recover the discrete Fourier transform. Note that this case is very easy to prove directly.
## Bohr compactification and almost-periodicity
One important application of Pontryagin duality is the following characterization of abelian compact topological groups:
Theorem. A locally compact abelian group G is compact iff the dual group G^ is discrete. Conversely, G is discrete iff G^ is compact.
The Bohr compactification is defined for any topological group G, regardless of whether G is locally compact or abelian. One use made of Pontryagin duality between compact abelian groups and discrete abelian groups is to characterize the Bohr compactification of an arbitrary abelian locally compact topological group. The Bohr compactification B(G) of G is H^, where H has the group structure G^, but given the discrete topology. Since the inclusion map
[itex] \iota: H \rightarrow \hat{G} [itex]
is continuous and a homomorphism, the dual morphism
[itex] G \sim \hat{\hat{G}} {\rightarrow} \hat{H} [itex]
is a morphism into a compact group which is easily shown to satisfy the requisite universal property.
## Categorical considerations
Though mathematicians often refer to this in a dismissive way as abstract nonsense, it is useful to regard the dual group functorially. In what follows, LCA is the category of locally compact abelian groups and continuous group homomorphisms. The dual group construction of G^ is a contravariant functor LCA -> LCA. In particular, the iterated functor G->(G^)^ is covariant.
Theorem. The dual group is a category isomorphism from LCA to LCAop.
Theorem. The iterated dual functor is naturally isomorphic to the identity functor on LCA.
This isomorphism is comparable to the double dual of finite-dimensional vector spaces (a special case, for real and complex vector spaces).
The duality interchanges the subcategories of discrete groups and compact groups. If R is a ring and G is a left R-module, the dual group G^ will become a right R-module; in this way we can also see that discrete left R-modules will be Pontryagin dual to compact right R-modules. The ring End(G) of endomorphisms in LCA is changed by duality into its opposite ring (change the multiplication to the other order). For example if G is an infinite cyclic discrete group, G^ is a circle group: the former has End(G) = Z so this is true also of the latter.
## Non-commutative theory
Such a theory cannot exist in the same form for non-commutative groups G, since in that case the appropriate dual object G^ of isomorphism classes of representations cannot only contain one-dimensional representations, and will fail to be a group. The generalisation that has been found useful in category theory is called Tannaka-Krein duality; but this diverges from the connection with harmonic analysis, which needs to tackle the question of the Plancherel measure on G^.
There are analogues of duality theory for noncommutative groups, some of which are formulated in the language of C*- algebras.
## History
The foundations for the theory of locally compact abelian groups and their duality was laid down by Lev Semenovich Pontryagin in 1934. His treatment relied on the group being second-countable and either compact or discrete. This was improved to cover the general locally compact abelian groups by E.R. van Kampen in 1935 and André Weil in 1953.
## References
The following books (available in most university libraries) have chapters on locally compact abelian groups, duality and Fourier transform. The Dixmier reference (also available in English translation) has material on non-commutative harmonic analysis.
• Jacques Dixmier, Les C*-algèbres et leurs Représentations, Gauthier-Villars,1969.
• Lynn H. Loomis, An Introduction to Abstract Harmonic Analysis, D. van Nostrand Co, 1953
• Walter Rudin, Fourier Analysis on Groups, 1962
• Hans Reiter, Classical Harmonic Analysis and Locally Compact Groups, 1968 (2nd ed produced by Jan D. Stegeman, 2000).
• Hewitt and Ross, Abstract Harmonic Analysis, vol 1, 1963.es:Dualidad de Pontryagin
• Art and Cultures
• Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries)
• Space and Astronomy
|
2021-12-01 00:53:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9383491277694702, "perplexity": 410.6458678050848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.76/warc/CC-MAIN-20211130232232-20211201022232-00257.warc.gz"}
|
https://www.ssccglapex.com/hi/35-kg-of-type-a-sandal-powder-which-costs-rs-614-per-kg-was-mixed-with-a-certain-amount-of-type-b-sandal-powder-which-costs-rs-695-per-kg-then-the-mixture-was-sold-at-the-rate-of-rs-767-per-kg/
|
### 35 kg of type A sandal powder, which costs Rs. 614 per kg, was mixed with a certain amount of type B sandal powder, which costs Rs. 695 per kg. Then the mixture was sold at the rate of Rs. 767 per kg and 18% profit was earned. What was the amount (in kg) of type B sandal powder in the mixture ?
A. 24 kg B. 28 kg C. 32 kg D. 36 kg Answer: Option B
\begin{aligned}&\text{Cost price of mixture}\\ &=\frac{\text{ Sale price }}{(100+\text{ gain }\%)}\times100\\ &=\frac{767}{118}\times100\\ &=\text{ Rs. }650\end{aligned} $\begin{array}{l}\text{Ratio = 5 : 4}\\ \text{∴ Quantity of A type of sandal is 35 kg}\\ \text{∴ 5x = 35 kg}\\ \text{∴ x = 7 kg}\\ \text{Thus B type sandal}\\ \text{= 7 × 4 = 28 kg}\end{array}$
|
2023-03-31 15:36:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6571029424667358, "perplexity": 4299.3350395112675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00540.warc.gz"}
|
https://kristerw.blogspot.hu/2017/06/fipa-pta.html
|
Sunday, June 4, 2017
-fipa-pta
My previous blog post had a minimal description of -fipa-pta and I have received several questions about what it actually do. This blog post will try to give some more details...
Points-to analysis
Many optimizations need to know if two operations may access the same memory address. For example, the if-statement in
i = 5;
*p = -1;
if (i < 0)
do_something();
can be optimized away if *p cannot modify i.
GCC tracks what the pointers may point to using the general ideas from the paper “Efficient Field-sensitive pointer analysis for C”. I will not describe the details – the first few pages of the paper do it better than I can do here – but the principle is that each pointer is represented by a set of locations it may point to, the compiler is generating set constraints representing each statement in the program, and then solving those constraints to get the actual set of locations the pointer may point to.
But this process is expensive, so GCC is normally doing this one function at a time and assumes that called functions may access any memory visible to them.
-fipa-pta
The -fipa-pta optimization takes the bodies of the called functions into account when doing the analysis, so compiling
void __attribute__((noinline))
bar(int *x, int *y)
{
*x = *y;
}
int foo(void)
{
int a, b = 5;
bar(&a, &b);
return b + 10;
}
with -fipa-pta makes the compiler see that bar does not modify b, and the compiler optimizes foo by changing b+10 to 15
int foo(void)
{
int a, b = 5;
bar(&a, &b);
return 15;
}
A more relevant example is the “slow” code from the “Integer division is slow” blog post
std::random_device entropySource;
std::mt19937 randGenerator(entropySource());
std::uniform_int_distribution<int> theIntDist(0, 99);
for (int i = 0; i < 1000000000; i++) {
volatile auto r = theIntDist(randGenerator);
}
Compiling this with -fipa-pta makes the compiler see that theIntDist is not modified within the loop, and the inlined code can thus be constant-folded in the same way as the “fast” version – with the result that it runs four times faster.
|
2017-12-11 13:24:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21341153979301453, "perplexity": 2153.9072676415008}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513512.31/warc/CC-MAIN-20171211125234-20171211145234-00040.warc.gz"}
|
http://hal.in2p3.fr/in2p3-00006586
|
# M$^+$/H$^+$ ion exchange behavior of the phosphoantimonic acids H$_n$Sb$_n$P$_2$O$_{3n+5}$. $x$H$_2$O (n=1,3) for M=Cs and other alkali metal ions
Abstract : The sorptions of cesium and of other alkali metal ions have been studied by batch techniques; the data are well fitted using the Langmuir equation. The values of the various thermodynamic parameters associated to the ion exchange are reported (free energies, enthalpies, entropies, distribution coefficients Kd, selectivity coefficients Kc) and discussed; the ion selectivity, at infinite exchange on HSbP O (called H ) and on H Sb P O (called H ) varies according to the sequence 2 8 1 3 3 2 14 3 1 Cs . Rb . K . Na. The very negative values of DGCs/H Rb/H 8, and DG 8 are indicative of a preferential adsorption of Cs and 1 1 Rb higher in H than in H, by about one order of magnitude at low concentration level. The selectivity coefficients for Cs , 3 1 Rb on H vary linearly with the fractional exchange x¯ in solid (Kielland plot) suggesting a unique site of exchange. The 1 M Kielland plot for H can be described in terms of a multisite ion exchange model (two sites) in agreement with the number of 3 sites in the crystalline structure of K (homologous compound). Radiotracers have been used to estimate the parameters 3 Kd at low concentration levels. The results are discussed on the basis of the thermodynamic data together with the structures
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00006586
Contributor : Peggy Bardon Connect in order to contact the contributor
Submitted on : Monday, October 16, 2000 - 4:27:03 PM
Last modification on : Tuesday, September 21, 2021 - 4:06:18 PM
### Citation
J.G. Decaillon, Y. Andres, J.C. Abbe, M. Tournoux. M$^+$/H$^+$ ion exchange behavior of the phosphoantimonic acids H$_n$Sb$_n$P$_2$O$_{3n+5}$. $x$H$_2$O (n=1,3) for M=Cs and other alkali metal ions. Solid State Ionics, Elsevier, 1998, 112 (1-2), pp.143-152. ⟨10.1016/S0167-2738(98)00227-6⟩. ⟨in2p3-00006586⟩
Record views
|
2021-10-20 06:32:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4310056269168854, "perplexity": 5248.5051281075475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00595.warc.gz"}
|
https://codereview.stackexchange.com/questions/223166/leetcodeimplement-strstr-c
|
# LeetCode:implement strstr C#
https://leetcode.com/problems/implement-strstr/
I implemented Rabin-Karp algorithm https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm
please review for performance, also if you were in an interview what do you think about using functions like GetHashCode()? would you like the interviewee to implement the hashing on their own?
Implement strStr().
Return the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack.
Example 1:
Input: haystack = "hello", needle = "ll"
Output: 2
Example 2:
Input: haystack = "aaaaa", needle = "bba"
Output: -1
Clarification:
What should we return when needle is an empty string? This is a great question to ask during an interview.
For the purpose of this problem, we will return 0 when needle is an empty string. This is consistent to C's strstr() and Java's indexOf().
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace StringQuestions
{
/// <summary>
///https://leetcode.com/problems/implement-strstr/
/// </summary>
[TestClass]
public class StrStrLeetCode
{
[TestMethod]
public void ValidNeedleTest()
{
string haystack = "hello";
string needle = "ll";
Assert.AreEqual(2, StrStr(haystack, needle));
}
[TestMethod]
public void NotValidNeedleTest()
{
string haystack = "aaaaa";
string needle = "ba";
Assert.AreEqual(-1, StrStr(haystack, needle));
}
public int StrStr(string haystack, string needle)
{
if (string.IsNullOrEmpty(needle))
{
return 0;
}
int n = haystack.Length;
int m = needle.Length;
var hash = needle.GetHashCode();
for (int i = 0; i < n - m + 1; i++)
{
string tempStr = haystack.Substring(i, m);
var hashTemp = tempStr.GetHashCode();
if (hash == hashTemp)
{
return i;
}
}
return -1;
}
}
}
• Technically, two objects with the same hash don't require to be equal. – dfhwze Jun 28 '19 at 20:27
• True but this is a nice work around assume we do not override the. Net framework function. Right? – Gilad Jun 28 '19 at 21:22
• sure, they are unlikely to clash :) – dfhwze Jun 29 '19 at 7:20
• Yay, a code review request including actual unit tests. :) But why did you omit the Assert.AreEqual(0, StrStr("anything", ""))? – Roland Illig Jun 29 '19 at 10:11
## 3 Answers
As dfhwze and Roland already pointed out, a hash alone is not sufficient to determine whether two things are equal, so you still need to do a string comparison afterwards if the hashes match. Otherwise you will get wrong results from time to time. Not to mention the effect of hash randomization between different application runs...
The idea behind Rabin-Karp's use of hashes is to replace costly string comparisons with cheap hash comparisons. But in your case, the cost of creating a substring and calculating its hash (which involves some calculations for every character) is often greater than doing a direct string comparison (which can bail out at the first difference).
As the Wikipedia article that you linked to says, you'll want to use a rolling hash, a hashing algorithm that allows you to calculate the hash of the next substring with just a few operations, regardless of how long that substring is.
Also, as far as I can tell, storing string.Length in a local variable doesn't offer any performance improvements. It does make the code slightly less readable though, in my opinion.
Wouldn't it be a minor optimization if you check haystack[i] == needle[0] before you call Substring() and calculate the hash?:
if (haystack[i] == needle[0])
{
string tempStr = haystack.Substring(i, m);
var hashTemp = tempStr.GetHashCode();
if (hash == hashTemp)
{
return i;
}
}
Your code is both buggy and inefficient.
You should never call String.Substring since that method allocates a new string. In a programming language like Go, where a string is implemented as a view to a simple byte array, that would be ok since getting the substring involves only 3 memory operations and no object allocations. But not so in C# or Java.
If String.GetHashCode had a fixed and documented hashing algorithm like in Java, I could provide you with a reliable way of finding a counterexample. But since the exact algorithm is not specified, you'd have to try several random strings until you find a counterexample. Using a fuzzer is a good way of finding this bug:
1. Generate two random strings
2. Ensure that StrStr(haystack, needle) == haystack.IndexOf(needle)
3. goto 1, until the test fails
I don't see any point in allowing null as an argument. Your code should just throw an exception in such a case. And if you allow needle to be null, why don't you allow haystack to be null as well? And where are the unit tests corresponding to these edge cases? Especially for simple utility functions like this one, it's trivial to reach 100% test coverage, therefore you should do that.
|
2020-05-29 07:59:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25776439905166626, "perplexity": 2556.9422042192646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402457.55/warc/CC-MAIN-20200529054758-20200529084758-00514.warc.gz"}
|
https://codereview.stackexchange.com/questions/63940/simple-validation-script
|
# Simple validation script
I'm playing about with JavaScript and wanted to create a simple validation script.
It works ok but is a bit clunky. How could I improve it?
var el2 = document.getElementById("feedback");
var el3 = document.getElementById("ok");
var el4 = document.getElementById("ok2");
el2.className = 'warning';
el2.textContent = "Username Not long enough yet..";
el2.style.color = "red";
} else {
el2.textContent = " ";
}
}
el2.textContent = "Password MUST be 7 or more characters";
el2.style.color = "red";
el2.className = 'warning';
el2.textContent = "Username Not long enough yet..";
el2.style.color = "red";
} else {
el2.textContent = " ";
}
}
el3.style.display="block";
} else {
el3.style.display = "none";
}
}
el4.style.display="block";
} else {
el4.style.display = "none";
}
}
function feedBack() {
el2.className = 'tip';
el2.textContent = "The username MUST be at least 5 characters";
el2.style.color = "blue";
}
var el2 = document.getElementById("feedback");
var el3 = document.getElementById("ok");
var el4 = document.getElementById("ok2");
el2.className = 'warning';
el2.textContent = "Not long enough yet..";
el2.style.color = "red";
} else {
el2.textContent = " ";
}
}
el2.textContent = "Password MUST be 7 or more characters";
el2.style.color = "red";
el2.className = 'warning';
el2.textContent = "Not long enough yet..";
el2.style.color = "red";
} else {
el2.textContent = " ";
}
}
el3.style.display = "block";
} else {
el3.style.display = "none";
}
}
el4.style.display = "block";
} else {
el4.style.display = "none";
}
}
function feedBack() {
el2.className = 'tip';
el2.textContent = "The username MUST be at least 5 characters";
el2.style.color = "blue";
}
body {
font-family: 'Oswald', 'Futura', sans-serif;
margin: 0px;
}
#feedback.warning {
background-image: url('https://cdn2.iconfinder.com/data/icons/freecns-cumulus/32/519791-101_Warning-128.png');
background-repeat: no-repeat;
background-size: 20px 20px;
}
#feedback.tip {
background-image: url('http://bibliomancy.org/images/icons/QuestionMark.png?1347492574');
background-repeat: no-repeat;
background-size: 20px 20px;
}
#ok {
position: absolute;
top: 5px;
left: 250px;
background-repeat: no-repeat;
background-size: 20px 20px;
height: 50px;
width: 50px;
display: none;
}
#ok2 {
position: absolute;
top: 30px;
left: 250px;
background-repeat: no-repeat;
background-size: 20px 20px;
height: 50px;
width: 50px;
display: none;
}
<form>
</form>
<div id="feedback"></div>
<div id="ok"></div>
<div id="ok2"></div>
• Thanks RubberDuck, I didn't know I needed to add all the code :) – Addioioi Sep 26 '14 at 12:48
• You didn't need to add all of the code @Addioioi, we just have the ability to create "jsfiddles" right here on the site natively now. blog.stackoverflow.com/2014/09/… – RubberDuck Sep 26 '14 at 14:09
• awesome! I'll remember to do that next time! – Addioioi Sep 26 '14 at 14:18
• You are using a bitwise & where a logical && would be expected. – Johnny Mopp Sep 26 '14 at 18:47
# Validation gone wrong
Every piece of code you write should solve a problem. Every good piece of code you write should solve a problem so small it has nearly no dependencies. So often, you'll end up writing a lot of little pieces of code in order to create beautiful software.
Take for instance a Calculator. Instead of cramming everything in one huge function. There will probably be tons of little methods that do as much as possible without knowing to much. A 'sum' method will have 2 arguments passed in, and return the sum. It couldn't care less where they come from.
## Validation the right way
What you actually needed was some code that uses a validator to validate your usernames and passwords.
But what is a validator?
At it's core, a valdiator needs 2 things. It needs an input, and it needs a strategy to validate that input. It might even have a chain of strategies.
## A Validator
First of, let's define our validator:
function createValidator(input, strategy) {
var input = input,
strategy = strategy;
return {
passes : function() {
return strategy(input);
}
}
}
Now we can create our validators, we can start creating our strategies. For instance a LengthValidator strategy:
function createLengthValidator(min, max) {
var minLength = min,
maxLength = max || Infinity;
return function(data) {
return data.length <= max && data.length >= min;
}
}
To use it we would do:
createLengthValidator(5)
);
createLengthValidator(7)
);
And then our check* functions:
var feedback = document.getElementById("feedback");
feedback.className = 'warning';
feedback.textContent = "Username Not long enough yet..";
feedback.style.color = "red";
}
}
var feedback = document.getElementById("feedback");
feedback.className = 'warning';
feedback.textContent = "Password MUST be 7 or more characters";
feedback.style.color = "red";
}
}
## Handle only the bare minimum
Your functions add a class, text and CSS. Woah, that's a lot. Let's refactor that using events.
## Defining how it rolls
Defining the problem is always the hardest step. But somehow, a lot of people tend to skip this step. Don't.
What we want is the following: We have an input-field. Every time the input loses focus (blur event) the input should be evaluated. If the given input is not correct, show an error message.
Our problem is defined in 3 sentences. Each sentence defines a smaller problem. The first part is easy:
The second part, a little harder. But still doable since we already have our validator:
var $username = document.getElementById('username');$username.addEventListener('blur', function() {
$username.value, createLengthValidator(5) ); if ( usernameValidator.passes() ) { //create a validatorPassed event var event = new CustomEvent('validatorPassed');$username.dispatchEvent(event);
} else {
//create a validatorFailed event
var event = new CustomEvent('validatorFailed');
$username.dispatchEvent(event); } }); Wow, so much code. But why? Here is why:$username.addEventListener('validatorPassed', function() {
var feedback = document.getElementById("feedback");
feedback.textContent = "";
});
var feedback = document.getElementById("feedback");
feedback.className = 'warning';
feedback.textContent = "Username Not long enough yet..";
feedback.style.color = "red";
});
See how we have successful decoupled our code? Our validator now knows nothing about our html. It simply validates input. We then have 2 eventListeners that listen to the Validator and do html-editing accordingly.
Disclaimer: I wrote this code inside the text-editor and is written as an example. I don't expect you to go all the way as I have. But it gives you the idea ;) always keep track of that one rule:
First make it work, then make it fast and then make it nice.
• Wow thanks so much for a detailed answer and description, there is quite a bit in their that I haven't come accross yet, but I will see if I can work it out! thanks again! – Addioioi Sep 26 '14 at 18:14
• @Addioioi I went full out to give you an example of what can be done by following patterns ;) You ofcourse don't have to use them. In small applications it often doesn't help. But once stuff gets biger. You will feel the need for patterns – Pinoniq Sep 26 '14 at 18:56
• Do I just copy it exactly as you have written it? I'm trying to see it working and I can't! – Addioioi Sep 26 '14 at 19:03
• Nah, It could be my code doesn't work. I wrote it into the text editor here. I will look at it this weekend – Pinoniq Sep 26 '14 at 20:21
Firstly, rename your variables to something more readable. It's entirely unintuitive what var e12 is supposed to be - try names like username_input or username_element. e12 could just be called feedback or warning_message
Your indentation makes the code kind of hard to follow visually; look up a style guide to see the correct javascript indentation, or follow the basics of:
function whatever(){
console.log("This is in one block (function) so it is indented once.")
if (bool) {
console.log("This is in two blocks (function, if) so it is indented twice.")
}
}
The line if((username.length >= 5) & (password < 7) ) { looks like it should probably have password.length.
You can use the ternary operator to change the body of usernameOK to:
|
2020-02-29 01:22:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2484540343284607, "perplexity": 5099.041075694896}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00183.warc.gz"}
|
https://quant.stackexchange.com/questions/59780/what-is-the-link-between-the-sdf-in-the-black-scholes-merton-model-and-the-expon
|
# What is the link between the SDF in the Black-Scholes-Merton model and the exponential process in Girsanov's theorem?
Question
I have been toying around to get some understanding of what the stochastic discount factor look likes in Black-Scholes-Merton and how it relates to the exponential process in Girsanov's theorem. I find that the stochastic discount factor is the exponential process in Girsanov's Theorem discount at the risk-free rate, i.e. it scales Girsanov's exponential process by $$\exp(-rt)$$.
Does anyone have an intuition about this? I mean, the math should check out just fine, but I'm not sure if there is deeper meaning at play. Anyhow, I sketched out my work below.
Sketch of the work
Let's call $$S_t$$ the price of a stock, $$B_t$$ the price of a risk-free bond and $$M_t$$ the stochastic discount factor. We have the following dynamics: \begin{align} \frac{dS_t}{S_t} &= \mu dt + \sigma dZ_t \\ \frac{dB_t}{B_t} &= r dt \end{align} with $$(Z_t)_{t \geq 0}$$ a standard Brownian motion. If we apply Girsanov's theorem, we get a process of the following form for the change of measure: \begin{align} A_t &= \exp \left( -\int_0^t \eta_s dZ_s - \frac{1}{2}\int_0^t \eta_s^2 ds \right) \\ \forall t \; \eta_t = \eta \Rightarrow A_t &= \exp \left( -\eta Z_t - \frac{1}{2}\eta^2 t \right) \\ \Rightarrow dln A_t = ln A_t - ln A_0 &= -\frac{1}{2}\eta^2dt -\eta dZ_t \\ \Rightarrow \frac{dA_t}{A_t} &= -\eta dZ_t. \end{align} However, I know that $$M_t B_t$$ must be a martingale under the physical measure, hence $$M_t$$ must be a diffusion of the form $$\frac{dM_t}{M_t} = -rdt + \phi(.) dZ_t$$. Using the fact that $$M_t S_t$$ must also be a matringale, we get \begin{align} \frac{dM_tS_t}{M_tS_t} &= \frac{dS_t}{S_t} + \frac{dM_t}{M_t} + \frac{dS_t}{S_t}\frac{dM_t}{M_t}\\ &= \left(\mu dt + \sigma dZ_t \right) + \left(- r dt + \phi(.) dZ_t \right) + \left( \sigma \phi(.) dt \right) \\ \Rightarrow E^\mathbb{P} \left( \frac{dM_tS_t}{M_tS_t} \right) &= \left( \mu - r + \sigma \phi(.) \right)dt = 0 \\ \Leftrightarrow \mu - r + \sigma \phi(.) &= 0 \Leftrightarrow \phi(.) = - \frac{\mu -r}{\sigma}. \end{align} In this model, if we worked a bit, we could show that $$\eta = \frac{\mu - r}{\sigma}$$, hence \begin{align} \frac{dM_t}{M_t} &= -rdt + \frac{dA_t}{A_t} = -rdt - \eta dZ_t \\ \Rightarrow M_t &= M_0 \exp \left( -\int_0^t \eta dZ_s - \frac{1}{2} \int_0^t \eta^2 ds -rt \right) \\ &= M_0 A_t \exp(-rt). \end{align} Hence, the stochastic discount factor is just a scaled version of $$A_t$$ here: $$M_t/M_0 = \exp(-rt) A_t/A_0$$.
## Intuition
The stochastic discount factor (SDF) really has two jobs to do: it needs to incorporate the time value of money (discount) and take the riskiness of cash flows into account (stochastic). Thus, it can make sense to split the SDF into its two components, $$M_t=e^{-rt}A_t,$$ where $$A_t$$ does the risk compensation. Girsanov's theorem is concerned with $$A_t$$ only.
The change of measure technique (Girsanov theorem) relates the driftless martingale $$A_t$$ to a Radon-Nikodym derivative, $$A_T=\frac{\text{d}\mathbb Q}{\text d\mathbb P}$$. But because the SDF is not a martingale, the deterministic discount factor $$e^{rt}$$ essentially corrects that drift.
The disillusioned financial economist may simply say that $$e^{-rt}$$ is just a correction term that we need fiddle around with all the time. For example, remember Breeden and Litzenberg's (1978) formula, $$f_{S_t}(x)=e^{rT}\frac{\partial^2 C}{\partial K^2}\bigg|_{K=x}.$$ We simply need somewhere $$e^{rT}$$ to ensure that the risk-neutral drift of $$S_t$$ is $$S_0e^{rt}$$.
## Girsanov Theorem and Black Scholes
Following Björk, let's $$\varphi$$ be the (constant) Girsanov kernel and set $$\text{d}A_t=\varphi A_t\text{d}W_t^\mathbb P$$ with $$A_0=1$$. Clearly, $$A_t$$ is a $$\mathbb P$$-martingale with $$\mathbb{E}^\mathbb P[A_t]=1$$. Girsanov tells us that when we set $$\frac{\text d\mathbb Q}{\text d\mathbb P}=A_T$$, then $$W_t^\mathbb Q=W_t^\mathbb P-\varphi t$$.
In the Black-Scholes world, $$\text{d}S_t=\mu S_t\text{d}t+\sigma S_t\text{d}W_t^\mathbb P$$. Using the Girsanov kernel, we get $$\text{d}S_t=(\mu+\sigma\varphi) S_t\text{d}t+\sigma S_t\text{d}W_t^\mathbb Q$$. This suggests that the (negative) Sharpe ratio (aka market price of risk) $$\varphi=-\frac{\mu-r}{\sigma}$$ would be a decent idea, that is $$\text{d}S_t=rS_t\text{d}t+\sigma S_t\text{d}W_t^\mathbb Q$$.
As you say, $$S_tM_t$$ is a $$\mathbb P$$-martingale, i.e. $$S_t=\mathbb{E}_t^\mathbb P\left[\frac{M_T}{M_t}S_T\right],$$ which looks like our beloved Euler equation. Now, with our decomposition of $$M_t=e^{-rt}A_t,$$ we get
$$S_t= \mathbb{E}_t^\mathbb P\left[\frac{M_T}{M_t}S_T\right]= e^{-r(T-t)}\mathbb{E}_t^\mathbb P\left[\frac{A_T}{A_t}S_T\right]=e^{-r(T-t)}\mathbb{E}_t^\mathbb Q\left[S_T\right].$$
Because $$\mathbb{E}^\mathbb P[A_t]=1$$, we get from $$M_t=e^{-rt}A_t$$ that $$e^{rt}=\frac{1}{\mathbb{E}^\mathbb P[M_t]},$$ which reminds us of $$R_f=\frac{1}{\mathbb{E}[m]}$$ in discrete time.
Note, in this Black-Scholes world, everything is log-normally distributed: $$S_t$$, $$M_t$$, $$A_t$$, $$M_tS_t$$, ...
• In your formulation, I think the risk-neutral drift should be $\mu + \sigma \phi$ since $\mu + \sigma \phi = \mu + r - \mu = r$. I'm also expecting $r < \mu$ and $\phi < 0$ would then require the positive sign in the drift. – Stéphane Dec 7 '20 at 3:43
|
2021-06-12 14:07:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 52, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000054836273193, "perplexity": 873.2947420010099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487584018.1/warc/CC-MAIN-20210612132637-20210612162637-00586.warc.gz"}
|
https://wikivisually.com/wiki/Mersenne%27s_laws
|
# Mersenne's laws
A string half the length (1/2), square root the tension (2), or inverse the mass per length (1/2) is an octave higher (2/1).
If the tension on a string is ten lbs., it must be increased to 40 lbs. for a pitch an octave higher.[1]
A string, tied at A, is kept in tension by W, a suspended weight, and two bridges, B and the movable bridge C, while D is a freely moving wheel; all allowing one to demonstrate Mersenne's laws regarding tension and length[1]
Mersenne's laws are laws describing the frequency of oscillation of a stretched string or monochord,[1] useful in musical tuning and musical instrument construction. The equation was first proposed by French mathematician and music theorist Marin Mersenne in his 1637 work Traité de l'harmonie universelle.[2] Mersenne's laws govern the construction and operation of string instruments, such as pianos and harps, which must accommodate the total tension force required to keep the strings at the proper pitch. Lower strings are thicker, thus having a greater mass per unit length. They typically have lower tension. Higher-pitched strings typically are thinner, have higher tension, and may be shorter. "This result does not differ substantially from Galileo's, yet it is rightly known as Mersenne's law," because Mersenne physically proved their truth through experiments (while Galileo considered their proof impossible).[3] "Mersenne investigated and refined these relationships by experiment but did not himself originate them".[4] Though his theories are correct, his measurements are not very exact, and his calculations where greatly improved by Joseph Sauveur (1653–1716) through the use of acoustic beats and metronomes.[5]
## Equations
The fundamental frequency is:
• a) Inversely proportional to the length of the string (the law of Pythagoras[1]),
• b) Proportional to the square root of the stretching force, and
• c) Inversely proportional to the square root of the mass per unit length.
${\displaystyle f_{0}\propto {\tfrac {1}{L}}.}$ (equation 26)
${\displaystyle f_{0}\propto {\sqrt {F}}.}$ (equation 27)
${\displaystyle f_{0}\propto {\frac {1}{\sqrt {\mu }}}.}$ (equation 28)
Thus, for example, all other properties of the string being equal, to make the note one octave higher (2/1) one would need either to decrease its length by half (1/2), to increase the tension to the square (4), or to decrease its mass per unit length by the inverse square (1/4).
Harmonics Length, Tension, or Mass
1 1 1 1
2 1/2 = 0.5 2² = 4 1/2² = 0.25
3 1/3 = 0.33 3² = 9 1/3² = 0.11
4 1/4 = 0.25 4² = 16 1/4² = 0.0625
8 1/8 = 0.125 8² = 64 1/8² = 0.015625
These laws are derived from Mersenne's equation 22:[6]
${\displaystyle f_{0}={\frac {\nu }{\lambda }}={\frac {1}{2L}}{\sqrt {\frac {F}{\mu }}}.}$
The formula for the fundamental frequency is:
${\displaystyle f_{0}={\frac {1}{2L}}{\sqrt {\frac {F}{\mu }}},}$
where f is the frequency, L is the length, F is the force and μ is the mass per unit length.
Similar laws were not developed for pipes and wind instruments at the same time since Mersenne's laws predate the conception of wind instrument pitch being dependent on longitudinal waves rather than "percussion".[3]
## References
1. ^ a b c d Jeans, James Hopwood (1937/1968). Science & Music, p.62-4. Dover. ISBN 0-486-61964-8. Cited in "Mersenne's Laws", Wolfram.com
2. ^ Mersenne, Marin (1637). Traité de l'harmonie universelle,[page needed]. via the Bavarian State Library. Cited in "Mersenne's Laws", Wolfram.com.
3. ^ a b Cohen, H.F. (2013). Quantifying Music: The Science of Music at the First Stage of Scientific Revolution 1580–1650, p.101. Springer. ISBN 9789401576864.
4. ^ Gozza, Paolo; ed. (2013). Number to Sound: The Musical Way to the Scientific Revolution, p.279. Springer. ISBN 9789401595780. Gozza is referring to statements by Sigalia Dostrovsky's "Early Vibration Theory", p.185-187.
5. ^ Beyer, Robert Thomas (1999). Sounds of Our Times: Two Hundred Years of Acoustics. Springer. p.10. ISBN 978-0-387-98435-3.
6. ^ Steinhaus, Hugo (1999). Mathematical Snapshots,[page needed]. Dover, ISBN 9780486409146. Cited in "Mersenne's Laws", Wolfram.com.
|
2018-09-24 10:08:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6998231410980225, "perplexity": 3726.421138206329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160337.78/warc/CC-MAIN-20180924090455-20180924110855-00094.warc.gz"}
|
https://astarmathsandphysics.com/igcse-maths-notes/4393-volume-of-a-tetrahedron.html
|
## Volume of a Tetrahedron
To find the volume of a tetrahedron we can use the general formula for the volume of a pyramid:
$V= \frac{1}{3}Base \: Area \times Height$
The diagram shows a trapezium with sides of length
$2x$
.
The base is an equilateral triangle of area
$\frac{1}{2} 2x \times 2x \times sin 60=x^2 \sqrt{3}$
To find the height, first find the distance from a vertex of the base to the centre of the base. Divide the base into three equal triangles by drawing lines from the centre to the vertices. The triangle formed with have an angle of 120 degrees opposite a side of
$2x$
.
Using the Cosine Rule gives
$(2x)^2=d^2+d^2-2d \times d \times cos 120=2d^2-2d^2 \times - \frac{1}{2} = 3d^2 \rightarrow d = \frac{2x}{\sqrt{3}}$
Now form the right angled triangle as shown and use Pythagoras Theorem to find the height.
The height is
$\sqrt{(2x)^2 - (\frac{2x}{\sqrt{3}})^2}= \frac{2x \sqrt{2}}{\sqrt{3}}$
The volume is then
$\frac{1}{3} Base \: Area \times Height = \frac{1}{3} \times x^2 \sqrt{3} \times \frac{2x \sqrt{2}}{\sqrt{3}} = \frac{2x^3 \sqrt{2}}{3}$
|
2017-12-16 15:02:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.684203028678894, "perplexity": 207.76124700143166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00758.warc.gz"}
|