url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://leetcode.ca/2021-08-13-1957-Delete-Characters-to-Make-Fancy-String/
Formatted question description: https://leetcode.ca/all/1957.html # 1957. Delete Characters to Make Fancy String Easy ## Description A fancy string is a string where no three consecutive characters are equal. Given a string s, delete the minimum possible number of characters from s to make it fancy. Return the final string after the deletion. It can be shown that the answer will always be unique. Example 1: Input: s = “leeetcode” Output: “leetcode” Explanation: Remove an ‘e’ from the first group of ‘e’s to create “leetcode”. No three consecutive characters are equal, so return “leetcode”. Example 2: Input: s = “aaabaaaa” Output: “aabaa” Explanation: Remove an ‘a’ from the first group of ‘a’s to create “aabaaaa”. Remove two ‘a’s from the second group of ‘a’s to create “aabaa”. No three consecutive characters are equal, so return “aabaa”. Example 3: Input: s = “aab” Output: “aab” Explanation: No three consecutive characters are equal, so return “aab”. Constraints: • 1 <= s.length <= 10^5 • s consists only of lowercase English letters. ## Solution Loop over s and find the size of each group with consecutive equal characters. If a group has size less than three, than append all the characters to the result string. Otherwise, only append two of the characters to the result string. Finally, return the result string. class Solution { public String makeFancyString(String s) { StringBuffer sb = new StringBuffer(); int length = s.length(); char prev = ' '; int count = 1; for (int i = 0; i < length; i++) { char curr = s.charAt(i); if (curr == prev) count++; else { count = 1; prev = curr; } if (count < 3) sb.append(curr); } return sb.toString(); } }
2022-10-07 10:26:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1770433485507965, "perplexity": 4764.703533788083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00311.warc.gz"}
https://en.wikipedia.org/wiki/252_(number)
# 252 (number) ← 251 252 253 → Cardinal two hundred fifty-two Ordinal 252nd (two hundred and fifty-second) Factorization 22× 32× 7 Divisors 1, 2, 3, 4, 6, 7, 9, 12, 14, 18, 21, 28, 36, 42, 63, 84, 126, 252 Roman numeral CCLII Binary 111111002 Ternary 1001003 Quaternary 33304 Quinary 20025 Senary 11006 Octal 3748 Duodecimal 19012 252 is the central binomial coefficient $\tbinom{10}{5}$,[1] and is $\tau(3)$, where $\tau$ is the Ramanujan tau function.[2] 252 is also $\sigma_3(6)$, where $\sigma_3$ is the function that sums the cubes of the divisors of its argument:[3] $1^3+2^3+3^3+6^3=(1^3+2^3)(1^3+3^3)=252.$
2016-02-13 07:25:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7473851442337036, "perplexity": 1669.0997796505583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166222.10/warc/CC-MAIN-20160205193926-00286-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/29220-yet-another-integration-problem.html
# Math Help - Yet another integration problem. 1. ## Yet another integration problem. It tells me to use substitution, but I have the problem (like many other people, probably), of being totally lost once actual expressions and numbers are removed. So, can anyone give me some help with this integral, using the method of substitution: $\ \int_1^S \frac{k_1}{k_2-n} \ dn $ 2. Originally Posted by quarks It tells me to use substitution, but I have the problem (like many other people, probably), of being totally lost once actual expressions and numbers are removed. So, can anyone give me some help with this integral, using the method of substitution: $\ \int_1^S \frac{k_1}{k_2-n} \ dn $ $k_1 \mbox{ and }k_2$ are constants. make the substitution $u = k_2 - n$ and continue remember, treat $k_1 \mbox{ and } k_2$ as if they were the numbers you love 3. So, then: $\int_1^S \frac{k_1}{k_2-n} \ dn$ $u=k_2-n$ $du=-dn$ $-k_1$ $\int_1^S \frac{1}{u} \ dn$ $-k_1 \ln\frac{1}{k_2-n}$, evaluated from 1 to S. 4. Originally Posted by quarks So, then: $\int_1^S \frac{k_1}{k_2-n} \ dn$ $u=k_2-n$ $du=-dn$ $-k_1$ $\int_1^S \frac{1}{u} \ dn$ $-k_1 \ln\frac{1}{k_2-n}$, evaluated from 1 to S. no. it's $-k_1 \ln \left| k_2 - n\right|$ and the integral is with respect to u
2016-06-28 12:26:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.929517388343811, "perplexity": 829.9989736437029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00026-ip-10-164-35-72.ec2.internal.warc.gz"}
https://tildedave.com/2015/01/07/i-find-gulp-extremely-frustrating.html
This is going to be a post where I complain. Those of you who have worked with me know that this can a bit of a default state but I try to be a bit more positive on my blog - unfortunately this is not going to be one of those times. So at the new job we use Gulp.js. Gulp is a streaming build system which its webpage advertises as “easy to use”. The basic idea behind Gulp is that your build is a set of streams; you describe the source and destinations and the Gulpfile - code over configuration - sets it up so that your assets are quickly generated by operating directly on the streaming data and not writing any temporary files to disk. For example here’s how you would concatenate all your JavaScript files, pipe them through a minifier (here uglify), and write them to a single generated file. var gulp = require('gulp'), uglify = require('gulp-uglify'); return gulp.src('**/*.js') .pipe(uglify()) .pipe(gulp.dest('./dist/js/built.js')); }); It’s so easy! Everything is just streams! I really appreciate Gulp as a “crazy idea”. Could we make our build just an asset pipeline? Would that be a good idea? However, Gulp isn’t just a “crazy idea” - it is being recommended as a “getting started” tool for new frontend developers and judging from my Twitter and Reddit feeds more and more people are using it. Looking closer at it I find that Gulp has a number of default behaviors that fail a number of very basic tests of what I want out of a build system. ## What Happens on an Error? Let’s say you make a mistake in setting up your build. For example maybe you have made the silliest of mistakes and specified the wrong directory. var gulp = require('gulp'), concat = require('gulp-concat'); return gulp.src('css/**/*.js') .pipe(concat('vendor.js')) .pipe(gulp.dest('./dist/')); }); Does gulp help you here? NO. It fails silently. After all you have provided an empty set of files, which generates no stream data, and so the concatenated result of your build is an empty stream. Working as intended. It doesn’t even generate the vendor.js file. dave@margaret:~/scratch/gulp$gulp build [05:27:07] Using gulpfile ~/scratch/gulp/gulpfile.js [05:27:07] Starting 'build'... [05:27:07] Finished 'build' after 12 ms dave@margaret:~/scratch/gulp$ ls -la dist/ total 8 drwxr-xr-x 2 dave dave 4096 Jan 7 05:26 . drwxr-xr-x 6 dave dave 4096 Jan 7 05:26 .. dave@margaret:~/scratch/gulp\$ While this makes sense from its “everything is just a stream” point of view it is extremely frustating to have a command gulp build exit with a status code of 0 but it has actually not done the work it is supposed to! Hope you weren’t acting on that gulp exit code to trigger anything that might go live for your users! ## Why is the Build Slow? Gulp uses some nifty filesystem magic to make sure that it’s not processing too many files that it runs into operating system limits. That’s great but what happens if you have a set of gulp tasks that call out to another command-line utility (e.g. uglify.js)? Because node.js is single-threaded you can only invoke one of these at a time! You may have a fancy 4 core MacBook Pro but by default your interface to these command line tools is being filtered through a single-threaded reactor that is controlled through some plugin that isn’t even part of the core build system. Hope every single one of your plugins supports using the parallelism that your build needs! It seems a basic tenet that providing the same inputs to a build system should produce the same outputs. Is this how gulp works? No! Directory order in Gulp is nondeterministic! Building the same input may produce an output with a completely different hash each time, based on how the gulp vinyl-fs wrapper file system responded. This means that if you make a server-side only change to your website, you should still upload new assets to S3 because your gulp build command returned a different set of files! Can you easily roll back to an old developed version on your website? Well gulp build on the reverted code appears to have returned different sha files for our main bundle files … so I guess it’s time to upload new assets. I don’t think this is a crazy demand. Files being returned in a consistent order is the default behavior of both Bash and Node’s glob library. Why is this not the default behavior of every build tool? ## Wrapper Libraries Everywhere Files (on the filesystem!) are the basic building block of software development. You code in files, you serve files from a web server, your editor uses files for configuration, your operating system uses files to know which services to start up on boot. There is so much invested in utilities that operate on files and every developer productivity tool that you have has a mode to operate on a file in the filesystem. Of course when it comes to your gulp build you don’t get to use any of these command-line tools, because they may not have a node.js API or that API may not be event stream compatible. You have to instead use a wrapper library that turns that command-line tool (that works perfectly well on its own) into a stream. As an example, the gulp-webpack plugin does the work of wrapping the webpack node.js API and turning it into a more idiomatic “gulp way” of doing executing. If you’re using this plugin and webpack releases a new command line argument - guess what, it’s time to upgrade your plugin too so that you can access that argument in your build! If you’re using gulp, these plugins are the way to get things done. Almost every problem you have is solved by just adding another plugin - code over configuration. But what problem are these plugins solving? It’s not to use Webpack, or Closure Compiler, or Uglify - it’s to use these off the shelf tools with gulp. I want my build system to help me use my tools rather than demand that I use other ones! ## Impedence Mismatch on Testing and Server-Side Code Are your unit tests just a set of streams? Well, you want to find all test files and then run them through your test runner, but that test runner is probably file based and don’t really have an output other than a test report which is only valuable in the case that it fails, and the test report isn’t really uploaded to be served to your users statically … so … probably no, tests don’t really match a set of streams very well. Is your server-side code a set of streams? Well, as it’s not served to clients through the website so - no, probably not there either. Because there’s not a lot of benefit to thinking about these essential components of your infrastructure as streams, gulp doesn’t really give you any help here. Because of this if you’re using gulp for your build, you are either: • Writing task-based build steps in a system that really isn’t made for them • Using another build system anyways ## Oh, You’re Having a Problem? Install Another Plugin Of course the answer for each of my problems is to install a gulp plugin! gulp-expect-file lets you add assertions to your pipes so that you get files that you’re expecting and not files that you’re not expecting (for example, that your build actually produced a result). gulp-natural-sort rearranges your file streams so that files are sorted in a consistent order. gulp-plumber fixes the default node.js pipe behavior to not break the pipe on error so that you can use gulp watch without needing to restart gulp all the time after generating a syntax error. This may be just an useless cry of rage but I don’t feel like I should have to install all these off-the-shelf plugins just to get a basic build utility working! Event streams may be an awesome concept but I don’t feel like I should need to deeply understand them just in order to generate an asset build. ## Please, Just Learn and Use Shell Scripts I think I understand why people like Gulp - frontend builds all kind of look the same if you squint at them. Build JS, build CSS, concatenate, minify, upload to CDN, all with various options that are project specific. Because of this it’s easy to find something that kind of works online, adapt it to your needs, and slowly add onto it with extra plugins. So if you’re not using Gulp what are you using instead? Maybe this is the hipster answer but just use shell scripts. • Use command line utilities without relying on wrapper plugins • Full power of the UNIX command line - yes this isn’t “just JavaScript” and it’s definitely not code over configuration but these tools have been battle-tested for decades. The knowledge you gain from putting your build into a shell script will help you be a more effective programmer. • Full transparency as to what is going on - want to see what’s happening? just read your shell output. Why is it slow? Run everything through time. Need to run two things at the same time? GNU parallel. Rant over - go back to your day!
2019-06-27 10:02:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2785845994949341, "perplexity": 2018.131254326499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00064.warc.gz"}
https://mathhelpboards.com/threads/year-10-maths-find-the-length-and-width-that-will-maximize-the-area-of-rectangle.26453/
Year 10 Maths Find the length and width that will maximize the area of rectangle liang123993 New member The question is in the image. Working out with every step would be much appreciated. Greg Perseverance Staff member Here's a start: Let $W$ be width, $L$ be length an $A$ be the desired area. Then, $$\displaystyle 5W+2L=550$$ $$\displaystyle LW=A$$ Can you make any progress from there? Greg Perseverance Staff member Here's a start: Let $W$ be width, $L$ be length an $A$ be the desired area. Then, $$\displaystyle 5W+2L=550$$ $$\displaystyle LW=A$$ Can you make any progress from there? $$\displaystyle W=\frac AL$$ $$\displaystyle \frac{5A}{L}+2L=550$$ $$\displaystyle 5A+2L^2=550L$$ $$\displaystyle A=110L-\frac{2L^2}{5}$$ $A$ has a maximum at the vertex of this inverted parabola, so $L=\frac{275}{2}$. Finding $A$ and $W$ from here should be straightforward.
2020-07-14 11:03:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7562704086303711, "perplexity": 717.1756141324474}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149819.59/warc/CC-MAIN-20200714083206-20200714113206-00414.warc.gz"}
https://raweb.inria.fr/rapportsactivite/RA2018/hiepacs/uid95.html
PDF e-Pub ## Section: New Results ### Parallel Low-Rank Linear System and Eigenvalue Solvers Using Tensor Decompositions At the core of numerical simulations for scientific computing applications, one typically needs to solve an equation either in the form of a linear system (Ax = b) or an eigenvalue problem (Ax = ƛx) to determine the course of the simulation. A major breakthrough in this solution step is exploiting the inherent low-rank structure in the problem; an idea stemming from the observation that particles in the same spatial locality exhibit similar interactions with others in a distant cluster/region. This property has been exploited in many contexts such as fast multipole methods (FMM) and hierarchical matrices (H-matrices) in applications ranging from n-body simulations to electromagnetics, which amount to numerically compressing the matrix in order to reduce computational and memory costs. Recent theory along this direction involves employing tensor decomposition to quantize the matrix in the form of a tensor (through logical restructuring/reshaping) and use tensor decomposition to approximate it with a controllable global error. Once the matrix and vectors are compressed this way, one can similarly use the compressed tensor to carry out matrix-vector operations with significantly better compression rate than the H-matrix approach. Despite these major recent breakthroughs in the theory and application of tensor-based methods, addressing large-scale real-world problems with these methods requires immense computational power, which necessitates highly optimized parallel algorithms and implementations. To this end, we have initiated the development of a tensor-based linear system and eigenvalue solver library called Celeste++(C++ library for Efficient low-rank Linear and Eigenvalue Solvers using Tensor decomposition) providing a complete framework for expressing a problem in tensor form, then effectuating all matrix-vector operations under this compressed form with tremendous computational and memory efficiency. The fruits of our preliminary studies led two project submissions at the national scale (ANR JCJC and CNRS PEPS JCJC, currently under evaluation) and one Severo Ochoa Mobility Grant for a collaboration visit to Barcelona Supercomputing Center (BSC). We also supervised an internship on the application of tensor solvers in the context of electromagnetic applications with very promising results for future work.
2020-04-02 09:39:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580302596092224, "perplexity": 948.840462670812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00304.warc.gz"}
https://msp.org/agt/2018/18-1/agt-v18-n1-p15-p.pdf
#### Volume 18, issue 1 (2018) Recent Issues The Journal About the Journal Subscriptions Editorial Board Editorial Interests Editorial Procedure Submission Guidelines Submission Page Ethics Statement Author Index To Appear ISSN (electronic): 1472-2739 ISSN (print): 1472-2747 Other MSP Journals Classifying spaces for $1$–truncated compact Lie groups ### Charles Rezk Algebraic & Geometric Topology 18 (2018) 525–546 ##### Abstract A $1\phantom{\rule{0.3em}{0ex}}$–truncated compact Lie group is any extension of a finite group by a torus. In this note we compute the homotopy types of ${Map}_{\ast }\left(BG,BH\right)$, $Map\left(BG,BH\right)$, and $Map\left(EG,{B}_{G}H\right)$ for compact Lie groups $G$ and $H$ with $H$ $1\phantom{\rule{0.3em}{0ex}}$–truncated, showing that they are computed entirely in terms of spaces of homomorphisms from $G$ to $H$. These results generalize the well-known case when $H$ is finite, and the case when $H$ is compact abelian due to Lashof, May, and Segal. However, your active subscription may be available on Project Euclid at https://projecteuclid.org/agt We have not been able to recognize your IP address 18.215.159.156 as that of a subscriber to this journal. Online access to the content of recent issues is by subscription, or purchase of single articles. or by using our contact form. ##### Keywords classifying spaces, equivariant ##### Mathematical Subject Classification 2010 Primary: 55R91 Secondary: 55P92, 55R35, 55R37
2019-04-19 05:02:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2604353725910187, "perplexity": 2177.6220176307297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527135.18/warc/CC-MAIN-20190419041415-20190419063415-00506.warc.gz"}
https://proxies-free.com/integer-power-c-function-with-base-and-exponent-parameters/
# Integer Power C++ Function with base and exponent parameters If you are going to give the result as a `double`, then this function isn’t really a pure integer power calculator. And in that case, just use `std::pow()`. It has overloads for integer exponents that the compiler can optimize for. If the base and the exponent are both negative, then mathematically the result will be a complex number. Either you want to return a `std::complex<double>`, or you should somehow deal with this situation and return a NaN, or signal an error in some other way. If you just want to work purely on integers, then the typical algorithm to calculate a number raised to an arbitrary power efficiently is by recognizing that, for example, $$x^4 = ((x * x) * (x * x))$$, and so you can calculate $$y = x * x$$, and then $$x^4 = y * y$$. This only needs two multiplications instead of 3. So basically, you can divide and conquer the problem: ``````int integerPower(int base, int exponent) { if (exponent == 0) return 1; int result = integerPower(base, exponent / 2); int result *= result; if (exponent & 1) result *= base; return result; } `````` The above doesn’t work for negative exponents, but then again that is not very useful if the result is just an integer.
2020-10-21 02:26:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318871259689331, "perplexity": 596.9666384706098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00423.warc.gz"}
http://mathhelpforum.com/calculus/273615-summation-properties.html
1. ## Summation properties Hi, I'm new to this forum and it's been frankly over 2 decades since I graduated from university... so I'm pretty rusty and hoping to get a little help with a certain math problem that has popped into my work life. I'd be forever grateful for any advice in solving this one that I've been wracking my old brain with for the past couple hours but getting nowhere. Basically I'm trying to normalize a value, and in order to do that I need to know the maximum value it can have. I have two summations that both are equal to 1: (n is a constant to all) (o[1] + ... + o[n]) = 1 (c[1] + ... + c[n]) = 1 and I am taking the difference of each variable and summing their absolute values: |o[1] - c[1]| + ... + |o[n] - c[n]| Does anyone know what the maximum value that thing above can have? Jeremy 2. ## Re: Summation properties Since |a- b|< |a|+ |b|, |o[1] - c[1]| + ... + |o[n] - c[n]|< |o[1]|+ ...+ |o[n]|+ c[1]+ c[2]+ ...+ c[n]= 2. Since this is a sum of absolute values the minimum value is 0. 3. ## Re: Summation properties Hi and thanks for your answer and time, however I'm actually looking for its maximum value (not minimum value)... 4. ## Re: Summation properties are the o's and c's any real numbers or are they all non-negative? 5. ## Re: Summation properties Sorry I should have clarified, they are all real non-negative numbers. 6. ## Re: Summation properties If they are non-negative I'm pretty sure that the maximum will be achieved with $o_k = \dfrac 1 n \left(1 + (-1)^{k-1}\right),~k=0,1,\dots, n-1$ $c_k = \dfrac 1 n \left(1 + (-1)^k\right),~k=0,1,\dots, n-1$ $\displaystyle{\sum_{k=0}^n}~|o_k - c_k| = 2$ but maybe I'm misunderstanding the question. 7. ## Re: Summation properties Basically what i'm trying to do is take an original set of percentages (that add up to 100%) and then take a secondary set of percentages and compare them to the original sets and come up with an error value from 0 to 1. 1 being the maximum error, 0 no error (ie: the percentages are exactly the same across the board). I'm basically building a cost function as part of a larger algorithm. For example: suppose we had a set of percentages like this: o[1] = 20%, o[2] = 30%, o[3] = 50% and we are given a chosen values of (say in this case we are wildly different than the original values) c[1] = 85%, c[2] = 1%, c[3] = 14% and we had to calculate an "error": |o[1]-c[1]| = 65% |o[2]-c[2]| = 29% |o[3]-c[3]| = 36% Summed together comes to 1.30, but I want to normalize this number so that it's expressed between 0-1. So I'm looking for a way to calculate what the maximum number can be in order to normalize the "error" calculation. 8. ## Re: Summation properties After simming this a bit it's sure looking like 2 is a good normalizing factor. 9. ## Re: Summation properties Hi Romsek, thanks for your answer it checks out perfectly after some extensive tests I ran!
2018-05-24 14:25:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790736198425293, "perplexity": 1241.7696779982232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866326.60/warc/CC-MAIN-20180524131721-20180524151721-00379.warc.gz"}
http://www-old.newton.ac.uk/programmes/SAS/seminars/2012041214301.html
# SAS ## Seminar ### Where Delegation Meets Einstein Tauman Kalai, Y Thursday 12 April 2012, 14:30-15:30 Seminar Room 1, Newton Institute #### Abstract We show a curious connection between the problem of *computation delegation* and the model of *no-signalling multi-prover interactive proofs*, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical law that information cannot travel instantly (and is, like matter, limited by the speed of light). We consider the method suggested by Aiello et. al. for converting a 1-round multi-prover interactive proof (MIP) into a 1-round delegation scheme, by using a computational private information retrieval (PIR) scheme. On one direction, we show that if the underlying MIP protocol is sound against statistically no-signalling cheating provers then the resulting delegation protocol is secure (under the assumption that the underlying PIR is secure for attackers of sub-exponential size). On the other direction, we show that if the resulting delegation protocol is secure for every PIR scheme, and the proof of security is a black-box reduction that reduces the security of the protocol to the security of any standard'' cryptographic assumption, and such that the number of calls made by the reduction to the cheating prover is independent of the security parameter, then the underlying MIP protocol is sound against statistically no-signalling cheating provers. This is joint work with Ran Raz and Ron Rothblum. #### Video The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
2015-08-03 19:22:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43634656071662903, "perplexity": 1459.3535983672057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.92/warc/CC-MAIN-20150728002310-00004-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.ques10.com/p/33715/find-r_1-and-r_2-is-lossy-integrator-1/
Question: Find $R_1$ and $R_2$ is lossy integrator 0 Find $R_1$ and $R_2$ is lossy integrator so that the peak gain is 20 dB and gain is 3dB down from its peak when $ω=10000$ rad/s . Use capacitance of 0.01 MF Given: Max gain = 20 dB, 3dB frequency. $ω_s=10000 \frac{rad}{s} ,Cr=0.01Mf$ Subject: Computer Engineering Topic: Electronic Circuits and Communication Fundamentals Difficulty: Medium / High $∴\text{DC gain} = 20 dB= 20 \log_{10⁡}|Av| \\ ∴|Av|=10 \\ \text{But} |Av|=\frac{R_2}{R_1} \\ ∴R_2=10 R_1 \\ \text{Also,} w_a=10000 \frac{rad}{\sec} \\ ∴f_a=\frac{10000}{2π}=1591.55 Hz \\ \text{But} f_a=\frac{1}{2πR_2 C_f} \\ ∴C_f=0.01 \text{MF given} \\ ∴R_2=\frac{1}{2πf_a cf}=\frac{1}{10000 \times 0.01 \times 10^{-6}} \\ ∴R_2=10 kΩ \\ ∴ \text{substituting,} 10 kΩ=10 R_1 \\ ∴R_1=1 kΩ$
2019-09-20 23:26:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7152684330940247, "perplexity": 5551.664145420497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00190.warc.gz"}
https://cs.stackexchange.com/tags/graphs/hot
People who code: we want your input. Take the Survey # Tag Info 4 The complete digraph of $n$ nodes, $K_n$ has $n(n-1)$ edges. Describe a digraph of $n$ nodes with $n(n-1)-\delta$ edges as a digraph "with $\delta$ edges removed". A proof by induction The following is an outline to prove by induction that every digraph of $n$ nodes with $n-2$ edges removed contains a Hamiltonian cycle. The base case, when $n=2$ or ... 2 You can prove that any longest path contains the center vertices of the tree. It will then follow that all longest paths intersect not only at some vertex, but at the center. 2 Your explanation is correct. And, you can not do better than $f(a) = 2a$. For example, take a complete graph on $4$ vertices: $a,b,c,d$. The Arboricity is $2$ since $(a,b), (b,c),$ and $(c,d)$ forms the first tree, and the remaining edges form the second tree. The graph is colorable with exactly four colors. In fact, the Arboricity of a complete graph on $n$ ... 2 There is a theorem that can be used to characterize the optimal solution, which then makes algorithm design trivial. In particular, you can solve the problem in logarithmic time. Theorem Suppose we are hoping for a solution that uses $k$ time steps. Let $v_0$ denote the starting velocity vector, $s$ the starting position, and $t$ the target position. Let $... 1 The MST can be unique even if edge weights repeat. For example, consider the graph on$\{1,\ldots,n\}$in which the weight of$(1,2),(2,3),\ldots,(n-1,n)$is$1$and the weight of all other edges is$2\$. Only top voted, non community-wiki answers of a minimum length are eligible
2021-06-14 13:18:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8595872521400452, "perplexity": 255.44975310672226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00474.warc.gz"}
https://diego.assencio.com/?index=63f2ce1a5c3bb291485c0ef5fb8223f2
An interpretation of the Lagrange multipliers method Posted by Diego Assencio on 2015.11.14 under Mathematics (Calculus) The Lagrange multipliers method is a technique which solves the following problem: find the local maxima and minima of a function $f(x_1, \ldots, x_n)$ over values of $(x_1, \ldots, x_n)$ which satisfy $m$ constraint equations: $$g_k(x_1, \ldots, x_n) = 0, \quad k = 1, 2, \ldots, m. \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints}$$ Each constraint equation defines an $(n-1)$-dimensional surface. The intersection of all these surfaces defines the set of points over which we must find local maxima and minima of $f({\bf x})$, where ${\bf x} = (x_1, \ldots, x_n)$. In what follows, we will refer to this set of points as the feasible surface. Without the constraints defined in \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints}, we could use standard techniques from calculus to solve the problem by finding and analyzing the stationary points of $f({\bf x})$. With the imposed constraints, however, things are not as simple. Our approach here will be to study the properties of the local maxima and minima of $f({\bf x})$ subject to the constraints above. This will lead us naturally to the Lagrange multipliers method and therefore illustrate why it works. To start, notice that each constraint equation $g_k({\bf x}) = 0$ allows us to express one coordinate $x_j$ in terms of the others. In principle, we can start with $g_1({\bf x}) = 0$ and determine $x_n$ in terms of $(x_1, \ldots, x_{n-1})$, then use this equation for $x_n$ on $g_2({\bf x}) = 0$ to determine $x_{n-1}$ in terms of $(x_1, \ldots, x_{n-2})$ and so on. With $m$ constraints, we can determine $(x_{n-m+1}, \ldots, x_n)$ in terms of $(x_1, \ldots, x_{n-m})$, so the feasible surface can be parameterized using $(n-m)$ parameters, one for each of the coordinates $x_j$ for $j = 1, 2, \ldots, (n-m)$. To be more precise, we can set $x_j = \alpha_j$ for $j = 1, 2, \ldots, (n-m)$ and express the remaining coordinates $x_j$ for which $j = (n-m+1), \ldots, n$ in terms of $(\alpha_1, \ldots, \alpha_{n-m})$. In the general case, we might not be able to proceed exactly as described above (for instance, if $g_1({\bf x})$ does not explicitly depend on $x_n$, we cannot use it to determine $x_n$ in terms of $(x_1, \ldots, x_{n-1})$), but the main idea remains valid: ignoring pathological cases, we can express one of the free variables in terms of the other free variables for each constraint equation, so with $m$ constraints, only $(n-m)$ variables will remain free. This means the feasible surface is an $(n-m)$-dimensional surface. Let's give an example to clarify what we just discussed. Consider a function $f(x,y,z)$ and two constraints $g_1(x,y,z) = 0$ and $g_2(x,y,z) = 0$. We can use $g_1(x,y,z) = 0$ to determine $z$ as a function of $x$ and $y$: $z = z(x,y)$. By replacing $z$ with $z(x,y)$ on $g_2(x,y,z) = 0$, we remain with only two free variables $x$ and $y$, so we can determine $y$ as a function of $x$: $y = y(x)$, meaning the feasible surface is a curve which can be parameterized using $x = \alpha$. This curve is the set of points $(\alpha, y(\alpha), z(\alpha, y(\alpha)))$ for all valid values of $\alpha$. Therefore, with $n = 3$ dimensions and $m = 2$ constraints, we get a feasible surface with $(n-m) = 1$ dimension. Now consider a point ${\bf x}$ on the feasible surface such that $f({\bf x})$ is a local maximum or minimum. Consider also an infinitesimal displacement $\delta{\bf x}$ away from ${\bf x}$, with $\delta{\bf x}$ being tangent to the feasible surface. We have that: $$f({\bf x} + \delta{\bf x}) \approx f({\bf x}) + \nabla f({\bf x})\cdot \delta{\bf x} \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_f_x_plus_dx}$$ Since $\nabla f({\bf x})$ points along the direction at which $f({\bf x})$ increases the fastest, $\nabla f({\bf x})$ must be locally perpendicular to the feasible surface and therefore $\nabla f({\bf x})\cdot \delta{\bf x} = 0$. If this would not be the case, then either $\nabla f({\bf x})\cdot \delta{\bf x} \gt 0$ or $\nabla f({\bf x})\cdot \delta{\bf x} \lt 0$. Assuming $\nabla f({\bf x})\cdot \delta{\bf x} \gt 0$, we see from equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_f_x_plus_dx} that $f({\bf x} + \delta{\bf x}) \gt f({\bf x})$. But for such a displacement $\delta{\bf x}$, we would also have $\nabla f({\bf x})\cdot (-\delta{\bf x}) \lt 0$ and equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_f_x_plus_dx} with $\delta{\bf x}$ replaced by $(-\delta{\bf x})$ implies $f({\bf x} - \delta{\bf x}) \lt f({\bf x})$. In other words, $f$ increases if we move to ${\bf x} + \delta{\bf x}$ but decreases if we move to ${\bf x} - \delta{\bf x}$, meaning $f({\bf x})$ cannot be a local maximum or minimum. The argument is similar for $\nabla f({\bf x})\cdot \delta{\bf x} \lt 0$. To summarize, $\nabla f({\bf x})$ is locally perpendicular to the feasible surface at all points ${\bf x}$ of the surface where $f({\bf x})$ is a local maximum or minimum. This leads us to the question: how can we mathematically determine what "perpendicular to the feasible surface" is? The answer lies in the constraints from \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints}: for each function $g_k({\bf x})$, the constraint $g_k({\bf x}) = 0$ defines a level set of $g_k({\bf x})$, i.e., a set of points over which $g_k({\bf x})$ is equal to a constant value. Since the constant in our case is zero, we will refer to this level set as the zero level set of $g_k({\bf x})$. Using the fact that $\nabla g_k({\bf x})$ points along the direction of fastest growth of $g_k({\bf x})$ at ${\bf x}$, we can infer that it must be locally perpendicular to the zero level set of $g_k({\bf x})$ since, for a given point ${\bf x}$ on the zero level set and an infinitesimal displacement vector $\delta{\bf x}$ which is tangent to this level set, we have that ${\bf x} + \delta{\bf x}$ is still on the zero level set and therefore: $$g_k({\bf x} + \delta{\bf x}) = 0 \approx g_k({\bf x}) + \nabla g_k({\bf x})\cdot \delta{\bf x} = 0 + \nabla g_k({\bf x})\cdot \delta{\bf x}$$ so $\nabla g_k({\bf x})\cdot \delta{\bf x} = 0$. In other words, for every ${\bf x}$ in the zero level set of $g_k({\bf x})$, $\nabla g_k({\bf x})$ is locally perpendicular to this surface at ${\bf x}$. Therefore, for a point ${\bf x}$ on the zero level set of $g_k({\bf x})$, $\nabla g_k({\bf x})$ defines an $(n-1)$-dimensional plane which is tangent to this level set at ${\bf x}$. For any infinitesimal displacement $\delta{\bf x}$ which is parallel to this plane, ${\bf x} + \delta{\bf x}$ is still on the zero level set of $g_k({\bf x})$. So if ${\bf x}$ belongs to the feasible surface, it is on the zero level set of all functions $g_k({\bf x})$ and therefore there are $m$ planes which pass through ${\bf x}$ and are perpendicular to the zero level sets of each $g_k({\bf x})$ for $k = 1, 2, \ldots m$ respectively. The intersection of these $m$ planes defines an $(n-m)$-dimensional surface which is locally equal (tangent) to the feasible surface. Indeed, for an infinitesimal displacement $\delta{\bf x}$ away from ${\bf x}$ such that ${\bf x} + \delta{\bf x}$ remains on the intersection of the $m$ planes, ${\bf x} + \delta{\bf x}$ remains on the zero level set of every function $g_k({\bf x})$. In other words, ${\bf x} + \delta{\bf x}$ is on the feasible surface, so the $(n-m)$-dimensional surface defined by the intersection of the $m$ planes at ${\bf x}$ is locally tangent to the feasible surface. As an example, if $n = 3$ and $m = 2$, for any point $(x,y,z)$ on the feasible surface, there are two planes perpendicular to $\nabla g_1(x,y,z)$ and $\nabla g_2(x,y,z)$ respectively which pass through $(x,y,z)$; the intersection of these planes defines a line passing through $(x,y,z)$. For small displacements around $(x,y,z)$ on this line, we stay on both of these planes and therefore on the zero level sets of both $g_1(x,y,z)$ and $g_2(x,y,z)$, i.e., on the feasible surface. This implies that points on this line within an infinitesimal neighborhood of ${\bf x}$ are part of the feasible surface as well. For the displacement vector $\delta{\bf x}$ considered above, since ${\bf x} + \delta{\bf x}$ is on each of the $m$ planes which pass through ${\bf x}$ and are perpendicular to $\nabla g_k({\bf x})$ for $k = 1, 2, \ldots m$ respectively, then $\delta{\bf x}$ is perpendicular to all $\nabla g_k({\bf x})$ and therefore to any linear combination of these vectors, i.e., for any set of values $(\lambda_1, \ldots, \lambda_m)$, we have that: $$\sum_{k=1}^m \lambda_k \nabla g_k({\bf x})\cdot\delta{\bf x} = 0$$ As a concrete example, consider again the case $n = 3$ and $m = 2$. The feasible surface is a one-dimensional curve: at each point $(x,y,z)$ on this curve, there are two distinct planes passing through $(x,y,z)$ which are locally tangent to the zero level sets of $g_1(x,y,z)$ and $g_2(x,y,z)$ and perpendicular to $\nabla g_1(x,y,z)$ and $\nabla g_2(x,y,z)$ respectively. The intersection of these planes forms a line which is locally tangent to the feasible surface at $(x,y,z)$. An infinitesimal displacement vector $\delta{\bf x}$ which is parallel to this line is parallel to both planes and therefore simultaneously perpendicular to $\nabla g_1(x,y,z)$ and $\nabla g_2(x,y,z)$. This means $\delta{\bf x}$ is perpendicular to $\lambda_1 \nabla g_1(x,y,z) + \lambda_2 \nabla g_2(x,y,z)$ for any values of $\lambda_1$ and $\lambda_2$. As we showed above, for a point ${\bf x}$ on the feasible surface at which $f({\bf x})$ is a local maximum or minimum, $\nabla f({\bf x})$ is locally perpendicular to any infinitesimal displacement vector $\delta{\bf x}$ which is tangent to the feasible surface at ${\bf x}$. Since the feasible surface has dimension $(n-m)$ and the set of $m$ perpendicular vectors $\nabla g_k({\bf x})$ spans locally an $m$-dimensional vector space (assuming they are linearly independent, which is the case here since we agreed on ignoring pathological cases), then we must be able to express $\nabla f({\bf x})$ as a linear combination of any $(n-m)$ linearly independent vectors which are tangent to the feasible surface and the $m$ vectors $\nabla g_k({\bf x})$ for $k = 1, 2, \ldots m$ which are locally perpendicular to it. With $\nabla f({\bf x})$ being perpendicular to the feasible surface, we can only have: $$\nabla f({\bf x}) = \sum_{k=1}^m \lambda_k \nabla g_k({\bf x}) \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_eq_nabla_f}$$ In other words, if we can determine all points ${\bf x}$ for which both \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints} and \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_eq_nabla_f} hold for any set of values $(\lambda_1, \ldots, \lambda_m)$, then all maxima and minima of $f({\bf x})$ over the feasible surface must be contained in this set of points because, as we showed above, every maximum or minimum of $f({\bf x})$ on the feasible surface must satisfy equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_eq_nabla_f} for some set of values $(\lambda_1, \ldots, \lambda_m)$. But this is exactly what the Lagrange multiplier method computes! Indeed, the Lagrangian multiplier method tells us to solve the following problem: compute all stationary points of the following function (commonly called the "Lagrangian" for the problem): $$L(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = f(x_1, \ldots, x_n) - \sum_{k=1}^m \lambda_k g_k(x_1, \ldots, x_n) \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_eq_L}$$ Notice that this problem has no constraints: we just have to determine the stationary points of $L$, i.e, solve a standard calculus problem. At the stationary points of $L$, all its first order partial derivatives vanish: $$\begin{eqnarray} \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_partial_L_1}\displaystyle\frac{\partial L}{\partial x_i} &=& \frac{\partial f}{\partial x_i} - \sum_{k=1}^m \lambda_k \frac{\partial g_k}{\partial x_i} = 0 & \quad \textrm{ for } i = 1, 2, \ldots, n \\[5pt] \label{post_63f2ce1a5c3bb291485c0ef5fb8223f2_partial_L_2}\displaystyle\frac{\partial L}{\partial \lambda_k} &=& g_k(x_1, \ldots, x_n) = 0 &\quad \textrm{ for } k = 1, 2, \ldots, m \end{eqnarray}$$ Equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_partial_L_2} shows that the stationary points $(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m)$ of $L$ are such that $(x_1, \ldots, x_n)$ satisfy all the constraints $g_k(x_1, \ldots, x_n) = 0$ for $k = 1, 2, \ldots, m$, so these points fall inside the feasible surface defined by equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints}. Additionally, equation \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_partial_L_1} can be expressed as below: $$\nabla_{\bf x} L({\bf x}, {\pmb\lambda}) = \nabla f({\bf x}) - \sum_{k=1}^m \lambda_k \nabla g_k({\bf x}) = 0 \Longrightarrow \nabla f({\bf x}) = \sum_{k=1}^m \lambda_k \nabla g_k({\bf x})$$ where above $\nabla_{\bf x} L$ is used to indicate that it is the gradient of $L$ computed only over the coordinates ${\bf x} = (x_1, \ldots, x_n)$, with $\pmb\lambda = (\lambda_1, \ldots, \lambda_m)$. To summarize, the Lagrangian multiplier method determines all points $({\bf x}, {\pmb\lambda})$ such that the constraints \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_constraints} are satisfied and such that \eqref{post_63f2ce1a5c3bb291485c0ef5fb8223f2_eq_nabla_f} also holds. By studying the properties of each of these points, we can then determine which ones are local maxima, minima or just saddle points, but this task is outside the scope of this post and will be left to the reader. Notice, however, that if all we care about is finding the global maximum/minimum of $f({\bf x})$ over the feasible surface, all we need to do is compute $f({\bf x})$ for every point ${\bf x}$ obtained from the Lagrange multipliers method and only keep the largest/smallest one. References [1] Dan Klein, Lagrange Multipliers without Permanent Scarring [2] T. M. Apostol, Calculus, Volume II, John Wiley & Sons; 2nd edition (1969)
2022-07-03 12:08:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025050401687622, "perplexity": 84.89841032767039}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00188.warc.gz"}
https://plainmath.net/7604/verify-the-identity-secx-secx-sin-2x-equal-cosx
# Verify the identity secx-secx sin^2x= cosx Question Verify the identity $$\displaystyle{\sec{{x}}}-{\sec{{x}}}{{\sin}^{{2}}{x}}={\cos{{x}}}$$ 2021-02-12 $$\displaystyle{\sec{{x}}}-{\sec{{x}}}{{\sin}^{{2}}{x}}={\cos{{x}}}$$ $$\displaystyle{\sec{{x}}}{\left({1}-{{\sin}^{{2}}{x}}\right)}={\cos{{x}}}$$ because $$\displaystyle{{\sin}^{{x}}+}{{\cos}^{{x}}=}{1},{1}-{{\sin}^{{2}}{x}}={{\cos}^{{2}}{x}}$$ $$\displaystyle{\sec{{x}}}{{\cos}^{{2}}{x}}={\cos{{x}}}$$ $$\displaystyle{\left(\frac{{I}}{{\cos{{x}}}}\right)}{{\cos}^{{2}}{x}}={\cos{{x}}}$$ $$\displaystyle{\cos{{x}}}={\cos{{x}}}$$ ### Relevant Questions Verify this triganomic identity you can only use 1 side to solve $$\displaystyle\frac{{{\cos{{x}}}-{\tan{{x}}}}}{{{\sin{{x}}}+{\cos{{x}}}}}={\cos{{x}}}-{\sec{{x}}}$$ Prove the identity. $$\displaystyle\frac{{{\tan}^{{2}}{x}}}{{\sec{{x}}}}={\sec{{x}}}-{\cos{{x}}}$$ Verify the identity $$\displaystyle\frac{{\cos{{x}}}}{{1}}-{\sin{{x}}}-{\tan{{x}}}=\frac{{1}}{{\cos{{x}}}}$$ Prove that: $$\displaystyle{1}+\frac{{\cos{{x}}}}{{1}}-{\cos{{x}}}=\frac{{{\tan}^{{2}}{x}}}{{\left({\sec{{x}}}-{1}\right)}^{{2}}}$$ Identity $$\displaystyle{2}+{2}{{\cot}^{{2}}{x}}={2}{\cot{{x}}}{\sec{{x}}}{\csc{{x}}}$$ $$\displaystyle{{\tan}^{{2}}{x}}+{1}+{\tan{{x}}}{\sec{{x}}}={1}+\frac{{\sin{{x}}}}{{{\cos}^{{2}}{x}}}$$ Simplify the expression $$\displaystyle\frac{{\sec{{x}}}}{{\sin{{x}}}}-\frac{{\sin{{x}}}}{{\cos{{x}}}}$$ Prove $$\displaystyle\frac{{{\left({\csc{{x}}}-{\cot{{x}}}\right)}{\left({\csc{{x}}}+{\cot{{x}}}\right)}}}{{\sec{{x}}}}={\cos{{x}}}$$ If $$\displaystyle{\cos{{x}}}=-\frac{{12}}{{13}}{\quad\text{and}\quad}{\csc{{x}}}{<}{0}$$, find $$\displaystyle{\cot{{\left({2}{x}\right)}}}$$ solve the following equation for all values of x: $$\displaystyle{{\sin}^{{2}}{x}}+{\sin{{x}}}{\cos{{x}}}$$
2021-06-15 10:57:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560644626617432, "perplexity": 5117.397433712745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00463.warc.gz"}
https://www.aimspress.com/article/doi/10.3934/mbe.2015.12.135
### Mathematical Biosciences and Engineering 2015, Issue 1: 135-161. doi: 10.3934/mbe.2015.12.135 # Analysis of SI models with multiple interacting populations using subpopulations • Received: 01 January 2014 Accepted: 29 June 2018 Published: 01 December 2014 • MSC : Primary: 92D30; Secondary: 37C75. • Computing endemic equilibria and basic reproductive numbers for systems of differential equations describing epidemiological systems with multiple connections between subpopulations is often algebraically intractable. We present an alternative method which deconstructs the larger system into smaller subsystems and captures theinteractions between the smaller systems as external forces using an approximate model. We bound the basic reproductive numbers of the full system in terms of the basic reproductive numbers of the smaller systems and use the alternate model to provide approximations for the endemic equilibrium. In addition to creating algebraically tractable reproductive numbers and endemic equilibria, we can demonstrate the influence of the interactions between subpopulations on the basic reproductive number of the full system. The focus of this paper is to provide analytical tools to help guide public health decisions with limited intervention resources. Citation: Evelyn K. Thomas, Katharine F. Gurski, Kathleen A. Hoffman. Analysis of SI models with multiple interacting populations using subpopulations[J]. Mathematical Biosciences and Engineering, 2015, 12(1): 135-161. doi: 10.3934/mbe.2015.12.135 ### Related Papers: • Computing endemic equilibria and basic reproductive numbers for systems of differential equations describing epidemiological systems with multiple connections between subpopulations is often algebraically intractable. We present an alternative method which deconstructs the larger system into smaller subsystems and captures theinteractions between the smaller systems as external forces using an approximate model. We bound the basic reproductive numbers of the full system in terms of the basic reproductive numbers of the smaller systems and use the alternate model to provide approximations for the endemic equilibrium. In addition to creating algebraically tractable reproductive numbers and endemic equilibria, we can demonstrate the influence of the interactions between subpopulations on the basic reproductive number of the full system. The focus of this paper is to provide analytical tools to help guide public health decisions with limited intervention resources. [1] Oxford Science Publications, 1991. [2] Lecture Notes in Biomathematics, 86 (1990), 14-20. [3] in Sea Fisheries: Their Investigation in the United Kingdom (ed. M. Graham), Edward Arnold, London, (1956), 372-441. [4] HIV Med., 3 (2002), 195-199. [5] $2^{nd}$ edition, Springer, New York, 2012. [6] Mathematical Biosciences and Engineering, 5 (2008), 713-727. [7] in Mathematical Approaches for Emerging and Reemerging Infectious Diseases: An Introduction, The IMA Volumes in Mathematics and its Applications, 125 (2002), 229-250. [8] Lecture Notes in Biomathematics, 83, 1989. [9] Journal of Theoretical Biology, 263 (2010), 169-178. [10] BMC Infect Dis., 1 (2001), p1. [11] PLoS Med., 3 (2006), e42. [12] Am. J. Med., 113 (2002), 91-98. [13] Mathematical Biosciences, 16 (1973), 75-101. [14] J Acquir. Immune Dec. Syndr., 41 (2006), 194-200. [15] J. Animal Ecology, 11 (1942), 215-244. [16] PLoS Med., 3 (2005), e7. [17] Trans. Amer. Math. Soc., 104 (1962), 154-178. [18] Lecture Notes in Biomathematics, 56. Springer-Verlag, Berlin, 1984. [19] SIAM, 1975. [20] Mathematical Biosciences, 155 (1999), 77-109. [21] Math. Biosci., 181 (2003), 17-54. [22] Journal of Theoretical Biology, 208 (2001), 227-249. [23] Mathematical Biosciences, 90 (1988), 415-473. [24] Proc. Roy. Soc. London B Biol. Sci., 115 (1927), 700-721. [25] Proc. Roy. Soc. London B Biol. Sci., 138 (1932), 55-83. [26] Proc. Roy. Soc. London B Biol. Sci., 141 (1933), 94-112. [27] Mathematical Biosciences, 28 (1976), 221-236. [28] in IMA Volumes in Mathematics and its Applications (eds. C. Castillo-Ch\'avez et al.), 126 (2002), 295-311. [29] J. Phys. Chem., 14 (1910), 271-274. [30] Proc. Natl. Acad. Sci. U.S., 6 (1920), 410-415. [31] Williams and Wilkins, 1925. [32] Theoretical Population Biology, 70 (2006), 174-182. [33] Computational and Mathematical Methods in Medicine, 11 (2010), 201-222. [34] BMJ, 343 (2011), d6016. [35] Nature, 261 (1976), 459-467. [36] Doubleday, 1976. [37] AIDS, 16 (2002), 1663-1671. [38] Lancet, 362 (2003), 22-29. [39] Mathematical and Computer Modelling, 49 (2009), 1869-1882. [40] Bulletin of Mathematical Biology, 69 (2007), 2061-2092. [41] $3^rd$ edition, Springer, 2002. [42] AIDS, 26 (2012), 335-343. [43] Curr. Opin. Infect. Dis., 26 (2013), 17-25. [44] SIAM Journal of Applied Mathematics, 65 (2005), 964-982. [45] Bulletin of the Torrey Botanical Club, 82 (1955), 400-401. [46] in On Global Univalence Theorems, Lecture Notes in Mathematics, 977 (1983), 59-467. [47] N. Engl. J. Med., 338 (1998), 853-860. [48] Differential Equations and Dynamical Systems: International Journal for Theory, Real World Modelling and Simulations, 19 (2011), 283-302. [49] Mathematical Population Studies, 8 (2000), 205-229. [50] $5^{th}$ edition, Oxford University Press, 2008. [51] J. Fisher. Res. Board Can., 5 (1954), 559-623. [52] $2^{nd}$ edition, E.P Dutton and Co., 1910. [53] J. Hepatol., 42 (2005), 799-805. [54] Bulletin of the Inter-American Tropical Tuna Commission, 1 (1954), 27-56. [55] Lancet, 372 (2008), 293-299. [56] Princeton Series in Theoretical and Computational Biology, 2003. [57] AIDS, 24 (2010), 1527-1535. [58] Journal of Mathematical Biology, 32 (1994), 233-249. [59] Mem. R. Acad. Naz. dei Lincei, 4 (1926), 31-113. [60] J. Cons. int. Explor. Mer., 3 (1928), 3-51. [61] SIAM J. Appl. Math, 68 (2008), 1495-1502. • © 2015 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0) ###### 通讯作者: 陈斌, bchen63@163.com • 1. 沈阳化工大学材料科学与工程学院 沈阳 110142 1.285 1.3
2021-01-28 10:48:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6008855104446411, "perplexity": 2954.6516340848098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704843561.95/warc/CC-MAIN-20210128102756-20210128132756-00611.warc.gz"}
https://zbmath.org/?q=an:1295.14020
# zbMATH — the first resource for mathematics Brauer group and integral points of two families of affine cubic surfaces. (Groupe de Brauer et points entiers de deux familles de surfaces cubiques affines.) (French) Zbl 1295.14020 The main objects of the paper under review are two families of affine cubic surfaces: $$x^3+y^3+z^3=a$$ and $$x^3+y^3+2z^3=a$$. The authors are interested in the existence of integer points and thus assume that for the first family $$a$$ is not of the form $$9n\pm 4$$ (otherwise there are no points modulo 9). In the spirit of the papers by J.-L. Colliot-Thélène and F. Xu [Compos. Math. 145, No. 2, 309–363 (2009; Zbl 1190.11036)] and A. Kresch and Y. Tschinkel [Bull. Lond. Math. Soc. 40, No. 6, 995–1001 (2008; Zbl 1161.14019)], they consider the integral Brauer-Manin obstruction to strong approximation. The main result of the paper (Theorem 4.1) states that under the assumption on $$a$$ mentioned above, there is no integral Brauer-Manin obstruction to the existence of integer points. Apart from this statement, experts will find many auxiliary results which are interesting by their own right. Such are all lemmas and propositions of Section 5.1, allowing one to compare, in a much more general set-up, the Brauer groups of an affine variety and of its projective closure. Section 5.2 contains many interesting examples, among which one can mention Remark 5.7 with an interpretation, in terms of the Brauer-Manin obstruction, of a computation by [J. W. S. Cassels, Math. Comput. 44, 265–266 (1985; Zbl 0556.10007)]. ##### MSC: 14F22 Brauer groups of schemes 11D25 Cubic and quartic Diophantine equations 11G05 Elliptic curves over global fields ##### Keywords: cubic surface; integral points; Brauer group Full Text:
2021-10-20 13:24:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7242791652679443, "perplexity": 540.8407888780118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00533.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/Mock_AIME_II_2012_Problems/Problem_2
# Mock AIME II 2012 Problems/Problem 2 ## Problem Let $\{a_n\}$ be a recursion defined such that $a_1=1, a_2=20$, and $a_n=\sqrt{\left| a_{n-1}^2-a_{n-2}^2 \right|}$ where $n\ge 3$, and $n$ is an integer. If $a_m=k$ for $k$ being a positive integer greater than $1$ and $m$ being a positive integer greater than 2, find the smallest possible value of $m+k$. ## Solution The sequence goes as follows: $1, 20, \sqrt{399}, 1, \sqrt{398}, \sqrt{397}, 1, \sqrt{396}, \sqrt{395}, 1, \cdots$ Note that this sequence repeats, because once we get to $1$, we subtract $1$ from the number under the square root sign and take the square root of that, then when you take the difference of the squares of these two numbers, they have a difference of $1$, so you get back to $1$. The first perfect square less than $20^2$ is $19^2=361$. Therefore $k=\sqrt{361}=19$. We present three ways to finding $m$: Way 1: Now notice that, for $n\equiv1(\bmod3)$, we have $a_n=1$. Also, the terms under the radical counts down one for every $a_n$ such that $a_n\not\equiv1(\bmod3 )$. The number of $n\equiv1(\bmod 3)$ before a number $n$ can be seen to be $\lceil \frac{n}{3}\rceil$. Therefore, we can see that $a_n=\sqrt{401-n+\lceil\frac{n}{3}\rceil}$. Setting this equal to $19$, we see that $n-\lceil\frac{n}{3}\rceil=40$. We can see that both $n=60$ and $n=61$ satisfy this, however, since $61\equiv1(\bmod3)$, we disregard this, and we find that $a_{60}=19$, so $m=60$. Way 2: Look at the sequence by grouping terms as follows: $1, (20, \sqrt{399}, 1), (\sqrt{398}, \sqrt{397}, 1), (\sqrt{396}, \sqrt{395}, 1), \cdots (\sqrt{362}, \sqrt{361}, 1)$. Note that the number of these groups of three that we have is equal to the number of terms in the sequence $(361, 363, 365, \cdots 399)$. This has $\frac{399-359}{2}=20$ terms in it. Before the last group, we have $19*3+1=58$ terms, and $\sqrt{361}$ is therefore the $60$th term, and $m=60$. Both ways lead to $m=60$ and $k=19$, so the answer is $60+19=\boxed{79}$. To show that no smaller sums can be created, note that the next perfect square will be $k=18$ and will take more than $60+(19^2-18^2)=97$ as the value of $m$, and hence all other perfect squares will take more than $97$, and we have $97+k$ as our sum which is greater than $79$ since $k\ge 2$. Way 3 We must ignore the cases where $a_n=1$, as required by the problem statement. Write $a_n=\sqrt{400-k}$ for some integers $k,a_n$. We want to minimize $a_n+n$. Note if $a_n$ is even ($k$ is even) then $n=1.5k+2$, and if $a_n$ is odd ($k$ is odd), then $n=1.5k+1.5$. Rewrite $k=400-a_n^2$ and note that $a_n$ is at most $19$. In the case that $a_n$ is odd, we have $a_n+n=a_n+1.5(400-a_n^2)+1.5$. This is a quadratic in $a_n$ and is strictly decreasing for $a_n>0$, so to minimize we should choose $a_n=19$, which gives $a_n+n=79$. Do the case for $a_n$ even as well, you find that $79$ is the lowest.
2021-06-23 00:22:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 72, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637486934661865, "perplexity": 59.926034876628925}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00127.warc.gz"}
https://answers.opencv.org/questions/22406/revisions/
# Revision history [back] ### Different range of len captured by one webcam I encounter a weird problem when capturing frames of my webcam. When I preview the webcam using other's application, which needs to be reverse engineered, the frame is complete: complete len's contour and normal size objects. However, in my program (see below), the frame is incomplete: the len seems to be biased and the frame is enlarged a little compared to the normal size. Fig.1 and Fig.2 shows the screenshots of the same object captured by two programs I just mentioned. (The frame resolution is 422x314) Fig.1 Other's application's preview Fig.2 My program's preview My code #include <iostream> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> using namespace std; using namespace cv; int main(int argc, char** argv) { // Current frame Mat frame; // Capture video from the camera int device = 1; VideoCapture capture(device); while (1) { // Query for frame from camera capture >> frame; // Display the captured image resize(frame, frame, Size(422, 314)); namedWindow("frame", CV_WINDOW_AUTOSIZE); imshow("frame", frame); char ch = waitKey(25); if (ch == 27) break; } return 0; } The main OpenCV function employed in my code is VideoCapture capture(device) and I resize frame to 422x314 in order to keep same as the other's. Actually, the default resolution captured by OpenCV is 640x480 that still has problem of incomplete frame. I don't know why one webcam can produce two different ranges of len. Is that caused by the software? Perhaps not To go further, I use MATLAB image acquisition tool to preview the webcam in different resolutions. I surprisedly find that the two previews can be seen when choosing different resolutions! In default (written in MATLAB) resolution 352x288, the result is same as the normal one, but in resolution like 640x480, 320x240, the result is same as my program. So the problem is caused by the hardware? I can't change the hardware (webcam), so how can I modify the OpenCV code to make my frame be complete and normal? Please tell me what you want to know about my webcam specification if it matters. My working is done is Windows 7, Visual Studio 2008 and OpenCV 2.4.5. 2 retagged berak 32993 ●7 ●81 ●312 ### Different range of len captured by one webcam I encounter a weird problem when capturing frames of my webcam. When I preview the webcam using other's application, which needs to be reverse engineered, the frame is complete: complete len's contour and normal size objects. However, in my program (see below), the frame is incomplete: the len seems to be biased and the frame is enlarged a little compared to the normal size. Fig.1 and Fig.2 shows the screenshots of the same object captured by two programs I just mentioned. (The frame resolution is 422x314) Fig.1 Other's application's preview Fig.2 My program's preview My code #include <iostream> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> using namespace std; using namespace cv; int main(int argc, char** argv) { // Current frame Mat frame; // Capture video from the camera int device = 1; VideoCapture capture(device); while (1) { // Query for frame from camera capture >> frame; // Display the captured image resize(frame, frame, Size(422, 314)); namedWindow("frame", CV_WINDOW_AUTOSIZE); imshow("frame", frame); char ch = waitKey(25); if (ch == 27) break; } return 0; } The main OpenCV function employed in my code is VideoCapture capture(device) and I resize frame to 422x314 in order to keep same as the other's. Actually, the default resolution captured by OpenCV is 640x480 that still has problem of incomplete frame. I don't know why one webcam can produce two different ranges of len. Is that caused by the software? Perhaps not To go further, I use MATLAB image acquisition tool to preview the webcam in different resolutions. I surprisedly find that the two previews can be seen when choosing different resolutions! In default (written in MATLAB) resolution 352x288, the result is same as the normal one, but in resolution like 640x480, 320x240, the result is same as my program. So the problem is caused by the hardware? I can't change the hardware (webcam), so how can I modify the OpenCV code to make my frame be complete and normal? Please tell me what you want to know about my webcam specification if it matters. My working is done is Windows 7, Visual Studio 2008 and OpenCV 2.4.5. ### Different range of len captured by one webcam I encounter a weird problem when capturing frames of my webcam. When I preview the webcam using other's application, which needs to be reverse engineered, the frame is complete: complete len's contour and normal size objects. However, in my program (see below), the frame is incomplete: the len seems to be biased and the frame is enlarged a little compared to the normal size. Fig.1 and Fig.2 shows the screenshots of the same object captured by two programs I just mentioned. (The frame resolution is 422x314) Fig.1 Other's application's preview Fig.2 My program's preview My code #include <iostream> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> using namespace std; using namespace cv; int main(int argc, char** argv) { // Current frame Mat frame; // Capture video from the camera int device = 1; VideoCapture capture(device); while (1) { // Query for frame from camera capture >> frame; // Display the captured image resize(frame, frame, Size(422, 314)); namedWindow("frame", CV_WINDOW_AUTOSIZE); imshow("frame", frame); char ch = waitKey(25); if (ch == 27) break; } return 0; } The main OpenCV function employed in my code is VideoCapture capture(device) and I resize frame to 422x314 in order to keep same as the other's. Actually, the default resolution captured by OpenCV is 640x480 that still has problem of incomplete frame. I don't know why one webcam can produce two different ranges of len. Is that caused by the software? Perhaps not. To go further, I use MATLAB image acquisition tool to preview the webcam in different resolutions. I surprisedly find that the two previews can be seen when choosing different resolutions! In default (written in MATLAB) resolution 352x288, the result preview is same as the normal one, but in resolution like 640x480, 320x240, the result preview is same as my program. So is the problem is caused by the hardware? I can't change the hardware (webcam), so how can I modify the OpenCV code to make my frame be complete and normal? Please tell me what you want to know about my webcam specification if it matters. My working is done is on Windows 7, Visual Studio 2008 and OpenCV 2.4.5. ### Different range of len captured by one webcam I encounter a weird problem when capturing frames of my webcam. When I preview the webcam using other's application, which needs to be reverse engineered, the frame is complete: complete len's contour and normal size objects. However, in my program (see below), the frame is incomplete: the len seems to be biased and the frame is enlarged a little compared to the normal size. Fig.1 and Fig.2 shows the screenshots of the same object captured by two programs I just mentioned. (The frame resolution is 422x314) Fig.1 Other's application's preview Fig.2 My program's preview My code #include <iostream> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> using namespace std; using namespace cv; int main(int argc, char** argv) { // Current frame Mat frame; // Capture video from the camera int device = 1; VideoCapture capture(device); while (1) { // Query for frame from camera capture >> frame; // Display the captured image resize(frame, frame, Size(422, 314)); namedWindow("frame", CV_WINDOW_AUTOSIZE); imshow("frame", frame); char ch = waitKey(25); if (ch == 27) break; } return 0; } The main OpenCV function employed in my code is VideoCapture capture(device) and I resize frame to 422x314 in order to keep same as the other's. Actually, the default resolution captured by OpenCV is 640x480 that still has problem of incomplete frame. I don't know why one webcam can produce two different ranges of len. Is that caused by the software? Perhaps not.not. To go further, I use MATLAB image acquisition tool to preview the webcam in different resolutions. I surprisedly find that the two previews can be seen when choosing different resolutions! In default (written in MATLAB) resolution 352x288, the preview is same as the normal one, but in resolution like 640x480, 320x240, the preview is same as my program. So is the problem caused by the hardware? I can't change the hardware (webcam), so how can I modify the OpenCV code to make my frame be complete and normal? Please tell me what you want to know about my webcam specification if it matters. My working is on Windows 7, Visual Studio 2008 and OpenCV 2.4.5.
2021-06-19 21:55:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2748452126979828, "perplexity": 5554.513156848185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00449.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3749323
# Cantor's Diagonalization Proof of the uncountability of the real numbers by Leucippus Tags: cantor, diagonalization, numbers, proof, real, uncountability P: 39 Quote by Deveno you don't seem to grasp a key point. the numbers listed are not just rational numbers. the vast majority of them (as it turns out) are aperiodic non-terminating decimals. if we stop the process after 3 steps, all we have is a decimal representation that differs from the first 3 numbers. if we continue, then after n steps, we just have a decimal that is different from the first n numbers listed. now, if the real numbers between 0 and 1 COULD be put in a 1-1 correspondence with the natural numbers, at some finite time, we would reach any given real number. that is what it means to be "countable" (eventually you can "count up" to any given element of the set). but the ever-increasing number we are creating at the bottom, doesn't match any number reached after a finite number of steps. so no matter how long we "count" (how far down the list we go), let's say we go down 3 million places, the number at the bottom is different from the first 3 million numbers. you could write a computer program, that checked the number at the bottom against the first k numbers at step k, and if it found a match within the first k, instruct it to halt. it will never halt. if the real numbers between 0 and 1 WERE countable, eventually, at some point, we would find a match, because at some point, we'd get to any real number we cared to name. but the way we are constructing the number (it grows by one digit, as we eliminate the possible matches from the "top down"), it's guaranteed not to match as many numbers as it has digits. no match = not countable. and we're rigging it so there won't BE a match. which number on the list can it possibly match? the 1st? no. the 2nd? no. the 3rd? no. the 4th? no. the 3,567,281st? no. all our number on the bottom has to do, to "not match" is be different in ONE digit-place from any given number. how do you think that will fail to happen? With all due respect Deveno, your argument here makes no sense to me at all. Why would you expect to ever get a number that's already on the list that you are generating? That would never happen in this process. That's obvious. But that doesn't prove anything in light of my observations. You're always going to be generating a number that is "further down" the list than where you are currently working. That's a given. Even my finite examples clearly show that this will always be the case. The very process that is being used to generate your "new number" demands that this be the case. My point is that it doesn't matter. Numerical representations of numbers like this simply aren't square to begin with. So just because the new number you've created isn't on the list above where you are currently working in this process doesn't mean a thing. That's my whole POINT. The numbers above where you are working cannot be a "complete list" with respect to where you are at in the columns, because the slope of the diagonal line simply isn't steep enough to make that guarantee. And this is why this diagonal method can't prove what Cantor claims it proves. This diagonal method of creating a "new number" simply isn't anywhere near "fast enough" (i.e. it doesn't have a steep enough slope) to even deal with finite lists of numbers. It will always be "behind" the list. And it just gets further and further behind with every digit that is crossed out. We can clearly see this in the finite examples I've given. Why should this change if we try to take this process out to infinity? The situation is only going to get worse with ever digit we cross out. P: 799 Quote by Leucippus That would never happen in this process. That's obvious. Leu, one point of confusion here is that you are thinking of Cantor's proof as a process. You are adding one row at a time and looking at the antigiagonal and then talking about rectangles. All of that is erroneous thinking and has nothing to do with Cantor's proof. Cantor asks us to consider any complete list of real numbers. Such a list is infinite, and we conceptualize it as a function that maps a number, such as 47, to the 47-th element on the list. There's a first element, a 2nd element, and DOT DOT DOT. We assume that ALL of these list entries exist, all at once. Then we construct the antidiagonal. It's clear that the antidiagonal can't be on the list ... because it differs from the n-th item on the list in the n-th decimal place. (Or binary place if you do the proof in base-2). Since we started by assuming we had a list of all reals; and we just showed that any such list must necessarily be missing a real; then it follows that there can be no such complete list in the first place. I suggest that you make an attempt to fully understand this beautiful proof on its own terms. There is nothing here about processes and nothing here about rectangles. You are introducing those irrelevant concepts on your own and losing sight of the proof itself. P: 159 Quote by Leucippus But I have demonstrated it. You claim that I just provided some sort of intuitive graphical argument. But that is precisely what Cantor's diagonal proof is. It's a graphical argument based on using a numerical representation of numbers. Um, no, it's not a "graphical argument", it is (or can be made into) a precise formal argument. I'm demonstrating (using precisely the same kind of numerical graphical representations of numbers) why Cantor's graphical diagonalization proof has no merit. No, you are demonstrating why it's impossible for a list of numbers (finite or infinite) to be square. That's exactly what Cantor is saying. That is literally the content of his theorem: the width of the list (the cardinality of the naturals) is less than the height of the list (the cardinality of the reals). You are literally repeating Cantor's theorem while at the same time claiming you're denying it. Jesus Christ. P: 159 A: Let us prove that 2 is irrational. We use proof by contradiction. Thus, suppose that 2 is the ratio of two integers..." B: But 2 is not a ratio of two integers! We can prove it's not." A: Um, yeah, that's what I'm trying to prove, using the method of contradiction. Again, suppose that 2 is the ratio of two integers..." B: But it's not! Your theorem is false." A: *facepalm* P: 39 Quote by SteveL27 Leu, one point of confusion here is that you are thinking of Cantor's proof as a process. You are adding one row at a time and looking at the antigiagonal and then talking about rectangles. All of that is erroneous thinking and has nothing to do with Cantor's proof. Cantor asks us to consider any complete list of real numbers. Such a list is infinite, and we conceptualize it as a function that maps a number, such as 47, to the 47-th element on the list. There's a first element, a 2nd element, and DOT DOT DOT. We assume that ALL of these list entries exist, all at once. Then we construct the antidiagonal. It's clear that the antidiagonal can't be on the list ... because it differs from the n-th item on the list in the n-th decimal place. (Or binary place if you do the proof in base-2). Since we started by assuming we had a list of all reals; and we just showed that any such list must necessarily be missing a real; then it follows that there can be no such complete list in the first place. I suggest that you make an attempt to fully understand this beautiful proof on its own terms. There is nothing here about processes and nothing here about rectangles. You are introducing those irrelevant concepts on your own and losing sight of the proof itself. "There is nothing here about processes and nothing here about rectangles. You are introducing those irrelevant concepts on your own and losing sight of the proof itself." I disagree. Cantor is using a numerical presentation of numbers and crossing them out using a diagonal line. My observations of why this cannot be used in the way he is using it are valid observations, IMHO. You can't just tell me to ignore the very things that Cantor's proof rely upon. As far thinking of the thing as a "process", that too is irrelevant. It doesn't matter whether it's thought of as a process, or as some sort of miraculous completed object. The slope of the diagonal line is too shallow to prove what Cantor claims to have proved in either case. Trying to imagine a "Completed infinite process" isn't going to help. If you're imagining that you could have actually made it to the bottom of any list at all, then you are imagining something that simply won't work in this situation. The innate rectangular nature of numerical systems of representation cannot be ignored either. In fact, if you are ignoring that, it's no wonder the proof appears to be so beautiful to you. You're just wrongfully assuming that such a list could be square and that a diagonal line could traverse the whole list from top to bottom. You absolutely need to realize that it makes no sense to think of lists of numbers in that way before you can even begin to see why this proof cannot hold. Recognizing the innate rectangular nature of numerical representations of numbers is paramount to understanding why this proof has no validity and cannot work. Pretending that such lists could be square is the only thing that 'saves' the proof. Why would I want to pretend that such lists could be square? ~~~~ So basically you're telling me that if I ignore that numerical lists are innately rectangular, and pretend that they are square, I too could see the "beauty" of Cantor's proof? Well, I guess so. But that's not reality, so why should I go there? Mentor P: 16,633 Cantor's argument has NOTHING to do with squares and rectangles. I know that there are often fancy pictures of squares in books, but those are ILLUSTRATIONS of the argument. The real formal argument is indisputable. I'll number the steps so you can specifically say where you disagree. Here is the formal argument: 1) Assume that ]0,1[ is countable, then we can write $]0,1[=\{x_1,x_2,x_3,x_4,...\}$. 2) Every number has a decimal expansion. So we can write $x_i=x_i^1\frac{1}{10}+x_i^2\frac{1}{10^2}+...+x_i^n\frac{1}{10^n}+...$. 3) Put $y_n=x_n^n$ for all n. Put $z_n=4$ if $y_n=5$ and put $z_n=5$ otherwise. 4) Put $z=z_1\frac{1}{10}+z_2\frac{1}{10^2}+...+z_n\frac{1}{10^n}+...$. 5) Notice that $z_n$ does not equal $x_n^n$ for all n. We can use this to prove that z does not equal any $x_n$. (I won't write this proof down. If the problem is here then you should say so and I will write the explicit proof down of this fact). This argument is the pure formal argument. You should find a mistake in this proof, not in the illustration of the proof. P: 159 Quote by micromass Cantor's argument has NOTHING to do with squares and rectangles. I know that there are often fancy pictures of squares in books, but those are ILLUSTRATIONS of the argument. I disagree. Personally, I've never thought of Cantor's argument in terms of squares and rectangles previously but I do find it to be a rather nice way of thinking about it. Cantor's diagonal proof is precisely proof of the fact that the rectangles never become squares. That's just a very straightforward reformulation of Cantor's point - the rectangle is as wide as N and as high as R. The only thing that's bizarre here is that Leucippus for some reason doesn't seem to understand that showing that an assumption cannot hold is precisely what you're supposed to do in a proof by contradiction. Math Emeritus Sci Advisor Thanks PF Gold P: 38,886 Quote by Preno I disagree. Personally, I've never thought of Cantor's argument in terms of squares and rectangles previously but I do find it to be a rather nice way of thinking about it. Cantor's diagonal proof is precisely proof of the fact that the rectangles never become squares. That's just a very straightforward reformulation of Cantor's point - the rectangle is as wide as N and as high as R. No, it isn't. The whole point of Cantor's argument is that this list doesn't exist to begin with! The only thing that's bizarre here is that Leucippus for some reason doesn't seem to understand that showing that an assumption cannot hold is precisely what you're supposed to do in a proof by contradiction. Mentor P: 16,633 Quote by Preno I disagree. Personally, I've never thought of Cantor's argument in terms of squares and rectangles previously but I do find it to be a rather nice way of thinking about it. Cantor's diagonal proof is precisely proof of the fact that the rectangles never become squares. That's just a very straightforward reformulation of Cantor's point - the rectangle is as wide as N and as high as R. The only thing that's bizarre here is that Leucippus for some reason doesn't seem to understand that showing that an assumption cannot hold is precisely what you're supposed to do in a proof by contradiction. I agree that it's a rather nice way of thinking about it. But it's not rigorous. You have to make a fundamental distinction in mathematics between a rigorous formal argument and "nice ways of thinking about it". A pure formal argument uses axioms and inference rules, nothing else. It is often a very abstract argument, but it is always completely correct. An illustration or a picture are NOT valid proofs. They're ok to form your intuition, but you have to realize that they are NOT proofs. Realizing what a valid proof is, is very fundamental in mathematics. P: 39 Quote by micromass Cantor's argument has NOTHING to do with squares and rectangles. I know that there are often fancy pictures of squares in books, but those are ILLUSTRATIONS of the argument. The real formal argument is indisputable. I'll number the steps so you can specifically say where you disagree. Here is the formal argument: 1) Assume that ]0,1[ is countable, then we can write $]0,1[=\{x_1,x_2,x_3,x_4,...\}$. 2) Every number has a decimal expansion. So we can write $x_i=x_i^1\frac{1}{10}+x_i^2\frac{1}{10^2}+...+x_i^n\frac{1}{10^n}+...$. 3) Put $y_n=x_n^n$ for all n. Put $z_n=4$ if $y_n=5$ and put $z_n=5$ otherwise. 4) Put $z=z_1\frac{1}{10}+z_2\frac{1}{10^2}+...+z_n\frac{1}{10^n}+...$. 5) Notice that $z_n$ does not equal $x_n^n$ for all n. We can use this to prove that z does not equal any $x_n$. (I won't write this proof down. If the problem is here then you should say so and I will write the explicit proof down of this fact). This argument is the pure formal argument. You should find a mistake in this proof, not in the illustration of the proof. Hello, Micromass. I am not refuting the ultimate conclusion that reals are uncountable. I am totally convinced that the real numbers cannot be placed into a one-to-one correspondence with the natural numbers. I'm not questioning that at all. I'm addressing only what I set out to address: Cantor's Diagonalization Proof. It is indeed a graphical proof, based on listing numerals and running a diagonal line through them. This is how Cantor originally proposed it, and it is the specific proof that I'm interested in addressing. ~~~~ I'll take a look at the formal abstract proof you offered too, and comment on that proof later. But that's really not what I'm addressing. I'm not concerned with proving or disproving the countability of the reals in general. All I'm concerned with is whether Cantor's Diagonalization proof is valid. That's what I'm looking at, and that's what I'm addressing specifically. I'm addressing the validity of this specific proof. I'm not challenging the results of the proof in general, or whether the reals are uncountable. I'm more interested in methods of proofs, than in what they are trying to prove. And this is what led me to finding the flaw in this particular proof. So it is this specific method of proof that I'm addressing. And the flaws associated specifically with it. ~~~~ I've recently watched a course by the Teaching Company Course on Great theorems. I was able to follow all of the theorems and proofs without a hitch until it came to this proof by Cantor. As I tried to better understand it I actually realized why it doesn't prove anything. The ultimate conclusions may very well be true coincidentally. But this diagonalization method (as it was presented in this course) is not a valid proof. It fails for the reasons I've already stated in previous posts. I don't understand why people think this is a valid proof? It would require that numerical lists of numbers are square, but they aren't. Mentor P: 16,633 Quote by Leucippus Hello, Micromass. I am not refuting the ultimate conclusion that reals are uncountable. I am totally convinced that the real numbers cannot be placed into a one-to-one correspondence with the natural numbers. I'm not questioning that at all. I'm addressing only what I set out to address: Cantor's Diagonalization Proof. It is indeed a graphical proof, based on listing numerals and running a diagonal line through them. This is how Cantor originally proposed it, and it is the specific proof that I'm interested in addressing. ~~~~ I'll take a look at the formal abstract proof you offered too, and comment on that proof later. But that's really not what I'm addressing. I'm not concerned with proving or disproving the countability of the reals in general. All I'm concerned with is whether Cantor's Diagonalization proof is valid. That's what I'm looking at, and that's what I'm addressing specifically. I'm addressing the validity of this specific proof. I'm not challenging the results of the proof in general, or whether the reals are uncountable. I'm more interested in methods of proofs, than in what they are trying to prove. And this is what led me to finding the flaw in this particular proof. So it is this specific method of proof that I'm addressing. And the flaws associated specifically with it. ~~~~ I've recently watched a course by the Teaching Company Course on Great theorems. I was able to follow all of the theorems and proofs without a hitch until it came to this proof by Cantor. As I tried to better understand it I actually realized why it doesn't prove anything. The ultimate conclusions may very well be true coincidentally. But this diagonalization method (as it was presented in this course) is not a valid proof. It fails for the reasons I've already stated in previous posts. I don't understand why people think this is a valid proof? It would require that numerical lists of numbers are square, but they aren't. IF you're talking about the graphical proof, then it is not a valid proof. Proofs should never be graphical. Notice that how Cantor originally proposed the proof is irrelevant, mathematics has changed a lot in 100 years. The abstract proof I just gave IS commonly known NOW as Cantor diagonalization. The "graphical proof" is not valid these days. P: 1,351 Quote by Leucippus The ultimate conclusions may very well be true coincidentally. But this diagonalization method (as it was presented in this course) is not a valid proof. It fails for the reasons I've already stated in previous posts. I don't understand why people think this is a valid proof? It would require that numerical lists of numbers are square, but they aren't. I don't understand this at all. The proposed list will have an countably infinite number of reals, and each real will have an countably infinite number of digits (adding an infinite number of zero's if necessary). For the proof, it's only necessary that for each number N, there's an N'th row, and there's an N'th column. Since both the number of rows and the number of columns is infinite, this is obviously true, so it's always possible to find the N'th number of the diagonal and change it. Since this works for all N, we can find the complete diagonal. P: 39 Quote by micromass IF you're talking about the graphical proof, then it is not a valid proof. Proofs should never be graphical. That's certainly a controversial statement. I just finished taking a course on "The Shape of Nature" which is a cutting-edge course on Topology by Professor Satyan L. Devadoss of Williams College. He not only enthusiastically supports graphical proofs, but he even cites one that he personally introduced into mathematics and he suggests that it could not be stated any other way than graphically. But then again, he addresses topology which is clearly different from set theory concepts. Here's a link to his course if you would like to watch it. The Shape of Nature By the way, you don't need to buy it. Usually you can get it through inter-library loan. Notice that how Cantor originally proposed the proof is irrelevant, mathematics has changed a lot in 100 years. The abstract proof I just gave IS commonly known NOW as Cantor diagonalization. The "graphical proof" is not valid these days. His diagonalization proof is still being presented to people and taught to people in many math courses and books today. If it's a flawed proof then that flaw should be addressed and exposed. So I would disagree with you that it's not a valid concern today. Seems to me that all you're basically suggesting is that it's irrelevant if I might have found a flaw in Cantor's original proof. I personally don't think that's irrelevant at all. Like I say. I'll look into the proof you provided and see if I can relate it back to Cantor's original idea. Mentor P: 16,633 Quote by Leucippus That's certainly a controversial statement. It's not a controversial statement at all. The idea of what a proof is and is not, is very well established. If you study mathematical logic, then you see that mathematicians have a very very very precise idea of what a proof really is. That said, the formal logical proof is often to hard to comprehend. So that is why people make illustrations to proofs, and formulate proofs to make them easier. It is important to know that that is NOT what the proof is. It's like somebody pointing to the moon and the people are all looking at the finger. The logical proof is what mathematics is all about. but he even cites one that he personally introduced into mathematics and he suggests that it could not be stated any other way than graphically. THAT is the controversial statement. If a proof can only be stated graphically, then it's wrong. Period. But then again, he addresses topology which is clearly different from set theory concepts. Topology is not different from set theoretic concepts. Topology is well-established and formalized. All proofs in topology can be expressed in formal form. His diagonalization proof is still being presented to people and taught to people in many math courses and books today. If it's a flawed proof then that flaw should be addressed and exposed. His diagonalization is being presented to people today. But those people usually KNOW that the graphical proof is not the proof itself. When I first saw Cantor diagonalization, I realized very well that one should formalize the proof and write it in formal statements. When they first showed me that $A\cap (B\cup C)=(A\cap B)\cup (A\cap C)$ using a picture, it was IMMEDIATELY made clear that this was not a valid proof. It simply SUGGESTS a valid proof. So I would disagree with you that it's not a valid concern today. Well, you got your answer: the proof is wrong. Is there anything else you would like to discuss? P: 39 Quote by willem2 I don't understand this at all. The proposed list will have an countably infinite number of reals, and each real will have an countably infinite number of digits (adding an infinite number of zero's if necessary). For the proof, it's only necessary that for each number N, there's an N'th row, and there's an N'th column. Since both the number of rows and the number of columns is infinite, this is obviously true, so it's always possible to find the N'th number of the diagonal and change it. Since this works for all N, we can find the complete diagonal. Yes, I understand what you are saying. But aren't you completely tossing out everything that I had previously pointed out? You're basically assuming a "square" situation. You're basically saying that since it's infinitely many rows and infinitely many columns that somehow makes it doable or even "square" in the sense that it's infinity by infinity in dimension. But you're not taking into account the observations I've given. That lists of numbers represented by numerals cannot be made into 'square' lists. Pretending that "infinity by infinity" represents a square list that can be completed is the folly. You'd also have to ignore the limitations on the slope of any line that needs to go down these lists crossing out a digit at time. The slope of that line cannot be made steep enough to keep up with a list that would be required to represent even the natural numbers let alone the reals. The very assumption that this problem can be viewed as a "square" infinity-by-infinity list is indeed the folly. That's the false assumption right there that can't hold true. P: 39 Quote by micromass Well, you got your answer: the proof is wrong. Is there anything else you would like to discuss? No, I guess I'm done here then. I wasn't aware the proof was considered to be wrong. I only wish the people who teach these things would be more clear about that. Then I wouldn't need to go around making myself look like an idiot proclaiming to have discovered things that everyone else already knows. P: 1,351 Quote by Leucippus Yes, I understand what you are saying. But aren't you completely tossing out everything that I had previously pointed out? You're basically assuming a "square" situation. The term square or rectangle makes no sense at all for an array of numbers that's infinite in 2 dimensions. Both are countably infinite, and thats ALL you can say. But you're not taking into account the observations I've given. That lists of numbers represented by numerals cannot be made into 'square' lists. Pretending that "infinity by infinity" represents a square list that can be completed is the folly. I'll agree with you that the lists cannot be made into square lists, but that's just because an infinite square list is nonsense. But you don't need that. All you need is to be able to extend the diagonal arbitrarily far. You'd also have to ignore the limitations on the slope of any line that needs to go down these lists crossing out a digit at time. The slope of that line cannot be made steep enough to keep up with a list that would be required to represent even the natural numbers let alone the reals. Note that an infinite list of numbers with an infinite number of digits only has a corner at (1,1) (the first digit of the first number). It does NOT have an opposite corner with the diagonal running between those corners. The diagonal does just fine with a slope of 1. P: 39 Thank you for sharing your views Willem, but as Micromass points out, this historical "proof" doesn't have the mathematical rigor worth arguing over. It wouldn't be accepted as a valid "proof" today precisely because the type of issues that I've brought up (and the types of issues that you offer in return). This "proof" cannot be made rigorous enough to settle these concerns in a clear and definitive way. I guess Micromass has a point. It just doesn't measure up to the rigor required of modern mathematics. I most certainly will agree with that observation. Related Discussions Calculus & Beyond Homework 8 Set Theory, Logic, Probability, Statistics 2 Calculus & Beyond Homework 5 General Physics 177 General Math 36
2014-04-20 23:40:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6300959587097168, "perplexity": 397.0813744039554}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/351985/help-in-understanding-the-ridge-regression-solution-break-down
# Help in understanding the ridge regression solution break down? I tried to follow Jann Goschenhofer's answer here, but I don't understand 1. How $x_i^T$ in $Criterion_{Ridge} = \sum_{i=1}^{n}(y_i-x_i^T\beta)^2 + \lambda \sum_{j=1}^p\beta_j^2$ became just $X$ without transpose in $Criterion_{Ridge} = (y-X\beta)^T(y-X\beta) + \lambda\beta^T\beta$ 2. How did he just replace $y^TX\beta$ with $\beta^TX^Ty$ in the break down of the $Criterion_{Ridge}$? He wrote $= y^Ty - \beta^TX^Ty - y^TX\beta+ \beta^Tx^TX\beta + \lambda\beta^T\beta$ is equal to $= y^Ty - \beta^TX^Ty - \beta^TX^Ty + \beta^TX^TX\beta + \beta^T\lambda I\beta$ ? If he just used the fact that $(AB)^T=B^TA^T$ then he should have written $(\beta^TX^Ty)^T$ and not just $\beta^TX^Ty$ • For numbers (considered as $1\times 1$ matrices) $x$, it is obvious that $x^\prime=x.$ This relation is exploited repeatedly in the algebra. – whuber Jun 18 '18 at 18:44 • @whuber, can you please explain in more details? Also, what about my second question? – theateist Jun 18 '18 at 20:29 How did he just replace $y^T X \beta$ with $\beta^TX^Ty$ As @Whuber points out - the trick is to see that the two terms you are referring to are actually scalar (not vectors) and so transposing them has no effect as $x^T = x$ for scalar $x$. ### You can see these are scalar from their dimensions: Let $m$ be the number of observations and $n$ be the number of features • $y: m \times 1$ • $\beta: n \times 1$ • $X: m \times n$ • $y^T X \beta: (1 \times m) \ (m \times n) \ (n \times 1) = 1 \times 1$ • $\beta^T X^T y: (1 \times n) \ (n \times m) \ (m \times 1) = 1 \times 1$ ### Use vector transpose properties to see they are the same: • $(y^T X \beta)^T = \beta^T X^T y^{TT} = \beta^T X^T y$
2019-07-20 20:22:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825122594833374, "perplexity": 385.5491161926753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00241.warc.gz"}
http://www.xaprb.com/blog/2007/10/10/proposed-bounty-on-mysql-table-sync-features/
Proposed bounty on MySQL Table Sync features I am considering taking some time off work to concentrate deeply on MySQL Table Sync, which has been getting usage in very large companies whose names we all know. There are a lot of bugs and feature requests outstanding for it. It is overly complex, needs a lot of work, and I can’t do it in one-hour or even three-hour chunks. I need to focus on it. I’m considering asking for a bounty of $2500 USD for this. Please let me know what you think of this; it seems to be a successful way to sponsor development on some other projects, like Vim. For the amount of time I think this will take,$2500 is far below my per-hour consulting rate; I considered setting the bounty higher, but I think this will be a fair amount. I would not begin this project before December at the earliest, so there’s some time to raise funds and time for me to continue working on High Performance MySQL. I would like a volunteer to coordinate the fund-raising for me. It should be trivial, but I don’t want to do it myself, for several reasons. I can publicize the bounty on this blog and the project mailing list, and contact some of the corporations that have asked me for features. I doubt it will be hard to raise the money. I’m not committing to this, just proposing it, though I did run it by my employer, who is very supportive. Here’s the list of features I propose to implement: • Writing a test suite • Bi-directional syncing • Syncing many tables • Syncing tables without a primary key
2016-09-30 23:47:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20282328128814697, "perplexity": 1171.1439209325604}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662430.97/warc/CC-MAIN-20160924173742-00235-ip-10-143-35-109.ec2.internal.warc.gz"}
https://lmcs.episciences.org/5328
## Janičić, Predrag and Marić, Filip and Maliković, Marko - Computer-Assisted Proving of Combinatorial Conjectures Over Finite Domains: A Case Study of a Chess Conjecture lmcs:4233 - Logical Methods in Computer Science, March 29, 2019, Volume 15, Issue 1 Computer-Assisted Proving of Combinatorial Conjectures Over Finite Domains: A Case Study of a Chess Conjecture Authors: Janičić, Predrag and Marić, Filip and Maliković, Marko There are several approaches for using computers in deriving mathematical proofs. For their illustration, we provide an in-depth study of using computer support for proving one complex combinatorial conjecture -- correctness of a strategy for the chess KRK endgame. The final, machine verifiable, result presented in this paper is that there is a winning strategy for white in the KRK endgame generalized to $n \times n$ board (for natural $n$ greater than $3$). We demonstrate that different approaches for computer-based theorem proving work best together and in synergy and that the technology currently available is powerful enough for providing significant help to humans deriving complex proofs. DOI : 10.23638/LMCS-15(1:34)2019 Volume: Volume 15, Issue 1 Published on: March 29, 2019 Submitted on: January 24, 2018 Keywords: Computer Science - Logic in Computer Science,03B35, 68T15
2020-04-05 22:10:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6491027474403381, "perplexity": 3206.2760470479952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00253.warc.gz"}
https://tex.stackexchange.com/questions/212992/purpose-of-control-space/212996
Purpose of “control space” I could not find enough information on Google about control space (\). What is the purpose of it? Where should it be used? The following is taken directly from Knuth's TeXbook (Chapter 3: Controlling TeX, p 8): When a space comes after a control word (an all-letter control sequence), it is ignored by TeX; i.e., it is not considered to be a "real" space belonging to the manuscript that is being typeset. But when a space comes after a control symbol, it's truly a space. Now the question arises, what do you do if you actually want a space to appear after a control word? We will see later that TeX treats two or more consecutive spaces as a single space, so the answer is not going to be "type two spaces." The correct answer is to type "control space," namely \␣ (the escape character followed by a blank space); TeX will treat this as a space that is not to be ignored. Notice that control-space is a control sequence of the second kind, namely a control symbol, since there is a single nonletter (␣) following the escape character. Two consecutive spaces are considered to be equivalent to a single space, so further spaces immediately following \␣ will be ignored. But if you want to enter, say, three consecutive spaces into a manuscript you can type \␣\␣\␣. Incidentally, typists are often taught to put two spaces at the ends of sentences; but we will see later that TeX has its own way to produce extra space in such cases. Thus you needn't be consistent in the number of spaces you type. For example, compare \TeX\ ignores spaces after control words. to \TeX ignores spaces after control words. There are not many uses for it besides after control sequences or to ensure non extended spaces after periods that are not punctuation (but in these cases, a tie ~ would be better). The tie is defined in terms of \ : in Plain TeX it is \def~{\penalty\@M \ } % tie while in LaTeX we see \def~{\nobreakspace{}} \DeclareRobustCommand{\nobreakspace}{\leavevmode\nobreak\ } By the way, there is a difference between the tie in Plain and in LaTeX: if you have a ~ just after an empty line in a Plain TeX document, the penalty would be inserted in vertical mode (not really a big deal, actually). Knuth likes, sometimes, to add a “very extended” space. This is the end of chapter 15, on page breaking (taken from texbook.tex): [...] After the |\output| routine is finished, ^{held-over insertion} items are placed first on the list of recent contributions, followed by the vertical list constructed by |\output|, followed by the recent contributions beginning with the page break. \ (Deep breath.) \ You got that? These are two interword spaces plus the extra space due to the sentence ending period. Such a double space may be employed in headers. If \frenchspacing is in force, then \  and \space would be equivalent, as \space expands to a space token and the space factor wouldn't come into action to make a difference. However, \  is possibly clearer than \space and they are not equivalent under \nonfrenchspacing. In the Knuth example above, break. \space (Deep Breath.) would produce a wider space, because both space tokens would be extended because of the space factor after . which is 3000. A tiny quirk: if you have \  at the end of a line, the space would be removed and replaced by the \endlinechar. Indeed, both Plain and LaTeX define a meaning for \^^M \def\^^M{\ } % control <return> = control <space> An input such as \endlinechar=S abc\ def\bye% would produce If I find myself in need of \  at the end of a line, I usually add % and, indeed, \endlinechar=S abc\ % def\bye% would produce a space. • at the end of your first sentence, i'd say "a tie ~ would usually be better". some people insist on not putting commas after "i.e." or "e.g.", and in those cases, a line break is almost always okay. (i'd prefer the comma, myself, but some authors are adamant, and i prefer to save my energy for dealing with changes in situations where there is real confusion in the original text.) – barbara beeton Nov 19 '14 at 19:37 • @barbarabeeton I spoke about periods not being punctuation; if I had to avoid a comma after i.e., I'd certainly not want it at the end of a line. – egreg Nov 19 '14 at 20:29 • i've seen some convoluted sentences where "e.g." has been moved to the end of a sentence, such as "that's because aaa and bbb, e.g." (rather than "for example", which i would find preferable). in that situation, surely you wouldn't want to suppress a line break. (although you may want to severely chastise the author.) in that case, the period after "g" performs two functions -- both the abbreviation and the end of the sentence. (i helped keypunch the brown linguistic corpus, which contained some "dual-function" periods; i remember the discussion, but not the resolution.) – barbara beeton Nov 19 '14 at 20:43 • @barbarabeeton Sorry, I'm not indulging bad style. ;-) I consider a two function period worse than letterspacing lowercase. – egreg Nov 19 '14 at 20:48
2020-01-22 02:15:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306637048721313, "perplexity": 1581.816025077194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606269.37/warc/CC-MAIN-20200122012204-20200122041204-00301.warc.gz"}
http://blog.jgc.org/2008/03/retiring-from-anti-spam.html
Tuesday, March 25, 2008 "Retiring" from anti-spam Today, I'm "retiring" from anti-spam work. Practically, that means the following: • No more updates to The Spammers' Compendium or Anti-spam Tool League Table pages. These remain on line, but are not being maintained. • I'm looking for a new leader for the POPFile project. • I'm no longer active on any anti-spam mailing lists. • I am leaving all anti-spam conference committees. • My anti-spam newsletter is no longer being published. I will, however, be continuing with commercial anti-spam work where I have agreements currently in place with customers. No change to their support, terms or assistance. The obvious question is why? For me, the interest just isn't there. The battle against spam continues but is now about trench warfare rather than creating new weapons. We'll continue to see innovation, but for any hacker it's the new, new thing that's important. For me, spam is yesterday's news. Watching companies squabble and refuse to cooperate, seeing a decline in quality at anti-spam conferences, and major companies essentially killing their consumer anti-spam means anti-spam just isn't where I want to be. Of course, there are many really good people fighting spam out there. This post isn't meant to demean them. Thank you to everyone who has supported what I've done over the last 7 years, and good luck! Labels: If you enjoyed this blog post, you might enjoy my travel book for people interested in science and technology: The Geek Atlas. Signed copies of The Geek Atlas are available. <$BlogCommentBody$> <$BlogCommentDateTime$> <$BlogCommentDeleteIcon$> Links to this post: <$BlogBacklinkControl$> <$BlogBacklinkTitle$> <$BlogBacklinkDeleteIcon$> <$BlogBacklinkSnippet$>
2016-09-29 01:33:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18757881224155426, "perplexity": 6123.0854910033595}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661775.4/warc/CC-MAIN-20160924173741-00020-ip-10-143-35-109.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/32306-derivative-natural-logarithmic-funtion.html
# Math Help - Derivative of Natural Logarithmic Funtion 1. ## Derivative of Natural Logarithmic Funtion Find the equation of the tangent to the curve defined by y=ln(1+2^(-x)) at the point where x=0. Help will be appreciated 2. Take the derivative of y and evaluate it at x = 0. This will be your slope value. To find the equation to the tangent, you use the formula: $y - y_{1} = m(x - x_{1})$ where $(x_{1}, y_{1})$ is the point you're concerned with. As for taking the derivative, make sure you're careful about using your chain rule! Show us what you've gotten down and we'll help you from there 3. Originally Posted by rhhs11 Find the equation of the tangent to the curve defined by y=ln(1+2^(-x)) at the point where x=0. Help will be appreciated Note that $2^{-x}=e^{-xln(2)}$ if we take the derivative we get. $\frac{d}{dx}2^{-x}=-ln(2)e^{-xln(2)}=-ln(2)2^{-x} $ So now back to your derivative $\frac{dy}{dx}=\frac{-ln(2)2^{-x}}{1+2^{-x}}$ evaluating at zero we get... $\frac{dy}{dx}|_{x=0}=\frac{-ln(2)2^{-0}}{1+2^{-0}}=\frac{-ln(2)}{2}$ 4. So i got y = 1 / (1+e^-x) * e^-x * (-1) = - e^(-x) / (1+ e^-x) mtangent @ x=0 = -1 / (1+1) = -1/2 y - ln2 = 1/2(x-0) 2y - 2ln2 = x x - 2y + 2ln2 = 0 Im not sure if that's right but looks fair enough. I would like any advice though. I also have another question where it says: Find the equation of the tangent to the curve defined by y = e^x that is perpendicular to the line 3x + y = 1. 5. Im sorry but the question is y = ln(1+e^-x) 6. Originally Posted by rhhs11 So i got y = 1 / (1+e^-x) * e^-x * (-1) = - e^(-x) / (1+ e^-x) mtangent @ x=0 = -1 / (1+1) = -1/2 y - ln2 = 1/2(x-0) 2y - 2ln2 = x x - 2y + 2ln2 = 0 Im not sure if that's right but looks fair enough. I would like any advice though. I also have another question where it says: Find the equation of the tangent to the curve defined by y = e^x that is perpendicular to the line 3x + y = 1. $y=\ln(1+2^{-x})$ to $y=\ln(1+e^{-x})$
2014-10-20 22:52:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7334534525871277, "perplexity": 856.2562452023399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443451.12/warc/CC-MAIN-20141017005723-00320-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.j.sinap.ac.cn/nst/EN/10.1007/s41365-020-00787-6
# Nuclear Science and Techniques 《核技术》(英文版) ISSN 1001-8042 CN 31-1559/TL     2019 Impact factor 1.556 Nuclear Science and Techniques ›› 2020, Vol. 31 ›› Issue (8): 77 • NUCLEAR PHYSICS AND INTERDISCIPLINARY RESEARCH • ### Probing neutron–proton effective mass splitting using nuclear stopping and isospin mix in heavy-ion collisions in GeV energy region Fan Zhang1 • Jun Su2 1. 1 Department of Electronic Information and Physics, Changzhi University, Changzhi 046011, China 2 Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China • Received:2020-03-16 Revised:2020-05-30 Accepted:2020-06-02 • Contact: Fan Zhang E-mail:zhangfan@mail.bnu.edu.cn • Supported by: This work was supported by the National Natural Science Foundation of China (Nos. 11905018 and 11875328), and the Scientific and Technological Innovation Programs of Higher Education Institutions of Shanxi Province, China (No. 2019L0908). PDF ShareIt Export Citation Fan Zhang, Jun Su. Probing neutron–proton effective mass splitting using nuclear stopping and isospin mix in heavy-ion collisions in GeV energy region.Nuclear Science and Techniques, 2020, 31(8): 77 Citations Altmetrics Abstract: The ramifications of the effective mass splitting on the nuclear stopping and isospin tracer during heavy-ion collisions within the gigaelectron volt energy region are studied using an isospin-dependent quantum molecular dynamics model. Three isotope probes, i.e., a proton, deuteron, and triton, are used to calculate the nuclear stopping. Compared to the mn* > mp* case, the mn* < mp* parameter results in a stronger stopping for protons but a weaker stopping for tritons. The calculations of the isospin tracer show that the mn*> mp* parameter results in a higher isospin mix than the mn* < mp* parameter. The rapidity and impact parameter dependences of the isospin tracer are also studied. A constraining of the effective mass splitting using the free nucleons with high rapidity and in a central rather than peripheral collision is suggested.
2021-04-13 07:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3395530879497528, "perplexity": 7338.452403737018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00298.warc.gz"}
https://physics.stackexchange.com/questions/175235/prove-christoffel-symbol-identity
# Prove Christoffel Symbol Identity In a book I am reading, the following identity is claimed and then "left to the reader to prove." $g_{ij}$ is the metric tensor, and $\Gamma$ is the Christoffel symbol of the second kind with the appropriate indices. $$\partial_k g_{ij} = g_{jl}\Gamma^{l}_{ki}+g_{il}\Gamma^{l}_{kj}$$ I have tried expanding the $g_{ij}$ term using its definition, $g_{ij}=\epsilon_{i}\cdot\epsilon_{j}$, but then I don't really know if a vector identity should be used. Moreover, I'm not even sure if that's even on the right track. Could you possibly give me nudge in the right direction? Do I need to assume the covariant derivative of the metric tensor is zero? At the most basic level, you can just use the definition of the Christoffel symbols in terms of the metric: $\Gamma^i_{jk} = \frac{1}{2}g^{is} (\partial_j g_{sk} + \partial_k g_{sj} - \partial_s g_{jk})$. Plugging this into the right-hand side of your expression will yield the left-hand side. However, one can obtain your expression directly from one of the properties of the Christoffel symbols; namely, that they are the connection coefficients of a metric-compatible affine connection (i.e. they can be used to construct a covariant derivative operator $\nabla_i$ which satisfies $\nabla_i g_{jk} = 0$). Expanding the equation $0 = \nabla_i g_{jk}$ out explicitly, we obtain $0 = \nabla_i g_{jk} = \partial_i g_{jk} - \Gamma^s_{ij} g_{sk} - \Gamma^s_{ik} g_{js}$, which gives $\partial_i g_{jk} = \Gamma^s_{ij} g_{sk} + \Gamma^s_{ik} g_{js}$. This is precisely the equation you're after.
2022-08-17 11:38:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688528180122375, "perplexity": 122.71998828958843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572898.29/warc/CC-MAIN-20220817092402-20220817122402-00238.warc.gz"}
https://plainmath.net/trigonometry/46306-what-does-x-equal-if-sin-x
Carole Yarbrough Answered 2021-12-19 What does x equal, if $\mathrm{sin}\left(x\right)$ Answer & Explanation Mollie Nash Expert 2021-12-20Added 33 answers If $\mathrm{sin}\left(x\right)=0$, we have $x=\pi ±k\pi$ for all k in the st of integers. If $k=0,1,2,3,\dots ,N$, then $\mathrm{sin}\left(x\right)=0$ for $x=0,±\pi ,±2\pi ,\dots ,±2N\pi$ David Clayton Expert 2021-12-21Added 36 answers If sin(x)=0, then x=0° nick1337 Expert 2021-12-28Added 573 answers $x=0°$ or it can be any multiple of $180°$ Do you have a similar question? Recalculate according to your conditions! Ask your question. Get your answer. Let our experts help you. Answer in as fast as 15 minutes. Didn't find what you were looking for?
2023-01-30 13:59:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.190290167927742, "perplexity": 4216.508560986998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00532.warc.gz"}
https://icml.cc/virtual/2021/session/12077
## Probabilistic Methods 2 Moderator: Matthias W Seeger Abstract: Chat is not available. Thu 22 July 5:00 - 5:20 PDT (Oral) ##### Differentiable Particle Filtering via Entropy-Regularized Optimal Transport Adrien Corenflos · James Thornton · George Deligiannidis · Arnaud Doucet Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF necessary to obtain low variance likelihood and states estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications. Thu 22 July 5:20 - 5:25 PDT (Spotlight) ##### DAGs with No Curl: An Efficient DAG Structure Learning Approach Yue Yu · Tian Gao · Naiyu Yin · Qiang Ji Recently directed acyclic graph (DAG) structure learning is formulated as a constrained continuous optimization problem with continuous acyclicity constraints and was solved iteratively through subproblem optimization. To further improve efficiency, we propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly. Specifically, we first show that the set of weighted adjacency matrices of DAGs are equivalent to the set of weighted gradients of graph potential functions, and one may perform structure learning by searching in this equivalent set of DAGs. To instantiate this idea, we propose a new algorithm, DAG-NoCurl, which solves the optimization problem efficiently with a two-step procedure: $1)$ first we find an initial non-acyclic solution to the optimization problem, and $2)$ then we employ the Hodge decomposition of graphs and learn an acyclic graph by projecting the non-acyclic graph to the gradient of a potential function. Experimental studies on benchmark datasets demonstrate that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models, often by more than one order of magnitude. Thu 22 July 5:25 - 5:30 PDT (Spotlight) ##### Generalized Doubly Reparameterized Gradient Estimators Matthias Bauer · Andriy Mnih Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders. Doubly-reparameterized gradients (DReGs) improve on the RT for multi-sample variational bounds by applying reparameterization a second time for an additional reduction in variance. Here, we develop two generalizations of the DReGs estimator and show that they can be used to train conditional and hierarchical VAEs on image modelling tasks more effectively. We first extend the estimator to hierarchical models with several stochastic layers by showing how to treat additional score function terms due to the hierarchical variational posterior. We then generalize DReGs to score functions of arbitrary distributions instead of just those of the sampling distribution, which makes the estimator applicable to the parameters of the prior in addition to those of the posterior. Thu 22 July 5:30 - 5:35 PDT (Spotlight) ##### Whittle Networks: A Deep Likelihood Model for Time Series Zhongjie Yu · Fabrizio Ventola · Kristian Kersting hile probabilistic circuits have been extensively explored for tabular data, less attention has been paid to time series. Here, the goal is to estimate joint densities among the entire time series and, in turn, determining, for instance, conditional independence relations between them. To this end, we propose the first probabilistic circuits (PCs) approach for modeling the joint distribution of multivariate time series, called Whittle sum-product networks (WSPNs). WSPNs leverage the Whittle approximation, casting the likelihood in the frequency domain, and place a complex-valued sum-product network, the most prominent PC, over the frequencies. The conditional independence relations among the time series can then be determined efficiently in the spectral domain. Moreover, WSPNs can naturally be placed into the deep neural learning stack for time series, resulting in Whittle Networks, opening the likelihood toolbox for training deep neural models and inspecting their behaviour. Our experiments show that Whittle Networks can indeed capture complex dependencies between time series and provide a useful measure of uncertainty for neural networks. Thu 22 July 5:35 - 5:40 PDT (Spotlight) ##### On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients Difan Zou · Quanquan Gu Hamiltonian Monte Carlo (HMC), built based on the Hamilton's equation, has been witnessed great success in sampling from high-dimensional posterior distributions. However, it also suffers from computational inefficiency, especially for large training datasets. One common idea to overcome this computational bottleneck is using stochastic gradients, which only queries a mini-batch of training data in each iteration. However, unlike the extensive studies on the convergence analysis of HMC using full gradients, few works focus on establishing the convergence guarantees of stochastic gradient HMC algorithms. In this paper, we propose a general framework for proving the convergence rate of HMC with stochastic gradient estimators, for sampling from strongly log-concave and log-smooth target distributions. We show that the convergence to the target distribution in $2$-Wasserstein distance can be guaranteed as long as the stochastic gradient estimator is unbiased and its variance is upper bounded along the algorithm trajectory. We further apply the proposed framework to analyze the convergence rates of HMC with four standard stochastic gradient estimators: mini-batch stochastic gradient (SG), stochastic variance reduced gradient (SVRG), stochastic average gradient (SAGA), and control variate gradient (CVG). Theoretical results explain the inefficiency of mini-batch SG, and suggest that SVRG and SAGA perform better in the tasks with high-precision requirements, while CVG performs better for large dataset. Experiment results verify our theoretical findings. Thu 22 July 5:40 - 5:45 PDT (Spotlight) ##### Addressing Catastrophic Forgetting in Few-Shot Problems Pauching Yap · Hippolyt Ritter · David Barber Neural networks are known to suffer from catastrophic forgetting when trained on sequential datasets. While there have been numerous attempts to solve this problem in large-scale supervised classification, little has been done to overcome catastrophic forgetting in few-shot classification problems. We demonstrate that the popular gradient-based model-agnostic meta-learning algorithm (MAML) indeed suffers from catastrophic forgetting and introduce a Bayesian online meta-learning framework that tackles this problem. Our framework utilises Bayesian online learning and meta-learning along with Laplace approximation and variational inference to overcome catastrophic forgetting in few-shot classification problems. The experimental evaluations demonstrate that our framework can effectively achieve this goal in comparison with various baselines. As an additional utility, we also demonstrate empirically that our framework is capable of meta-learning on sequentially arriving few-shot tasks from a stationary task distribution. Thu 22 July 5:45 - 5:50 PDT (Spotlight) ##### Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information Willie Neiswanger · Ke Alexander Wang · Stefano Ermon In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations. One example is budget constrained global optimization of f, for which Bayesian optimization is a popular method. Other properties of interest include local optima, level sets, integrals, or graph-structured information induced by f. Often, we can find an algorithm A to compute the desired property, but it may require far more than T queries to execute. Given such an A, and a prior distribution over f, we refer to the problem of inferring the output of A using T evaluations as Bayesian Algorithm Execution (BAX). To tackle this problem, we present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output. Applying this to Dijkstra’s algorithm, for instance, we infer shortest paths in synthetic and real-world graphs with black-box edge costs. Using evolution strategies, we yield variants of Bayesian optimization that target local, rather than global, optima. On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm. Our method is closely connected to other Bayesian optimal experimental design procedures such as entropy search methods and optimal sensor placement using Gaussian processes. Thu 22 July 5:50 - 5:55 PDT (Q&A)
2022-12-09 01:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49943214654922485, "perplexity": 1202.33938631692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00482.warc.gz"}
https://ai.stackexchange.com/tags/monte-carlo-methods/hot?filter=year
Tag Info Hot answers tagged monte-carlo-methods 7 Importance sampling is typically used when the distribution of interest is difficult to sample from - e.g. it could be computationally expensive to draw samples from the distribution - or when the distribution is only known up to a multiplicative constant, such as in Bayesian statistics where it is intractable to calculate the marginal likelihood; that is $$... 4 Famous example is AlphaZero. It doesn't do unrolls, but consults the value network for leaf node evaluation. The paper has the details on how the update is performed afterwards: The leaf s' position is expanded and evaluated only once by the network to gene-rate both prior probabilities and evaluation, (P(s′ , \cdot),V(s ′ )) = f_\theta(s′ ). Each edge ... 3 We estimate a value using sampling on whole episodes, and we take this values to construct the target policy. The crucial bit that you are missing is that there is no single value of V(s) (or Q(s,a)) of a state (or a state action pair). These value functions are always defined with respect to some policy \pi(a|s) and is given the notation of V^{\pi}(... 3 However, from the blogs and texts I read, the equations are expressed in terms of V and NOT Q. Why is that? MC and TD are methods for associating value estimates to time step based on experienced gained in later time steps. It does not matter what kind of value estimate is being associated across time, because all value functions are expressing the same ... 3 In Model Based Reinforcement learning, state and state-action values for all states can be calculated based on the bellman equations. The equations are taken from Andrew Ng's Algorithms for Inverse Reinforcement Learning$$V^{\pi}(s) = R(s) + \gamma \sum_{s'}P(s'|s,a)V^{\pi}(s') \\ Q^{\pi}(s,a) = R(s) + \gamma \sum_{s'}P(s'|s,a)V^{\pi}(s')$$In this setting, ... 2 The discussion uses poor notation, there should be a time index. You obtain a list of tuples (s_t, a_t, r_t, s_{t+1}) and then, for every visit MC, you update$$Q(s_t, a_t) = Q(s_t, a_t) + \alpha (G_t - Q(s_t, a_t))\;; where $G_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$, for each $t$ in the episode. You can see that the returns for each time step are ... 2 You are right that the strict equality $q_\pi(s,\pi(s)) = v_\pi(s)$ is generally true for a deterministic policy $\pi$. The $\geq$ inequality is also correct, of course, and it could be that the authors' intention was to show that $\pi_{k+1}$ and $\pi_k$ satisfy the condition for the policy improvement theorem: Let $\pi$ and $\pi'$ be any pair of ... 2 For every visit MC you create a list for each state. Every time you enter a state you calculate the returns for the episode and append these returns to a list. Once you have done this for all the episode you want to average over you simply calculate the value of a state to be the average of this list of returns for the state. First visit MC is almost the ... 2 So why is constant-$\alpha$ being used? This is because control scenarios are inherently non-stationary with respect to value functions. Decaying alpha comes with a risk that improvements to the policy will occur progressively more slowly, because the impact to changing the policy will be learned slowly. From my understanding, in stationary environments, ... 1 My question is if I should select state_action pairs by theirs immediate reward or should I select them by the episode reward? By the return (sum of all rewards) from the whole episode. A lot of decisions made in "good" episodes do not lead to immediate rewards, but instead transition towards states where better rewards are possible. In retrospect,... 1 I think that this is an intentional subtle detail of the algorithm that ensures the convergence property. The claim in the book is that for any $b$ that provides us with "an infinite number of returns for each pair of state and action" the target policy $\pi$ will converge to optimal. Imagine now that we have such a bad policy $b$ that it never ... 1 External sampling and outcome sampling are two ways of defining the sets $Q_1, \dots, Q_n$. I think your mistake is that you think of the $Q_i$ as fixed and taken as input in these shampling schemes. It is not the case. In external sampling, there is as many sets $Q_{\tau}$ as there are pure strategies for the opponent and the chance player (a pure strategy ... 1 Q1. When expanding the choices at the leaf node L, do I expand all, a few or just one child? Expanding all nodes or expanding just one node are both possible. There are different advantages and disadvantages. The obvious disadvantage of immediately expanding them all is that your memory usage will grow more quickly. I suppose that the primary advantage is ... 1 There are a few different ways to improve on your simple heuristic approach, but they mostly resolve to these three things: Find a better heuristic. This could be done by calculating probabilities of results, or running loads of training simulations and somehow tuning the heuristic function. Look-ahead search/planning. There are many possible search ... 1 I am a novice in Reinforcement Learning and I have been struggling for several monthes about the TD()'s logic. Initially it seemed to me that it was a successfull purely heuristic formula without any theoretical foundation. But nowadays, I understand it simply as a mean's calculation, using the recurrent formula that states that when you a have a mean and a ... Only top voted, non community-wiki answers of a minimum length are eligible
2021-09-24 14:31:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7312411665916443, "perplexity": 517.4365384195214}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00014.warc.gz"}
http://mathhelpforum.com/discrete-math/219685-regarding-combination-question.html
1. ## Regarding Combination Question Good Evening, Hey guys I have a question here: Two drugs are to be tested on 60 lab mice. Each mouse receives either one drug or acts as a control. Drug A is to be tested on 22 of the mice. Drug B is to be tested on another 22 of the mice. The remaining 16 mice are to act as a control group.How many different ways can the tests performed? the answer provided by my teacher is : 60!/( 22! 22! 16! ) but may i know why?....can someone explain it? Thanks very much 2. ## Re: Regarding Combination Question Two drugs are to be tested on 60 lab mice. Each mouse receives either one drug or acts as a control. Drug A is to be tested on 22 of the mice. Drug B is to be tested on another 22 of the mice. The remaining 16 mice are to act as a control group. How many different ways can the tests performed? the answer provided by my teacher is : 60!/( 22! 22! 16! ) but may i know why?....can someone explain it? If you have the string $AAAAABBBBBNNN$ there are $\frac{13!}{(5!)^2(3!)}$ ways to rearrange that string. In your case there are 22A's, 22B's & 16N's. 3. ## Re: Regarding Combination Question It is hard to know how to justify that answer without knowing what you DO know about combinatorics. Can we assume that you know that n! is defined as n(n-1)(n-2)...(3)(2)(1)? First, do you know that there are 60! ways of arranging 60 different things? There are 60 ways of deciding which to put first, then 59 left so there are 59 ways to choose the second. There are 60(59) ways to choose the first two. That leaves 58 different objects so 58 ways of choosing the third, 57 ways to choose the fourth, etc. When you get down to only two objects left, there are, of course, 2 ways to choose the next to last object and then just 1 way to choose the last. That is, there are 60(59)(58)(57)...(3)(2)(1)= 60! ways to arrange 60 different things. But not all 60 "objects" are different here. Writing "A" for drug A, "B" for drug B, and "N" for neither, as Plato suggests, there are 22 "A"s, 22 "B"s, and 60- 22- 22= 16 "N"s. If those were all different, if, for example, we labeled the "A"s as "A1", "A2", etc, and the same for the "B"s and "N"s, there would be 60! ways of arranging them. But there are 22! ways of rearranging just the "A"s so we don't want to count strings in which only the different "A"s are swapped as different. We need to divide by 22! to discount that. The same thing happens for the 22 "B"s and 16 "N"s so we need to divide by 22! again and by 16!. That is where the $\frac{60!}{22! 22! 14!}$ comes from. (As a simple example, if there were 5 letters, two "A"s, two "B"s, and one "N", if we label them $A_1A_2B_1B_2N$ there would be 5 distinct "objects" and so 5!= 5(4)(3)(2)(1)= 120 ways of arranging them. But, in fact, we don't want to treat, say, $A_1A_2B_1B_2N$ as different from $A_2A_1B_1B_2N$. Since, for any given arrangement of the "B"s and "N" we don't want to count the 2!= 2 ways of rearranging only the "A"s as different, we divide by that. Similarly for the two "B"s. We don't really need to "divide by 1" to allow for the single "N" but for completeness we can write this as $\frac{5!}{2!2!1!}= \frac{120}{4}= 30$. There are 30 different ways to arrange "AABBN".
2017-04-23 10:10:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7210010886192322, "perplexity": 713.3789865238036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00459-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=aa&paperid=1323&option_lang=eng
RUS  ENG JOURNALS   PEOPLE   ORGANISATIONS   CONFERENCES   SEMINARS   VIDEO LIBRARY   PACKAGE AMSBIB General information Latest issue Archive Impact factor Subscription Search papers Search references RSS Latest issue Current issues Archive issues What is RSS Algebra i Analiz: Year: Volume: Issue: Page: Find Algebra i Analiz, 2013, Volume 25, Issue 2, Pages 63–74 (Mi aa1323) Research Papers Uniform estimates near the initial state for solutions of the two-phase parabolic problem D. E. Apushkinskaya, N. N. Uraltseva Abstract: Optimal regularity near the initial state is established for weak solutions of the two-phase parabolic obstacle problem. The approach is sufficiently general to allow the initial data to belong to the class $C^{1,1}$. Keywords: two-phase parabolic obstacle problem, free boundary, optimal regularity. Full text: PDF file (252 kB) References: PDF file   HTML file English version: St. Petersburg Mathematical Journal, 2014, 25:2, 195–203 Bibliographic databases: Language: Citation: D. E. Apushkinskaya, N. N. Uraltseva, “Uniform estimates near the initial state for solutions of the two-phase parabolic problem”, Algebra i Analiz, 25:2 (2013), 63–74; St. Petersburg Math. J., 25:2 (2014), 195–203 Citation in format AMSBIB \Bibitem{ApuUra13} \by D.~E.~Apushkinskaya, N.~N.~Uraltseva \paper Uniform estimates near the initial state for solutions of the two-phase parabolic problem \jour Algebra i Analiz \yr 2013 \vol 25 \issue 2 \pages 63--74 \mathnet{http://mi.mathnet.ru/aa1323} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3114850} \zmath{https://zbmath.org/?q=an:1304.35144} \elib{http://elibrary.ru/item.asp?id=20730197} \transl \jour St. Petersburg Math. J. \yr 2014 \vol 25 \issue 2 \pages 195--203 \crossref{https://doi.org/10.1090/S1061-0022-2014-01285-X} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000343074000003} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-84924352195} • http://mi.mathnet.ru/eng/aa1323 • http://mi.mathnet.ru/eng/aa/v25/i2/p63 SHARE: Citing articles on Google Scholar: Russian citations, English citations Related articles on Google Scholar: Russian articles, English articles This publication is cited in the following articles: 1. J. I. Díaz, T. Mingazzini, “Free boundaries touching the boundary of the domain for some reaction-diffusion problems”, Nonlinear Anal., 119 (2015), 275–294 2. D. E. Apushkinskaya, N. N. Uraltseva, “On regularity properties of solutions to the hysteresis-type problem”, Interface Free Bound., 17:1 (2015), 93–115 3. Curran M., Gurevich P., Tikhomirov S., “Recent Advances in Reaction-Diffusion Equations with Non-ideal Relays”, Control of Self-Organizing Nonlinear Systems, Understanding Complex Systems, eds. Scholl E., Klapp SH., Hovel P., Springer-Verlag Berlin, 2016, 211–234 4. D. Apushkinskaya, Free boundary problems. Regularity properties near the fixed boundary, Lect. Notes Math., 2218, Springer, Cham, 2018, xvii+146 pp. • Number of views: This page: 226 Full text: 48 References: 38 First page: 20
2020-05-28 07:16:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2777450978755951, "perplexity": 11285.039984556128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00529.warc.gz"}
http://math.stackexchange.com/questions/360504/do-groebner-bases-give-the-smallest-generating-set-for-ideals
# Do Groebner bases give the smallest generating set for Ideals? Given a Reduced Groebner Basis $(f_1,\ldots,f_n)$ for an ideal $I$, can there be another basis $(g_1,\ldots,g_m)$ for $I$ where $m<n$? I've been reading through Cox, but can't seem to find an answer. - Is the ideal $(x z - y^2, x^2 y - z^2, x^3 - y z)$ a counterexample? – Zhen Lin Apr 13 '13 at 16:24
2016-06-30 21:41:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6252897381782532, "perplexity": 253.9803588465881}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00197-ip-10-164-35-72.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/382709/complete-sufficient-statistic-and-mvue-estimator
# Complete sufficient statistic and MVUE estimator The Lehmann–Scheffé theorem from Wikipedia The theorem states that any estimator which is unbiased for a given unknown quantity and which is based on only a complete, sufficient statistic (and on no other data-derived values) is the unique best unbiased estimator of that quantity. ... Formally, if $T$ is a complete sufficient statistic for $θ$ and $E(g(T)) = τ(θ)$ then $g(T)$ is the minimum-variance unbiased estimator (MVUE) of $τ(θ)$. I was wondering why the last sentence is correct? My question boils down to: If $T$ is a complete sufficient statistic (unbiased?) for $θ$, is $g(T))$ a complete sufficient unbiased estimator of $E(g(T)) = τ(θ)$? Thanks and regards! -
2013-12-21 03:45:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7691757678985596, "perplexity": 290.53401507284605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345774929/warc/CC-MAIN-20131218054934-00082-ip-10-33-133-15.ec2.internal.warc.gz"}
https://hackage.haskell.org/package/cassava-0.5.0.0/candidate/docs/Data-Csv.html
cassava-0.5.0.0: A CSV parsing and encoding library Data.Csv Description This module implements encoding and decoding of CSV data. The implementation is RFC 4180 compliant, with the following extensions: • Empty lines are ignored. • Non-escaped fields may contain any characters except double-quotes, commas, carriage returns, and newlines. • Escaped fields may contain any characters (but double-quotes need to be escaped). Synopsis # Usage examples >>> encode [("John" :: Text, 27), ("Jane", 28)] "John,27\r\nJane,28\r\n" Since string literals are overloaded we have to supply a type signature as the compiler couldn't deduce which string type (i.e. String or Text) we want to use. In most cases type inference will infer the type from the context and you can omit type signatures. >>> decode NoHeader "John,27\r\nJane,28\r\n" :: Either String (Vector (Text, Int)) Right [("John",27),("Jane",28)] We pass NoHeader as the first argument to indicate that the CSV input data isn't preceded by a header. In practice, the return type of decode rarely needs to be given, as it can often be inferred from the context. ## Encoding and decoding custom data types To encode and decode your own data types you need to defined instances of either ToRecord and FromRecord or ToNamedRecord and FromNamedRecord. The former is used for encoding/decoding using the column index and the latter using the column name. There are two ways to to define these instances, either by manually defining them or by using GHC generics to derive them automatically. ### Index-based record conversion Derived: {-# LANGUAGE DeriveGeneric #-} data Person = Person { name :: !Text , salary :: !Int } deriving Generic instance FromRecord Person instance ToRecord Person Manually defined: data Person = Person { name :: !Text , salary :: !Int } instance FromRecord Person where parseRecord v | length v == 2 = Person <$> v .! 0 <*> v .! 1 | otherwise = mzero instance ToRecord Person where toRecord (Person name age) = record [ toField name, toField age] We can now use e.g. encode and decode to encode and decode our data type. Encoding: >>> encode [Person ("John" :: Text) 27] "John,27\r\n" Decoding: >>> decode NoHeader "John,27\r\n" :: Either String (Vector Person) Right [Person {name = "John", salary = 27}] ### Name-based record conversion Derived: {-# LANGUAGE DeriveGeneric #-} data Person = Person { name :: !Text , salary :: !Int } deriving Generic instance FromNamedRecord Person instance ToNamedRecord Person instance DefaultOrdered Person Manually defined: data Person = Person { name :: !Text , salary :: !Int } instance FromNamedRecord Person where parseNamedRecord m = Person <$> m .: "name" <*> m .: "salary" instance ToNamedRecord Person where toNamedRecord (Person name salary) = namedRecord [ "name" .= name, "salary" .= salary] instance DefaultOrdered Person where We can now use e.g. encodeDefaultOrderedByName (or encodeByName with an explicit header order) and decodeByName to encode and decode our data type. Encoding: >>> encodeDefaultOrderedByName [Person ("John" :: Text) 27] "name,salary\r\nJohn,27\r\n" Decoding: >>> decodeByName "name,salary\r\nJohn,27\r\n" :: Either String (Header, Vector Person) Right (["name","salary"],[Person {name = "John", salary = 27}]) # Treating CSV data as opaque byte strings Sometimes you might want to work with a CSV file which contents is unknown to you. For example, you might want remove the second column of a file without knowing anything about its content. To parse a CSV file to a generic representation, just convert each record to a Vector ByteString value, like so: >>> decode NoHeader "John,27\r\nJane,28\r\n" :: Either String (Vector (Vector ByteString)) Right [["John","27"],["Jane","28"]] As the example output above shows, all the fields are returned as uninterpreted ByteString values. # Custom type conversions for fields Most of the time the existing FromField and ToField instances do what you want. However, if you need to parse a different format (e.g. hex) but use a type (e.g. Int) for which there's already a FromField instance, you need to use a newtype. Example: newtype Hex = Hex Int parseHex :: ByteString -> Parser Int parseHex = ... instance FromField Hex where parseField s = Hex <$> parseHex s Other than giving an explicit type signature, you can pattern match on the newtype constructor to indicate which type conversion you want to have the library use: case decode NoHeader "0xff,0xaa\r\n0x11,0x22\r\n" of Left err -> putStrLn err Right v -> forM_ v$ \ (Hex val1, Hex val2) -> print (val1, val2) If a field might be in one several different formats, you can use a newtype to normalize the result: newtype HexOrDecimal = HexOrDecimal Int instance FromField DefaultToZero where parseField s = case runParser (parseField s :: Parser Hex) of Left err -> HexOrDecimal <$> parseField s -- Uses Int instance Right n -> pure$ HexOrDecimal n You can use the unit type, (), to ignore a column. The parseField method for () doesn't look at the Field and thus always decodes successfully. Note that it lacks a corresponding ToField instance. Example: Left err -> putStrLn err Right n -> pure $DefaultToZero n # Encoding and decoding Encoding and decoding is a two step process. To encode a value, it is first converted to a generic representation, using either ToRecord or ToNamedRecord. The generic representation is then encoded as CSV data. To decode a value the process is reversed and either FromRecord or FromNamedRecord is used instead. Both these steps are combined in the encode and decode functions. data HasHeader Source # Is the CSV data preceded by a header? Constructors HasHeader The CSV data is preceded by a header NoHeader The CSV data is not preceded by a header Arguments :: FromRecord a => HasHeader Data contains header that should be skipped -> ByteString CSV data -> Either String (Vector a) Efficiently deserialize CSV records from a lazy ByteString. If this fails due to incomplete or invalid input, Left msg is returned. Equivalent to decodeWith defaultDecodeOptions. Arguments :: FromNamedRecord a => ByteString CSV data -> Either String (Header, Vector a) Efficiently deserialize CSV records from a lazy ByteString. If this fails due to incomplete or invalid input, Left msg is returned. The data is assumed to be preceeded by a header. Equivalent to decodeByNameWith defaultDecodeOptions. encode :: ToRecord a => [a] -> ByteString Source # Efficiently serialize CSV records as a lazy ByteString. encodeByName :: ToNamedRecord a => Header -> [a] -> ByteString Source # Efficiently serialize CSV records as a lazy ByteString. The header is written before any records and dictates the field order. encodeDefaultOrderedByName :: (DefaultOrdered a, ToNamedRecord a) => [a] -> ByteString Source # Like encodeByName, but header and field order is dictated by the header method. class DefaultOrdered a where Source # A type that has a default field order when converted to CSV. This class lets you specify how to get the headers to use for a record type that's an instance of ToNamedRecord. To derive an instance, the type is required to only have one constructor and that constructor must have named fields (also known as selectors) for all fields. Right: data Foo = Foo { foo :: !Int } Wrong: data Bar = Bar Int If you try to derive an instance using GHC generics and your type doesn't have named fields, you will get an error along the lines of: <interactive>:9:10: No instance for (DefaultOrdered (M1 S NoSelector (K1 R Char) ())) arising from a use of ‘Data.Csv.Conversion.$gdmheader’ In the expression: Data.Csv.Conversion.$gdmheader In an equation for ‘header’: header = Data.Csv.Conversion.$gdmheader In the instance declaration for ‘DefaultOrdered Foo’ Methods The header order for this record. Should include the names used in the NamedRecord returned by toNamedRecord. Pass undefined as the argument, together with a type annotation e.g. headerOrder (undefined :: MyRecord). The header order for this record. Should include the names used in the NamedRecord returned by toNamedRecord. Pass undefined as the argument, together with a type annotation e.g. headerOrder (undefined :: MyRecord). ## Encoding and decoding options These functions can be used to control how data is encoded and decoded. For example, they can be used to encode data in a tab-separated format instead of in a comma-separated format. Options that controls how data is decoded. These options can be used to e.g. decode tab-separated data instead of comma-separated data. To avoid having your program stop compiling when new fields are added to DecodeOptions, create option records by overriding values in defaultDecodeOptions. Example: myOptions = defaultDecodeOptions { decDelimiter = fromIntegral (ord '\t') } Constructors DecodeOptions FieldsdecDelimiter :: !Word8Field delimiter. Instances Source # Methods Source # MethodsshowList :: [DecodeOptions] -> ShowS # Decoding options for parsing CSV files. Arguments :: FromRecord a => DecodeOptions Decoding options -> HasHeader Data contains header that should be skipped -> ByteString CSV data -> Either String (Vector a) Like decode, but lets you customize how the CSV data is parsed. Arguments :: FromNamedRecord a => DecodeOptions Decoding options -> ByteString CSV data -> Either String (Header, Vector a) Like decodeByName, but lets you customize how the CSV data is parsed. Options that controls how data is encoded. These options can be used to e.g. encode data in a tab-separated format instead of in a comma-separated format. To avoid having your program stop compiling when new fields are added to EncodeOptions, create option records by overriding values in defaultEncodeOptions. Example: myOptions = defaultEncodeOptions { encDelimiter = fromIntegral (ord '\t') } N.B. The encDelimiter must not be the quote character (i.e. ") or one of the record separator characters (i.e. \n or \r). Constructors EncodeOptions FieldsencDelimiter :: !Word8Field delimiter.encUseCrLf :: !BoolRecord separator selection. True for CRLF (\r\n) and False for LF (\n).encIncludeHeader :: !BoolInclude a header row when encoding ToNamedRecord instances.encQuoting :: !QuotingWhat kind of quoting should be applied to text fields. Instances Source # Methods Source # MethodsshowList :: [EncodeOptions] -> ShowS # data Quoting Source # Should quoting be applied to fields, and at which level? Constructors QuoteNone No quotes. QuoteMinimal Quotes according to RFC 4180. QuoteAll Always quote. Instances Source # Methods(==) :: Quoting -> Quoting -> Bool #(/=) :: Quoting -> Quoting -> Bool # Source # MethodsshowList :: [Quoting] -> ShowS # Encoding options for CSV files. encodeWith :: ToRecord a => EncodeOptions -> [a] -> ByteString Source # Like encode, but lets you customize how the CSV data is encoded. encodeByNameWith :: ToNamedRecord a => EncodeOptions -> Header -> [a] -> ByteString Source # Like encodeByName, but lets you customize how the CSV data is encoded. encodeDefaultOrderedByNameWith :: forall a. (DefaultOrdered a, ToNamedRecord a) => EncodeOptions -> [a] -> ByteString Source # Like encodeDefaultOrderedByNameWith, but lets you customize how the CSV data is encoded. # Core CSV types CSV data represented as a Haskell vector of vector of bytestrings. A record corresponds to a single line in a CSV file. A single field within a record. The header corresponds to the first line a CSV file. Not all CSV files have a header. A header has one or more names, describing the data in the column following the name. A record corresponds to a single line in a CSV file, indexed by the column name rather than the column index. # Type conversion There are two ways to convert CSV records to and from and user-defined data types: index-based conversion and name-based conversion. ## Index-based record conversion Index-based conversion lets you convert CSV records to and from user-defined data types by referring to a field's position (its index) in the record. The first column in a CSV file is given index 0, the second index 1, and so on. class FromRecord a where Source # A type that can be converted from a single CSV record, with the possibility of failure. When writing an instance, use empty, mzero, or fail to make a conversion fail, e.g. if a Record has the wrong number of columns. Given this example data: John,56 Jane,55 here's an example type and instance: data Person = Person { name :: !Text, age :: !Int } instance FromRecord Person where parseRecord v | length v == 2 = Person <$> v .! 0 <*> v .! 1 | otherwise = mzero Methods parseRecord :: (Generic a, GFromRecord (Rep a)) => Record -> Parser a Source # Instances FromField a => FromRecord [a] Source # Methods FromField a => FromRecord (Only a) Source # Methods FromField a => FromRecord (Vector a) Source # Methods (FromField a, Unbox a) => FromRecord (Vector a) Source # Methods (FromField a, FromField b) => FromRecord (a, b) Source # MethodsparseRecord :: Record -> Parser (a, b) Source # (FromField a, FromField b, FromField c) => FromRecord (a, b, c) Source # MethodsparseRecord :: Record -> Parser (a, b, c) Source # (FromField a, FromField b, FromField c, FromField d) => FromRecord (a, b, c, d) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d) Source # (FromField a, FromField b, FromField c, FromField d, FromField e) => FromRecord (a, b, c, d, e) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f) => FromRecord (a, b, c, d, e, f) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g) => FromRecord (a, b, c, d, e, f, g) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h) => FromRecord (a, b, c, d, e, f, g, h) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i) => FromRecord (a, b, c, d, e, f, g, h, i) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j) => FromRecord (a, b, c, d, e, f, g, h, i, j) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j, FromField k) => FromRecord (a, b, c, d, e, f, g, h, i, j, k) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j, k) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j, FromField k, FromField l) => FromRecord (a, b, c, d, e, f, g, h, i, j, k, l) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j, k, l) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j, FromField k, FromField l, FromField m) => FromRecord (a, b, c, d, e, f, g, h, i, j, k, l, m) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j, k, l, m) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j, FromField k, FromField l, FromField m, FromField n) => FromRecord (a, b, c, d, e, f, g, h, i, j, k, l, m, n) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j, k, l, m, n) Source # (FromField a, FromField b, FromField c, FromField d, FromField e, FromField f, FromField g, FromField h, FromField i, FromField j, FromField k, FromField l, FromField m, FromField n, FromField o) => FromRecord (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) Source # MethodsparseRecord :: Record -> Parser (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) Source # data Parser a Source # Conversion of a field to a value might fail e.g. if the field is malformed. This possibility is captured by the Parser type, which lets you compose several field conversions together in such a way that if any of them fail, the whole record conversion fails. Instances Source # Methods(>>=) :: Parser a -> (a -> Parser b) -> Parser b #(>>) :: Parser a -> Parser b -> Parser b #return :: a -> Parser a #fail :: String -> Parser a # Source # Methodsfmap :: (a -> b) -> Parser a -> Parser b #(<$) :: a -> Parser b -> Parser a # Source # Methodsfail :: String -> Parser a # Source # Methodspure :: a -> Parser a #(<*>) :: Parser (a -> b) -> Parser a -> Parser b #(*>) :: Parser a -> Parser b -> Parser b #(<*) :: Parser a -> Parser b -> Parser a # Source # Methodsempty :: Parser a #(<|>) :: Parser a -> Parser a -> Parser a #some :: Parser a -> Parser [a] #many :: Parser a -> Parser [a] # Source # Methodsmzero :: Parser a #mplus :: Parser a -> Parser a -> Parser a # Source # Methods(<>) :: Parser a -> Parser a -> Parser a #sconcat :: NonEmpty (Parser a) -> Parser a #stimes :: Integral b => b -> Parser a -> Parser a # Monoid (Parser a) Source # Methodsmappend :: Parser a -> Parser a -> Parser a #mconcat :: [Parser a] -> Parser a # Run a Parser, returning either Left errMsg or Right result. Forces the value in the Left or Right constructors to weak head normal form. You most likely won't need to use this function directly, but it's included for completeness. index :: FromField a => Record -> Int -> Parser a Source # Retrieve the nth field in the given record. The result is empty if the value cannot be converted to the desired type. Raises an exception if the index is out of bounds. index is a simple convenience function that is equivalent to parseField (v ! idx). If you're certain that the index is not out of bounds, using unsafeIndex is somewhat faster. (.!) :: FromField a => Record -> Int -> Parser a infixl 9 Source # Alias for index. unsafeIndex :: FromField a => Record -> Int -> Parser a Source # Like index but without bounds checking. class ToRecord a where Source # A type that can be converted to a single CSV record. An example type and instance: data Person = Person { name :: !Text, age :: !Int } instance ToRecord Person where toRecord (Person name age) = record [ toField name, toField age] Outputs data on this form: John,56 Jane,55 Methods toRecord :: a -> Record Source # Convert a value to a record. toRecord :: (Generic a, GToRecord (Rep a) Field) => a -> Record Source # Convert a value to a record. Instances ToField a => ToRecord [a] Source # MethodstoRecord :: [a] -> Record Source # ToField a => ToRecord (Only a) Source # Methods ToField a => ToRecord (Vector a) Source # Methods (ToField a, Unbox a) => ToRecord (Vector a) Source # Methods (ToField a, ToField b) => ToRecord (a, b) Source # MethodstoRecord :: (a, b) -> Record Source # (ToField a, ToField b, ToField c) => ToRecord (a, b, c) Source # MethodstoRecord :: (a, b, c) -> Record Source # (ToField a, ToField b, ToField c, ToField d) => ToRecord (a, b, c, d) Source # MethodstoRecord :: (a, b, c, d) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e) => ToRecord (a, b, c, d, e) Source # MethodstoRecord :: (a, b, c, d, e) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f) => ToRecord (a, b, c, d, e, f) Source # MethodstoRecord :: (a, b, c, d, e, f) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g) => ToRecord (a, b, c, d, e, f, g) Source # MethodstoRecord :: (a, b, c, d, e, f, g) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h) => ToRecord (a, b, c, d, e, f, g, h) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i) => ToRecord (a, b, c, d, e, f, g, h, i) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j) => ToRecord (a, b, c, d, e, f, g, h, i, j) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j, ToField k) => ToRecord (a, b, c, d, e, f, g, h, i, j, k) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j, k) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j, ToField k, ToField l) => ToRecord (a, b, c, d, e, f, g, h, i, j, k, l) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j, k, l) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j, ToField k, ToField l, ToField m) => ToRecord (a, b, c, d, e, f, g, h, i, j, k, l, m) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j, k, l, m) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j, ToField k, ToField l, ToField m, ToField n) => ToRecord (a, b, c, d, e, f, g, h, i, j, k, l, m, n) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j, k, l, m, n) -> Record Source # (ToField a, ToField b, ToField c, ToField d, ToField e, ToField f, ToField g, ToField h, ToField i, ToField j, ToField k, ToField l, ToField m, ToField n, ToField o) => ToRecord (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) Source # MethodstoRecord :: (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) -> Record Source # record :: [ByteString] -> Record Source # Construct a record from a list of ByteStrings. Use toField to convert values to ByteStrings for use with record. newtype Only a :: * -> * # The 1-tuple type or single-value "collection". This type is structurally equivalent to the Identity type, but its intent is more about serving as the anonymous 1-tuple type missing from Haskell for attaching typeclass instances. Parameter usage example: encodeSomething (Only (42::Int)) Result usage example: xs <- decodeSomething forM_ xs $\(Only id) -> {- ... -} Constructors Only FieldsfromOnly :: a Instances Methodsfmap :: (a -> b) -> Only a -> Only b #(<$) :: a -> Only b -> Only a # Eq a => Eq (Only a) Methods(==) :: Only a -> Only a -> Bool #(/=) :: Only a -> Only a -> Bool # Data a => Data (Only a) Methodsgfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> Only a -> c (Only a) #gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c (Only a) #toConstr :: Only a -> Constr #dataTypeOf :: Only a -> DataType #dataCast1 :: Typeable (* -> *) t => (forall d. Data d => c (t d)) -> Maybe (c (Only a)) #dataCast2 :: Typeable (* -> * -> *) t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c (Only a)) #gmapT :: (forall b. Data b => b -> b) -> Only a -> Only a #gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> Only a -> r #gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> Only a -> r #gmapQ :: (forall d. Data d => d -> u) -> Only a -> [u] #gmapQi :: Int -> (forall d. Data d => d -> u) -> Only a -> u #gmapM :: Monad m => (forall d. Data d => d -> m d) -> Only a -> m (Only a) #gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> Only a -> m (Only a) #gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> Only a -> m (Only a) # Ord a => Ord (Only a) Methodscompare :: Only a -> Only a -> Ordering #(<) :: Only a -> Only a -> Bool #(<=) :: Only a -> Only a -> Bool #(>) :: Only a -> Only a -> Bool #(>=) :: Only a -> Only a -> Bool #max :: Only a -> Only a -> Only a #min :: Only a -> Only a -> Only a # Read a => Read (Only a) MethodsreadsPrec :: Int -> ReadS (Only a) #readList :: ReadS [Only a] # Show a => Show (Only a) MethodsshowsPrec :: Int -> Only a -> ShowS #show :: Only a -> String #showList :: [Only a] -> ShowS # Generic (Only a) Associated Typestype Rep (Only a) :: * -> * # Methodsfrom :: Only a -> Rep (Only a) x #to :: Rep (Only a) x -> Only a # NFData a => NFData (Only a) Methodsrnf :: Only a -> () # ToField a => ToRecord (Only a) Source # Methods FromField a => FromRecord (Only a) Source # Methods type Rep (Only a) type Rep (Only a) = D1 (MetaData "Only" "Data.Tuple.Only" "Only-0.1-1dkiXHtbc8zGqo2Q6b73I6" True) (C1 (MetaCons "Only" PrefixI True) (S1 (MetaSel (Just Symbol "fromOnly") NoSourceUnpackedness NoSourceStrictness DecidedLazy) (Rec0 a))) ## Name-based record conversion Name-based conversion lets you convert CSV records to and from user-defined data types by referring to a field's name. The names of the fields are defined by the first line in the file, also known as the header. Name-based conversion is more robust to changes in the file structure e.g. to reording or addition of columns, but can be a bit slower. class FromNamedRecord a where Source # A type that can be converted from a single CSV record, with the possibility of failure. When writing an instance, use empty, mzero, or fail to make a conversion fail, e.g. if a Record has the wrong number of columns. Given this example data: name,age John,56 Jane,55 here's an example type and instance: data Person = Person { name :: !Text, age :: !Int } instance FromNamedRecord Person where parseNamedRecord m = Person <\$> m .: "name" <*> m .: "age" Note the use of the OverloadedStrings language extension which enables ByteString values to be written as string literals. Methods parseNamedRecord :: (Generic a, GFromNamedRecord (Rep a)) => NamedRecord -> Parser a Source # Instances (FromField a, FromField b, Ord a) => FromNamedRecord (Map a b) Source # Methods (Eq a, FromField a, FromField b, Hashable a) => FromNamedRecord (HashMap a b) Source # Methods Retrieve a field in the given record by name. The result is empty if the field is missing or if the value cannot be converted to the desired type. Alias for lookup. class ToNamedRecord a where Source # A type that can be converted to a single CSV record. An example type and instance: data Person = Person { name :: !Text, age :: !Int } instance ToNamedRecord Person where toNamedRecord (Person name age) = namedRecord [ "name" .= name, "age" .= age] Methods Convert a value to a named record. toNamedRecord :: (Generic a, GToRecord (Rep a) (ByteString, ByteString)) => a -> NamedRecord Source # Convert a value to a named record. Instances (ToField a, ToField b, Ord a) => ToNamedRecord (Map a b) Source # Methods (Eq a, ToField a, ToField b, Hashable a) => ToNamedRecord (HashMap a b) Source # Methods Construct a named record from a list of name-value ByteString pairs. Use .= to construct such a pair from a name and a value. namedField :: ToField a => ByteString -> a -> (ByteString, ByteString) Source # Construct a pair from a name and a value. For use with namedRecord. Construct a header from a list of ByteStrings. ## Field conversion The FromField and ToField classes define how to convert between Fields and values you care about (e.g. Ints). Most of the time you don't need to write your own instances as the standard ones cover most use cases. class FromField a where Source # A type that can be converted from a single CSV field, with the possibility of failure. When writing an instance, use empty, mzero, or fail to make a conversion fail, e.g. if a Field can't be converted to the given type. Example type and instance: data Color = Red | Green | Blue instance FromField Color where parseField s | s == "R" = pure Red | s == "G" = pure Green | s == "B" = pure Blue | otherwise = mzero Minimal complete definition parseField Methods Instances Source # Assumes UTF-8 encoding. Methods Source # Accepts same syntax as rational. Ignores whitespace. Methods Source # Accepts same syntax as rational. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts a signed decimal number. Ignores whitespace. Methods Source # Accepts an unsigned decimal number. Ignores whitespace. Methods Source # Accepts an unsigned decimal number. Ignores whitespace. Methods Source # Accepts an unsigned decimal number. Ignores whitespace. Methods Source # Accepts an unsigned decimal number. Ignores whitespace. Methods Source # Accepts an unsigned decimal number. Ignores whitespace. Methods Source # Ignores the Field. Always succeeds. Methods Source # Methods Source # Methods Source # Assumes UTF-8 encoding. Fails on invalid byte sequences. Methods Source # Assumes UTF-8 encoding. Fails on invalid byte sequences. Methods Source # Methods Source # Methods Source # Assumes UTF-8 encoding. Fails on invalid byte sequences. Methods FromField a => FromField (Maybe a) Source # Nothing if the Field is empty, Just otherwise. Methods Source # Left field if conversion failed, Right otherwise. Methods class ToField a where Source # A type that can be converted to a single CSV field. Example type and instance: data Color = Red | Green | Blue instance ToField Color where toField Red = "R" toField Green = "G" toField Blue = "B" Minimal complete definition toField Methods toField :: a -> Field Source # Instances Source # Uses UTF-8 encoding. Methods Source # Uses decimal notation or scientific notation, depending on the number. Methods Source # Uses decimal notation or scientific notation, depending on the number. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding with optional sign. Methods Source # Uses decimal encoding. Methods Source # Uses decimal encoding. Methods Source # Uses decimal encoding. Methods Source # Uses decimal encoding. Methods Source # Uses decimal encoding. Methods Source # Methods Source # Methods Source # Uses UTF-8 encoding. Methods Source # Uses UTF-8 encoding. Methods Source # Methods Source # Methods Source # Uses UTF-8 encoding. MethodstoField :: [Char] -> Field Source # ToField a => ToField (Maybe a) Source # Nothing is encoded as an empty field. MethodstoField :: Maybe a -> Field Source #
2021-07-30 22:36:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20826254785060883, "perplexity": 13646.946885066336}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00031.warc.gz"}
https://www.alanshawn.com/tech/2022/05/16/matplotlib-latex-guide.html
# A Better Guide On Producing High-Quality Figures in LaTeX Using matplotlib In a previous post, I briefly introduced using matplotlib to generate vector graphics for scientific papers. I think some of the steps in that post is unclear. In this newer version, I am producing a better guide with more concrete instructions and some additional updates. ## Prerequisites • Load matplotlib and set the parameters import matplotlib.pyplot as plt plt.rcParams['svg.fonttype'] = 'none' from IPython.display import set_matplotlib_formats set_matplotlib_formats('svg') # # in newer versions of IPython, use # import matplotlib_inline.backend_inline # matplotlib_inline.backend_inline.set_matplotlib_formats('svg') In Jupyter Lab, the figure can be viewed in the browser and saved as SVG using the Shift+Right Click approach descried in the previous post. In other environments, the figure can be saved as SVG using plt.savefig() with the .svg extension. from tol_colors import tol_cmap, tol_cset ### Preamble of LaTeX Examples \documentclass{article} \usepackage[T1]{fontenc} \usepackage{newtxtext,newtxmath} \usepackage{xcolor} \usepackage{svg} \renewcommand{\ttdefault}{qcr} ## Plotting Using Color-Blind Safe Color Templates According to David Nichols, about 1 in 20 people suffers from color blindness. People with such conditions see reduced color spaces, which may create issues when they are reading scientific figures. Therefore, it is encouraged to use color-blind safe color templates in papers (e.g., the one provided by Paul Tol). tol_colors provides the following qualitative color maps: tol_colors provides the following diverging color maps (circled colors are for bad data): tol_colors provides the following sequential color maps (circled colors are for bad data): ## Workflow To import figures from matplotlib to LaTeX, the standard steps are: 1. Load matplotlib and tol_colors using the code snippet provided above 2. Plot the data and save the image in SVG format by either using the Shift+Right Click approach or using plt.savefig(); copy the SVG image to your LaTeX working directory 3. In LaTeX document, import the svg package in the preamble: \usepackage{svg}; next, load the SVG using \includesvg{svg_image_name.svg} Note that the svg package requires shell access and installation of InkScape on the system. My experience is that the svg package cannot work on Windows due to the fact that the command line interface of InkScape on Windows is different from other operating systems. On systems without shell access or proper InkScape CLI, one can convert the SVG image into LaTeX-friendly PDF files manually in InkScape GUI by clicking “File->Save As” and choosing PDF as the output format. Then in the pop-up dialog, select “Omit text in PDF and create LaTeX file”. This will generate two files with extensions .pdf and .pdf_tex, respectively. Copy both files to the LaTeX working directory, and import the figure using \input{svg_image_name.pdf_tex} (the width can be controlled using \resizebox). Fortunately, the svg package works correctly on Overleaf. ## Formatting Issues Sometimes, there are specific formatting issues that may affect the quality of the LaTeX output. Here, we discuss ways to fix these common issues. ### Math equations To render math equations in native LaTeX fonts, all the math delimiters ($) in matploblib should be replaced by \$. For example, if we generate and save the plot as SVG using the following code (assume numpy is imported as np): x = np.linspace(-5, 5, 100) y = np.sinc(x) plt.plot(x, y) plt.title(r'The plot of \$\operatorname{sinc}(x)=\frac{\sin \pi x}{\pi x}\$') plt.savefig('sinc_plot.svg') The resulting SVG plot will look like: We can import this figure into LaTeX using the code below: \begin{figure}[ht] \centering \includesvg[width=0.8\linewidth]{sinc_plot.svg} \end{figure} In the compiled LaTeX document, it looks like: Notice that the title is close to the plot border, which can be fixed by increasing the title padding. A downside of this approach is that matplotlib is no longer able to deduce the width of the math equation. This can be problematic in some scenarios. For example, if we generate a figure using the following Python code: x = np.linspace(-5, 5, 100) y = np.sinc(x) plt.plot(x, y, label=r'\$\frac{\sin \pi x}{\pi x}\$') plt.legend() plt.savefig('sinc_plot_legend.svg') When the figure is rendered in LaTeX, it looks like: Clearly the legend box is too wide, and the height of the legend box is too small. To fix the width issue, we declare a command whose character width is approximately the same as the rendered equation (e.g., \xxxx) and use the command in the legend instead of the math equation. To fix the height issue, we can increase the legend item spacing or increase the border padding value. After fine-tuning, the Python code is: x = np.linspace(-5, 5, 100) y = np.sinc(x) plt.plot(x, y, label=r'\xxxx') plt.savefig('sinc_plot_legend_new.svg') Now, we need to change the corresponding LaTeX code to import the SVG: \begin{figure}[ht] \centering \newcommand{\xxxx}{$\frac{\sin \pi x}{\pi x}$} \includesvg[width=0.8\linewidth]{sinc_plot_legend_new.svg} \end{figure} The corresponding result looks much better: ### Text formatting We can use LaTeX commands to better control the formatting of text in the figure. For example, consider the following heatmap generated using Python: cmap=tol_cmap('YlOrBr') z = np.random.uniform(0, 1, size=(15, 15)) plt.imshow(z, cmap=cmap) plt.colorbar() for i in range(z.shape[0]): for j in range(z.shape[1]): plt.text(i, j, '{:0.1f}'.format(z[i,j]), ha='center', va='center') plt.savefig('heatmap.svg') The SVG file looks like: It may look fine in the web browser, but in LaTeX, it looks really off: A good practice in general is to wrap the text information with a short LaTeX command (e.g., \z) in the Python code: cmap=tol_cmap('YlOrBr') z = np.random.uniform(0, 1, size=(15, 15)) plt.imshow(z, cmap=cmap) plt.colorbar() for i in range(z.shape[0]): for j in range(z.shape[1]): plt.text(i, j, '\\z{{{:0.1f}}}'.format(z[i,j]), ha='center', va='center') plt.savefig('heatmap.svg') Then we can have find-grained control over the text formatting in LaTeX by defining \z: \begin{figure}[ht] \centering \newcommand{\z}[1]{ \raisebox{1pt}{ \hspace*{-2.1mm} \fontsize{3}{3} \selectfont \ttfamily \bfseries \color{cyan} #1 } } \includesvg[width=0.8\linewidth]{heatmap_new.svg} \end{figure} The resulting visual quality is greatly improved: Note that the font sizes of all text in the SVG can be changed globally by using the pretex key. For example, we can shrink all text sizes using: \includesvg[width=0.8\linewidth,pretex=\scriptsize]{heatmap_new.svg} Be creative with the use of LaTeX commands, and the plotting/formatting process can be greatly simplified. Of course, when no Python/LaTeX based approach works, one can always modify the SVG manually with InkScape to acquire a desirable result.
2023-03-29 03:08:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284855484962463, "perplexity": 3460.1528041631823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00209.warc.gz"}
https://www.physicsforums.com/threads/hard-inequalities-question.241640/
# Homework Help: Hard inequalities question 1. Jun 23, 2008 ### nokia8650 Last edited by a moderator: May 3, 2017 2. Jun 23, 2008 ### HallsofIvy If (f) is the part you are having trouble with, then presumably you have already proved that $2x_n^2- (2n-1)x- (n+1)= 0$ (part (e)). Now you want to find the smallest n such that $x_n< n+ 0.05$. You could, for example, solve that using the quadrative formula and compare the solutions to n+ 0.05. Have you calculated some values of $x_n$? What are $x_0$ $x_1$, etc.? 3. Jun 23, 2008 ### nokia8650 Thanks for the help. Im still confused as to how the markscheme answers have come about which I attached above. Thanks
2018-06-19 18:24:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5534619092941284, "perplexity": 1837.2972073382273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863109.60/warc/CC-MAIN-20180619173519-20180619193519-00290.warc.gz"}
https://math.stackexchange.com/questions/1178123/setting-up-a-differential-equation-to-find-time-constant-for-rc-circuit
Setting up a differential equation to find time constant for RC-circuit Problem: Calculate the time constant for charging the capacitor in the circuit shown in the figure. What is the maximum charge on the capacitor? Attempt at solution: Let current $I_1$ flow from the emf into $R_1$, let current $I_2$ flow from the top junction into $R_2$, and let $I_3$ flow in the upper right corner, charging the capacitor. Applying Kirchhoff's junction rule to the top junction (call it $a$) we have \begin{align*} I_1 = I_2 + I_3. \ \ \ (i) \end{align*} From Kirchhoff's loop rules we see that \begin{cases} -I_1 R_1 - I_2 R_2 + \epsilon = 0 \ \ \ (ii) \\ -I_1 R_1 - \frac{Q}{C} + \epsilon = 0 \ \ \ (iii) \end{cases} Furthermore, we have $I_3 = \frac{dQ}{dt}$ because the third current (in the upper right corner) is charging the capacitor. I now want to set up a differential equation, involving only $Q, \frac{dQ}{dt}$ and possibly some other constants, so can I solve it for the time constant. If I differentiate (iii), then I get \begin{align*}-\frac{dI_1}{dt}R_1 - \frac{dQ}{dt} \frac{1}{C} = 0, \end{align*} which can be rewritten as \begin{align*} \frac{I_3}{C} = -\frac{dI_1}{dt} R_1. \end{align*} I'm not sure how to proceed. Any help please? Edit (adding further progress): Differentiating all equation gives us \begin{cases} \frac{dI_1}{dt} = \frac{dI_2}{dt} + \frac{dI_3}{dt} \\ -\frac{dI_1}{dt} R_1 - \frac{dI_2}{dt} R_2 = 0 \\ -\frac{dI_1}{dt} R_1 - \frac{I_3}{c} = 0. \end{cases} From the second equation we have \begin{align*} -\frac{dI_2}{dt} R_2 = \frac{dI_1}{dt} R_1. \end{align*} Substituting the first equation in the right hand side gives \begin{align*} -\frac{dI_2}{dt} R_2 = (\frac{dI_2}{dt} + \frac{dI_3}{dt})R_1. \end{align*} Distribution and bringing the $I_2$ terms to the left side gives \begin{align*} -\frac{dI_2}{dt}(R_2 + R_1) = \frac{dI_3}{dt} R_1. \end{align*} Substituting the first equation again for $-dI_2/dt$, we get \begin{align*} (-\frac{dI_1}{dt} + \frac{dI_3}{dt})(R_2 + R_1) = \frac{dI_3}{dt} R_1. \end{align*} From the third equation we see that $-\frac{I_3}{R_1 C} = \frac{dI_1}{dt}$. Hence we plug that in and get: \begin{align*} (\frac{I_3}{R_1 C} + \frac{dI_3}{dt})(R_2 + R_1) = \frac{dI_3}{dt} R_1. \end{align*} Now we got everything in function of $I_3$, which is what we wanted. Arranging terms to get \begin{align*} \frac{dt(R_2 + R_1)}{R_1} = \frac{dI_3}{I_3/R_1C + dI_3/dt}. \end{align*} Now I'm not sure how to proceed. I want a differential equation of the form \begin{align*} \frac{dI_3}{I_3} = \text{(some terms come here)} \ \cdot dt \end{align*}, so I can integrate. Any help please? • Can you give the title of your textbook ? – Tony Piccolo Mar 6 '15 at 11:50 • It's not from a textbook. It's from a handout the professor gave us. Why you ask? – Kamil Mar 6 '15 at 12:18 • It is a good rule to know the context. – Tony Piccolo Mar 6 '15 at 12:19 HINT: You can write, $$I_1 = I_2 + I_3$$ in terms of $$\frac{\varepsilon}{R_1}-\frac{V_c}{R_1}=\frac{V_c}{R_2}+C\frac{dV_c}{dt}$$ This is a first order linear diff. equation: $$\frac{dV_c}{dt}+\frac 1 C (\frac{R_1+R_2}{R_1R_2})V_c=\frac{\varepsilon}{R_1C}$$ And the charge is: $$q=CV_c$$ • Can you check my progress please. I edited at the bottom. – Kamil Mar 6 '15 at 17:55 Using (i) eliminate $I_1$ from (ii) and (iii), then using the results to eliminate $I_2$ from one or other of the resulting equations which will leave you with an equation in $I_3$ and $Q$ (and of course $R_1$, $R_2$ and $C$). As $\frac{dQ}{dt}=I_3$ this is a linear first order ODE with constant coefficients, which is what you are aiming for. Assuming the algebra is right (and I would not do that if I were you): \begin{aligned}-I_2\,R_2+\left( -I_3-I_2\right) \,R_1+\varepsilon&=0\ \ ...(ii')\\ \left( -I_3-I_2\right) \,R_1-\frac{Q}{C}+\varepsilon&=0\ \ ...(iii')\end{aligned} so rearranging each of these into the form $I_2= ...$ and equating them: \begin{aligned}I_2&=-\frac{I_3\,R_1-\varepsilon}{R_2+R_1} \end{aligned}=-\frac{C\,I_3\,R_1+Q-\varepsilon\,C}{C\,R_1} etc.. • Can you show me how to eliminate $I_2$ in the last step? I've tried what you said, but I'm still always left with an $I_2$ factor. – Kamil Mar 6 '15 at 12:45
2019-07-23 09:19:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9964418411254883, "perplexity": 682.7974437982739}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529175.83/warc/CC-MAIN-20190723085031-20190723111031-00181.warc.gz"}
http://math.stackexchange.com/questions/421026/electric-field-of-a-symetrically-charged-ball-surface/421055
Electric field of a symetrically charged ball surface I've been trying to solve this for some time, to no avail I must say. I am to calculate function of intensity of electric field on $z$ axis. The problem is: We have a charged ball surface with radius R. The charge density is a function $k\cos(\theta)$, where k is a constant and theta is the angle of deviation from $z$ axis, so the charge density is $k$ at the closest point of the ball and $-k$ at the furthest point. There's a hint that this can be calculated by Dirac's delta function, but I think using it isn't necessary. Thanks in advance to anyone who tries to tackle this problem. - I would start from the basic relation between charge and electric field: $$E = \iint_{\Omega} \frac{dq}{r^2}$$ where $\Omega$ is the solid angle subtended by a point on the $z$ axis, $dq$ is a point charge on the spherical surface, and $r$ is the distance between the point charge and the point on the $z$ axis. NB: I am considering only $z>R$ here. Note that $dq = \sigma(\Omega) R^2 d\Omega$, where $\sigma$ is the local charge density and $d\Omega$ is an element of solid angle. We have $\sigma(\Omega) = k \cos{\theta}$. Further, by considering the geometry, $$r^2=R^2+z^2-2 R z \cos{\theta}$$ We may then write the electric field as $$E(z) = 2 \pi k R^2 \int_0^{\pi} d\theta \frac{\sin{\theta} \cos{\theta}}{R^2+z^2-2 R z \cos{\theta}}$$ You should be able to evaluate this integral using the substitution $y=\cos{\theta}$: $$E(z) = \frac{\pi k R}{z} \int_{-1}^1 dy \frac{2 R z y}{R^2+z^2-2 R z y}$$ I will leave further details for the reader. The result I get is $$E(z) = \frac{\pi k R}{z} \left [1-\frac{R^2+z^2}{R z} \log{\left ( \frac{z+R}{z-R}\right)} \right ]$$ Note this is valid only for $z>R$. - "For z<R the field is zero because the charge is on the (outer) surface." No. That only simplification only applies to uniform charge distributions. What you have here is a dipole and it has non-zero field in the interior. –  dmckee Jun 15 '13 at 16:21 @dmckee: i remove the controversial line until I doublecheck. The rest should be right. –  Ron Gordon Jun 15 '13 at 16:46 Ron, What I wrote above is imprecise. Spherically symmetric distributions have net zero field inside. This is basically a consequence of Guass' law plus symmetry. When I wrote the above I was thinking in particular of distributions on a spherical shell which are only spherically symmetric when they are uniform. –  dmckee Jun 15 '13 at 17:12 @dmckee: as I said, we'll get to the bottom of it, but in the meantime I just took the sentence that may not be correct out until I verify myself. –  Ron Gordon Jun 15 '13 at 17:26 You may want to use Gauss theorem -
2015-05-28 16:57:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808296918869019, "perplexity": 190.52096805704025}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00037-ip-10-180-206-219.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/857720/finding-a-root-of-a-function-via-rolles-theorem
# Finding a root of a function via Rolle's theorem Consider the function $f(t)=a(1-t)\cos(at)-\sin (at)$, where $a\in\mathbb R$. To show that it has a root in the unit interval I am urged to integrate $f$ and apply Rolle's Theorem. Attempt: $$\int a(1-t)\cos(at)-\sin (at)dt=a\int\cos(at)dt-a\int t\cos(at)dt-\int\sin (at)dt$$ $$=\sin (at)-at\sin(at)+a\int\cos(at)dt+\frac1a\cos(at)=\sin (at)-at\sin(at)+\sin(at)+\frac1a\cos(at)$$ $$=(2-at)\sin(at)+\frac1a\cos(at)$$ Therefore, $F(x)=\int_0^xf(t)dt=(2-ax)\sin(ax)+\frac1a\cos(ax)-\frac1a\cos(a)$. Evaluating at $0$ and $1$, we get $F(0)=(1-\cos(a))\frac1a$ and $F(1)=(2-a)\sin(a)$. So $F(1)\not=F(0)$ and we cannot apply Rolle's Theorem. What did I do wrong? Does $f$ have no real root? Edit: $$\int a(1-t)\cos(at)-\sin (at)dt=a\int\cos(at)dt-a\int t\cos(at)dt-\int\sin (at)dt=$$ $$\sin (at)-t\sin(at)+\int\sin(at)dt-\int\sin (at)dt$$ - Your integration is wrong it should be: $$F(t)=(1-t)\sin at+\lambda\\ F(0)=(1-0)\sin0+\lambda=\lambda\\ F(1)=(1-1)\sin a +\lambda=\lambda\\ \text{Thus, }F(0)=F(1)$$ If there is a root, there is some $t$ such that $$f(t)=a(1-t)\cos(at)-\sin (at)=0$$ Changing variable $y=at$ gives $$f(y)=(a-y)\cos(y)-\sin(y)=0$$ Assuming $\cos(y) \neq 0$, then $$\cos(y) \Big((a-y)-\tan(y)\Big)=0$$ The last term is the intersection of the straight line $z=a-y$ and the curve $z=\tan(y)$ which always exists.
2015-08-01 06:30:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531983137130737, "perplexity": 69.57675414905661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988511.77/warc/CC-MAIN-20150728002308-00154-ip-10-236-191-2.ec2.internal.warc.gz"}
https://gmatclub.com/forum/when-integer-m-is-divided-by-13-the-quotient-is-q-and-the-r-77545.html?sort_by_oldest=true
It is currently 18 Oct 2017, 11:47 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track Your Progress every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # When integer m is divided by 13, the quotient is q and the r new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: ### Hide Tags Director Joined: 29 Aug 2005 Posts: 855 Kudos [?]: 488 [4], given: 7 When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 06 Apr 2009, 13:08 4 This post received KUDOS 21 This post was BOOKMARKED 00:00 Difficulty: 45% (medium) Question Stats: 71% (01:27) correct 29% (01:53) wrong based on 708 sessions ### HideShow timer Statistics When integer m is divided by 13, the quotient is q and the remainder is 2. When m is divided by 17, the remainder is also 2. What is the remainder when q is divided by 17? A. 0 B. 2 C. 4 D. 9 E. 13 [Reveal] Spoiler: OA Last edited by Bunuel on 29 Jan 2013, 04:16, edited 1 time in total. Renamed the topic and added OA. Kudos [?]: 488 [4], given: 7 Senior Manager Joined: 01 Mar 2009 Posts: 367 Kudos [?]: 96 [0], given: 24 Location: PDX Re: Remainder [#permalink] ### Show Tags 06 Apr 2009, 13:29 1 This post was BOOKMARKED So q is 17 because if m = 13q + 2 and m is also 17(z)+2 So when 17 is divided by 17 the remainder is 0. -pradeep _________________ In the land of the night, the chariot of the sun is drawn by the grateful dead Kudos [?]: 96 [0], given: 24 GMAT Tutor Joined: 24 Jun 2008 Posts: 1339 Kudos [?]: 1952 [9], given: 6 Re: Remainder [#permalink] ### Show Tags 06 Apr 2009, 16:40 9 This post received KUDOS Expert's post 4 This post was BOOKMARKED botirvoy wrote: When integer m is divided by 13, the quotient is q and the remainder is 2. When m is divided by 17, the remainder is also 2. What is the remainder when q is divided by 17? A. 0 B. 2 C. 4 D. 9 E. 13 Detailed explanations please. From the definition of quotients and remainders, we have: m = 13q + 2 m = 17a + 2 (note that the quotient is different in the second case). So we have 13q + 2 = 17a + 2 13q = 17a and since this equation involves only integers, the primes that divide the right side must divide the left, and vice versa. That is, q must be divisible by 17, and a must be divisible by 13. If q is divisible by 17, the remainder is zero when you divide q by 17. Of course, if you can see that q = 17 is one possible value for q here, you can use that to get the answer of zero quickly as well. _________________ GMAT Tutor in Toronto If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com Kudos [?]: 1952 [9], given: 6 Manager Joined: 02 Mar 2009 Posts: 134 Kudos [?]: 55 [2], given: 0 Re: Remainder [#permalink] ### Show Tags 06 Apr 2009, 23:59 2 This post received KUDOS Got 0 as well but am I right in thinking that 0 is another possible value of q? Kudos [?]: 55 [2], given: 0 GMAT Tutor Joined: 24 Jun 2008 Posts: 1339 Kudos [?]: 1952 [1], given: 6 Re: Remainder [#permalink] ### Show Tags 07 Apr 2009, 05:20 1 This post received KUDOS Expert's post shkusira wrote: Got 0 as well but am I right in thinking that 0 is another possible value of q? Yes, perfectly correct - and that makes the question quite easy! _________________ GMAT Tutor in Toronto If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com Kudos [?]: 1952 [1], given: 6 Director Joined: 23 May 2008 Posts: 801 Kudos [?]: 83 [0], given: 0 Re: Remainder [#permalink] ### Show Tags 05 May 2009, 00:22 m-13q=2 m-17s=2 13q=17s r=0 A Kudos [?]: 83 [0], given: 0 Manager Joined: 14 Nov 2008 Posts: 195 Kudos [?]: 129 [0], given: 3 Schools: Stanford...Wait, I will come!!! Re: Remainder [#permalink] ### Show Tags 06 May 2009, 00:45 IanStewart wrote: shkusira wrote: Got 0 as well but am I right in thinking that 0 is another possible value of q? Yes, perfectly correct - and that makes the question quite easy! Thanks. Just to add, if q=0, it is divisible by 17.. and hence would not leave any remainder. Kudos [?]: 129 [0], given: 3 Intern Joined: 10 May 2010 Posts: 1 Kudos [?]: [0], given: 0 Re: Remainder [#permalink] ### Show Tags 10 May 2010, 23:18 We can solve this using a simple equation and deriving the values:- m = 13q + 2 - 1 m = 17p + 2 - 2 => 13q + = 17p + 2 => 13q = 17p => q = 17p/3 Here, 17p is equal to (m-2) from eqn 2. Therefore, q = (m-2)/3 Now substituting the values in eqn 1:- => m = 13(m-2)/3 + 2 From here, m = 2. Now using M, the value of Q can be derived from eqn 1. => 2 = 13q + 2 => q = 0 Now if we divide Q by any number henceforth, the remainder would always be 0. Kudos [?]: [0], given: 0 Manager Joined: 05 Feb 2007 Posts: 139 Kudos [?]: 8 [0], given: 7 Re: Remainder [#permalink] ### Show Tags 01 Jun 2011, 08:32 So, I understand how to get to here m=(13q)+2 m=(17x)+2 13q=17x But, once I reduce the equation to 13q=17x, I am unable to make any deductions...can someone provide a clear explanation on how to use algebra to derive the values when we still have variables? Kudos [?]: 8 [0], given: 7 VP Status: There is always something new !! Affiliations: PMI,QAI Global,eXampleCG Joined: 08 May 2009 Posts: 1285 Kudos [?]: 281 [1], given: 10 Re: Remainder [#permalink] ### Show Tags 13 Jun 2011, 01:17 1 This post received KUDOS 13q = 17p lcm = 13 * 17. thus q = 17. hence remainder = 0 _________________ Visit -- http://www.sustainable-sphere.com/ Promote Green Business,Sustainable Living and Green Earth !! Kudos [?]: 281 [1], given: 10 GMAT Tutor Joined: 24 Jun 2008 Posts: 1339 Kudos [?]: 1952 [2], given: 6 Re: Remainder [#permalink] ### Show Tags 13 Jun 2011, 10:26 2 This post received KUDOS Expert's post 1 This post was BOOKMARKED amit2k9 wrote: 13q = 17p lcm = 13 * 17. thus q = 17. hence remainder = 0 Careful here; if 13q = 17p, all you can say is that q is a multiple of 17, and that p is a multiple of 13. There is no way to find the actual value of q or p, and you certainly cannot be sure that q=17. It could be that q=34 and p=26, for example. In general, if you see an equation like 13q = 17p, and if q and p are integers, then 13q and 17p are *the same number*. So they must have the same divisors. Since 17 is a divisor of 17p, it must be a divisor of 13q, so q must be divisible by 17. Alternatively you can rewrite the equation as p = 13q/17, and since p is an integer, 13q/17 must be an integer, from which again we have that 13q is divisible by 17, so q is divisible by 17. _________________ GMAT Tutor in Toronto If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com Kudos [?]: 1952 [2], given: 6 Intern Joined: 18 Dec 2012 Posts: 2 Kudos [?]: [0], given: 25 Schools: Said Re: Remainder [#permalink] ### Show Tags 29 Jan 2013, 03:08 Can't I just say q=0, so (1) m=13q+2 => m=13*0+2 <=> m=2 (2) m=17k+2 => 2=17k+2 <=> k=0 --> 0 divided by 17 will obviously result in a reminder of 0 Kudos [?]: [0], given: 25 Intern Joined: 10 Dec 2012 Posts: 48 Kudos [?]: 11 [0], given: 52 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 11 Aug 2015, 12:18 Hi stewart, Can this be the alternative approach? m=13q+2 2=13(0)+2 15=13(1)+2 28=13(2)+2 and m=17p+2 2=17(0)+2 19=17(1)+2 36=17(2)+2 Thus we know p=q and that's 0, so 0/17 = 0. Kudos [?]: 11 [0], given: 52 Math Forum Moderator Joined: 20 Mar 2014 Posts: 2676 Kudos [?]: 1723 [0], given: 792 Concentration: Finance, Strategy Schools: Kellogg '18 (M) GMAT 1: 750 Q49 V44 GPA: 3.7 WE: Engineering (Aerospace and Defense) Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 11 Aug 2015, 13:22 jaspreets wrote: Hi stewart, Can this be the alternative approach? m=13q+2 2=13(0)+2 15=13(1)+2 28=13(2)+2 and m=17p+2 2=17(0)+2 19=17(1)+2 36=17(2)+2 Thus we know p=q and that's 0, so 0/17 = 0. Yes, you are correct in your approach. This question can be solved either by algebra as shown by Ian above or by plugging a few values as you have done. The trick here is to realise that you are finding a number that gives a remainder of 2 with both 13 and 17. _________________ Thursday with Ron updated list as of July 1st, 2015: http://gmatclub.com/forum/consolidated-thursday-with-ron-list-for-all-the-sections-201006.html#p1544515 Rules for Posting in Quant Forums: http://gmatclub.com/forum/rules-for-posting-please-read-this-before-posting-133935.html Writing Mathematical Formulae in your posts: http://gmatclub.com/forum/rules-for-posting-please-read-this-before-posting-133935.html#p1096628 GMATCLUB Math Book: http://gmatclub.com/forum/gmat-math-book-in-downloadable-pdf-format-130609.html Everything Related to Inequalities: http://gmatclub.com/forum/inequalities-made-easy-206653.html#p1582891 Inequalities tips: http://gmatclub.com/forum/inequalities-tips-and-hints-175001.html Debrief, 650 to 750: http://gmatclub.com/forum/650-to-750-a-10-month-journey-to-the-score-203190.html Kudos [?]: 1723 [0], given: 792 Intern Joined: 10 Dec 2012 Posts: 48 Kudos [?]: 11 [0], given: 52 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 11 Aug 2015, 14:49 Thank You Engr2012 for the rapid response. Appreciated Kudos [?]: 11 [0], given: 52 Manager Joined: 04 Jun 2015 Posts: 86 Kudos [?]: 53 [0], given: 2 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 29 Jan 2017, 01:25 seofah wrote: When integer m is divided by 13, the quotient is q and the remainder is 2. When m is divided by 17, the remainder is also 2. What is the remainder when q is divided by 17? A. 0 B. 2 C. 4 D. 9 E. 13 Here's my take on this- We are given $$\frac{m}{13}$$ = q (rem 2) and $$\frac{m}{17}$$ = _ r 2 One possible value for m that satisfies both the conditions is $$m = 2$$. When $$m = 2$$, $$q$$ will be $$0 (\frac{2}{13}$$ gives quotient $$q= 0$$ and rem r= 2), which follows $$\frac{0}{17} = 0$$ _________________ Sortem sternit fortem! Kudos [?]: 53 [0], given: 2 Director Joined: 07 Dec 2014 Posts: 812 Kudos [?]: 247 [0], given: 12 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 29 Jan 2017, 09:46 seofah wrote: When integer m is divided by 13, the quotient is q and the remainder is 2. When m is divided by 17, the remainder is also 2. What is the remainder when q is divided by 17? A. 0 B. 2 C. 4 D. 9 E. 13 m=2+(13*17)=223 q=223/13=17 17/17 gives remainder of 0 A Kudos [?]: 247 [0], given: 12 Manager Joined: 06 Jul 2014 Posts: 105 Kudos [?]: 39 [0], given: 179 Location: India Concentration: Finance, Entrepreneurship GMAT 1: 640 Q48 V30 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 07 Oct 2017, 01:13 How do we know that when m is divided by 17 the quotient is different? Kudos [?]: 39 [0], given: 179 Math Expert Joined: 02 Sep 2009 Posts: 41888 Kudos [?]: 128752 [0], given: 12182 Re: When integer m is divided by 13, the quotient is q and the r [#permalink] ### Show Tags 07 Oct 2017, 01:23 Bounce1987 wrote: How do we know that when m is divided by 17 the quotient is different? You'd get the same answer if you set them equal: 13q + 2 = 17q + 2 --> q = 0 --> remainder when dividing 0 by 17 is 0. So, the quotients might or might not be the same but in any case you should consider more general case and denote them with different variables. _________________ Kudos [?]: 128752 [0], given: 12182 Re: When integer m is divided by 13, the quotient is q and the r   [#permalink] 07 Oct 2017, 01:23 Display posts from previous: Sort by # When integer m is divided by 13, the quotient is q and the r new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2017-10-18 18:47:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5371353030204773, "perplexity": 4640.573540081378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823067.51/warc/CC-MAIN-20171018180631-20171018200631-00770.warc.gz"}
http://docs.qinfer.org/en/latest/_modules/qinfer/resamplers.html
Source code for qinfer.resamplers #!/usr/bin/python # -*- coding: utf-8 -*- ## # resamplers.py: Implementations of various resampling algorithms. ## # © 2017, Chris Ferrie (csferrie@gmail.com) and # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # 3. Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE # POSSIBILITY OF SUCH DAMAGE. ## ## FEATURES ################################################################### from __future__ import absolute_import from __future__ import division ## ALL ######################################################################## # We use __all__ to restrict what globals are visible to external modules. __all__ = [ 'Resampler', 'LiuWestResampler' ] ## IMPORTS #################################################################### import numpy as np import scipy.linalg as la import warnings from ._due import due, BibTeX from .utils import outer_product, particle_meanfn, particle_covariance_mtx, sqrtm_psd from abc import ABCMeta, abstractmethod, abstractproperty from future.utils import with_metaclass import qinfer.clustering from qinfer._exceptions import ResamplerWarning, ResamplerError from qinfer.distributions import ParticleDistribution ## LOGGING #################################################################### import logging logger = logging.getLogger(__name__) ## CLASSES #################################################################### [docs]class Resampler(with_metaclass(ABCMeta, object)): [docs] @abstractmethod def __call__(self, model, particle_dist, n_particles=None, precomputed_mean=None, precomputed_cov=None ): """ Resample the particles given by particle_weights and particle_locations, drawing n_particles new particles. :param Model model: Model from which the particles are drawn, used to define the valid region for resampling. :param ParticleDistribution paricle_dist: The particle distribution to be resampled. :param int n_particles: Number of new particles to draw, or None to draw the same number as the original distribution. :param np.ndarray precomputed_mean: Mean of the original distribution, or None if this should be computed by the resampler. :param np.ndarray precomputed_cov: Covariance of the original distribution, or None if this should be computed by the resampler. :return ParticleDistribution: Resampled particle distribution """ class ClusteringResampler(object): r""" Creates a resampler that breaks the particles into clusters, then applies a secondary resampling algorithm to each cluster independently. :param secondary_resampler: Resampling algorithm to be applied to each cluster. If None, defaults to LiuWestResampler(). """ def __init__(self, eps=0.5, secondary_resampler=None, min_particles=5, metric='euclidean', weighted=False, w_pow=0.5, quiet=True): warnings.warn("This class is deprecated, and will be removed in a future version.", DeprecationWarning) self.secondary_resampler = ( secondary_resampler if secondary_resampler is not None else LiuWestResampler() ) self.eps = eps self.quiet = quiet self.min_particles = min_particles self.metric = metric self.weighted = weighted self.w_pow = w_pow ## METHODS ## def __call__(self, model, particle_weights, particle_locations): ## TODO: docstring. # Allocate new arrays to hold the weights and locations. new_weights = np.empty(particle_weights.shape) new_locs = np.empty(particle_locations.shape) # Loop over clusters, calling the secondary resampler for each. # The loop should include -1 if noise was found. for cluster_label, cluster_particles in clustering.particle_clusters( particle_locations, particle_weights, eps=self.eps, min_particles=self.min_particles, metric=self.metric, weighted=self.weighted, w_pow=self.w_pow, quiet=self.quiet ): # If we are resampling the NOISE label, we must use the global moments. if cluster_label == clustering.NOISE: extra_args = { "precomputed_mean": particle_meanfn(particle_weights, particle_locations, lambda x: x), "precomputed_cov": particle_covariance_mtx(particle_weights, particle_locations) } else: extra_args = {} # Pass the particles in that cluster to the secondary resampler # and record the new weights and locations. cluster_ws, cluster_locs = self.secondary_resampler(model, particle_weights[cluster_particles], particle_locations[cluster_particles], **extra_args ) # Renormalize the weights of each resampled particle by the total # weight of the cluster to which it belongs. cluster_ws /= np.sum(particle_weights[cluster_particles]) # Store the updated cluster. new_weights[cluster_particles] = cluster_ws new_locs[cluster_particles] = cluster_locs # Assert that we have not introduced any NaNs or Infs by resampling. assert np.all(np.logical_not(np.logical_or( np.isnan(new_locs), np.isinf(new_locs) ))) return new_weights, new_locs [docs]class LiuWestResampler(Resampler): r""" Creates a resampler instance that applies the algorithm of [LW01]_ to redistribute the particles. :param float a: Value of the parameter :math:a of the [LW01]_ algorithm to use in resampling. :param float h: Value of the parameter :math:h to use, or None to use that corresponding to :math:a. :param int maxiter: Maximum number of times to attempt to resample within the space of valid models before giving up. :param bool debug: Because the resampler can generate large amounts of debug information, nothing is output to the logger, even at DEBUG level, unless this flag is True. :param bool postselect: If True, ensures that models are valid by postselecting. :param float zero_cov_comp: Amount of covariance to be added to every parameter during resampling in the case that the estimated covariance has zero norm. :param callable kernel: Callable function kernel(*shape) that returns samples from a resampling distribution with mean 0 and variance 1. :param int default_n_particles: The default number of particles to draw during a resampling action. If None, the number of redrawn particles redrawn will be equal to the number of particles given. The value of default_n_particles can be overridden by any integer value of n_particles given to __call__. .. warning:: The [LW01]_ algorithm preserves the first two moments of the distribution (in expectation over the random choices made by the resampler) if and only if :math:a^2 + h^2 = 1, as is set by the h=None keyword argument. """ @due.dcite( BibTeX(""" @incollection{liu_combined_2001, title = {Combined Parameter and State Estimation in Simulation-Based Filtering}, timestamp = {2013-01-28T21:57:35Z}, urldate = {2013-01-28}, booktitle = {Sequential {Monte Carlo} Methods in Practice}, publisher = {{Springer-Verlag, New York}}, author = {Liu, Jane and West, Mike}, editor = {De Freitas and Gordon, NJ}, year = {2001} } """), description="Liu-West resampler", tags=['implementation'] ) def __init__(self, a=0.98, h=None, maxiter=1000, debug=False, postselect=True, zero_cov_comp=1e-10, default_n_particles=None, kernel=np.random.randn ): self._default_n_particles = default_n_particles self.a = a # Implicitly calls the property setter below to set _h. if h is not None: self._override_h = True self._h = h self._maxiter = maxiter self._debug = debug self._postselect = postselect self._zero_cov_comp = zero_cov_comp self._kernel = kernel _override_h = False ## PROPERTIES ## @property def a(self): return self._a @a.setter def a(self, new_a): self._a = new_a if not self._override_h: self._h = np.sqrt(1 - new_a**2) ## METHODS ## [docs] def __call__(self, model, particle_dist, n_particles=None, precomputed_mean=None, precomputed_cov=None ): """ Resample the particles according to algorithm given in [LW01]_. """ # Possibly recompute moments, if not provided. if precomputed_mean is None: mean = particle_dist.est_mean() else: mean = precomputed_mean if precomputed_cov is None: cov = particle_dist.est_covariance_mtx() else: cov = precomputed_cov if n_particles is None: if self._default_n_particles is None: n_particles = particle_dist.n_particles else: n_particles = self._default_n_particles # parameters in the Liu and West algorithm a, h = self._a, self._h if la.norm(cov, 'fro') == 0: # The norm of the square root of S is literally zero, such that # the error estimated in the next step will not make sense. # We fix that by adding to the covariance a tiny bit of the # identity. warnings.warn( "Covariance has zero norm; adding in small covariance in " "resampler. Consider increasing n_particles to improve covariance " "estimates.", ResamplerWarning ) cov = self._zero_cov_comp * np.eye(cov.shape[0]) S, S_err = sqrtm_psd(cov) if not np.isfinite(S_err): raise ResamplerError( "Infinite error in computing the square root of the " "covariance matrix. Check that n_ess is not too small.") S = np.real(h * S) # Give shorter names to weights, locations, and nr. of random variables w = particle_dist.particle_weights l = particle_dist.particle_locations n_rvs = particle_dist.n_rvs new_locs = np.empty((n_particles, n_rvs)) cumsum_weights = np.cumsum(w) idxs_to_resample = np.arange(n_particles, dtype=int) # Loop as long as there are any particles left to resample. n_iters = 0 # Draw j with probability self.particle_weights[j]. # We do this by drawing random variates uniformly on the interval # [0, 1], then see where they belong in the CDF. js = cumsum_weights.searchsorted( np.random.random((idxs_to_resample.size,)), side='right' ) # Set mu_i to a x_j + (1 - a) mu. # FIXME This should use particle_dist.particle_mean mus = a * l[js,:] + (1 - a) * mean while idxs_to_resample.size and n_iters < self._maxiter: # Keep track of how many iterations we used. n_iters += 1 # Draw x_i from N(mu_i, S). new_locs[idxs_to_resample, :] = mus + np.dot(S, self._kernel(n_rvs, mus.shape[0])).T # Now we remove from the list any valid models. # We write it out in a longer form than is strictly necessary so # that we can validate assertions as we go. This is helpful for # catching models that may not hold to the expected postconditions. resample_locs = new_locs[idxs_to_resample, :] if self._postselect: else: assert valid_mask.ndim == 1, "are_models_valid returned tensor, expected vector." if self._debug and n_invalid > 0: logger.debug( "LW resampler found {} invalid particles; repeating.".format( n_invalid ) ) assert ( ( ), ( "are_models_valid returned wrong shape {} " "for input of shape {}." idxs_to_resample = idxs_to_resample[np.nonzero(np.logical_not( ))[0]] # This may look a little weird, but it should delete the unused # elements of js, so that we don't need to reallocate. mus = mus[:idxs_to_resample.size, :] if idxs_to_resample.size: # We failed to force all models to be valid within maxiter attempts. # This means that we could be propagating out invalid models, and # so we should warn about that. warnings.warn(( "Liu-West resampling failed to find valid models for {} " "particles within {} iterations." ).format(idxs_to_resample.size, self._maxiter), ResamplerWarning) if self._debug: logger.debug("LW resampling completed in {} iterations.".format(n_iters)) # Now we reset the weights to be uniform, letting the density of # particles represent the information that used to be stored in the # weights. This is done by SMCUpdater, and so we simply need to return # the new locations here. new_weights = np.ones((n_particles,)) / n_particles return ParticleDistribution(particle_locations=new_locs, particle_weights=new_weights)
2021-03-01 13:11:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33654290437698364, "perplexity": 7603.835566186048}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362513.50/warc/CC-MAIN-20210301121225-20210301151225-00276.warc.gz"}
http://openstudy.com/updates/503bc008e4b007f90031080a
85295james 3 years ago Solve the system of equations by graphing 1. 85295james -1/3x + y = –1 y = 4+1/3 x 2. theEric Do you know what it means to solve them by graphing? I'm unsure! I'm into calculus 3 and I can't think of what it means. Does this sound familiar? You're looking for a point (x,y) so that $\frac{1}{3}x+y=-1y=4+\frac{1}{3}x$ 3. theEric Because I would immediately think to use algebra. I'm looking online for a refresher. If you multiplied by (-1), though... $(-1)\frac{1}{3}x+(-1)y=y=(-1)4+(-1)\frac{1}{3}x$ 4. theEric But that would be changing values. I'm not sure right now, sorry, but I will research how to solve a syste of equations by graphing now. 5. theEric Wait.... I should've asked - are those two separate equations? I thought they were, and then I decided maybe they weren't.. 6. theEric If you have two line equations and they share just one (x,y) point, then they intersect on a graph. You just have to be able to plot the lines on grid paper and look for the intersection. For really random lines you will need to calculate, but for math problems designed for learning, you should be fine with just graphing. 7. theEric So $-\frac{1}{3}x + y = -1$and$y=4+\frac{1}{3}x$ ? 8. theEric THAT I can do for sure! :) 9. theEric That would be a system of equations... 10. theEric Sorry I delayed due to uncertainty. I'll guide you through it now. First, you should know how to plot a line, given it's equation. Second, you should rearange each equation so that you can easily plot it (I recommend making your equations look like y=mx+b) Third, you need to plot both lines. Fourth, you can the see where they intersect. So look at what the point is. 11. theEric Of the four steps, where would you like to start? You need to do all four, but maybe you already know how to plot a line with it's equation. If you don't, let me know, please! 12. theEric Actually there is a 5th step. You have to check to make sure that the (x,y) point you found to be the intersection actually IS on both lines buy using eliassaab's method for substituting. 13. 85295james i got three pics to choos from and i dont know how to load them 14. 85295james B. no solution C. no solution D. infinitely many solutions 15. theEric Once I put the equations into y=mx+b form, which I will do in a moment, you will see that it must be "no solution". Once you can plot the two equations, you will see why! 16. theEric For now, think about intersections. As simple as they are. They are the point or points where two things meet! If they never meet, then there are no intersections. If they are together at every point, there are infinately many points of intersection. 17. 85295james |dw:1346094672501:dw| 18. 85295james |dw:1346094737400:dw| 19. theEric I'll trust that you can algebraically manipulate your equations to look like y=mx+b. $-\frac{1}{3}x+y=-1\rightarrow y=\frac{1}{3}-1$ 20. 85295james no 21. theEric and $y=\frac{1}{3}x + 4$ 22. theEric No you can't do the algebra? Well lets look at the one line's equation: $-\frac{1}{3}x+y=-1$ 23. theEric Now, you have a goal. You want that "y" to be on its own side, all alone. 24. theEric There's a rule of thumb, "if you do something to one side, do it to the other". This is so you do the same thing to all sides. By changing each side in the same way, each side will be different from what it was before. BUT each changed side will be equal to each other, so you know it's a legitamate equation! Here's an example, to help you understand what I mean. $5=2+3$ add 10 to both sides $5+(10) = 2+3+(10)$ and solve $15=2+3+10$ $15=15$ 25. theEric Since both sides are equal, each side is okay to work with. And the variables are still the same too. $x=1$add 10 to both sides$x+(10) = 1+(10)$$x+10=11$x still equals 1! 26. theEric So changing both sides is a great thing to do! I was adding 10 in those examples. To your equation, try adding $\frac{1}{3}x$ to both sides. Here is the equation again: $-\frac{1}{3}x+y=1$ 27. theEric Tell me when your done, so I know you've seen the result, and we can talk about why I told you to add (1/3)x. 28. theEric Sorry I've taken so long! 29. 85295james it is cool 30. theEric Thanks! So, have you done the addition? What did you get? 31. theEric $-\frac{1}{3}x+\frac{1}{3}x+y=-1+\frac{1}{3}x$is what you want to simplify, here. I took the liberty of adding to both sides :P Just to be clear! :) 32. theEric Okay, I'll go on without you... But I hope that you'll ask me about anything you don't understand or that you see might be wrong! I got this much, much more understandable equation,$y=-1+\frac{1}{3}x=\frac{1}{3}x-1$ which looks like$y=mx+b$ which is easier to plot by hand. 33. 85295james so is it C 34. theEric Nope! Just look at either equation, and the number multiplying x, known as the slope, is $\frac{1}{3}$ The slope is, you go up one unit (along y) and you go over 3 units (along x). The slope goes up and to the right. So look for that. 35. theEric Since they habe the same slope, they are parallel, and thus cannot intersect. 36. theEric Thank you! Find more explanations on OpenStudy
2015-09-01 04:14:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5827714800834656, "perplexity": 788.4259416029873}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645151768.51/warc/CC-MAIN-20150827031231-00094-ip-10-171-96-226.ec2.internal.warc.gz"}
https://jazz.net/wiki/bin/view/Deployment/MigratingTheRequirementsManagementLinkValidityData
r13 - 2020-01-23 - 05:36:30 - ChethnaShenoyYou are here: TWiki >  Deployment Web > DeploymentMigratingAndEvolving > MigratingTheRequirementsManagementLinkValidityData # Migrating the Requirements Management application link validity data in a single-server topology Authors: ChethnaShenoy, MichaelAfshar Build basis: Requirements Management 7.0 and later You can use the Requirements Management (RM) application upgrade script to migrate the link validity data. This page is designed for those deployments that selected "Yes" to the following question in the Interactive Upgrade Guide (procedure for non-distributed topology deployment): Note: If your Jazz Team Server and the Requirements Management application are installed on the same server, see this deployment wiki document to migrate your link validity data. ## Option 1: Migrating the link validity data in a single-server topology using upgrade script with interaction Note: Use this method to migrate link validity data with an upgrade wrapper script that automates some of the manual steps. The script is run in a command window and you follow on-screen prompts for each step of the upgrade. For the migration task to complete successfully, Jazz Team Server must be running. In a single-server topology where both the RM application and Jazz Team Server are installed on the same server, you must interrupt the script execution and shut down and restart the server multiple times. As a result, do not use the -noStepPrompt during manual upgrade. However, you can use the following parameters in a silent upgrade: -noPrompt -noVerify -noEditor -noResumePrompt Note: This topic assumes that you have upgraded Jazz Team Server and all other applications except for the RM application and done all the necessary pre-upgrade steps. Tip: To make the workflow easier, you can have two console windows open for controlling the server and repeatedly interrupting the script execution. Note: The jtsRepositoryUrl is the URL that is defined in the Jazz Team Server Shared Validity provider URL advanced property. If no URL is defined, then use the URL of the Jazz Team Server that the Requirements Management application is connected to. ### Procedure for WebSphere Liberty Profile 1. Open a command window, change to the Jazz_Install_Dir/server directory and enter the following command: upgrade\rm\rm_upgrade.bat -oldApplicationHome Old_Jazz_Install_Dir\server\conf -migrateLinkValidity -jtsRepositoryUrl https://host_name:9443/jts -adminUserId admin_user_id -adminPassword admin_password 2. Enter E to execute step 0 (update config files). 3. When step 0 completes, enter a carriage return to stop the script execution. 4. Start the server. 5. Execute the rm_upgrade with the same options as before and enter 1 to resume the script at step 1 (export link validity data). 6. After step 1 completes, enter a carriage return to stop the script execution. 7. Stop the server. 8. Execute the rm_upgrade with the same options as before and enter 2 to resume the script at step 2 (add tables). 9. Start the server. 10. Execute the rm_upgrade with the same options as before and enter 3 to resume the script at step 3 (import link validity data). ### Procedure for WebSphere Application Server 1. Open a command window, change to the Jazz_Install_Dir/server directory and enter the following command: upgrade\rm\rm_upgrade.bat -oldApplicationHome Old_Jazz_Install_Dir\server\conf updateAppServerFiles no -migrateLinkValidity -jtsRepositoryUrl https://host_name:9443/jts -adminUserId admin_user_id -adminPassword admin_password 2. Enter E to execute step 0 (update config files). 3. When step 0 completes, enter a carriage return to stop the script execution. 4. Deploy the rm.war and converter.war applications in WebSphere Application Server. 5. Start the server. 6. Execute the rm_upgrade with the same options as before and enter 1 to resume the script at step 1 (export link validity data). 7. After step 1 completes, enter a carriage return to stop the script execution. 8. Stop the server. 9. Execute the rm_upgrade with the same options as before and enter 2 to resume the script at step 2 (add tables). 10. Start the server. 11. Execute the rm_upgrade with the same options as before and enter 3 to resume the script at step 3 (import link validity data). ## Option 2: Migrating the link validity data in a single-server topology using upgrade script in silent mode Note: Use this method, to migrate link validity data with the upgrade wrapper script, but with extra parameters on the command line that enable the script to run through the entire upgrade without interaction. At the end of the upgrade you can check the log files to see if the upgrade was successful or if any errors occurred. For the migration task to complete successfully, Jazz Team Server must be running. In a single-server topology both the RM application and Jazz Team Server are installed on the same server. Note: This topic assumes that you have upgraded Jazz Team Server and all other applications except for the RM application and done all the necessary pre-upgrade steps. Tip: To make the workflow easier, you can have two console windows open for controlling the server and repeatedly interrupting the script execution. Note: The jtsRepositoryUrl is the URL that is defined in the Jazz Team Server Shared Validity provider URL advanced property. If no URL is defined, then use the URL of the Jazz Team Server that the Requirements Management application is connected to. ### Procedure for WebSphere Liberty Profile 1. Stop the server. Open a command window, change to the Jazz_Install_Dir/server directory and enter the following command: server.shutdown.bat 2. Back up the following application file and directory into a folder path E.g. C:\BackupRM\RM\ a. Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war.zip b. Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war 3. Back up the following template file and directory into a folder path E.g. C:\BackupRM\RMTemplate\ a. Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\rm.war.zip b. Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\rm.war 4. Delete the following files and directories: a. Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war.zip b. Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\rm.war.zip c. Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\rm.war 5. Retain only the file Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war\WEB-INF\web.xml and delete all the other files and folders inside directory Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war Note: RM application link validity migration will fail if the web.xml file is not available under path Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war\WEB-INF\web.xml. The error message “CRJAZ1888E Error updating web.xml file” will be displayed. 1. Start the server. Change to the Jazz_Install_Dir/server directory and enter the command: server.startup.bat 2. From Jazz_Install_Dir/server directory, enter the following command to perform RM application database upgrade and link validity migration: upgrade\rm\rm_upgrade.bat -oldApplicationHome Old_Jazz_Install_Dir\server\conf -migrateLinkValidity -jtsRepositoryUrl https://host_name:9443/jts -adminUserId admin_user_id -adminPassword admin_password -noPrompt -noVerify -noStepPrompt -noEditor -noResumePrompt 3. Back up files web.xml and web-backup.xml (E.g. web-1578991938493backup.xml) from path Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war\WEB-INF\ to path C:\BackupRM 4. Stop the server. Open a command window, change to the Jazz_Install_Dir/server directory and enter the following command: server.shutdown.bat 5. Delete the directory Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war 6. Restore the backed-up files and directories a. Copy C:\BackupRM\RM\rm.war.zip to Jazz_Install_Dir/server\liberty\servers\clm\apps\ b. Copy C:\BackupRM\RM\rm.war to Jazz_Install_Dir/server\liberty\servers\clm\apps\ c. Copy C:\BackupRM\RMTemplate\rm.war.zip to Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\ d. Copy C:\BackupRM\RMTemplate\rm.war to Jazz_Install_Dir/server\liberty\clmServerTemplate\apps\ e. Copy C:\BackupRM\web-<number>backup.xml (E.g. web-1578991938493backup.xml) to Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war\WEB-INF\ 7. Replace Jazz_Install_Dir/server\liberty\servers\clm\apps\rm.war\WEB-INF\web.xml file with C:\BackupRM\web.xml 8. Start the server. Change to the Jazz_Install_Dir/server directory and enter the command: server.startup.bat ### Procedure for WebSphere Application Server 1. To start Websphere Application Server if not already up, open a command window and enter the following commands. Replace WASInstallDir with the Websphere Application Server installation directory and server1 with the name of your server: cd WASInstallDir/AppServer\profiles\profile_name\bin startServer.bat server1 2. To uninstall RM application from WAS, run the following command substituting WAS_username with the WebSphere Application Server admin username, WAS_password with the admin user password, and pathToTheScript with the location with Jazz_Install_Dir/server/was wsadmin.bat -language jython -user WAS_username -password WAS_password -f pathToTheScript/clm_undeploy_distributed.py rm,converter 3. To perform RM application database upgrade and link validity migration enter the following commands from the directory Jazz_Install_Dir/server upgrade\rm\rm_upgrade.bat -oldApplicationHome Old_Jazz_Install_Dir\server\conf updateAppServerFiles no -migrateLinkValidity 4. To install RM application from WAS, run the following command substituting WAS_username with the WebSphere Application Server admin username, WAS_password with the admin user password, pathToTheScript with the location Jazz_Install_Dir/server/was, and pathToWarFiles with the location =Jazz_Install_Dir/server/webapps cd WASInstallDir/AppServer\profiles\profile_name\bin 5. To restart the Websphere Application Server, run command: cd WASInstallDir/AppServer\profiles\profile_name\bin startServer.bat server1 Edit | Attach | Printable | Raw View | Backlinks: Web, All Webs |  | More topic actions Status icon key: • To do • Under construction • New • Updated • Constant change • None - stable page • Smaller versions of status icons for inline text: Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
2020-01-27 19:44:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24190115928649902, "perplexity": 11015.855734507495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251705142.94/warc/CC-MAIN-20200127174507-20200127204507-00288.warc.gz"}
https://stats.stackexchange.com/questions/530199/maximum-likelihood-parameter-estimation-for-state-space-model-kalman-filter
# Maximum likelihood parameter estimation for state space model (Kalman Filter) it´s about a state space model that I want to run using the Kalman filter. However, certain parameters are unknown and must be estimated by the maximum likelihood method. The state space model is as follows: Alpha evolves according to an autoregressive process. If we use the following notation we get the following system \begin{align} \alpha_{t+1} &= (1-G)B+G\alpha_{t}+\phi^ix_t+v^i_{t+1} \\ R^i_{i+1}&=\beta^i_{i+1}R^M_{t+1}+\eta^i_{t+1}\\ B&=B: \text{unobserved} \end{align} $$x_t$$ is a two-dimensional vector with known stationary exogenous state variables. If we use the following notation: $$\xi_t= \left(\begin{array}{c} B \\ \alpha_t \end{array}\right)$$ $$\widetilde{G}= \left( \begin{array}{ll} 1 & 0 \\ 1-G & G \\ \end{array}\right)$$ $$V_{t+1}= \left(\begin{array}{c} 0 \\ v_{t+1} \end{array}\right)$$ $$\Phi_{t}= \left(\begin{array}{c} 0^T \\ \phi^T \end{array}\right)$$ $$H_t= \left(\begin{array}{c} 0 \\ R^M_t \end{array}\right)$$ where 0 is a vector of zeros of the same dimension as $$x_t$$ we get the following system \begin{align} \xi_{i+1}&= \widetilde{G}\xi_t+\phi x_t + V_{t+1} \\ R_{t}&= H^T\xi_t +\eta_t \end{align} The autoregressive parameter G, the standard deviations of error terms $$(\sigma_\eta)^2$$ and $$(\sigma_v)^2$$, and the loadings on the conditioning variables $$\phi$$ should be estimated using maximum likelihood on the whole history of portfolio returns $$R_t$$,the market returns $$R^M_t$$, and the state variables $$x_t$$. I have read in books that yes there are different likelihood functions according to state space model and parameters already known. Unfortunately, I am not sure which likelihood function is the correct one. Is there a concrete function in Python/Matlab/R that provide a function or package for exactly this case to determine these parameters via ML? I know that for such a system in a paper the parameters were determined. Unfortunately, it was not discussed how to implement this concretely. I would really appreciate any help. With kind regards New contributor Marv91 is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct.
2021-06-12 12:33:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850991368293762, "perplexity": 505.85759501218996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00138.warc.gz"}
http://www.redshiftacademy.com/index.php/redshift/topic/gauge_theories_%28yang-mills%29
Wolfram Alpha: Search by keyword: Astronomy - - - - Chemistry - - - - Classical Mechanics - Classical Physics - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Climate Change - Cosmology - - - - - - - - - - - - - - Finance and Accounting - - - - - - - - - Game Theory - General Relativity - - - - - - - - - - - - - - - - - - - - - Group Theory - - - - - - Lagrangian and Hamiltonian Mechanics - - - - - - Macroeconomics - - - Mathematics - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Mathjax - Microeconomics - Nuclear Physics - - Particle Physics - - - - - - - Probability and Statistics - - - - - - - - - - - - - - - - - - - - - - - - - - - - Programming and Computer Science - - - - - - - Quantum Computing - - - Quantum Field Theory - - - - - Quantum Mechanics - - - - - - - - - - - - - - - - - - - - - - - - Semiconductor Reliability - Solid State Electronics - - - - - Special Relativity - - - - - - - - - - - - Statistical Mechanics - - - String Theory - - - - - - Superconductivity - - - - - - Supersymmetry (SUSY) and Grand Unified Theory (GUT) - - - - - The Standard Model - - - - - - - - - - Topology - Units, Constants and Useful Formulas - Gauge Theories (Yang-Mills) --------------------------- Abelian Gauge Theory -------------------- A gauge theory is a field theory in which some GLOBAL continuous symmetry of the theory is replaced by a stricter LOCAL continuous symmetry requirement. The imposition of a local symmetry requires the introduction of new fields that make the Lagrangian invariant under the local transformation. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Consider the following Lagrangian for a simple complex field: L = ∂μφ∂μφ* - m2φφ* It is easy to see that the global transformation φ' = φexp(iθ) leaves the Lagrangian invariant. Now consider if one wished to not only make global changes of phase but also local transformations of the form φ' = φexp(iθ(x)). Now the situation is not so straightforward since θ is now a function of x and the kinetic term picks up a derivative of θ(x). As a result the action is no longer invariant under this type of change. In order to make it invariant and enforce such a symmetry, one must rewrite the transformation law so that there is a new type of derivative Dμφ, which under the change of phase on φ transforms in the same fashion, Dμφ -> exp(iθ(x))Dμφ. Dμ is called the GAUGE COVARIANT DERIVATIVE and is defined as: Dμ = ∂μ + igAμ Where Aμ is a new quantity called the GAUGE FIELD and g is a coupling constant. Therefore, gauge fields are included in the Lagrangian to ensure its invariance under the local transformations. Proof: φ' = φexp(iθ(x)) φ*' = φ*exp(-iθ(x)) Dμ'φDμ'φ* = (∂μφ' + igAμ'φ')(∂μφ*' - igAμ'φ*') = (∂μφ + i∂μθφ + igAμ'φ)exp(iθφ) (∂μφ* - i∂μθφ* - igAμ'φ*)exp(-iθφ) = (∂μφ + i∂μθφ + igAμ'φ) (∂μφ* - i∂μθφ* - igAμ'φ*) To get back to the original form we need Aμ' to transform as follows: Aμ' = Aμ - (1/g)∂μθ and Aμ' = Aμ - (1/g)∂μθ Therefore, Dμ'φDμ'φ* = (∂μφ + i∂μθφ + ig(Aμ - (1/g)∂μθ)φ) (∂μφ* - i∂μθφ* - ig(Aμ - (1/g)∂μθ)φ*) = (∂μφ + igAμφ)(∂μφ* - igAμφ*) = DμφDμφ* Also, m2φφ* -> m2exp(iθ)φexp(-iθ)ψ* = m2φφ* Aside: The above also applies to the Dirac Lagrangian. _ L = ψ(iγμ∂μ - m)ψ ψ -> exp(iθ)ψ _ _ ψ -> exp(-iθ)ψ Now, ∂μ(ψexp(1θ)) = exp(iθ)∂μψ + ψi∂μθexp(iθ) The covariant derivative is Dμ = ∂μ + igAμ Where, Aμ -> Aμ - (1/g)∂μθ Therefore, Dμ = ∂μ + ig(Aμ - (1/g)∂μθ) Dμ(ψexp(iθ)) = ∂μ(ψexp(iθ)) + iψexp(iθ)gAμ - iψexp(iθ)∂μθ = exp(1θ)∂μψ + iψexp(iθ)∂μθ + iψexp(iθ)gAμ - iψexp(iθ)∂μθ = exp(1θ)∂μψ + iψexp(iθ)gAμ = exp(1θ)[∂μψ + iψgAμ] = exp(1θ)[∂μ + igAμ]ψ = exp(1θ)Dμψ Therefore, _ ψiγμDμψ _ => ψexp(-iθ)iγμDμ(ψexp(iθ)) _ => ψexp(-iθ)exp(iθ)iγμDμψ _ => ψiγμDμψ Also, _ _ mψψ => mexp(iθ)ψexp(-iθ)ψ _ => mψψ Invariance of Aμ by Itself ------------------------- We have looked at how the introduction of the gauge field makes φ invariant under a local transformation. However, the gauge field itself must also have its own gauge invariant dynamic term in the Lagrangian. Consider the construction, Fμν, defined as: Fμν = ∂μAν - ∂νAμ Intuitively, this is a good choice since it contains the first derivatives, ∂μ, of the field that is characterisitic of all Lagrangians. If we plug in Aμ' = Aμ - (1/q)∂μθ into the above we get: Fμν = ∂μAν + ∂μ∂νθ - (∂νAμ + ∂ν∂μθ) = ∂μAν - ∂νAμ Thus, under the gauge transformation, Fμν, remains unchanged. To make this both gauge and Lorentz invariant and also match the quadratic nature of the other terms in the Lagrangian, it is necessary to form the product: FμνFμν We can now rewite our a Lagrangian for the gauge field as: L = -FμνFμν The is the Lagrangian that describes that electromagnetic field in the absence of any charges (complex fields are charged fields so in the absence of charge φ = 0). Fμν is referred to as the FIELD STRENGTH TENSOR. In the Abelian case this is equivalent to the ELECTROMAGNETIC TENSOR. Note: By convention L is normally wriiten as L = -(1/4)FμνFμν In the abelian U(1) theory there is 1 gauge fields whose quanta is the photon. Non-Abelian Gauge Theory (Yang-Mills) ------------------------------------- Yang and Mills extended the above Abelian theory to Non-Abelian groups. Therefore, it extends the U(1) gauge theory to a gauge theory based on the SU(N) group. Yang–Mills theory seeks to describe the behavior of elementary particles using these non- Abelian Lie groups and forms the basis of our understanding of the Standard Model of particle physics. The Lagrangian for the gauge fields is: L = -(1/4)FaμνFaμν Aside: This is the kinetic term of the PROCA ACTION that describes a massive spin-1 field of mass, m, in Minkowski spacetime (i.e. the W and Z bosons). The Proca action is given by: L = -(1/4)FμνFμν + m2AμAμ The mass term of the Proca Lagrangian is not invariant under a gauge transformation since (Aμ + ∂μθ)(Aμ + ∂μθ) ≠ m2AμAμ. The consequence is that m = 0 unless the symmetry is spontaneously broken. This discussed in detail in the section on the Higgs mechanism. Continuing, the gauge covariant derivative becomes: Dμ = ∂μ - igTaAaμ Where Ta are the group generators that satisfy the Lie algebra: [Ta,Tb] = ifabcTc [Dμ,Dν] = -igTaFaμν Proof: [Dμ,Dν] = igT(∂μAν - ∂νAμ + gT[Aμ,Aν] ≡ igTa(∂μAaν - ∂νAaμ + gfabcAbμAcν) = igTaFaμν Note that in the abelian case Faμν reduces to the electromanetic tensor and [Dμ,Dν] = igFμν. SU(2) ----- φ' = φexp(iθiσi/2) φ*' = φ*exp(-iθiσi/2) [σi/2,σj/2] = iεijkσk/2 Dμ = ∂μ - igWμiσi/2 Gμνi = ∂μWνi - ∂νWμi + fijkWμjWνk In the non-abelian SU(2) theory there are 3 gauge fields whose quanta are the W+, W- and Z bosons. SU(3) ----- φ' = φexp(iθiλi/2) φ*' = φ*exp(-iθiλi/2) [λi/2,λj/2] = ifijkλk/2 Dμ = ∂μ - ig''Gμiλi Gμν = ∂μGνi - ∂νGμi + fijkGμjGνk In the non-abelian SU(3) theory there are 8 gauge fields whose quanta are the gluons. U(1) ⊗ SU(2) ⊗ SU(3) --------------------- Dμ = ∂μ - ig''Gμiλi/2 - igWμiσi/2 - ig'YBμ Footnote: Gauge symmetry is a powerfull symmetry particularly in the context of General Relativity where coordinates vary from place to place based on the curvature of spacetime. QED is described by a U(1) group that represents electric charge. The unified electroweak interaction is described by SU(2) ⊗ U(1) group where the U(1) group represents the weak hypercharge, YW, rather than the electric charge. QCD is an SU(3) Yang–Mills theory. The massless bosons from the SU(2) ⊗ U(1) theory mix after spontaneous symmetry breaking (Higgs mechanism) to produce the 3 massive weak bosons, and the photon field. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group U(1) ⊗ SU(2) ⊗ SU(3). At the current time, however, the strong interaction has not been unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies.
2023-04-01 07:59:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621922135353088, "perplexity": 5561.00431270239}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00586.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=IPNL&langue=fr&action_todo=view&id=in2p3-00358543&version=1
1023 articles – 5281 Notices  [english version] HAL : in2p3-00358543, version 1 DOI : 10.1063/1.3039823 Nuclear Physics and Astrophysics: From Stable Beams to Exotic Nuclei, Cappadocia : Turquie (2008) $/gamma$-ray Spectroscopy of Proton Drip-Line Nuclei in the A~130 Region using SPIRAL beams SPIRAL Collaboration(s) (2008) A fusion-evaporation experiment has been performed with a SPIRAL 76Kr radioactive beam in order to study the deformation of rare-earth nuclei near the proton drip-line. The experimental setup consisted in the EXOGAM gamma-array, coupled to the light-charged particles (LCP) DIAMANT detector and to the VAMOS heavy-ion spectrometer. The difficulties inherent to such measurements are enlightened. The coupling between EXOGAM and DIAMANT has been used to decrease the huge background caused by the radioactivity of the beam. It further permits assigning new gamma-ray transitions to specific residual nuclei. A gamma-ray belonging to the 130Pm level scheme has thus been observed for the first time. Thème(s) : Physique/Physique Nucléaire Expérimentale in2p3-00358543, version 1 http://hal.in2p3.fr/in2p3-00358543 oai:hal.in2p3.fr:in2p3-00358543 Contributeur : Michel Lion <> Soumis le : Mardi 3 Février 2009, 16:26:40 Dernière modification le : Lundi 16 Février 2009, 15:10:35
2014-09-17 23:44:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48273852467536926, "perplexity": 8936.269898556002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00019-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://logic.harvard.edu/blog/?p=983
# Re: Paper and slides on indefiniteness of CH Dear Sy, My comments were about $\textsf{IMH}^\#$ not about anything else — I address one thing at a time, not all things at once. The third theorem is a relic — I neglected to delete it when I added the other two. It should be deleted. Best, Peter
2022-01-23 03:17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3665822744369507, "perplexity": 1282.569586527592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00307.warc.gz"}
https://crypto.stackexchange.com/questions/16594/attack-on-a-key-exchange-symmetric-key-cryptography-protocol/16616
# Attack on a key-exchange,symmetric-key cryptography protocol This is an exam question in Oxford 's Computer Security course: Here is the start of a protocol, based on a long-term secret key $k_{AB}$ previously shared between Alice and Bob, designed to offer bilateral authentication and establishment of a session key $k_s$. The session, which begins at message 4, is supposed to be confidential and secure against reflection, replay, and re-ordering of the contents. 1.$A \to B$: Alice, $nonce_A$ 2.$B \to A$: $nonce_B$, $E_{k_{AB}}$$(nonce_A \| k_1) 3.A \to B: E_{k_{AB}}$$(nonce_B \| k_2)$ Alice and Bob both compute $k_s$ = $k_1$ ⊕ $k_2$ 4.$A \to B$: $E_{k_s}(...)$ 5.$B \to A$: $E_{k_s}(...)$ a) Why would Alice and Bob want a session key $k_s$, rather than simply using the already shared secret key $k_{AB}$ for session encryption? b) How should Alice and Bob format the contents of their session, messages 4 onwards, to meet the aims of the protocol? c) If they additionally wish to ensure the integrity of the session, what should they add to the protocol? d) Does this protocol provide key agreement or does it provide key transport? Explain your answer. The following appears to be a man-in-the-middle attack on the protocol, under the usual Dolev- Yao model: 1.$A→I_B$: Alice ,$nonce_A$ 1´.$I_A→B$: Alice, $nonce_A$ 2´.$B→I_A$: $nonce_B$,$E_{k_{AB}}$$(nonce_A ∥ K_1) 2.I_B→A: nonce_B,E_{k_{AB}}$$(nonce_A∥K_1)$ 3.$A→I_B$: $E_{k_{AB}}$$(nonce_B∥K_2) 3´.I_A→B: E_{k_{AB}}$$(nonce_B∥K_2)$ Alice and Bob continue their sessions with the intruder. e) Why does the above sequence not constitute an attack on the protocol? f) Although not subject to man-in-the-middle attacks, there does exist a flaw in the protocol. Find it, and explain carefully why it does constitute an attack on the protocol. Part f) is where I'm currently stuck. One attack I can think of is when $nonce_A = nonce_B$ a Dolev-Yao attacker can pose as B and send A $E_{k_{AB}}$$(nonce_A ∥ k_1) in message 3 then k_s = k_1 ⊕ k_1 = 0 then the attacker know sk_s and he can send message and decrypt message. But the chance nonce_A = nonce_B is very small, so I'm not sure if it constitutes an attack. • Which “Computer Security course” are you talking about? If you quote material from others, we expect you to mention the source of what you are quoting and – if possible – link to it to avoid copyright problems. – e-sushi Jun 6 '14 at 21:32 • @e-sushi: It's Oxford 's computer security course. – user3283751 Jun 7 '14 at 3:40 • @e-sushi : I don't have a link and there's no ISBN number( it's not published in a book). – user3283751 Jun 8 '14 at 12:14 • The more I think about it, the more I'm convinced Ricky Demer has the best answer, and the problem's author considers that \; A) a reflexion attack is not a man-in-the-middle attack; \; B) It is not worth stating that E is non-malleable encryption, because that's what the Dolev-Yao model applied to symmetric cryptography considers, even though many common secure encryption schemes are malleable and would make the protocol insecure. – fgrieu Jun 9 '14 at 7:12 • @fgrieu : But if I answer b) with E_{k_{AB}}$$(m_1 \| Alice \| counter)$ then his answer doesn't work. – user3283751 Jun 9 '14 at 9:03 I'm not sure but in my point of view. Since $k_{AB}$ is used to authenticate both side. Mapped $nounce_x:E_{k_{AB}}$$(nonce_x∥k_x) pair on one side can be used on both side. Thus we can gather nounce_x:E_{k_{AB}}$$(nonce_x∥k_x)$ pairs from $B$ since $B$ will happily echo $E_{k_{AB}}$$(nonce_x∥k_x) for every nounce_x I asked. This doesn't form practical MITM attack but I can make A agree on randomK_1⊕k_2 as k_s while on the other side B will accept k_1⊕k_2 as k_s. In another word we can make A and B agree on different values at probability equivalent to (or more than because we can query it ourself) birthday attack. This might make some cryptanalysis possible but I don't know the details. As you suggested in chat: nonce collision is unnecessary we rather change the scheme to something like: After received nonce_A we send it to B twice but only sent the first response to A • Sorry that was a mistake ,at step 4 and 5 the protocol use k_s. I have edited the question. – user3283751 Jun 6 '14 at 12:45 • @user3283751 I updated mine as well the main point is still the same. – Curious Sam Jun 6 '14 at 12:50 • Firstly nonce_x : E_{k_{AB}}$$(nonce_x \| k_s)$ should be $nonce_x$ : $E_{k_{AB}}$$(nonce_x \| k_1) or nonce_x : E_{k_{AB}}$$(nonce_x \| k_2)$. Do you mean at step 2 you change $E_{k_{AB}}$$(nonce_A \| k_1) to E_{k_{AB}}$$(nonce_A \| K_1)$ ? . And what does making A and B agree on different value achieve ? – user3283751 Jun 6 '14 at 14:39 • @user3283751 Oops that's the mistake from the first editing. Yes I mean what you understand. I think my answer may not satisfied your last question: This doesn't achieve anything by its own just like no one really did found SHA1 collision. (or at least doesn't know in public) or someone break XX from YY round of a block cipher. This can only demonstrate that we can do something with it. If you wait longer someone may come up with other practical attack that I can't think of or probably extend attack vector from what I described. – Curious Sam Jun 6 '14 at 15:07 • If you change $E_{k_{AB}}$$(nonce_A \| k_1) to E_{k_{AB}}$$(nonce_A \| K_1)$ then I guess you get $E_{k_{AB}}$$(nonce_A \| K_1)$ by making a connection with Bob and challenge him with $nonce_A$, it is likely that $k_1$ will be different from $K_1$ then why is it " A and B agree on different values at probability equivalent to (or more than because we can query it ourself) birthday attack." ? – user3283751 Jun 6 '14 at 15:18 Summary: The statement is ambiguous. My best guess is that the flaw thought in f) is the feasibility of the reflexion-to-different instance attack found by Ricky Demer, allowing Mallory to authenticate to Alice as Bob without involving Bob, constituting a valid attack against 1/2/3 in the Dolev-Yao model, and breaching the "bilateral authentication" goal assigned to steps 1/2/3 in the protocol. The affirmation in f) that "the protocol" is "not subject to man-in-the-middle attacks" is narrowly justified by noting that "the protocol" includes 4/5, and we modified that in b) so that no message will be accepted by Alice when the attack is performed. I apologize that this answer is mostly an exegesis of the problem statement. Bear with me: that's my honest attempt to make sense [in the context of the rest of the problem] of the fragment: f) Although not subject to man-in-the-middle attacks, there does exist a flaw in the protocol. Find it, and explain carefully why it does constitute an attack on the protocol. I'll assume the unstated but obvious: Alice checks $nounce_A$ deciphered from data received at step 2 before proceeding to step 3, and Bob checks $nounce_B$ deciphered from data received at step 3 before proceeding to step 4. It is much less easy to guess what we can assume about $E$. On reading the statement up to b), it would seem that $E$ is only assumed to be a symmetric encryption scheme/function providing confidentiality. However that turns out to be inconsistent with later developments: 1. b) ask us to introduce fixes such that "the session" of 4 onwards is "confidential and secure against reflection, replay, and re-ordering of the contents". Our fixes must be limited to how "Alice and Bob format the contents of their session". My reading is that the later at least of these "contents" is what's noted $...$ in steps 4 onwards, and "reformatting" precludes adding a MAC to that [in particular because we don't have a key at hand]. Such fix would not be possible for some $E$ providing confidentiality, including a block cipher in OFB mode and random IV: even if we add origin, sequence number and redundancy [say a hash], the session remains vulnerable to an adversary knowing the plaintext [Mallory replays earlier cryptograms with alterations fixing the sequence number and redundancy we introduced]. 2. f) states that the protocol is "not subject to man-in-the-middle attacks" [including steps 1/2/3 that "offer bilateral authentication and establishment of a session key $k_s$", with $k_s$ confidential], while we can exhibit a man-in-the-middle attack breaching these objectives for some $E$ that provides confidentiality: a block cipher in CBC mode, random IV, and $k_1$/$k_2$ corresponding to a whole block [Mallory replaces the block corresponding to $k_2$ in the cryptogram of step 3 with the block corresponding to $k_1$ in the cryptogram of step 2, to the effect of convincing Bob to run 4 onwards with a $k_s$ that Mallory can compute. For details, see the second part of my other answer about this protocol]. Perhaps $E$ is a variable-width block cipher [not requiring a mode of operation to handle its inputs in the protocol]. That would be the closest symmetric-key equivalent to $E$ considered in Danny Dolev and Andrew C. Yao's article On the Security of Public Key Protocols [IEEE Transactions on Information Theory, 1983], alluded to by "the usual Dolev-Yao model" in d). Or $E$ could be a symmetric encryption mode of a block cipher providing authenticated encryption [even though these are usually not functions, thus do not fit an requirement in the original Dolev-Yao article]. Whatever the reason, we must suppose that $E$ is such that its matching decryption procedure $D$, when fed with an input $E$ did not produce, would output something indistinguishable from random [that's non-malleability, a common assumption only for block ciphers when using the Dolev-Yao model for symmetric cryptography]; or reject that input [that's insuring integrity]; or perhaps we operate in some abstract model where our adversary can shuffle cryptograms but can't alter them. Even with such assumption, steps 1/2/3 of the protocol are vulnerable to a "man-in-the-middle attack on the protocol" as defined in d) by reference to the "Dolev-Yao model", for that allows to carry a reflexion-to-different-instance attack on steps 1/2/3 of the protocol, described in Ricky Demer's answer. That attack is not standard reflexion, although it uses the same messages: Mallory reflects to Alice playing the recipient role what Alice concurrently playing the initiator role sends, and vice versa. It leads to bogus authentication of Bob by Alice even though Bob is not part of the protocol, under assumptions that • Alice can carry the initiator and recipient roles as explicitly assumed by the Dolev-Yao model, and can do this concurrently [which is very realistic if Alice implements client and server sides of the protocol using independent processes, threads, or state machines]. • Alice uses the same long-term key $k_{AB}$ for these two different roles [the original Dolev-Yao article does not consider it because it deals with asymmetric cryptography, but this seems a reasonable interpretation of "long-term secret key $k_{AB}$ previously shared between Alice and Bob" in the statement]. That attack breaches the "bilateral authentication" goal assigned to steps 1/2/3 in the introduction of the problem, and can be completed by a "man-in-the-middle"; it can trivially be extended to involve Bob, and thus match any hypothetical definition of "man-in-the-middle attack" artificially excluding attacks which do not involve the impersonated party. Notice that in the reflexion-to-different-instance attack, Mallory does not get to know the value of $k_s$ that Alice wrongly conclude is shared with Bob, and that let us narrowly make sense of "not subject to man-in-the-middle attacks" in f) by noting that this affirmation applies to "the protocol", which includes 4 onwards as we modified it in b), and these changes avoid that Alice accepts as from Bob session plaintexts that really came from her. If we somewhat reject the reflexion-to-different-instance attack, I fail to find "a flaw in the protocol" [and I would be surprised if a systematic analysis by the Dolev-Yao algorithm concluded otherwise], when I restrict to the objectives stated in the introduction for steps 1/2/3: offer bilateral authentication and establishment of a session key $k_s$ and the unstated but obvious requirement that an adversary must not get to know $k_s$ [I ignore the objectives stated for 4 onwards, because they must have been met by whatever we changed in b)]. My only alternate proposition is hairy: we could consider that in d), it was introduced the requirement that "this protocol provide key agreement" between two parties. By a usual definition (emphasis and ellipsis mine) a key-agreement protocol is a protocol whereby two (..) parties can agree on a key in such a way that both influence the outcome and by a strict reading, that requirement is not met: Alice can choose $k_s$ and compute $k_2=k_1\oplus k_s$. As explained in the first part of my other answer about this protocol, that's a weakness in some situations, like when trying to prevent decryption of passive intercepts of communications between honest parties, one of which unwillingly using a device rigged in manner bound to not transmit data beyond the requirement of the protocol, in order to make detection of the rigging less likely. The statement [as transcribed in the question and without reference to the definitions used] is subject to interpretations, and arguably incorrect[*] 1. There are uncertainties about what properties of $E$ are assumed. Is it non-malleable? Does it provide integrity? 2. There are uncertainties about capabilities of adversaries: can they exploit a malleability of $E$? Dolev-Yao adversaries don't, but real ones do. 3. There are uncertainties about the goals, in particular integrity of deciphered plaintext. 4. It is unclear if the word "contents" is used to mean the output [ciphertext] or some of the input [plaintext] of $E$, in the introduction, and [perhaps independently] in b). 5. It is unclear what exactly we are allowed to change in b) and c), and how much of that is assumed in f). 6. It is questionable that the protocol is "not subject to man-in-the-middle attacks". [*] Incorrect problem statements happen, including when stakes are high. I experienced it personally a long time ago, with a physics problem statement so wrong that the competition had to be re-done, much to my dismay at the time. I had managed to prove the statement wrong in its first part (involving increasing the amplitude of a swing using gestures that did not), understand the intent from an electrical analogy in the second part, complete the first as intended, and overall performed well; I did poorly in the competition re-run, that narrowed my choices of engineering school, and ultimately diploma.
2021-05-06 07:06:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4446551203727722, "perplexity": 1842.632337180527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00105.warc.gz"}
https://textbook.cs161.org/web/intro.html
# 18. Introduction to the Web ## 18.1. URLs Every resource (webpage, image, PDF, etc.) on the web is identified by a URL (Uniform Resource Locator). URLs are designed to describe exactly where to find a piece of information on the Internet. A basic URL consists of three mandatory parts: http://www.example.com/index.html The first mandatory part is the protocol, located before in the URL. In the example above, the protocol is http. The protocol tells your browser how to retrieve the resource. In this class, the only two protocols you need to know are HTTP, which we will cover in the next section, and HTTPS, which is a secure version of HTTP using TLS (refer to the networking unit for more details). Other protocols include git+ssh://, which fetches a git archive over an encrypted tunnel using ssh, or ftp://, which uses the old FTP (File Transfer Protocol) to fetch data. The second mandatory part is the location, located after but before the next forward slash in the URL. In the example above, the location is www.example.com. This tells your browser which web server to contact to retrieve the resource. Optionally, the location may contain an optional username, which is followed by an @ character if present. For example, evanbot@www.example.com is a location with a username evanbot. All locations must include a computer identifier. This is usually a domain name such as www.example.com. Sometimes the location will also include a port number, such as www.example.com:81, to distinguish between different applications running on the same web server. We will discuss ports a bit more when we talk about TCP during the networking section. The third mandatory part is the path, located after the first single forward slash in the URL. In the example above, the path is /index.html. The path tells your browser which resource on the web server to request. The web server uses the path to determine which page or resource should be returned to you. One way to think about paths is to imagine a filesystem on the web server you’re contacting. The web server can use the path as a filepath to locate a specific page or resource. The path must at least consist of /, which is known as the “root”1 of the filesystem for the remote web site. Optionally, there can be a ? character after the path. This indicates that you are supplying additional arguments in the URL for the web server to process. After the ? character, you can supply an optional set of parameters separated by & characters. Each parameter is usually encoded as a key-value pair in the format key=value. Your browser sends all this information to the web server when fetching a URL. See the next section for more details on URL parameters. Finally, there can be an optional anchor after the arguments, which starts with a # character. The anchor text is not sent to the server, but is available to the web page as it runs in the browser. The anchor is often used to tell your browser to scroll to a certain part of the webpage when loading it. For example, try loading https://en.wikipedia.org/wiki/Dwinelle_Hall#Floor_plan and https://en.wikipedia.org/wiki/Dwinelle_Hall#Construction and note that your browser skips to the section of the article specified in the anchor. In summary, a URL with all elements present may look like this: http://evanbot@www.cs161.org:161/whoami?k1=v1&k2=v2#anchor where http is the protocol, evanbot is the username, www.cs161.org is the computer location (domain), 161 is the port, /whoami is the path, k1=v1&k2=v2 are the URL arguments, and anchor is the anchor. Further reading: What is a URL? ## 18.2. HTTP The protocol that powers the World Wide Web is the Hypertext Transfer Protocol, abbreviated as HTTP. It is the language that clients use to communicate with servers in order to fetch resources and issue other requests. While we will not be able to provide you with a full overview of HTTP, this section is meant to get you familiar with several aspects of the protocol that are important to understanding web security. ## 18.3. HTTP: The Request-Response Model Fundamentally, HTTP follows a request-response model, where clients (such as browsers) must actively start a connection to the server and issue a request, which the server then responds to. This request can be something like “Send me a webpage” or “Change the password for my user account to foobar.” In the first example, the server might respond with the contents of the web page, and in the second example, the response might be something as simple as “Okay, I’ve changed your password.” The exact structure of these requests will be covered in further detail in the next couple sections. The original version of HTTP, HTTP 1.1, is a text-based protocol, where each HTTP request and response contains a header with some metadata about the request or response and a payload with the actual contents of the request or response. HTTP2, a more recent version of HTTP, is a binary-encoded protocol for efficiency, but the same concepts apply. For all requests, the server generates and sends a response. The response includes a series of headers and, in the payload, the body of the data requested. ## 18.4. HTTP: Structure of a Request Below is a very simple HTTP request. GET / HTTP/1.1 Host: squigler.com Dnt: 1 The first line of the request contains the method of the request (GET), the path of the request (/), and the protocol version (HTTP/1.1). This is an example of a GET request. Each line after the first line is a request header. In this example, there are two headers, the DNT header and the Host header. There are many HTTP headers defined in the HTTP spec which are used to convey various pieces of information, but we will only be covering a couple of them through this chapter. Here is another HTTP request: POST /login HTTP/1.1 Host: squigler.com Content-Length: 40 Content-Type: application/x-url-formencoded Dnt: 1 Here, we have a couple more headers and a different request type: the POST request. ## 18.5. HTTP: GET vs. POST While there are quite a few methods for requests, the two types that we will focus on for this course are GET requests and POST requests. GET requests are are generally intended for “getting” information from the server. POST requests are intended for sending information to the server that somehow modifies its internal state, such as adding a comment in a forum or changing your password. In the original HTTP model, GET requests are not supposed to change any server state. However, modern web applications often change server state in response to GET requests in query parameters. Of note, only POST requests can contain a body in addition to request headers. Notice that the body of the second example request contains the username and password that the user alice is using to log in. While GET requests cannot have a body, it can still pass query parameters via the URL itself. Such a request might look something like this: GET /posts?search=security&sortby=popularity Host: squigler.com Dnt: 1 In this case, there are two query parameters, search and sortby, which have values of security and popularity, respectively. ## 18.6. Elements of a Webpage The HTTP protocol is designed to return arbitrary files. The response header usually specifies a media type that tells the browser how to interpret the data in the response body. Although the web can be used to return files of any type, much of the web is built in three languages that provide functionality useful in web applications. A modern web page can be thought of as a distributed application: there is a component running on the web server and a component running in the web browser. First, the browser makes an HTTP request to a web server. The web server performs some server-side computation and generates and sends an HTTP response. Then, the browser performs some browser-side computation on the HTTP response and displays the result to the user. ## 18.7. Elements of a Webpage: HTML HTML (Hypertext Markup Language) lets us create structured documents with paragraphs, links, fillable forms, and embedded images, among other features. You are not expected to know HTML syntax for this course, but some basics are useful for some of the attacks we will cover. Here are some examples of what HTML can do: • Create a link to Google: <a href="http://google.com">Click me</a> • Embed a picture in the webpage: <img src="http://cs161.org/picture.png"> • Include JavaScript in the webpage: <script>alert(1)</script> • Embed the CS161 webpage in the webpage: <iframe src="http://cs161.org"></iframe> Frames pose a security risk, since the outer page is now including an inner page that may be from a different, possibly malicious source. To protect against this, modern browsers enforce frame isolation, which means the outer page cannot change the contents of the inner page, and the inner page cannot change the contents of the outer page. ## 18.8. Elements of a Webpage: CSS CSS (Cascading Style Sheets) lets us modify the appearance of an HTML page by using different fonts, colors, and spacing, among other features. You are not expected to know CSS syntax for this course, but you should know that CSS is as powerful as JavaScript when used maliciously. If an attacker can force a victim to load some malicious CSS, this is functionally equivalent to the attacker forcing the victim to load malicious JavaScript. ## 18.9. Elements of a Webpage: JavaScript JavaScript is a programming language that runs in your browser. It is a very powerful language–in general, you can assume JavaScript can arbitrarily modify any HTML or CSS on a webpage. Webpages can include JavaScript in their HTML to allow for dynamic features such as interactive buttons. Almost all modern webpages use JavaScript. When a browser receives an HTML document, it first converts the HTML into an internal form called the DOM (Document Object Model). The JavaScript is then applied on the DOM to modify how the page is displayed to the user. The browser then renders the DOM to display the result to the user. Because JavaScript is so powerful, modern web browsers run JavaScript in a sandbox so that any JavaScript code loaded from a webpage cannot access sensitive data on your computer or even data on other webpages. Most exploits targeting the web browser itself require JavaScript, either because the vulnerability lies in the browser’s JavaScript engine, or because JavaScript is used to shape the memory layout of the program for improving the success rate of an attack. Almost all web browsers implement JavaScript as a Just In Time compiler, dynamically converting JavaScript into machine code2. Many modern desktop applications (notably Slack’s desktop client) are actually written in the Electron framework, which is effectively a cut down web browser running JavaScript. 1. It is called the root because the filesystem can be treated as a tree and this is where the tree starts. 2. Trivia: Running JavaScript fast is considered so important that ARM recently introduced a dedicated instruction, FJCVTZS (Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero), specifically to handle how JavaScript’s math operates.
2022-01-24 19:15:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1762411743402481, "perplexity": 2054.4453865752585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00109.warc.gz"}
https://jeffparsons.github.io/2016/11/05/hexagons/
PlanetKit is a project aimed at creating a toolkit for making interactive virtual worlds. I’m writing it in the Rust programming language, which I’m finding delightful to work with, and a great fit for the task. In my last post I used Perlin noise (now switched over to simplex noise) to create some basic terrain. Here’s the lumpy blob we ended up with at the end: Let’s see what I said I was going to tackle next: My next move will be to turn this into a proper voxmap of (mostly) hexagonal prisms, à la Minecraft. Or, you know, I might lie about that again and do something else instead. Ok, then. Let’s do it! Errr, the voxels bit, I mean—not the lying. The first thing I did was to change what I store about the globe from being a 2D array of height values (a heightmap) to a 2D array of enum instances representing whether the cell contains dirt, or air. Then when I calculate the height of each cell on the globe, I either store Dirt if the cell we’re looking at is below the land elevation at that point, or Air otherwise, and only render the cell if it contains Dirt. Here’s what that looks like: Cool! That’s a good start. ## Voxels Before we get to hexagons, I want to turn this into a proper voxmap. So I made my chunks 3D, and stored the same thing as before (Dirt or Air) at each layer. That’s not much to look at yet, so I also introduced a third material type Water for wherever a cell is below sea level but above the land level at that point. Hmmm, that’s not quite what I expected. There seems to be a surprising amount of water immediately next to land, and not just on the coastline. This turned out to be what I think of as equivalent to a kind of “z-fighting”. This came about because I had the sea level defined as 1.0 units from the center of the globe, and the way I was calculating the value of each cell, I would be evaluating the simplex noise at an elevation of almost exactly that same 1.0. My understanding is that in many cells, whether or not the ocean covered the land was then determined largely by tiny floating point errors. Changing the ocean radius slightly to something that doesn’t align so inconveniently with cell centers, e.g, 1.01, makes it look a lot better: I think I also tweaked the colors between that screenshot and the one before, so the improvement in appearance is exaggerated. But you can clearly see that the coastline is better defined than before. ## Hexagons, at long last In all the screenshots until now, I’ve been drawing each cell in my world as a quad with one corner at the nominal center of the cell, and its opposite corner at the center of cell (x+1, y+1). This means they’ve been both offset a bit from where they’re supposed to be, and also obviously the wrong shape. So how can I turn those quads into the hexagons I’ve been intending them to be all along? This is usually the point where I start scribbling on the back of envelopes (I think it was a parking fine in this case) to better understand the problem. What I came up with was a hexagon that wraps in both directions—and therefore can be tiled—positioned such that its center exists at all four corners of the quad: This makes it visually obvious that we’re dealing with a grid of 6 units between hexagon centers (count it) to calculate cell vertex positions (assuming we want all vertices to lie at integer coordinate pairs) as opposed to the grid of 1 unit between cell centers when we’re only concerned with the center points of each cell. From there, if we list out points for the middle of each side and each vertex, starting from the middle of the side facing the positive x direction and travelling counterclockwise, we end up with 12 offset coordinate pairs in this grid, labelled as follows: Referring to the top figure for the offsets and the bottom for the labelling, that gives us: • 0: (3, 0) • 1: (2, 2) • 2: (0, 3) • 3: (-2, 4) • 4: (-3, 3) • 5: (-4, 2) • 6: (-3, 0) • 7: (-2, -2) • 8: (0, -3) • 9: (2, -4) • 10: (3, -3) • 11: (4, -2) Plugging this in where the quads were before gives us this: Yay, hexagons! But what’s the deal with that line of funny star things? Those happen where a particular cell is being drawn by two different chunks where they meet. Does that mean I have an off-by-one error? No—this is actually intentional. Whilst one chunk will eventually have to own the data for each boundary cell (otherwise how will I know what is the authoritative state of the cell?), I’m intending that a chunk will render only the part of the boundary cells that fall within the bounding quad for that chunk. So eventually I’ll fix this so that each chunk will render its boundary cells using, e.g., these vertices instead: Again, that’ll be easy to calculate now that we have the offset coordinates for the vertices of the hexagon, and the middle of each edge. (Were you wondering why I’d marked those edges in the diagrams above?) To decide which chunk owns which cell’s data, I’m intending to do something roughly equivalent to (or possibly identical to, for lack of a better idea) this approach that was used by a bunch of atmospheric scientists a while back. ## Gaps! Flying around the globe revealed this problem: This one is an off-by-one error. The overlaps we saw in the previous image were showing up along the intersection of x = 0 in one chunk and y = 0 in the next chunk over. So where the chunks go to at the other corner doesn’t make any difference there. But I’d been rendering quads in a chunk from x = 0 to x = chunk_width - 1. This works for quads that go from where one hexagon center should be over to another, but for actual hexagons I need to render cells all the way from x = 0 up to x = chunk_width, or where chunks intersect at their other end, then neither chunk will be trying to draw hexagons there. So that was easily fixed. ## Hexagonal prisms Now that we’ve got hexagons all over the place, it’s not much of a stretch to give them some sides to turn them into hexagonal prisms: Sweeeeet. By the way, I haven’t implemented any lighting yet. The sides only look a bit like they’re in shadow because I’m halving the brightness of their colour across the board. Let’s move a bit closer: And a bit closer again… oops, we’re inside the world! This very quickly reveals the next problem I need to solve: huge amounts of unnecessary geometry. This is about as high as I could crank up the resolution on the globe before my computer started begging for mercy. The initial solution here will be to only render each side of a cell if the next cell over is air, because at least for now that’s the only way that side should ever be seen. I’ll need to get a bit more sophisticated than that eventually, but for now that will make things about a bajillion times better (give or take a few squillion). ## What’s next? Rendering the globe as a voxmap of hexagonal prisms is a pretty neat milestone. But I’m intending to majorly ramp up the complexity of this thing soon—more complex terrain generation, more types of cells, much higher-resolution terrain—so I’ll need to do a little bit of boring housekeeping before I move on to those other more glamorous things. Specifically, I need to: • Fix the rendering of cells at chunk boundaries (see above). • Don’t render sides of cells that aren’t visible at all. • Split out separate modules for generating terrain and rendering chunks. I’m not going to bother optimising any of this yet beyond the above, but I am going to keep in mind when design the interfaces a whole bunch of super-easy caching/memoization optimisations that I want to drop in later. • Dynamically create and render chunks. After that, I think I might take a swing at building a very simple game on top of this, both to demonstrate how I intend all the pieces of this to eventually fit together, and to make sure I put my money where my mouth is.
2019-02-22 05:42:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4850955307483673, "perplexity": 891.3047596590753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513661.77/warc/CC-MAIN-20190222054002-20190222080002-00586.warc.gz"}
https://www.hpmuseum.org/forum/showthread.php?mode=linear&tid=3438&pid=31226
03-20-2015, 08:15 AM (This post was last modified: 06-15-2017 01:59 PM by Gene.) Post: #1 Gerald H Senior Member Posts: 1,458 Joined: May 2014 Edit: Updated definition of (0,-1) to be 1 rather than 0. The Kronecker symbol is a generalization of the Jacobi symbol, itself a generalization of the Legendre symbol. The Legendre symbol (a/b) returns 1 if a is a quadratic residue of b, -1 if not & 0 if the GCD(a,b)>1, b an odd prime. The Jacobi symbol allows b to be the product of odd primes & then returns results as for the product of the individual Legendre symbols of the factors of b. The Kronecker symbol allows b to be any integer. Input to the programme: { a , b } returns 1, -1 or 0. Remember: the Kronecker symbol does NOT indicate a quadratic residue if it returns 1. The individual prime factors may have Legendre symbols of -1 & -1, ie their product is 1, which the Kronecker symbol returns. Ans►L1: Ans(1)►A: L1(2)►B: IF B==-1 THEN 1: IF A<0 THEN -1: END: ELSE IF B THEN IF (A MOD 2)+(B MOD 2) THEN 0►V: WHILE NOT(B MOD 2) REPEAT B/2►B: V+1►V: END: 1: IF V MOD 2 THEN IF ABS((A MOD 8)-4)==1 THEN -1: END: END: Ans►K: IF B<0 THEN -B►B: IF A<0 THEN -K►K: END: END: WHILE A REPEAT 0►V: WHILE NOT(A MOD 2) REPEAT V+1►V: A/2►A: END: IF V MOD 2 THEN IF ABS((B MOD 8)-4)==1 THEN -K►K: END: END: IF (A MOD 4)*(B MOD 4)==9 THEN -K►K: END: ABS(A)►R: B MOD R►A: R►B: END: (B==1)K: ELSE 0: END: ELSE ABS(A)==1: END: END: « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
2021-12-01 09:26:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8698921799659729, "perplexity": 6123.797401975938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359976.94/warc/CC-MAIN-20211201083001-20211201113001-00195.warc.gz"}
http://alberding.phys.sfu.ca/wordpress/?page_id=465
# The Theory Two equal masses, $M$, are hung from a rope that passes through two pulleys as shown. One mass hangs from a pulley in the middle of the rope and the other hangs from the end of the rope after the rope has passed through a pulley that is hanging from the ceiling. The other end of the rope is attached to the ceiling. In the actual situation the left-hand pulley which is attached to the first pass has a significant mass. Therefore, we will solve for the acceleration assuming the masses are different. If the masses are unequal, $m_1$ is the left=hand one and $m_2$ the right-hand one on the end of the rope: Newton’s second law for mass 1 is: $$2T -m_1g = m_1a/2$$ and for mass 2: $$m_2g -T = m_2a$$   Multiply the last equation by 2 $$2m_2g – 2T = 2m_2a$$ and add to the first one $$a = 2g \frac{2m_2 -m_1}{m_1+4m_2}$$ If $m_1$ = 0.130 kg, including the pulley,  and $m_2 = 0.100$ kg then $a = 2.641$ m/s$^2$. ## The Analysis #### Predicted accelerations: For mass 1: $a = -2.58$ m/s$^2$, and for mass 2 $a/2 = 1.295$ m/s$^2$ #### Estimated values from video analysis: For mass 1: $a = -2.47$ m/s$^2$, and for mass 2 $a/2 = 1.32$ m/s$^2$.  The estimated uncertainty, judged by varying the ranges of the fit is about ±0.05 m/s2.
2017-07-21 18:37:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7205324769020081, "perplexity": 562.9018965822146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423808.34/warc/CC-MAIN-20170721182450-20170721202450-00030.warc.gz"}
https://dragon.seetatech.com/versions/master/api/python/torch/nn/AdaptiveMaxPool3d
class dragon.vm.torch.nn.AdaptiveMaxPool3d(output_size)[source] Apply the 3d adaptive max pooling. This module excepts the input size $$(N, C, D, H, W)$$, and output size is $$(N, C, D_{\text{out}}, H_{\text{out}}, W_{\text{out}})$$, where $$N$$ is the batch size, $$C$$ is the number of channels, $$D$$, $$H$$ and $$W$$ are the depth, height and width of data. Examples: m = torch.nn.AdaptiveMaxPool3d(1) x = torch.ones(2, 2, 2, 2, 2) y = m(x) __init__¶ AdaptiveMaxPool3d.__init__(output_size)[source] Create a AdaptiveMaxPool3d module. Parameters: • output_size (Union[int, Sequence[int]]) – The target output size.
2021-02-28 22:06:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42405638098716736, "perplexity": 2099.6326253888788}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00339.warc.gz"}
http://pyviz.org/about.html
PyViz is a collaboration between the maintainers of several packages, including Bokeh , HoloViews , GeoViews , hvPlot , Datashader , Param , and Panel . The authors of these tools are working together closely to help make them into a coherent solution to a wide range of Python visualization problems. If you like PyViz, please tweet a screenshot of your latest creation, linking to pyviz.org and tagging it #PyViz along with any other library you relied on ( @HoloViews , @Datashader , @BokehPlots , @Matplotlib , etc.). Thanks! There is an active open source community around PyViz, and contributions from anyone are welcome. Contributions of any sort are valuable, including new documentation, plus bug reports and fixes to code or documentation that might seem trivial (PyViz is partly about having as many of those ironed out as possible!). Please go ahead and open a pull request or create a new issue :) While the PyViz team itself maintains a number of packages on PyViz’s GitHub , PyViz depends on and supports a number of other open-source libraries; for more information, see PyViz’s detailed background . The entire PyViz stack is open source, free for commercial and non-commercial use. However, if you are lucky enough to be in a position to fund developers to work on PyViz, you can contact sales@anaconda.com , or you can also collaborate with PyViz via Quansight’s open source partnerships . Additionally, some parts of the PyViz stack are able to accept donations, e.g. Bokeh via NumFOCUS . ## Acknowledgements and prior funding ¶ The original development of core PyViz libraries was supported in part by: The PyViz team thanks Brian Thomas at Washington State University for donating the pyviz.org domain to the PyViz project. If you are interested in Brian’s PyViz smart-home visualization tool, check out his paper .
2019-04-18 10:46:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30552977323532104, "perplexity": 5774.3727682892195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517558.8/warc/CC-MAIN-20190418101243-20190418122317-00040.warc.gz"}
https://homework.cpm.org/category/CC/textbook/cc3/chapter/7/lesson/7.3.3/problem/7-111
### Home > CC3 > Chapter 7 > Lesson 7.3.3 > Problem7-111 7-111. Researchers have determined that teenagers’ memories are negatively affected by getting less than $10$ hours of sleep. Being good scientists, the math students at North Middle School were skeptical, so they did their own study. They asked $300$ students to memorize $10$ objects. The next day, each student was asked how much sleep they got and then was asked to list the ten items. The results are below. Remembered all $\mathbf{10}$ items? Yes No TOTAL Less than $7$ hours sleep $6$ $149$ $155$ $\mathbf{7}$ to $\mathbf{9}$ hours sleep $11$ $109$ $120$ At least $\mathbf{10}$ hours sleep $5$ $20$ $25$ TOTAL $22$ $278$ $300$ Make a conditional relative frequency table to determine if there is an association between hours of sleep and memory. In order to make a conditional relative frequency table, you must first decide which variable is independent and which is dependent. Remembering depends on how many hours of sleep a student gets so the independent variable is hours of sleep. The frequencies should be calculated based on row totals. Remembered all 10 items? Yes No Less than $\mathbf{7}$ hours sleep $\mathbf{3.9\%}$ $\mathbf{7}$ to $\mathbf{9}$ hours sleep $\mathbf{90.8\%}$ At least $\mathbf{10}$ hours sleep
2021-05-16 09:17:16
{"extraction_info": {"found_math": true, "script_math_tex": 26, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36759695410728455, "perplexity": 1245.974714031439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00432.warc.gz"}
https://biblioteca.portalbolsasdeestudo.com.br/?q=Physical+measures
Página 1 dos resultados de 6301 itens digitais encontrados em 0.370 segundos ## Physical measures of the carcass and the chemical composition of Longissimus dorsi muscle of Alentejano pigs between 70 and 110 kg LW. Neves, J.A.; Freitas, A.B.; Martins, J.M.; Nunes, J.T. Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.85% The aim of this work was to determine the relationship between physical measures from the subcutaneous tissue and Longissimus dorsi (LD) muscle (area, depth, and width measured between the 3rd and 4th lumbar vertebrae, at the last rib, and between the 3rd and 4th last ribs) and the chemical composition of LD at 70, 80, 90, 100, and 110 kg LW. The content of water, protein, neutral and polar lipids, total and soluble collagen, and total pigments, were determined. Globally, the measures taken and the chemical composition were not affected between 70 and 110 kg, except for the LD depth and width at the 3rd-4th lumbar vertebrae. At 70 kg, the LD depth was greater than at 110 kg (3.77 vs. 2.75 cm, P<0.05) and the width was smaller (8.14 vs. 9.82 cm, P<0.05). In conclusion, from 70 to 110 kg: i) the morphological changes in the lumbar region were due mainly to the width dimension, with no impact on the chemical composition of the muscle; and ii) the chemical composition did not change drastically, even though the amount of intramuscular fat increased slightly between 70 and 110 kg (5.32 and 6.67%, respectively) suggesting an early intramuscular fat deposition. ## Physical measures of the carcass and the chemical composition of Longissimus dorsi muscle of Alentejano pigs between 70 and 110 kg LW. Neves, J.A.; Freitas, A.; Martins, J.M.; Nunes, J.L.T. Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.85% The aim of this work was to determine the relationship between physical measures from the subcutaneous tissue and Longissimus dorsi (LD) muscle (area, depth, and width measured between the 3rd and 4th lumbar vertebrae, at the last rib, and between the 3rd and 4th last ribs) and the chemical composition of LD at 70, 80, 90, 100, and 110 kg LW. The content of water, protein, neutral and polar lipids, total and soluble collagen, and total pigments, were determined. Globally, the measures taken and the chemical composition were not affected between 70 and 110 kg, except for the LD depth and width at the 3rd-4th lumbar vertebrae. At 70 kg, the LD depth was greater than at 110 kg (3.77 vs. 2.75 cm, P<0.05) and the width was smaller (8.14 vs. 9.82 cm, P<0.05). In conclusion, from 70 to 110 kg: i) the morphological changes in the lumbar region were due mainly to the width dimension, with no impact on the chemical composition of the muscle; and ii) the chemical composition did not change drastically, even though the amount of intramuscular fat increased slightly between 70 and 110 kg (5.32 and 6.67%, respectively) suggesting an early intramuscular fat deposition. ## Variation of the different attributes that support the physical function in community-dwelling older adults Pereira, Catarina Fonte: Journal of Sports Medicine and Physical Fitness; Minerva Medica Publicador: Journal of Sports Medicine and Physical Fitness; Minerva Medica Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.81% AIM: This study investigated the variation with age of different attributes that support the physical functioning in community-dwelling older adults having as reference scores from young adults. METHODS: The study was a cross-sectional study. Participants were 559 older adults and 79 young adults grouped according to gender and age (Y:20-29, A:60-64, B:65-69, C:70-74, D:75-79 and E:≥80 years). Strength, flexibility, agility, aerobic endurance, and balance were evaluated by Fullerton tests. RESULTS: ANOVA and Bonferroni post-hoc tests showed that compared to young, the 60-64 years group showed decreased values in almost all attributes of physical function. In older adults, additional differences were observed in females mainly between the 60-64 years group and the 70-74 years, 75-79 years, and:≥80 years groups, and in males between the 60-64 years group and the ≥80 years group (P<0.05). Comparisons between standardized physical function attributes (T-scores) done by repeated measures and contrasts demonstrated that, across age groups, agility and dynamic balance showed the highest rate loss in both genders, and lower body flexibility showed the lowest (P<0.01). CONCLUSION: Physical function reduction seems to occur earlier in women than in men and abilities involving multiple structures such as balance... ## Physical activity patterns in adults who are blind as assessed by accelerometry. Marmeleira, José; Laranjo, Luís; Marques, Olga; Pereira, Catarina Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.75% The main purpose of our study was to quantify, by using accelerometry, daily physical activity (PA) in adults with visual impairments. Sixty-three adults (34.9% women) who are blind (18-65 years) wore an accelerometer for at least 3 days (minimum of 10 hr per day), including 1 weekend day. Nineteen participants (~30%) reached the recommendation of 30 min per day of PA, when counting every minute of moderate or greater intensity. No one achieved that goal when considering bouts of at least 10 min. No differences were found between genders in PA measures. Chronological age, age of blindness onset, and body mass index were not associated with PA. We conclude that adults who are blind have low levels of PA and are considerably less active compared with the general population. Health promotion strategies should be implemented to increase daily PA for people with visual impairments. ## The Influence of Body Mass Index on Self-report and Performance-based Measures of Physical Function in Adult Women Hergenroeder, Andrea L; Brach, Jennifer S; Otto, Amy D; Sparto, Patrick J; Jakicic, John M Fonte: Cardiopulmonary Physical Therapy Journal, Inc. Publicador: Cardiopulmonary Physical Therapy Journal, Inc. Tipo: Artigo de Revista Científica Relevância na Pesquisa 36.11% Purpose: Little is known about limitations in physical function across BMI categories in middle aged women using both self-report and performance-based measures. Furthermore, the impact of BMI on the measurement of function has not been explored. The purpose of this study was to assess physical function in adult women across BMI categories using self-report and performance-based measures and determine the influence of BMI on the relationship between the measures. Methods: Fifty sedentary females (10 in each BMI category: normal weight, overweight, obese class I, II, and III) aged 51.2 ± 5.4 years participated. Assessments included demographics, past medical history, physical activity level, BMI, and self-report (Late Life Function and Disability Instrument) and performance-based measures of physical function (6-Minute Walk Test, timed chair rise, gait speed). Physical function was compared between BMI categories using analysis of variance. The influence of BMI on the relationship of self-report and performance-based measures was analyzed using linear regression. Results: Compared to those that were normal weight or overweight, individuals with obesity scored lower on the self-report measure of physical function (LLFDI) for capability in participating in life tasks and ability to perform discrete functional activities. On the performance-based measures... ## Factors Associated with Performance-based Physical Function of Older Veterans of the PLAAF: A Pilot Study Chen, Da-Wei; Jin, Yan-Bin; Liu, Wei; Du, Wen-Jin; Li, Hua-Jun; Chen, Jin-Wen; Xu, Wei Fonte: The Society of Physical Therapy Science Publicador: The Society of Physical Therapy Science Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.8% [Purpose] This study investigated the factors associated with performance-based physical function of older veterans of the People’s Liberation Army Air Force of China (PLAAF). [Subjects and Methods] A cross-sectional survey of 146 older veterans of the PLAAF was carried out. Their physical function was measured using the Chinese Mini-Physical Performance Testing (CM-PPT). The demographics and health status (including physical measures, blood chemical tests, chronic diseases, and number of morbidities) were collected from health examination reports and computer records of case history. Cognition was measured using the Mini-Mental Status Examination (MMSE). [Results] In multiple linear regressions, age, MMSE, Parkinsonism, and chronic obstructive pulmonary disease were independently associated with CM-PPT, while previous stroke and albumin level reached borderline statistical significance. The association between the number of morbidities and CM-PPT was significant after adjustment for MMSE and demographics. The CM-PPT of low (0 or 1), medium (2 to 4) and high count (5 or more) morbidities were 11.3±3.9, 10.2±4.1, 6.1±3.8 respectively, and the difference among these three groups was significant. [Conclusion] Some modified conditions and the number of chronic diseases might be associated with the physical function of older veterans of the PLAAF. ## Standardized Application of Laxatives and Physical Measures in Neurosurgical Intensive Care Patients Improves Defecation Pattern but Is Not Associated with Lower Intracranial Pressure Kieninger, Martin; Sinner, Barbara; Graf, Bernhard; Grassold, Astrid; Bele, Sylvia; Seemann, Milena; Künzig, Holger; Zech, Nina Fonte: Hindawi Publishing Corporation Publicador: Hindawi Publishing Corporation Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.85% Background. Inadequate bowel movements might be associated with an increase in intracranial pressure in neurosurgical patients. In this study we investigated the influence of a structured application of laxatives and physical measures following a strict standard operating procedure (SOP) on bowel movement, intracranial pressure (ICP), and length of hospital stay in patients with a serious acute cerebral disorder. Methods. After the implementation of the SOP patients suffering from a neurosurgical disorder received pharmacological and nonpharmacological measures to improve bowel movements in a standardized manner within the first 5 days after admission to the intensive care unit (ICU) starting on day of admission. We compared mean ICP levels, length of ICU stay, and mechanical ventilation to a historical control group. Results. Patients of the intervention group showed an adequate defecation pattern significantly more often than the patients of the control group. However, this was not associated with lower ICP values, fewer days of mechanical ventilation, or earlier discharge from ICU. Conclusions. The implementation of a SOP for bowel movement increases the frequency of adequate bowel movements in neurosurgical critical care patients. However... ## Using sulcal and gyral measures of brain structure to investigate benefits of an active lifestyle Lamont, Ashley J; Mortby, Moira E; Anstey, Kaarin J; Sachdev, Perminder S; Cherbuin, Nicolas Tipo: Artigo de Revista Científica Formato: 7 pages Português Relevância na Pesquisa 36.08% Background: Physical activity is associated with brain and cognitive health in ageing. Higher levels of physical activity are linked to larger cerebral volumes, lower rates of atrophy, better cognitive function and lesser risk of cognitive decline and dementia. Neuroimaging studies have traditionally focused on volumetric brain tissue 17 measures to test associations between factors of interest (e.g. physical activity) and brain structure. However, cortical sulci may provide additional information to these more standard measures. Method: Associations between physical activity, brain structure, and cognition were investigated in a large, community-based sample of cognitively healthy individuals (N = 317) using both sulcal and volumetric measures. Results: Physical activity was associated with narrower width of the Left Superior Frontal Sulcus and the Right Central Sulcus,while volumetric measures showed no association with physical activity. In addition, Left Superior Frontal Sulcal width was associated with processing speed and executive function. Discussion: These findings suggest sulcalmeasuresmay be a sensitive index of physical activity related to cerebral health and cognitive function in healthy older individuals. Further research is required to confirm these findings and to examine how sulcal measures may be most effectively used in neuroimaging. ## An investigation into the variation that exists between the physical performance indicators of hurling players at different levels of participation Murphy, Andrew Fonte: University of Limerick Publicador: University of Limerick Tipo: info:eu-repo/semantics/masterThesis; all_ul_research; ul_published_reviewed; ul_theses_dissertations Português Relevância na Pesquisa 36.08% peer-reviewed; Background: There is a paucity of research profiling the anthropometric measures and physical performance indicators of hurling athletes. Such profiles are essential to optimising the physical preparation structures of hurling athletes. Methods: A battery of tests profiling the anthropometric measures and physical performance indicators of 95 hurling athletes from three different levels of participation, senior elite (n1=34), senior sub- elite (n2=28) & junior elite (n3=33) were evaluated on two separate occasions during the season. Data was collected on all squads describing height, body mass, upper and lower extremity power, sprint performance, local muscular endurance, anaerobic endurance and aerobic endurance. Additional body composition measurements were conducted using DXA on the senior elite team. Results: There was a significant difference (p≤0.05) between measures of body mass, lower extremity power, sprint performance and endurance between senior elite and senior sub elite squads. Significant differences were also present (p≤0.05) between measures of height, body mass, upper and lower extremity power and sprint performance in the senior elite and junior elite squads. The findings also revealed selecte anthropometric measures and physical performance indicators are subject to significant seasonal change (p≤0.05). Conclusion: The results of this research provide normative data for the population of hurling athletes. The results also show that selected anthropometric measures and physical performance indicators illustrate a progressive improvement as level of participation increases. Such measures and indicators are subject to seasonal change but are dependent on the training programme content and initial pre-season levels. ## VALIDITY AND RELIABILITY OF THE INTERNATIONAL PHYSICAL ACTIVITY QUESTIONNAIRE AMONG MEXICAN ADULTS Medina Garcia, CATALINA Fonte: Quens University Publicador: Quens University Português Relevância na Pesquisa 36.1% Background: Because it is a strong determinant of chronic disease and mortality risk, physical activity is a health behaviour that is measured in most large health surveys. Questionnaires are the most commonly used method for measuring physical activity in health surveys. In the early 1990’s, an international physical activity questionnaire (IPAQ) was created to allow researchers from across the globe to employ the same questionnaire within their country. Several studies have been conducted on the IPAQ to determine whether the responses obtained are comparable when the questionnaire is administered on multiple occasions (reliability) and to determine the ability of the questionnaire to obtain the same physical activity result when compared to other direct measures, considered as “gold standard” (validity). However, none of these studies have been conducted in Mexico. Objective: Examine: 1) the reliability of the IPAQ among Mexican adults by comparing minutes per week (min/wk) spent in moderate-to-vigorous physical activity (MVPA) from the IPAQ administered two times, 2) the validity of the IPAQ surveys by comparing IPAQ min/wk of MVPA to those obtained by the accelerometer. Methods: 267 Mexican adults who worked in a factory in Mexico City participated. IPAQ was applied in a face-to-face interview during a first clinic visit. Participants received an accelerometer (motion sensor that measures and record physical activity) and wore it consecutively for the next 9 days. In a second visit... ## Physical Measures for Infinitely Renormalizable Lorenz Maps Martens, Marco; Winckler, Björn Tipo: Artigo de Revista Científica Relevância na Pesquisa 45.91% A physical measure on the attractor of a system describes the statistical behavior of typical orbits. An example occurs in unimodal dynamics. Namely, all infinitely renormalizable unimodal maps have a physical measure. For Lorenz dynamics, even in the simple case of infinitely renormalizable systems, the existence of physical measures is more delicate. In this article we construct examples of infinitely renormalizable Lorenz maps which do not have a physical measure. A priori bounds on the geometry play a crucial role in (unimodal) dynamics. There are infinitely renormalizable Lorenz maps which do not have a priori bounds. This phenomenon is related to the position of the critical point of the consecutive renormalizations. The crucial technical ingredient used to obtain these examples without a physical measure, is the control of the position of these critical points.; Comment: 26 pages, 1 figure ## Physical Measures for Certain Partially Hyperbolic Attractors on 3-Manifolds Bortolotti, Ricardo T. Tipo: Artigo de Revista Científica Relevância na Pesquisa 45.79% In this work, we study ergodic properties of certain partially hyperbolic attractors whose central direction has a neutral behavior, the main feature is a condition of transversality between unstable leaves when projected by the stable holonomy. We prove that partial hyperbolic attractors satisfying conditions of transversality between unstable leaves via the stable holonomy, neutrality in the central direction and regularity of the stable foliation admits a finite number of physical measures, coinciding with the ergodic u-Gibbs States, whose union of the basins has full Lebesgue measure. Moreover, we describe the construction of a family of robustly nonhyperbolic attractors satisfying these properties. ## On Physical Measures for Cherry Flows Palmisano, Liviana Tipo: Artigo de Revista Científica Relevância na Pesquisa 45.94% Studies of the physical measures for Cherry flows were initiated by R. Saghin and E. Vargas in "Invariant measures for Cherry flows". While the non-positive divergence case was resolved, the positive divergence one still lacked the complete description. Some conjectures were put forward. In this paper we contribute in this direction. Namely, under mild technical assumptions we solve conjectures stated by R. Saghin and E. Vargas by providing a description of the physical measures for Cherry flows in the positive divergence case.; Comment: arXiv admin note: substantial text overlap with arXiv:1403.7794 ## Physical measures for nonlinear random walks on interval Kleptsyn, Victor; Volk, Denis Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.97% A one-dimensional confined Nonlinear Random Walk is a tuple of $N$ diffeomorphisms of the unit interval driven by a probabilistic Markov chain. For generic such walks, we obtain a geometric characterization of their ergodic stationary measures and prove that all of them have negative Lyapunov exponents. These measures appear to be probabilistic manifestations of physical measures for certain deterministic dynamical systems. These systems are step skew products over transitive subshifts of finite type (topological Markov chains) with the unit interval fiber. For such skew products, we show there exist only finite collection of alternating attractors and repellers; we also give a sharp upper bound for their number. Each of them is a graph of a continuous map from the base to the fiber defined almost everywhere w.r.t. any ergodic Markov measure in the base. The orbits starting between the adjacent attractor and repeller tend to the attractor as $t \to +\infty$, and to the repeller as $t \to -\infty$. The attractors support ergodic hyperbolic physical measures.; Comment: 29 pages. Corrected a few typos and the title. To appear in Moscow Mathematical Journal ## Physical measures at the boundary of hyperbolic maps Araujo, Vitor; Tahzibi, Ali Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.9% We consider diffeomorphisms of a compact manifold with a dominated splitting which is hyperbolic except for a "small" subset of points (Hausdorff dimension smaller than one, e.g. a denumerable subset) and prove the existence of physical measures and their stochastical stability. The physical measures are obtained as zero-noise limits which are shown to satisfy the Entropy Formula.; Comment: 29 pages, 2 figures, 44 references. Minor corrections ## Semicontinuity of entropy, existence of equilibrium states and continuity of physical measures Araujo, Vitor Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.97% We obtain some results of existence and continuity of physical measures through equilibrium states and apply these to non-uniformly expanding transformations on compact manifolds with non-flat critical sets, obtaining sufficient conditions for continuity of physical measures and, for local diffeomorphisms, necessary and sufficient conditions for stochastic stability. In particular we show that, under certain conditions, stochastically robust non-uniform expansion implies existence and continuous variation of physical measures.; Comment: 16 pages - Final version ## On the Structure of Physical Measures in Gauge Theories Fleischhack, Christian Tipo: Artigo de Revista Científica Relevância na Pesquisa 45.84% It is indicated that the definition of physical measures via exponential of minus the action times kinematical measure'' contradicts properties of certain physical models. In particular, theories describing confinement typically cannot be gained this way. The results are rigorous within the Ashtekar approach to gauge field theories.; Comment: 5 pages, LaTeX ## Physical measures for the geodesic flow tangent to a transversally conformal foliation Alvarez, Sébastien; Yang, Jiagang Tipo: Artigo de Revista Científica Relevância na Pesquisa 45.79% We consider a transversally conformal foliation $\mathcal{F}$ of a closed manifold $M$ endowed with a smooth Riemannian metric whose restriction to each leaf is negatively curved. We prove that it satisfies the following dichotomy. Either there is a transverse holonomy-invariant measure for $\mathcal{F}$, or the foliated geodesic flow admits a finite number of physical measures, which have negative transverse Lyapunov exponents and whose basin cover a set full for the Lebesgue measure. We also give necessary and sufficient conditions for the foliated geodesic flow to be partially hyperbolic in the case where the foliation is transverse to a projective circle bundle over a closed hyperbolic surface.; Comment: 31 pages ## Physical measures for partially hyperbolic surface endomorphisms Tsujii, Masato Tipo: Artigo de Revista Científica Português Relevância na Pesquisa 45.79% We consider dynamical systems generated by partially hyperbolic surface endomorphisms of class C^r with one-dimensional strongly unstable subbundle. As the main result, we prove that such a dynamical system generically admits finitely many ergodic physical measures whose union of basins of attraction has total Lebesgue measure, provided that r>= 19.; Comment: 72pages, no figures ## Physical measures of discretizations of generic diffeomorphisms Guihéneuf, Pierre-Antoine What is the ergodic behaviour of numerically computed segments of orbits of a diffeomorphism? In this paper, we try to answer this question for a generic conservative $C^1$-diffeomorphism, and segments of orbits of Baire-generic points. The numerical truncation will be modelled by a spatial discretization. Our main result states that the uniform measures on the computed segments of orbits, starting from a generic point, accumulates on the whole set of measures that are invariant under the diffeomorphism. In particular, unlike what could be expected naively, such numerical experiments do not see the physical measures (or more precisely, cannot distinguish physical measures from the other invariant measures).; Comment: 34 pages. The appendices overlap with arXiv:1510.00723
2020-08-12 03:14:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5521757006645203, "perplexity": 4485.756435215929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738864.9/warc/CC-MAIN-20200812024530-20200812054530-00463.warc.gz"}
http://mathhelpforum.com/math-topics/147617-newtonian-mechanics.html
# Math Help - newtonian mechanics 1. ## newtonian mechanics THe diagram shows particles A and B connected by a light inextensible string that passes over a small smooth pulley C. Pulley C is fixed at the top of a wedge which is fixed on a horizontal plane. Particle A is on the face of the wedge which makes an angle of alpha with the horizontal with the portion of the string AC parallel to the line of the greatest slope of the face. Particle B is on the face of the wedge which makes an angle of beta with the horizontal with the portion of the string BC prarallel to the line of the greatest slope of the face. Weight of each particle A and B is W and the coefficient of friction of each particle with the faces of the wedge is $\mu$ . If $\alpha>\beta$ and particles A and B are about to slide, show that $ \mu=\tan \frac{1}{2}(\alpha-\beta) $ THe diagram shows particles A and B connected by a light inextensible string that passes over a small smooth pulley C. Pulley C is fixed at the top of a wedge which is fixed on a horizontal plane. Particle A is on the face of the wedge which makes an angle of alpha with the horizontal with the portion of the string AC parallel to the line of the greatest slope of the face. Particle B is on the face of the wedge which makes an angle of beta with the horizontal with the portion of the string BC prarallel to the line of the greatest slope of the face. Weight of each particle A and B is W and the coefficient of friction of each particle with the faces of the wedge is $\mu$ . If $\alpha>\beta$ and particles A and B are about to slide, show that $ \mu=\tan \frac{1}{2}(\alpha-\beta) $ since A and B are about to slide, the static friction force is at a maximum, and the system is in equilibrium. forces on A ... $ W\sin{\alpha} - T - \mu W\cos{\alpha} = 0 $ forces on B ... $ T - \mu W\cos{\beta} - W\sin{\beta} = 0 $ combine the equations ... $ W(\sin{\alpha} - \sin{\beta}) - \mu W(\cos{\alpha} + \cos{\beta}) = 0 $ solving for $\mu$ ... $ \mu = \frac{\sin{\alpha} - \sin{\beta}}{\cos{\alpha} + \cos{\beta}} $ I'll leave the identity work for you. 3. Originally Posted by skeeter since A and B are about to slide, the static friction force is at a maximum, and the system is in equilibrium. forces on A ... $ W\sin{\alpha} - T - \mu W\cos{\alpha} = 0 $ forces on B ... $ T - \mu W\cos{\beta} - W\sin{\beta} = 0 $ combine the equations ... $ W(\sin{\alpha} - \sin{\beta}) - \mu W(\cos{\alpha} + \cos{\beta}) = 0 $ solving for $\mu$ ... $ \mu = \frac{\sin{\alpha} - \sin{\beta}}{\cos{\alpha} + \cos{\beta}} $ I'll leave the identity work for you. thanks Skeeter , this is the continuation of the problem. If alpha is 60 degrees and beta is 30 degrees , show that the tension of the string is $(\sqrt{3}-1)W$ so i started by evaluating $\mu=2-\sqrt{3}$ Then calculate the reaction forces on each particles and using $F_A+F_B=0$ i tried to solve the simultaneous equation but ended up cancelling everything . Could you check if i am on the right track ? 4. calculate $\mu$ ... $\mu = \tan\left(\frac{\alpha-\beta}{2}\right)$ $\mu = \tan(15) = 2-\sqrt{3}$ use the equation for forces on A ... $ W\sin{\alpha} - T - \mu W\cos{\alpha} = 0 $ $ T = W\sin(60) - (2-\sqrt{3})W\cos(60) $ $T = W(\sqrt{3}-1)$ or B ... $ T - \mu W\cos{\beta} - W\sin{\beta} = 0 $ $T = W[\mu\cos(30) + \sin(30)]$ $ T = W(\sqrt{3}-1) $
2016-07-26 16:21:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7263819575309753, "perplexity": 251.72915022583945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824995.51/warc/CC-MAIN-20160723071024-00312-ip-10-185-27-174.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1753146/finite-abelian-groups-isomorphic
# Finite abelian groups isomorphic? I know cyclic groups of the same order are always isomorphic, but as far as I'm aware finite abelian groups aren't necessarily cyclic. So is this statement true or false, and why? • The statement is false, for example $C_2$ x$\ C_2$, the Klein-four-group is not isomorphic to $C_4$. – Peter Apr 21 '16 at 18:39 • The statement is true iff the order is square free. – Arthur Apr 21 '16 at 18:42 This is not true at all. Simply compare $\Bbb{Z}_2 \times \Bbb{Z}_2$, a non-cyclic group, to $\Bbb{Z}_4$, a cyclic group to see a counterexample. If you are interested in the structure of finite abelian groups, though, the Fundamental Theorem of Abelian Groups might interest you. Basically, it says that all abelian groups are the direct product of cyclic groups. This means that how the group is structured relies on how the group splits the prime powers up, so if $4$ is split up into $2$ and $2$ like in $\Bbb{Z}_2 \times \Bbb{Z}_2$, then it will be different than if it is just kept as $4$ like in $\Bbb{Z}_4$. However, if all of the primes in the prime factorization of the order are to the power of one like in $30=2 \cdot 3 \cdot 5$, then the group has to split into cyclic groups of those primes like $\Bbb{Z}_2 \times \Bbb{Z}_3 \times \Bbb{Z}_5$, so all groups of order $30$ are isomorphic. The same goes for $21=3*7$ or $35=7*5$. Numbers that are product of distinct primes like this are called square-free because they have no perfect squares as divisors. There are already two different abelian groups of order $4$, namely the cyclic group $C_4$, and the non-cyclic group $C_2\times C_2$. So it is not true. In fact, it gets much worse for bigger (squarefree) group orders: if $n=\prod_{i=1}^rp_i^{k_i}$, then the number of distinct abelian groups of order $n$ is given by $$\prod_{i=1}^rp(k_i),$$ where $p(k)$ denotes the number of partitions of $k$. So, for example, there are $1$ million different abelian groups of order $49,659,789,817,537,838,957,341,175,342,490,000$, see here.
2019-11-19 10:03:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307095170021057, "perplexity": 103.14755012877522}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670135.29/warc/CC-MAIN-20191119093744-20191119121744-00098.warc.gz"}
https://mathematica.stackexchange.com/questions/87969/partial-derivatives-of-an-implicit-equation
# Partial derivatives of an implicit equation Looking for an easy way to find partial derivatives of an implicit equation. For example: Find $\partial z/\partial x$ and $\partial z/\partial y$ if $z$ is defined implicitly as a function of $x$ and $y$ by the equation $$x^3+y^3+z^3+6xyz=1$$ Alternate Approach: Clear[x, y, z]; eqn = x^3 + y^3 + z[x, y]^3 + 6 x y z[x, y] == 1; Solve[D[eqn, y], D[z[x, y], y]] /. z[x, y] -> z • Find another approach, but I still like all of your approaches better. See my original post. Jul 11 '15 at 19:24 • Actually, that approach is quite nice; you now have an explicit reminder that z is the dependent variable. Jul 11 '15 at 19:31 eqn = x^3 + y^3 + z^3 + 6 x y z - 1 == 0 Solve[Dt[eqn, x], Dt[z, x]] /. Dt[y, x] -> 0 Solve[Dt[eqn, y], Dt[z, y]] /. Dt[x, y] -> 0 If y is function of x, just do: Solve[Dt[eqn, x], Dt[z, x]] Solve[Dt[eqn, y], Dt[z, y]] $$\frac{\partial z}{\partial x} = \frac{-x^2-y^2 \frac{\partial y}{\partial x}-2 x z \frac{\partial y}{\partial x}-2 y z}{2 x y+z^2}$$ $$\frac{\partial z}{\partial y} =\frac{x^2 \left(-\frac{\partial \ x}{\partial y}\right)-2 y z \frac{\partial x}{\partial y}-2 x \ z-y^2}{2 x y+z^2}$$ • How is this different from the solution I posted? Jul 11 '15 at 10:20 • You set Dt[y,x] to zero, and Dt[z, y] is missing Jul 11 '15 at 10:34 • Yes, and we are taking partial derivatives, no? We are, per OP, considering $z$ as a function of $x$ and $y$. Jul 11 '15 at 10:44 • Also, I left something for the OP to do by himself, so I left out the derivative with respect to the other variable. Jul 11 '15 at 10:45 • @EnriquePérezHerrero. I like all of the answers I see on this page. I think this particular arrangement is a little easier to understand for students just beginning to use Mathematica. But I want to thank everyone for their help. Jul 11 '15 at 17:58 Try this: Solve[0 == Dt[x^3 + y^3 + z^3 + 6 x y z - 1, x] /. Dt[y, x] -> 0, Dt[z, x]][[1, 1]] Dt[z, x] -> (-x^2 - 2 y z)/(2 x y + z^2) The procedure is similar for the other independent variable. • I like your solution. I do not understand now why I do not get the same answer as your solution. So I assume I am doing something wrong by solving for z and then taking derivative. So will delete my answer since I need to check why it is different. Jul 11 '15 at 7:29 • @Nasser, that approach ought to generate something correct, albeit complicated. I'll look at it myself later. Jul 11 '15 at 7:59 • They are not the same. Since in your method, you set Dt[y,x] to zero as you assumed that y is not function of x. There is no corresponding action done when one just solves for z and then take derivative of the result w.r.t x. That is why the expression I got was much more complicated. I assume your method is what one should do, as the result you obtain is much simpler ;) Jul 11 '15 at 8:11 • Well, if you use D[], anything that doesn't involve the target variable becomes zero, so it is supposed to be equivalent. Jul 11 '15 at 10:15 Application of the implicit function theorem in this case shows that $\frac{\partial z}{\partial x}=-\frac{\partial f}{\partial x}/\frac{\partial f}{\partial z}$ and $\frac{\partial z}{\partial y}=-\frac{\partial f}{\partial y}/\frac{\partial f}{\partial z}$, where $f(x,y,z)=0$ is the implicit function. f = x^3 + y^3 + z^3 + 6 x y z - 1; -D[f, {{x, y}}]/D[f, z] // Simplify {-((x^2 + 2 y z)/(2 x y + z^2)), -((y^2 + 2 x z)/(2 x y + z^2))} • This will come in handy once we do the chain rule and I can explain the implicit function theorem. Thanks. Jul 11 '15 at 18:00
2021-10-18 18:05:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7250053882598877, "perplexity": 604.8717785241341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00222.warc.gz"}
https://tex.stackexchange.com/questions/364646/how-do-you-create-a-keyhole-contour-not-centered-at-the-origin
# How do you create a keyhole contour not centered at the origin? I've been trying to create a keyhole contour where the keyhole is a tiny circle in say, the first quadrant, and the main circle is still centered at the origin. I'm specifically not trying to create a keyhole contour where the keyhole is at the origin, and can't seem to find how to do that. I've been playing around with it for a while, but can't make anything work. Does anybody have any tips? Thanks! Edit: Here's what I'm trying to create: • Welcome to TeX.SX! Please could you share what you have done so far in the form of a minimal working example (MWE). This makes it easier both for us to understand what your issue is and provide solutions/suggestions. – Dai Bowen Apr 13 '17 at 20:42 • Show us what you have been playing around! – Symbol 1 Apr 13 '17 at 20:44 • Show us also what you wanted to achieve (a photo of an hand-written drawing could be enough), please! – CarLaTeX Apr 13 '17 at 20:47 • @CarLaTeX I added a photo of what I'm going for (a poorly drawn one, I might add). I can't figure out how to make the keyhole look like that though, for every guide I've seen for drawing keyhole contours makes has the keyhole at the origin. – David Bowman Apr 14 '17 at 0:52 • As you can see, Zarko has already answered. If his answer meets your needs, remember to upvote and accept it. Otherwise, please add a minimal working example to your question. Thank you! – CarLaTeX Apr 14 '17 at 2:47 Like this: In image is considered $sin{alpha} ~ \alpha$ (at small \alpha) and ratio between angles of hole in locus is about 8: \documentclass[tikz, border=3mm, ]{standalone} \usetikzlibrary{arrows.meta, decorations.markings} \begin{document} \begin{tikzpicture}[>=Straight Barb, decoration={markings,% switch on markings mark=between positions 0.2 and 0.8 step 0.25 with {\arrow[thick]{>}}, mark=at position 0.86 with {\arrow[thick]{>}}, mark=at position 0.98 with {\arrow[thick]{>}} }] % curve \draw[fill=gray!10, postaction={decorate}] (47.5:2) arc (47.5:360:2 ) arc (0:42.5:2) -- + (225:0.5) arc (25 :0 :0.25) arc(360:65:0.25) -- cycle; % singularity \fill[red] (45:1.25) circle (1pt) node[inner sep=2pt,font=\footnotesize,right] {$a$}; \draw[thick, ->] (0,0) -- node[sloped,above] {$r$} (330:2); \draw[->] (-3,0) -- (3,0) node[right] {$\Re$}; \draw[->] (0,-3) -- (0,3) node[above] {$\Im$}; • @DavidBowman, you didn't provide anything about what you try so far nor about your document. So, with not knowing preamble of your document and \documentclass{...} I can't say anything what is going wrong at you :(. Image size is 4cm x 4cm ... – Zarko Apr 14 '17 at 3:01 • @DavidBowman Maybe, x and y are set to non-default values. Then, it helps to reset them: \begin{tikzpicture}[x=1cm, y=1cm]. – Heiko Oberdiek Apr 14 '17 at 4:15
2019-06-26 06:03:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5621877312660217, "perplexity": 1276.1638279936296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000175.78/warc/CC-MAIN-20190626053719-20190626075719-00262.warc.gz"}
https://physmath.spbstu.ru/en/article/2020.48.11/
# Production of K*-mesons in the copper-gold nuclei collisionsat √(s_NN) = 200 GeV Authors: Abstract: This paper presents invariant transverse momentum spectra and nuclear modification factors of K*(892)-mesons measured in the Cu + Au collisions at √(s_NN ) = 200 GeV. The measurements were performed in five centrality bins in the range of transverse momentum from 2.00 to 5.75 GeV/c in the PHENIX experiment at the RHIC. Nuclear modification factors were compared with previously obtained PHENIX data in Cu + Cu collisions at √(s_NN ) = 200 GeV. The nuclear modification factors of K*-mesons in Cu + Cu and Cu + Au collisions at the same values of a number of participants Npartwere found to have similar values (within uncertainties).
2021-01-19 09:12:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087976813316345, "perplexity": 4756.44929423402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00137.warc.gz"}
http://www.ebooklibrary.org/eBooks/WPLBN0003996022-Cloud-Condensation-Nuclei-Ccn-from-Fresh-and-Aged-Air-Pollution-in-the-Megacity-Region-of-Bei-by-Gunthe-S-S-.aspx
#jsDisabledContent { display:none; } My Account | Register | Help Add to Book Shelf Flag as Inappropriate This book will be permanently flagged as inappropriate and made unaccessible to everyone. Are you certain this book is inappropriate?          Excessive Violence          Sexual Content          Political / Social Email this Book Email Address: # Cloud Condensation Nuclei (Ccn) from Fresh and Aged Air Pollution in the Megacity Region of Beijing : Volume 11, Issue 3 (25/03/2011) ## By Gunthe, S. S. Book Id: WPLBN0003996022 File Size: Pages 39 Reproduction Date: 2015 Title: Cloud Condensation Nuclei (Ccn) from Fresh and Aged Air Pollution in the Megacity Region of Beijing : Volume 11, Issue 3 (25/03/2011) Author: Gunthe, S. S. Volume: Vol. 11, Issue 3 Language: English Subject: Collections: Historic Publication Date: 2011 Publisher: Copernicus Gmbh, Göttingen, Germany Member Page: Copernicus Publications Citation APA MLA Chicago Rose, D., Achtert, P., Zhu, T., Nowak, A., Pöschl, U., Kondo, Y.,...Shao, M. (2011). Cloud Condensation Nuclei (Ccn) from Fresh and Aged Air Pollution in the Megacity Region of Beijing : Volume 11, Issue 3 (25/03/2011). Retrieved from http://www.ebooklibrary.org/ Description Description: Biogeochemistry Department, Max Planck Institute for Chemistry, Mainz, Germany. Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. CCN properties were measured and characterized during the CAREBeijing-2006 campaign at a regional site south of the megacity of Beijing, China. Size-resolved CCN efficiency spectra recorded for a supersaturation range of S = 0.07% to 0.86% yielded average activation diameters in the range of 190 nm to 45 nm. The corresponding effective hygroscopicity parameters (Κ) exhibited a strong size dependence ranging from ~0.25 in the Aitken size range to ~0.45 in the accumulation size range. The campaign average value (Κ = 0.3 ± 0.1) was similar to the values observed and modeled for other populated continental regions. The hygroscopicity parameters derived from the CCN measurements were consistent with chemical composition data recorded by an aerosol mass spectrometer (AMS) and thermo-optical measurements of apparent elemental and organic carbon (ECa and OC). The CCN hygroscopicity and its size dependence could be parameterized as a function of AMS based organic and inorganic mass fractions using the simple mixing rule Κ p ≈ 0.1 · forg + 0.7 · finorg. When the measured air masses originated from the north and passed rapidly over the center of Beijing (fresh city pollution), the average particle hygroscopicity was reduced (Κ = 0.2 ± 0.1), which is consistent with enhanced mass fractions of organic compounds (~50%) and ECa (~30%) in the fine particulate matter (PM1). Moreover, substantial fractions of externally mixed weakly CCN-active particles were observed at low supersaturation (S = 0.07%), which can be explained by the presence of freshly emitted soot particles with very low hygroscopicity (Κ<0.1). Particles in stagnant air from the industrialized region south of Beijing (aged regional pollution) were on average larger and more hygroscopic, which is consistent with enhanced mass fractions (~60%) of soluble inorganic ions (mostly sulfate, ammonium, and nitrate). Accordingly, the number concentration of CCN in aged air from the megacity region was higher than in fresh city outflow ((2.5–9.9) × 103 cm−3 vs. (0.4–8.3) × 103 cm−3) although the total aerosol particle number concentration was lower (1.2 × 104 cm−3 vs. 2.3 × 104 cm−3). A comparison with related studies suggests that the fresh outflow from Chinese urban centers generally may contain more, but smaller and less hygroscopic aerosol particles and thus fewer CCN than the aged outflow from megacity regions. Summary Cloud condensation nuclei (CCN) from fresh and aged air pollution in the megacity region of Beijing Excerpt Achtert, P., Birmili, W., Nowak, A., Wehner, B., Wiedensohler, A., Takegawa, N., Kondo, Y., Miyazaki, Y., Hu, M., and Zhu, T.: Hygroscopic growth of tropospheric particle number size distributions over the North China Plain, J. Geophys. Res., 114, D00G07, doi:10.1029/2008JD010921, 2009.; Andreae, M. O., Jones, C. D., and Cox, P. M.: Strong present-day aerosol cooling implies a hot future, Nature, 435, 1187–1190, 2005.; Andreae, M. O.: Correlation between cloud condensation nuclei concentration and aerosol optical thickness in remote and polluted regions, Atmos. Chem. Phys., 9, 543–556, doi:10.5194/acp-9-543-2009, 2009.; Andreae, M. O. and Rosenfeld, D.: Aerosol-cloud-precipitation interactions. Part 1. The nature and sources of cloud-active aerosols, Earth-Sci. Rev., 89, 13–41, 2008.; Andreae, M. O. and Gelencsér, A.: Black carbon or brown carbon? The nature of light-absorbing carbonaceous aerosols, Atmos. Chem. Phys., 6, 3131–3148, doi:10.5194/acp-6-3131-2006, 2006.; Birmili, W., Stratmann, F., and Wiedensohler, A.: Design of a DMA-based size spectrometer for a large particle size range and stable operation, J. Aerosol Sci., 30(4), 549–553, 1999.; Chang, R. Y.-W., Slowik, J. G., Shantz, N. C., Vlasenko, A., Liggio, J., Sjostedt, S. J., Leaitch, W. R., and Abbatt, J. P. D.: The hygroscopicity parameter ($\kappa$) of ambient organic aerosol at a field site subject to biogenic and anthropogenic influences: relationship to degree of aerosol oxidation, Atmos. Chem. Phys., 10, 5047–5064, doi:10.5194/acp-10-5047-2010, 2010.; Chen, D. S., Cheng, S. Y., Liu, L., Chen, T., and Guo, X. R.: An integrated MM5-CMAQ modeling approach for assessing trans-boundary PM10 contribution to the host city of 2008 Olympic summer games – Beijing, China, Atmos. Environ., 41, 6, 1237-1250, doi:10.1016/j.atmosenv.2006.09.045, 2007.; Cheng, Y. F., Berghof, M., Garland, R. M., Wiedensohler, A., Wehner, B., Müller, T., Su, H., Zhang, Y. H., Achtert, P., Nowak, A., Pöschl, U., Zhu, T., Hu, M., and Zeng, L. M.: Influence of soot mixing state on aerosol light absorption and single scattering albedo during air mass aging at a polluted regional site in Northeastern China, J. Geophys. Res., 114, D00G10, doi:10.1029/2008JD010883, 2009.; DeCarlo, P. F., Slowik, J. G., Worsnop, D. R., Davidovits, P., and Jimenez, J. L.: Particle morphology and density characterization by combined mobility and aerodynamic diameter measurements. Part 1: Theory, Aerosol Sci. Tech., 38(12), 1185–1205, 2004.; Deng, Z. Z., Zhao, C. S., Ma, N., Liu, P. F., Ran, L., Xu, W. Y., Chen, J., Liang, Z., Liang, S., Huang, M. Y., Ma, X. C., Zhang, Q., Quan, J. N., Yan, P., Henning, S., Mildenberger, K., Sommerhage, E., Schäfer, M., Stratmann, F., and Wiedensohler, A.: Size- Modeling of Daytime Hono Vertical Gradie... (by Wong, K. W.) Characterization of the Size-segregated ... (by Zhang, L.) Oh Initiated Heterogeneous Oxidation of ... (by Liu, Y.) Heterogeneous Uptake of the C1 to C4 Org... (by Hatch, C. D.) Spatial and Seasonal Variations of Fine ... (by Zhang, X.) Direct and Disequilibrium Effects on Pre... (by McInerney, D.) The Distribution and Trends of Fog and H... (by Fu, G. Q.) Up/Down Trend in the Modis Aerosol Optic... (by Itahashi, S.) The Aerosol-climate Model Echam5-ham : V... (by Stier, P.) Precipitation of Salts in Freezing Seawa... (by Morin, S.) Cloud-resolving Simulations of Mercury S... (by Nair, U. S.) Physical Properties and Concentration of... (by Guyon, P.)
2019-09-24 08:58:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.312084436416626, "perplexity": 12100.130813305235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00177.warc.gz"}
https://docs.aviumtechnologies.com/user-guide/output-files
# Output files The APM Solver and Preprocessor can output files in .csv, .vtu, and .dat file formats. The .csv files can be opened with any text editor or data processing software such as MATLAB or RStudio. The .vtu files are specific to ParaView. ParaView is an open-source, multi-platform data analysis and visualization application which can be downloaded from https://www.paraview.org/download/. The .dat files are specific to Tecplot. Tecplot is the name of a family of visualization & analysis software tools developed by Tecplot, Inc. To enable or disable ParaView or Tecplot output change the `paraview_output` and `tecplot_output` options in the .conf file. ## Output file descriptions File Description _body_panels.vtu_body_panels.dat Files in .vtu, or .dat file formats which contain the source strengths, doublet strengths, reference velocities, relative velocities and the pressure coefficient of the body elements. Created by the APM Solver. _wake_panels.vtu_wake_panels.dat Files in .vtu, or .dat file formats which contain the doublet strengths of the wake elements. Created by the APM Solver. .loads File in space delimited file format which contains the integrated loads of the body. Created by the APM Solver. .mdl File in space delimited file format which contains the element types, vertices, and neighbours. Created by the APM preprocessor and used by the APM Solver. .checkpoint File in binary file format which contains the latest solution. Created by the APM Solver. .json File in .json file format which can be opened by the APM Viewer. The file contains the mesh, the body and wake panels as well as the integrated loads. Created by the APM Preprocessor/Solver. The structure of an example .loads file is shown below: Timestep Time CFx CFy CFz CMx CMy CMz CL CD CC 0 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 1 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 2 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 3 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 ...
2023-02-09 12:31:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842671751976013, "perplexity": 2918.829980115497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00007.warc.gz"}
https://www.physicsforums.com/threads/trouble-with-waves-1d.681006/
# Trouble with waves 1D 1. Mar 26, 2013 ### Atheel 1. A given string length L, connected at one end while the other end is free. Assume that the string moves in one dimensional, ie, the amplitude can be described by a function of the shape of his y (x, t). What is the average kinetic energy (as a string), assuming it moves with the lowest frequency possible? Given that when the string is perfectly horizontal its tension = To and amplitude motion is A. T0 = 2.76 [gram*cm/sec2] L = 5.25 [cm] A = 5.86 [cm] 2. <sin2(at)>=<cos2(at)>=1/2. (average) 3. With boundary conditions find the frequency of the wave vector, and then calculate the kinetic energy 1. The problem statement, all variables and given/known data 2. Mar 26, 2013 ### Simon Bridge Welcome to PF; I see three questions there - how have you attempted to answer them? 3. Mar 26, 2013 ### Atheel hey its only one question not three... I have tried to find the frequency using: y(0,t)=0 y=A*cos(kx+wt) but then I dont know how to find the σ to use: Ek= σ*(A^2)*(ω^2)/2T * ∫cos^2(kx+wt) thnx 4. Mar 26, 2013 ### Simon Bridge Oh I see, the numbers refer to the sections in the standard question template... so "3" is your attempt. The question refers to a standing wave on a string length L, with only one end fixed. The lowest frequency standing wave has a special name - what is it called? You should be able to work out the shape of that wave and, thus, the wavelength, and from that and the wave speed, you get the frequency. 5. Mar 26, 2013 ### Atheel I think it called the fundamental. the shape of the wave is y=Acos(kx-wt) (asuming that the left edge of the string is stable). then I know that y(0,t)=0. the wave length is 2*L ? but how do I use the tension... 6. Mar 26, 2013 ### Simon Bridge By that equation, y(0,t)=Acos(wt) ... i.e. the x=0 end is not fixed, totally contradicting what you write below. Also according to that equation, the amplitude is the same for any value of x - does that make sense? Wavelength would be 2L if the string were fixed at both ends. Is it? The tension and the linear mass density are used to calculate the wave speed. That's the next step after getting the wavelength. 7. Mar 26, 2013 ### Atheel I think I should use: y(x,t)= A*e^i(kx+wt) + B*e^-i(kx+wt) nd then y(0,t)=0 should I also try dy/dt =0 ? its all missed up Im tring to solve this question more than two days.. the problem is I dont have material to stdy about this type of questions... do u have some examples or some good material that I should read befor trying solving this? thnx alot! 8. Mar 26, 2013 ### Atheel 1. The problem statement, all variables and given/known data A given string length L, connected at one end while the other end is free. Assume that the string moves in one dimensional, ie, the amplitude can be described by a function of the shape of his y (x, t). What is the average kinetic energy (as a string), assuming it moves with the lowest frequency possible? Given that when the string is perfectly horizontal its tension = To and amplitude motion is A. T0 = 2.76 [gram*cm/sec2] L = 5.25 [cm] A = 5.86 [cm] 2. Relevant equations <sin2(at)>=<cos2(at)>=1/2. (average) 3. The attempt at a solution With boundary conditions I should find the frequency of the wave vector, and then calculate the kinetic energy.. right? but how! "I should be able to work out the shape of that wave and, thus, the wavelength, and from that and the wave speed, you get the frequency." 9. Mar 26, 2013 ### Simon Bridge Good grief, whatever for?! I guess you need a primer for standing waves ... putting standing waves into a search engine should get you any number. A string fixed at both ends, the waveform has a node at each end and an antinode in the middle. $\lambda=2L$ http://hyperphysics.phy-astr.gsu.edu/hbase/waves/string.html A string fixed only at one end has a node at that end and an antinode at the other end. The math is the same as the link above, but the wavelength is longer. You should be able to see by sketching it out. 10. Mar 26, 2013 ### MisterX I think you need the linear density of the string to get the wave speed. 11. Mar 26, 2013 ### Atheel oh so in my case wavelength= 4*L !! right?!! 12. Mar 26, 2013 ### Simon Bridge 13. Mar 26, 2013 ### Simon Bridge Well done! The same webpage tells you how to get the wave speed too. From the wave speed and the wave length you can get the frequency, period, and angular frequency. Average kinetic energy is a bit trickier. 14. Mar 26, 2013 ### Staff: Mentor 15. Mar 26, 2013 ### Atheel ok now I have a problem, I think the wave speed is the same as the two sides fixed string... if Im right, then I need the density to work it out but I dont have it!! 16. Mar 26, 2013 ### Simon Bridge Yeah - no density ... but you have a better picture of what is going on than before. The equation would be: $y(x,t)=A\sin(\pi x/2L)\cos(\omega t)$ - but you don't know $\omega$. You need extra information to get it - check that what you have supplied in post #1 is everything. 17. Mar 26, 2013 ### Atheel I think I should use this: "Given that when the string is perfectly horizontal its tension = To and amplitude motion is A." ?? 18. Mar 27, 2013 ### Atheel The prof. says: "Mu is is the mass per unit volume , if you look at what;s given in the question you can see that you can make it up from the given data as well" can u give me a hint? 19. Mar 27, 2013 ### Simon Bridge OK "mass per unit volume" is the density. Do you know the volume of the string? (You have the length - you need the diameter or the radius or the cross-sectional area.) (note: this is stuff that is not in post #1 - please make sure that all the information you are supplied is present.) If you can get the area, then you have the mass per unit length. From there you can find the wave-speed ... and then the frequency.
2017-10-18 17:21:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6651325821876526, "perplexity": 1050.3494027214015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823016.53/warc/CC-MAIN-20171018161655-20171018181655-00205.warc.gz"}
https://www.physicsforums.com/threads/temperature-conversions.723310/
# Temperature Conversions Gold Member ## Homework Statement On a new Jekyll temperature scale, water freezes at 17 degree(s) J and boils at 97 degree(s) J. On another new temperature scale, the Hyde scale, water freezes at 0 degree(s) oH and boils at 120 degree(s) oH. If methyl alcohol boils at 84 oH, what is its boiling point on the Jekyll scale? ## Homework Equations Jekyll has a FP 17 degrees higher than the Hyde scale. ## The Attempt at a Solution Alright. Freezing point on Jekyll is 17 degrees higher than it is on Hyde. Therefore, any formula relating J and H must have the general form of: J(H) = H + 17, where H = 0. The general formula is: J(H) = H[(97-17)/120] + 17 Checking my work: H = 0, J = 17. Correct. H = 120. J = 97. Correct. What are some good ways of tackling this problem? Why does subtracting two points on the Jekyll scale divided by subtracting the two points on the Hyde scale multiplied by H work? DrClaude Mentor The best way is to tackle this mathematically. You are looking for a formula of the form $$J(H) = a H + b$$ (i.e., assuming that the two scales are linear). You then have $$J(0) = 17 = a \times 0 +b$$ and $$J(120) = 97 = a \times 120 + b$$ or, in other words, two equations with two unknowns. The first is easily solved for ##b = 17##, which allows you to solve the second for ##a##.
2021-06-13 20:14:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7137458920478821, "perplexity": 2230.500204440812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00155.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2017098
# American Institute of Mathematical Sciences • Previous Article Risk-minimizing pricing and Esscher transform in a general non-Markovian regime-switching jump-diffusion model • DCDS-B Home • This Issue • Next Article Backward compact attractors for non-autonomous Benjamin-Bona-Mahony equations on unbounded channels September  2017, 22(7): 2587-2594. doi: 10.3934/dcdsb.2017098 ## On some difference equations with exponential nonlinearity Department of Civil Engineering, University of Patras, 26500 Patras, Greece In memory of Professor Evangelos K. Ifantis Received  July 2016 Revised  December 2016 Published  March 2017 The problem of the existence of complex $\ell_1$ solutions of two difference equations with exponential nonlinearity is studied, one of which is nonautonomous. As a consequence, several information are obtained regarding the asymptotic stability of their equilibrium points, as well as the corresponding generating function and $z-$ transform of their solutions. The results, which are obtained using a general theorem based on a functional-analytic technique, provide also a rough estimate of the region of attraction of each equilibrium point for the autonomous case. When restricted to real solutions, the results are compared with other recently published results. Citation: Eugenia N. Petropoulou. On some difference equations with exponential nonlinearity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2587-2594. doi: 10.3934/dcdsb.2017098 ##### References: [1] A. S. Ackleh and P. L. Salceanu, Competitive exclusion and coexistence in an $n-$species Ricker model, J. Biol. Dynamics, 9 (2015), 321-331.  doi: 10.1080/17513758.2015.1020576.  Google Scholar [2] D. Aruğaslan and L. Güzel, Stability of the logistic population model with generalized piecewise constant delays, Adv. Difference Equations, 2015 (2015). Google Scholar [3] I. Györi and L. Horváth, A new view of the $\ell^p$ -theory for a system of higher order difference equations, Comput. Math. Appl., 59 (2010), 2918-2932.  doi: 10.1016/j.camwa.2010.02.010.  Google Scholar [4] I. Györi and L. Horváth, $\ell^p$ -solutions and stability analysis of difference equations using the Kummer's test, Appl. Math. Comput., 217 (2011), 10129-10145.  doi: 10.1016/j.amc.2011.05.008.  Google Scholar [5] T. Hüls and C. Pötzsche, Qualitative analysis of a nonautonomous Beverton-Holt Ricker model, SIAM J. Appl. Dyn. Syst., 13 (2014), 1442-1488.  doi: 10.1137/140955434.  Google Scholar [6] E. K. Ifantis, On the convergence of power series whose coefficients satisfy a Poincaré-type linear and nonlinear difference equation, Complex Variables Theory Appl., 9 (1987), 63-80.  doi: 10.1080/17476938708814250.  Google Scholar [7] Y. Kang and H. Smith, Global dynamics of a discrete two-species Lottery-Ricker competition model, J. Biol. Dynamics, 6 (2012), 358-376.  doi: 10.1080/17513758.2011.586064.  Google Scholar [8] R. May, Simple mathematical models with very complicated dynamics, Nature, 261 (1976), 459-467.  doi: 10.1007/978-0-387-21830-4_7.  Google Scholar [9] G. Papaschinopoulos, N. Fotiades and C. J. Schinas, On a system of difference equations including negative exponential terms, J. Differ. Equations Appl., 20 (2014), 717-732.  doi: 10.1080/10236198.2013.814647.  Google Scholar [10] G. Papaschinopoulos, M. A. Radin and C. J. Schinas, On the system of two difference equations of exponential form: $x_{n+1}=a+bx_{n-1}e^{-y_{n}}$, $y_{n+1}=c+dy_{n-1}e^{-x_{n}}$, Math. Comp. Mod., 54 (2011), 2969-2977.  doi: 10.1016/j.mcm.2011.07.019.  Google Scholar [11] E. N. Petropoulou and P. D. Siafarikas, Functional analysis and partial difference equations, in Some Recent Advances in Partial Difference Equations (ed. E. N. Petropoulou), Bentham eBooks (2010), 49–76. Google Scholar [12] W. E. Ricker, Stock and recruitmnet, J. Fish. Res. Board Canada, 11 (1954), 559-623.   Google Scholar [13] G. Stefanidou, G. Papaschinopoulos and C. J. Schinas, On a system of two exponential type difference equations, Comm. Appl. Nonlinear Anal., 17 (2010), 1-13.   Google Scholar [14] S. Stevic, On a discrete epidemic model Discrete Dynam. Nat. Soc. , 2007 (2007), Article ID 87519, 10pp. doi: 10.1155/2007/87519.  Google Scholar show all references ##### References: [1] A. S. Ackleh and P. L. Salceanu, Competitive exclusion and coexistence in an $n-$species Ricker model, J. Biol. Dynamics, 9 (2015), 321-331.  doi: 10.1080/17513758.2015.1020576.  Google Scholar [2] D. Aruğaslan and L. Güzel, Stability of the logistic population model with generalized piecewise constant delays, Adv. Difference Equations, 2015 (2015). Google Scholar [3] I. Györi and L. Horváth, A new view of the $\ell^p$ -theory for a system of higher order difference equations, Comput. Math. Appl., 59 (2010), 2918-2932.  doi: 10.1016/j.camwa.2010.02.010.  Google Scholar [4] I. Györi and L. Horváth, $\ell^p$ -solutions and stability analysis of difference equations using the Kummer's test, Appl. Math. Comput., 217 (2011), 10129-10145.  doi: 10.1016/j.amc.2011.05.008.  Google Scholar [5] T. Hüls and C. Pötzsche, Qualitative analysis of a nonautonomous Beverton-Holt Ricker model, SIAM J. Appl. Dyn. Syst., 13 (2014), 1442-1488.  doi: 10.1137/140955434.  Google Scholar [6] E. K. Ifantis, On the convergence of power series whose coefficients satisfy a Poincaré-type linear and nonlinear difference equation, Complex Variables Theory Appl., 9 (1987), 63-80.  doi: 10.1080/17476938708814250.  Google Scholar [7] Y. Kang and H. Smith, Global dynamics of a discrete two-species Lottery-Ricker competition model, J. Biol. Dynamics, 6 (2012), 358-376.  doi: 10.1080/17513758.2011.586064.  Google Scholar [8] R. May, Simple mathematical models with very complicated dynamics, Nature, 261 (1976), 459-467.  doi: 10.1007/978-0-387-21830-4_7.  Google Scholar [9] G. Papaschinopoulos, N. Fotiades and C. J. Schinas, On a system of difference equations including negative exponential terms, J. Differ. Equations Appl., 20 (2014), 717-732.  doi: 10.1080/10236198.2013.814647.  Google Scholar [10] G. Papaschinopoulos, M. A. Radin and C. J. Schinas, On the system of two difference equations of exponential form: $x_{n+1}=a+bx_{n-1}e^{-y_{n}}$, $y_{n+1}=c+dy_{n-1}e^{-x_{n}}$, Math. Comp. Mod., 54 (2011), 2969-2977.  doi: 10.1016/j.mcm.2011.07.019.  Google Scholar [11] E. N. Petropoulou and P. D. Siafarikas, Functional analysis and partial difference equations, in Some Recent Advances in Partial Difference Equations (ed. E. N. Petropoulou), Bentham eBooks (2010), 49–76. Google Scholar [12] W. E. Ricker, Stock and recruitmnet, J. Fish. Res. Board Canada, 11 (1954), 559-623.   Google Scholar [13] G. Stefanidou, G. Papaschinopoulos and C. J. Schinas, On a system of two exponential type difference equations, Comm. Appl. Nonlinear Anal., 17 (2010), 1-13.   Google Scholar [14] S. Stevic, On a discrete epidemic model Discrete Dynam. Nat. Soc. , 2007 (2007), Article ID 87519, 10pp. doi: 10.1155/2007/87519.  Google Scholar [1] Xiaorui Wang, Genqi Xu, Hao Chen. Uniform stabilization of 1-D Schrödinger equation with internal difference-type control. Discrete & Continuous Dynamical Systems - B, 2021  doi: 10.3934/dcdsb.2021022 [2] Julian Tugaut. Captivity of the solution to the granular media equation. Kinetic & Related Models, , () : -. doi: 10.3934/krm.2021002 [3] Yukihiko Nakata. Existence of a period two solution of a delay differential equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1103-1110. doi: 10.3934/dcdss.2020392 [4] Xuefei He, Kun Wang, Liwei Xu. Efficient finite difference methods for the nonlinear Helmholtz equation in Kerr medium. Electronic Research Archive, 2020, 28 (4) : 1503-1528. doi: 10.3934/era.2020079 [5] Álvaro Castañeda, Pablo González, Gonzalo Robledo. Topological Equivalence of nonautonomous difference equations with a family of dichotomies on the half line. Communications on Pure & Applied Analysis, 2021, 20 (2) : 511-532. doi: 10.3934/cpaa.2020278 [6] Reza Chaharpashlou, Abdon Atangana, Reza Saadati. On the fuzzy stability results for fractional stochastic Volterra integral equation. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020432 [7] Patrick Martinez, Judith Vancostenoble. Lipschitz stability for the growth rate coefficients in a nonlinear Fisher-KPP equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 695-721. doi: 10.3934/dcdss.2020362 [8] Ludovick Gagnon, José M. Urquiza. Uniform boundary observability with Legendre-Galerkin formulations of the 1-D wave equation. Evolution Equations & Control Theory, 2021, 10 (1) : 129-153. doi: 10.3934/eect.2020054 [9] Sihem Guerarra. Maximum and minimum ranks and inertias of the Hermitian parts of the least rank solution of the matrix equation AXB = C. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 75-86. doi: 10.3934/naco.2020016 [10] Justin Holmer, Chang Liu. Blow-up for the 1D nonlinear Schrödinger equation with point nonlinearity II: Supercritical blow-up profiles. Communications on Pure & Applied Analysis, 2021, 20 (1) : 215-242. doi: 10.3934/cpaa.2020264 [11] Mokhtari Yacine. Boundary controllability and boundary time-varying feedback stabilization of the 1D wave equation in non-cylindrical domains. Evolution Equations & Control Theory, 2021  doi: 10.3934/eect.2021004 [12] Xin-Guang Yang, Lu Li, Xingjie Yan, Ling Ding. The structure and stability of pullback attractors for 3D Brinkman-Forchheimer equation with delay. Electronic Research Archive, 2020, 28 (4) : 1395-1418. doi: 10.3934/era.2020074 [13] Oussama Landoulsi. Construction of a solitary wave solution of the nonlinear focusing schrödinger equation outside a strictly convex obstacle in the $L^2$-supercritical case. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 701-746. doi: 10.3934/dcds.2020298 [14] Marc Homs-Dones. A generalization of the Babbage functional equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 899-919. doi: 10.3934/dcds.2020303 [15] Bilel Elbetch, Tounsia Benzekri, Daniel Massart, Tewfik Sari. The multi-patch logistic equation. Discrete & Continuous Dynamical Systems - B, 2021  doi: 10.3934/dcdsb.2021025 [16] Peter Poláčik, Pavol Quittner. Entire and ancient solutions of a supercritical semilinear heat equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 413-438. doi: 10.3934/dcds.2020136 [17] Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020345 [18] Stefano Bianchini, Paolo Bonicatto. Forward untangling and applications to the uniqueness problem for the continuity equation. Discrete & Continuous Dynamical Systems - A, 2020  doi: 10.3934/dcds.2020384 [19] Anh Tuan Duong, Phuong Le, Nhu Thang Nguyen. Symmetry and nonexistence results for a fractional Choquard equation with weights. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 489-505. doi: 10.3934/dcds.2020265 [20] Maicon Sônego. Stable transition layers in an unbalanced bistable equation. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020370 2019 Impact Factor: 1.27
2021-01-23 08:52:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6866888999938965, "perplexity": 6572.709575513145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703536556.58/warc/CC-MAIN-20210123063713-20210123093713-00322.warc.gz"}
https://www.gamedev.net/forums/topic/504929-c-opengl-poligon-class/
OpenGL [c++ opengl] poligon class This topic is 3627 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. Recommended Posts Hy. I have created a poligon class this: #include "stdafx.h" #ifdef _WINDOWS #include <windows.h> #endif #include <math.h> #include "polygon.h" CPolygon::CPolygon() { } CPolygon::~CPolygon() { delete m_PolyData; } void CPolygon::Draw() { FACE* ptrFace = m_PolyData->m_faces; INDEX* ptrIndex = NULL; int nIndexCount = 0 ; int nfacesCount = m_PolyData->m_nFacesCount; while(nfacesCount -- ) { ptrIndex = m_PolyData->m_faces[nfacesCount].pIndex; nIndexCount = m_PolyData->m_faces[nfacesCount].nCount; int pIdxNorm =m_PolyData->m_faces[nfacesCount].nFaceNormal; glBegin(GL_POLYGON); glNormal3fv(m_PolyData->m_FaceNormals[pIdxNorm].val); while(nIndexCount--) { int pIdx =m_PolyData->m_faces[nfacesCount].pIndex[nIndexCount].nVertex; glVertex3fv(m_PolyData->m_vertexs[pIdx].val); } glEnd(); //glPopMatrix(); } } but is very very slow comparated with draw the poly in the code. Can you give me a hint for a faseter render? Thanks. Share on other sites Rather than using GL_POLYGON, do you think you can use GL_TRIANGLES or GL_QUADS? And see if you can use a for loop rather than 2 whiles. And lastly, shouldn't #ifdef _WINDOWS be #ifdef WIN32, but if _WINDOWS works, then use it. Share on other sites Real efficiency is yielded in if you render all primitives "at once". That requires homogenized geometry. Therefore triangulate the polygons so that all faces are 3 sided, so that you need not distinct say 5-gons from 6-gons and so on. Next, go away from the immediate mode (those glBegin/glEnd stuff) but build arrays of vertex attributes and push them as VBOs to the API. As an optimization step later on, you can optimize the vertex buffers by sorting vertex indices w.r.t. the post-T-&-L cache. I personally also have dropped pointers to topological elements where possible already in the meshes. Using indices instead have several advantages: They consume less memory (e.g. 4 bytes instead of 8 bytes on 64-bit maschines), and, perhaps more important, they allow loading (and saving) the elements in blocks without any pointer adaption. Share on other sites thanks haegarr. Then if i use triangle I can draw more than one poly at once? Because a line of the triangle is shared? and im not understand this: Quote: but build arrays of vertex attributes and push them as VBOs to the API what are the VBOs and the attributes? do you have a link or a book thath explain this? Thanks. Share on other sites Quote: what are the VBOs and the attributes?do you have a link or a book that explain this? If you use glBegin/glEnd, you'll have to pass the vertex coordinates and attributes like colors to the GPU each frame. This is really inefficient. VBO's allow you to pass those once and use them every frame. Here's a pdf-file about VBO's: http://developer.nvidia.com/object/using_VBOs.html Here's a NeHe lesson about VBO's: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=45 Share on other sites and for the strips there are some documentation? Share on other sites But the quad strip works equals to triangle strip? Where is better triangle strip and where is better quad strip? i search in google but i can't find a lot. Do you have some link on this argument? Thanks [Edited by - giugio on August 14, 2008 12:50:26 PM] Share on other sites Sorry if i post again for the 3rd time,but i'm fascinated by the argument. I read about the VBO and i have two questions about 1)if i add a new mesh to the scene ,the vertex in the GPU changes,how sincronize with the graphics card VBO's?How about the add/remove vertexes process ? 2)is the strips process of quad/triangle strips convenient(and the two think work fine boot) with VBO's or the game not worth the candle? Thanks. Share on other sites GPUs render triangles, so drivers need to split quads into 2 triangles. Best speed is hence achieved if the faces are already ready-to-use, i.e. triangles. There are many discussion whether strips are more efficient than post-T&L cache optimized lists. AFAIK the lists are winning, but I have no personal experience (i.e. time measurements) with this topic yet. VBOs are very versatile in the sense of usage: There are 9 usage modes, look at those DYNAMIC_... vs STATIC_... vs. STREAM_... and .._DRAW vs. ..._COPY vs ..._READ parameters when invoking glBufferData. Functions exist to map vertex data from/to GPU/CPU accessible memory (heap, VRAM, ...). In summary VBO is a complex beast. But you have to remember that VBOs are made for fast rendering, and not for editing purposes. The more "dynamic" your mesh is, the more is it likely to be rendered less speedy. But nevertheless does it work. I personally programming an engine with integrated editing purposes. I deal with up to 5 kinds of mesh representation: A topologically full featured EditMesh that comes in 2 flavors (expanded and memory friendly), a TriMesh similar to VBOs but with multiple indices, an application accessible master copy of VBO content, and the VBOs itself. Normally only the master copy (in the case of animated meshes) and the VBOs itself are available, and the other meshes appear during editing or perhaps at load-time only. Coming to the point of updating vertices, it often happens that large portions of vertices are updated at once, e.g. skeleton animated characters. If done on the CPU, the master copy will be computed accordingly and then pushed to the GPU. For such purposes GL_STREAM_DRAW or perhaps GL_DYNAMIC_DRAW are intended. Other ways exist as well. However, I neither claim that the way I go is the best, nor do I invite you to do the same. The solutions I implement have some constraints that are normally not part of a game, so you may find simpler solutions for your purposes. I must admit that programming such features definitely costs its time. However, I hope that my posts give you some stimulations. Share on other sites I'm not understand the response to this question: Quote: Original post by giugio2)is the strips process of triangle strips convenient(and the two think work fine boot) with VBO's or the game not worth the candle? Thanks. if the responce is true , can you post me some documentation?at least on the theory sorry for my english. [Edited by - giugio on August 15, 2008 6:07:20 AM] 1. 1 2. 2 Rutin 24 3. 3 4. 4 JoeJ 19 5. 5 • 14 • 26 • 11 • 11 • 9 • Forum Statistics • Total Topics 631771 • Total Posts 3002252 ×
2018-07-21 15:59:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21360142529010773, "perplexity": 4778.915580366386}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00289.warc.gz"}
https://infoscience.epfl.ch/record/114951
Infoscience Journal article # An integral characterization of Hajlasz-Sobolev space. We prove that the pointwise inequality used by P. Hajlasz in his definition of Sobolev spaces on metric spaces is equivalent to an integral (Poincaré-type) inequality.
2016-10-27 04:11:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8218462467193604, "perplexity": 1465.021773144321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721067.83/warc/CC-MAIN-20161020183841-00383-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.coursehero.com/file/9474081/Exponential-Decay-Linear-Approximation-sinhx-MVT-lecture-notes/
Exponential Decay, Linear Approximation, sinh(x), MVT lecture notes - MA103 Week 8 Exponential Growth and Decay Approximation Techniques Hyperbolic # Exponential Decay, Linear Approximation, sinh(x), MVT lecture notes • Notes • 4 • 100% (1) 1 out of 1 people found this document helpful This preview shows page 1 - 2 out of 4 pages. MA103 Week 8 Exponential Growth and Decay, Approximation Techniques, Hyperbolic Functions, MVT 1.ProportionalityWhen we say that a given quantity, sayy, is proportional to another quantity, sayt, the relationship isdenoted byyt.This means that the value ofywill be some constant multiple oftand the relationshipcan be represented by the equationy=k·t,kR.If a coordinate pair of values, (y, t),is given then thevalue ofkcan be determined, which then remains the same for any given application.In this case, ifk >0,then the quantityyincreases astincreases and ifk <0, thenydecreases astincreases.Ifyis inversely proportional tot, the relationship is represented byy1tor by the equationy=kt,kR,etc.2.Exponential Growth and Decay(Text: 3.8)Many practical applications involve a quantity which either grows or decays at a rate proportional to its size.In general, ify=f(t) gives the value of a quantity at timet, then the rate of change inywith respect totis given bydydt.Thus, from our first statement, we havedydt=kyfor some constantk.This is an exampleof a differential equation, and is called theLaw of Natural Growth[ ifk >0 ]or theLaw of Natural Decay[ ifk <0 ].We have already seen a function[ and it is, up to a constant multiple, the only such function! ]whosederivative is a multiple of itself – the exponential function.It can be shown that solutions for the abovedifferential equation are of the form:y=y0·ekt, wherey0denotes the initial value of the quantity[ i.e.,the value ofywhent= 0 ].Using given values in the application [ i.e., the size of the quantity at a specifictime ], the value ofkcan be determined to obtain the solution particular to that application, which can thenbe used to find the size of the quantity at timet.Examples of Exponential Growth/Decay that may be looked at during the lab:1)Malthusian Law of Population Growth:The rate of change of a populationP0(t) is proportional to the population sizeP(t) at timet.Specifically,P0(t)P(t)P0(t) =kP(t)Example: Suppose a population grows according to this model (i.e.P0(t) =kP(t)).IfP(0) = 5000,then (from above)P(t) =P(0)ekt= 5000ektNote: More information is required to findk(the relative growth rate of the population).
2021-07-31 05:03:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977658152580261, "perplexity": 3452.870173464569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154053.17/warc/CC-MAIN-20210731043043-20210731073043-00614.warc.gz"}
https://philipdelff.github.io/NMdata/
## A fast R package for efficient data preparation, consistency-checking and post-processing in PK/PD modeling Pharmacometrics and PK/PD modeling offers unique information for decision-making in several steps of drug development. However, it often takes a lot of work to get there, and there are many pitfalls along the way. NMdata helps simplifying this work and steering around the pitfalls or at least making sure we didn’t fall into them. ### Automate book keeping and focus on modeling Preparing data sets - and if you use NONMEM, reading the results data - can be tedious, and mistakes can lead to hours of frustration. NMdata provides useful tools (including automated checks) for these trivial tasks. ### NMdata is not a silo Any functionality in the NMdata can be used independently of the rest of the package, and NMdata is not intended to force you to change any habits or preferences. Instead, NMdata tries to fit in with how you (or your colleague who worked on the project before you) do things. It likely provides helpful additions no matter what other tools you already use. The best place to browse information about the package is here. The quickest way in is the Cheatsheet. ### How to install NMdata is on CRAN, MRAN, and MPN. To install from the package archive you are already using, do: install.packages("NMdata") library(NMdata) See further below for instructions on how to install from other sources than your default archive, if need be. ## Prepare, check, and export PK/PD data On the data-generation side, functionality is provided for documentation of the datasets while generating them. Check out this vignette on the topic. There are functions for automatic checks of (some) data merges, handling and counting of exclusions flags, final preparations for ensuring readability in NONMEM, and ensuring traceability of datasets back to data generation scripts. ## Check data as read by NONMEM The NMcheckData function will do an extensive and fully automated set of checks of the data before you run NONMEM. And did NONMEM not behave? NMcheckData can debug the data as seen by NONMEM. That’s right - it has never been easier to find data bugs. ## Automated and general reader of NONMEM results data Reading the resulting data from NONMEM can require a few manual steps. Especially because all modelers seem to do things a little differently. NMscanData can return all data output (\$TABLE) from NONMEM combined, and if wanted with additional columns and rows in input data. It’s as simple as res <- NMscanData("xgxr014.lst",recover.rows=TRUE) #> Model: xgxr014 #> #> Used tables, contents shown as used/total: #> file rows columns IDs #> xgxr014_res.txt 905/905 12/12 150/150 #> xgxr2.rds (input) 1502/1502 22/24 150/150 #> (result) 1502 34+2 150 #> #> Input and output data merged by: ROW #> #> Distribution of rows on event types in returned data: #> EVID CMT input-only output result #> 0 1 2 0 2 #> 0 2 595 755 1350 #> 1 1 0 150 150 #> All All 597 905 1502 And we are ready to plot (a subset of) the result: res.plot <- subset(res,ID%in%c(113,135)&EVID==0) library(ggplot2) ggplot(res.plot,aes(TIME))+ geom_point(aes(y=DV,colour=flag))+ geom_line(aes(y=PRED))+ facet_wrap(~trtact)+ labs(y="Concentration (unit)",colour="Observations", subtitle="NOTICE:\nObservations are coloured by a character column fetched from input data.\nSamples below LLOQ are rows added from input data.\nPlots are correctly sorted because factor levels of dose are preserved from input data.")+ theme_bw()+theme(legend.position="bottom") #> Warning: Removed 2 row(s) containing missing values (geom_path). res.tibble <- NMscanData("xgxr001.lst",as.fun=tibble::as_tibble,quiet=TRUE) Or a data.table? This time, we’ll configure NMdata to return data.tables by default: NMdataConf(as.fun="data.table") res.dt <- NMscanData("xgxr001.lst",quiet=TRUE) NMscanData is very general, and should work with all kinds of models, and all kinds of other software and configurations. Take a look at this vignette for more info on the NONMEM data reader. Then you will learn how to access the meta data that will allow you to trace every step that was taken combining the data and the many checks that were done along the way too. ## Meta analysis made really easy Since NMscanData is so general and will figure out where to find input and output data on its own, let’s use the NMscanMultiple wrapper to read multiple models and compare their predictions. res <- NMscanMultiple(dir=system.file("examples/nonmem", package="NMdata"), file.pattern="xgxr.*\\.lst",as.fun="data.table",quiet=TRUE) #> No missing values identified #> #> Overview of model scanning results: #> lst #> 1: /tmp/RtmpTd9QIu/temp_libpath102521020f9b7/NMdata/examples/nonmem/xgxr001.lst #> 2: /tmp/RtmpTd9QIu/temp_libpath102521020f9b7/NMdata/examples/nonmem/xgxr002.lst #> 3: /tmp/RtmpTd9QIu/temp_libpath102521020f9b7/NMdata/examples/nonmem/xgxr003.lst #> 4: /tmp/RtmpTd9QIu/temp_libpath102521020f9b7/NMdata/examples/nonmem/xgxr014.lst #> 5: /tmp/RtmpTd9QIu/temp_libpath102521020f9b7/NMdata/examples/nonmem/xgxr018.lst #> nrows ncols success warning #> 1: 905 40 TRUE FALSE #> 2: 905 34 TRUE FALSE #> 3: 905 34 TRUE FALSE #> 4: 905 36 TRUE FALSE #> 5: 905 33 TRUE FALSE gmean <- function(x)exp(mean(log(x))) res.mean <- res[,.(gmeanPRED=gmean(PRED)),by=.(model,NOMTIME)] obs.all <- unique(res[,.(ID,NOMTIME,TIME,DV)]) ggplot(res.mean,aes(NOMTIME,gmeanPRED,colour=model))+geom_line()+ geom_point(aes(TIME,DV),data=obs.all,inherit.aes=FALSE)+ scale_y_log10()+ labs(x="Time",y="Concentration",subtitle="Comparison of population predictions") #> Warning: Transformation introduced infinite values in continuous y-axis #> Transformation introduced infinite values in continuous y-axis If your archive has not been updated since July 2021, you may not find NMdata if you try to install with install.packages (option 1). In that case you have two other options. You can explicitly select CRAN for the installation. Or if you should want a version that has not yet reached CRAN, installing from Github is easy too. ## Option 2: Install explicitly from CRAN install.packages("NMdata",repos="https://cloud.r-project.org") library(NMdata) ## Option 3: Install from github library(remotes) install_github("philipdelff/NMdata") library(NMdata) If you use the Github version, you may want to see the FAQ for how to install specific releases from Github (ensuring reproducibility). ## Questions? Check the FAQ, or ask on github ## Issues? The best way to report a bug or to request features is on github. ## Code of Conduct Please note that the patchwork project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
2023-02-03 00:35:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22218473255634308, "perplexity": 4924.4102272760865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00227.warc.gz"}
http://mathoverflow.net/questions/94968/sheaf-embedding-preserving-initial-algebras
# Sheaf embedding preserving initial algebras? Any small pretopos $C$ can be embedded into a Grothendieck topos by a fully faithful functor that preserves all the pretopos structure (limits, images, finite unions of subobjects, disjoint coproducts, and quotients of equivalence relations). Namely, we may consider the topos of sheaves for the coherent topology on $C$, with the sheafified Yoneda embedding. If $C$ is (locally) cartesian closed, then that structure is also preserved by this embedding. My question is, what if $C$ also has a natural numbers object, or more general initial algebras for special endofunctors (e.g. "W-types")? Can we embed it into a topos of sheaves in a way that preserves these initial algebras? -
2014-04-16 08:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842346906661987, "perplexity": 260.76703032837526}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
http://openstudy.com/updates/4eeb504de4b0367162f52c07
## A community for students. Sign up today Here's the question you clicked on: ## whatevs 4 years ago Determine the geometric partial sum is convergent or divergent? can you show me the steps? • This Question is Closed 1. Ishaan94 Hmm I don't know how to do that actually? If you could post a problem, maybe then I'll be able to help you 2. whatevs lol sorry about tht. 3. satellite73 any finite sum is a number so it "converges" 4. satellite73 if the question is does $\sum_{k=1}^{\infty} 3(5^k)=\lim_{n\rightarrow \infty}\sum_{k=1}^n 3\times 5^k$ converge the answer is surely not because the terms do not even go to zero, they get bigger and bigger! #### Ask your own question Sign Up Find more explanations on OpenStudy Privacy Policy
2016-02-11 13:08:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7937978506088257, "perplexity": 1652.7124274535754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161946.96/warc/CC-MAIN-20160205193921-00256-ip-10-236-182-209.ec2.internal.warc.gz"}
https://binfalse.de/page6/
## Record Stream Using VLC I just needed to record a video stream. Usually, I use mplayer for these kinds of jobs, but this time it failed. However, on the internet I found a way to do it using VLC, which apparently has quite a command line interface. This comment revealed that the VLC media player comes with some command line magic. Of course, not much is documented in the man page, but the user guide on their website seems to be useful. Long story short, I ended up with the following command to save the stream http://STREAM.mp4 to /tmp/file.mkv : Cool. For the records, here are some alternatives: ## New GPG Key It was time to finally replace my old GPG key. I created the key in 2008 and from today’s perspective a 1024 bit DSA key is really weak. Thus, today I decided to move to a new key and created a 4096 bit RSA key. My old key was And the new key is: For those of you who already trust my old key I created a transition note which is signed by both my old and my new key. To import my new key to your key chain you can use the following command: The new key is already signed by the old key. Those of you trusting my old key may verify the signature using: To sign the new key execute the following command: And it would be nice if you upload the signed to one of the key servers: You are of course free to give me a call in order to verify the fingerprint ;-) ## Gajim idling error Just stumbled upon a small bug in Debian’s version of Gajim (0.15.4-2 – currently in testing and sid). The following error occurs when Gajim starts to idle: This results in a dis- and a subsequent reconnection. As the traceback already suggests the error can be found in /usr/share/gajim/src/common/connection_handlers.py on line 2009. This is the corresponding function: Obviously, there is no variable obj : The passed argument is called iq_obj … To fix that mistake just substitute the function definition with (replace iq_objobj in line 2006): Btw. I’m not sure why, but this error just affected one of my four machines which are running Gajim. ## Challenge is over. About 6 or 10 moths ago we were searching for a student to work with us in the SEMS project. In order to reduce the number of applications I started a challenge. To solve this challenge you had to show some understanding for basic techniques and programming languages, so we didn’t waste our time with people not able to write a single line of source code. And what should I say? It was successful! We’re now a great team with three students :D However, currently this challenge seems to spread over the internet. And lot’s of people try to solve it (and many submit a wrong answer^^). But even worse, some of you guys try to exploit it by submitting something like In general I don’t care. It was just some lines of PHP that send me an email in case of a correct answer. There is no database and the worst that can happen is a full inbox, but now I decided to close this challenge and instead forward users to this article. Thus, if you arrive here feel free to apply for a job! I guess all of my readers, even if they didn’t solve this challenge, are perfect fellows… If you nevertheless want to give it a try you can download the challenge. ## Extended MyTinyTodo MyTinyTodo is a self-hosted todo-list which convinces by its simplicity. It allows to maintain several different lists, you can assign tags, priorities and due dates to certain tasks. I used it myself for a long time and decided to fork the project in order to implement some stuff I missed in the original version. I do not intend to talk about MyTinyTodo a great deal. Very tiny, does nothing that isn’t necessary. No Dropbox/Facebook/Instagram etc integration. I really like this kind of software :D But I was missing an essential feature: Creating tasks via mail. Lucky us, MyTinyTodo is distributed under the terms of GPLv3 license. Thus, I hg clone d and extended the tool with desired functionality. And since the IDE was already opened I added a tiny authentication (now: username + password; previously: .htaccess ) and secured the API by introducing a signature. Nothing special or complex, but it had to be done. Long story short: I’m now able to submit tasks via e-mail. That means, a mail containing the following: will result in something similar to Figure 1. All possible attributes that are recognized in the mail body are listed at the wiki on GitHub. Find out more on GitHub. ## Integrating Tomcat with Apache You can configure the Apache web server to forward requests to Tomcat. Thus, you can speak to both servers on ports 80 or 443 and get rid of the :8080 for your Tomcat applications. I’m somehow doing that very often, so here is small how-to for copy&paste purposes. ## Install jk As you might know, while Tomcat is Java stuff Apache is written in C. So in general it’s not that easy to get them talking to each other. The key to achieve an integration is called mod_jk (see The Apache Tomcat Connector). So first of all you need to install it: If it is installed you can configure an AJP worker in /etc/libapache2-mod-jk/workers.properties : As soon as this is done the bridge is ready to close the gap between Apache and Tomcat. ## Configure Tomcat We need to configure an AJP connector on port 8009 . So open /etc/tomcat7/server.xml and add another connector next to the other ones: If you’re lucky there is already such a connector defined in the comments. So just remove the comment… ## Configure Apache to speak through jk Here I’ll show you how to setup a virtual host. For example, copy the following to /etc/apache2/sites-available/012-yourapp.conf : Ok, let me shortly explain what I did there. 1. Everything that arrives at this vhost gets forwarded to our previously defined AJP worker (line 9) 2. I assume your Tomcat webapp is running on server:8080/YourApp , therefor I configured a substitution of the URL to insert /YourApp (line 7). Of course you need to have mod_rewrite installed and enabled. (You may skip this line if you’re fine with having /YourApp in all your URLs) 3. The rest should be clear. The vhost is available at http://yourapp.yourserver.tld , as well as at http://ya.yourserver.tld (lines 3&4). You can also use SSL, just configure line 1 to listen at *:433 and add the SSL stuff to the body of your vhost. (SSL exmaple) Afterwards, enable the vhost to populate it: ## Give it a try If this is done just restart everything: Now Apache forwards all requests to http://yourapp.yourserver.tld to your Tomcat webapp at http://yourserver.tld:8080/YourApp . ## Find all Text Files, recursively Because I was thinking of something like that for a long time. In bash/zsh (add it to your .rc ): Using this function it’s possible to open all text files of a project at once: ## Change Title of moderncv Document Once again I had to prepare a CV for an application. I’m using the moderncv package to create the CV in $\LaTeX$ and I was always bothered about the title of the document. Today I spend some time to fix that. Using moderncv you can produce really fancy CV’s with very little effort. But unfortunately, by default it produces an ugly title (see the screenshot taken from Okular). As you can see, there is some character that cannot be displayed by certain tools. I guess most of my “CV-reviewers” don’t care about this little issue, if they recognize it at all, but it bothers me whenever I have to create a resumé. I already tried to override it using the hyperref package, but wherever I put the statement it seems to have no effect. However, since moderncv is open source (yeah! lovit) I took a look at the code to see how they produce the title. It was quite easy to find the concerning statement (in my case /usr/share/texlive/texmf-dist/tex/latex/moderncv/moderncv.cls:96 , texlive-latex-extra@2012.20120611-2): As expected the pdftitle contains a double-hyphen that is converted by latex to a dash. Apparently a problem for some programs. To fix this issue you could sudo:modify this file, but that’s of course messy. Better add something like the following to the end of the header of your document: This will override the broken output of the package. ## Check if certain Port is Open Just needed to get to know whether something listens at a certain TCP port on a particular host. Here is my workaround using Perl: Works at least for me. Any concerns or better solutions? Earlier this week I had a very small conversation with Pedro Mendes on twitter (well in terms of twitter it might be a long dicussion). It was initiated by him calling for suggestions for a password safe. I suggested better using a system for your passwords, which he thought was a bad idea. So lets have a look at both solutions. You all know about these rules for choosing a password. It should contain a mix of lower and upper case letters, numerals, special characters, and punctuation. Moreover, it should be at least eight characters long and has to be more or less random. Since our brain is limited in remembering such things we tend to use easy-to-remember password (e.g. replacing letters using leet). But of course hackers are aware of that and it is quite easy to also encode such rules in their cracking algorithms. Equally bad is using one strong password for all accounts. So, how to solve this problem? The second idea is using a system to generate passwords for each account. You have to choose a very strong password $p$, and a function $f$ that creates a unique password $u$ for every account using $p$ and the (domain) name $n$ of the related service: $u = f (p, n)$. You just need to remember this very good $p$ and $f$. Depending on your paranoia and your mind capabilities there are many options to choose $f$. An easy $f_1$ might just put the 3rd and last letters of $n$ at the 8th and 2nd pos in $p$ (see example below). More paranoid mathematicians might choose an $f_2$ that ASCII-adds the 3rd letter of $n$ to the 8th position of $p$, puts the $\lfloor\sqrt{n} * 10\rceil/10$ at the 2nd position in $p$, and appends the base64 representation of the multiplicative digital root of the int values of the ASCII letters of $n$ to $p$. Here you can see the examples: $p$ $n$ $f_1 (p, n)$ $f_2 (p, n)$ u:M~a{em0 twitter ur:M~a{eim0 u2.6:M~a{eW0Mi4yNDU2MjFlKzE0Cg== u:M~a{em0 google ue:M~a{eom0 u2.4:M~a{e]0MS40MjU4MjNlKzEyCg== So, you see if the password for twitter gets known the hacker isn’t able to log into your google account. To be honest, I guess that nobody will choose $f_2$, but I think even $f_1$ is quite good and leaves some space for simple improvements. However, as expected this solution also has some dramatic disadvantages. If one of your passwords gets compromised you need to change your system, at least choosing a different $p$ and maybe also an alternative for $f$. As soon as a hacker is able to get two of these passwords he will immediately recognize the low entropy and it is not difficult to create a pattern for your passwords making it easy to guess all other passwords. ## Conclusion This is not to convince somebody to use one or the other solution, its more or less a comparison of the pros and cons. In my opinion the current password mechanism is sort of stupid, but we need to find the least bad solution until we have some alternatives. So what about creating a small two-factor auth system? You could combine the two above mentioned solutions and use a password safe in combination with a password system. So keep a short lock in mind which is necessary to unlock the passwords in the safe. Maybe something like 29A which you have to add to every password (on some position of your choice, e.g. just append it). Thus, if a hacker breaks into one service only a singe password is compromised and you just need to update this entry in your safe, and if your whole safe is cracked all passwords are useless crap. Of course you have to create a new safe and update all your passwords, but the guy who knows your old “passwords” doesn’t know how to use them. However, we are discussing on a very high level. The mentioned scenarios are more or less just attacks against a particular person. I am a sysadmin, so I would already be very glad if users won’t use passwords like mama123 and stop sending passwords in clear-text mails! ## Supp: The Conversation just for the logs (in twitter chronology: new -> old): Pedro Mendes @gepasi at 1:13 PM - 30 May 13 @binfalse I agree, but using 30 character completely random ones seems to be the best. martin scharm @binfalse at 5:40 PM - 29 May 13 @gepasi either using a password safe (which also has drawbacks) or a system with a strong p and a complex f. martin scharm @binfalse at 5:39 PM - 29 May 13 @gepasi however, i support the attitude seeing every pw as compromised. so the most important rule is using unique pws for every service. martin scharm @binfalse at 5:39 PM - 29 May 13 @gepasi even after reading this article i’d say that ur:M~a{eim0 is quite strong and i’d expect to find it within the 10% uncracked. Pedro Mendes @gepasi at 1:18 PM - 29 May 13 @binfalse but thanks for the tip on KeePassX Pedro Mendes @gepasi at 1:18 PM - 29 May 13 @binfalse a system is not recommended. Anything a human can remember is broken within 24h. Read http://arstechnica.com/security/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/ martin scharm @binfalse at 1:03 PM - 29 May 13 martin scharm @binfalse at 1:03 PM - 29 May 13 @gepasi quite easy to remember (when you know p), very hard to guess and brute-forcing the related hash really takes some time. martin scharm @binfalse at 1:03 PM - 29 May 13
2018-09-22 04:10:25
{"extraction_info": {"found_math": true, "script_math_tex": 28, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45787033438682556, "perplexity": 1415.2115814357123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00468.warc.gz"}
http://api.geodigs.cloud.newmediaone.net/page/kronecker-product-calculator-7628e8
kronecker product calculator Answers to Questions. For $M_1=[a_{ij}]$ a matrix with $m$ lines and $n$ columns and $M_2=[b_{ij}]$ a matrix with $p$ lines and $q$ columns. 2.1.1 Basic Properties KRON 1 (4.2.3 in [9]) It does not matter where we place multiplication with a scalar, i.e. Tool to calculate a Kronecker matrix product in computer algebra. From MathWorld--A Wolfram Web Resource. Note: In mathematics, the Kronecker product, denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. edit close. New York: Dover, p. 12, 1996. Write to dCode! Kronecker product has also some distributivity properties: - Distributivity over matrix transpose: $( A \otimes B )^T = A^T \otimes B^T$, - Distributivity over matrix traces: $\operatorname{Tr}( A \otimes B ) = \operatorname{Tr}( A ) \operatorname{Tr}( B )$, - Distributivity over matrix determinants: $\operatorname{det}( A \otimes B ) = \operatorname{det}( A )^{m} \operatorname{det}( B )^{n}$. How to multiply 2 matrices with Kronecker? Enhanced by many worked examples — as well as problems and solutions — this in-depth text discusses the Kronecker matrix product. b]. link brightness_4 code // C++ code to find the Kronecker Product of two // matrices and stores it as matrix C . Explore anything with the first computational knowledge engine. play_arrow. Schafer, R. D. An The Kronecker product C=A B can be thought of as creating an algebra C from two smaller algebras A and B. If A is an m -by- n matrix and B is a p -by- q matrix, then kron (A,B) is an m*p -by- n*q matrix formed by taking all possible products between the elements of A and the matrix B. Unlimited random practice problems and answers with built-in Step-by-step solutions. If A and B represent linear operators on different vector spaces then A B represents the combination of these linear operators. I will disallow built-ins that directly calculate the Kronecker, Jacobi or Legendre symbols, but anything else (including prime factorization functions) should be fair game. a feedback ? Below is the code to find the Kronecker Product of two matrices and stores it as matrix C : C++. In Fortran 90, matrices are stored as 2-D arrays. It calculates C = a*C + b* (A kron B). An Practice online or make a printable study sheet. I'm not seeing any in-built commands that produce the Kronecker product. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. Weisstein, Eric W. "Kronecker Product." It contains generic C++ and Fortran 90 codes that do not require any installation of other libraries. Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. Hints help you try the next step on your own. https://mathworld.wolfram.com/KroneckerProduct.html. K = kron (A,B) returns the Kronecker tensor product of matrices A and B. kronecker,product,multiplication,matrix,tensor, Source : https://www.dcode.fr/kronecker-product. The Kronecker product suport associativity : $$A \otimes (B+ \lambda\ \cdot C) = (A \otimes B) + \lambda (A \otimes C) \\ (A + \lambda\ \cdot B) \otimes C = (A \otimes C) + \lambda (B \otimes C) \\ A \otimes ( B \otimes C) = (A \otimes B) \otimes C \\ (A \otimes B) (C \otimes D) = (A C) \otimes (B D)$$. \$\endgroup\$ – … Collection of teaching and learning tools built by Wolfram education experts: dynamic textbook, lesson plans, widgets, interactive Demonstrations, and more. no data, script or API access will be for free, same for Kronecker Product download for offline use on PC, tablet, iPhone or Android ! For $M_1=[a_{ij}]$ a matrix with $m$ lines and $n$ columns and $M_2=[b_{ij}]$ a matrix with $p$ lines and $q$ columns. For example, the matrix direct product of the matrix and the matrix is given by the following matrix. The Kronecker product is also sometimes calle… A dyad is a special tensor – to be discussed later –, which explains the name of this product. Join the initiative for modernizing math education. What are matrix Kronecker multiplication properties. Introduction to Nonassociative Algebras. The dot product of two vectors AB in this notation is AB = A 1B 1 + A 2B 2 + A 3B 3 = X3 i=1 A iB i = X3 i=1 X3 j=1 A iB j ij: Note that there are nine terms in the nal sums, but only three of them are non-zero. space tensor product of the original vector spaces. The matrix direct product gives the matrix of the linear transformation induced by the vector How to multiply 2 matrices with Kronecker? called their matrix direct product, is an matrix with elements defined by. More precisely, suppose that. The hadamard() command fails. The second kind of tensor product of the two vectors is a so-called con-travariant tensor product: (10) a⊗b0 = b0 ⊗a = X t X j a tb j(e t ⊗e j) = (a tb je j t). Given an matrix and a matrix, their Kronecker product , also called their matrix direct product, is an matrix with elements defined by (1) where (2) (3) For example, the matrix direct product of the matrix and the matrix is given by the following matrix, (4) (5) The matrix direct product is implemented in the Wolfram Language as KroneckerProduct[a, b]. It may have to be created as an external command, or function. a bug ? Except explicit open source licence (indicated CC / Creative Commons / free), any algorithm, applet or snippet (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt / encrypt, decipher / cipher, decode / encode, translate) written in any informatic language (PHP, Java, C#, Python, Javascript, Matlab, etc.) The matrix direct product is implemented in the Wolfram Language as KroneckerProduct[a, The #1 tool for creating Demonstrations and anything technical. The Kronecker product is a special case of tensor multiplication on matrices. Please, check our community Discord for help requests! Knowledge-based programming for everyone. Thanks to your feedback and relevant comments, dCode has developped the best 'Kronecker Product' tool, so feel free to write! The Kronecker product has a lot of interesting properties, many of them are stated and proven in the basic literature about matrix analysis ( e.g. https://mathworld.wolfram.com/KroneckerProduct.html. Introduction to Nonassociative Algebras. [9, Chapter 4] ). The Kronecker product is noted with a circled cross ⊗ $M_1 \otimes M_2 = [c_{ij}]$ is a larger matrix of $m \times p$ lines and $n \times q$ columns, with : $$\forall i, j : c_{ij} = a_{ij}.B$$, Example: $$M=\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \otimes \begin{bmatrix} 7 & 8 \\ 9 & 10 \end{bmatrix} = \begin{bmatrix} 7 & 8 & 14 & 16 & 21 & 24 \\ 9 & 10 & 18 & 20 & 27 & 30 \\ 28 & 32 & 35 & 40 & 42 & 48 \\ 36 & 40 & 45 & 50 & 54 & 60 \end{bmatrix}$$, This product is not equivalent to the classical multiplication">matrix product, $M_1 \otimes M_2 \neq M_1 \dot M_2$. Thank you ! ffff* e# This is the important step for the Kronecker product (but e# not the whole story). Kronecker Product. The Kronecker product is a special case of tensor multiplication on matrices. [attachment=6953] In C++, matrices are stored as 'column major ordered' vectors. In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. an idea ? It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. 6. (αA)⊗ B = A⊗ (αB) = α(A⊗B) ∀α ∈ K,A ∈ Mp,q,B ∈ Mr,s. Walk through homework problems step-by-step from beginning to end. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. filter_none. Named after a 19th-century German mathematician, Leopold Kronecker, the Kronecker product is an increasingly important and useful matrix operation and an area of matrix calculus with numerous applications. Please note that the matricies in the example I provided are of differing sizes: a(4x4) and b(2x2), and produce an 8x8 Kronecker product. The order of the vectors in a covariant tensor product is crucial, since, as one can easily verify, it is the case that (9) a⊗b 6= b⊗a and a0 ⊗b0 6= b0 ⊗a0. It's an operator which takes two matrices e# and replaces each cell of the first matrix with the second matrix e# multiplied by that cell (so yeah, we'll end up with a 4D list of e# matrices nested inside a matrix). Given an matrix and a matrix , their Kronecker product , also 1.1.6 Tensor product The tensor product of two vectors represents a dyad, which is a linear vector transformation. Because it is often denoted without a symbol between the two vectors, it is also referred to as the open product. Powers Of Darkness: The Lost Version Of Dracula Pdf, Sulphur Cycle Biology Discussion, Black Pudding Ingredients, Fine Woodworking Courses, White Cheese Calories 100g, Soybean Production By Country 2019, Our Planet Bird Dance, Vegetable Salad With Yogurt Dressing Calories, How To Root Oppo A37f With Pc, How To Clean Mauviel Carbon Steel Pan, Blue Cross Blue Shield Phone Number,
2021-03-03 09:36:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7238017320632935, "perplexity": 1228.106244877572}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00532.warc.gz"}
https://www.physicsforums.com/threads/simple-limit-question-need-a-little-help-please.164040/
# Simple limit question. need a little help please i need someone to coment on this: lim(a^x_n)^x'_n, when n->infinity = (a^lim x_n)^lim x'_n , n-> infinity, what i am asking here is if we can go from the first to the second??? or if these two expressions are equal?? any help?? ## Answers and Replies HallsofIvy Science Advisor Homework Helper Yes, because the exponential function, ax, is continuous. one more thing here. Is there a theorem or a deffinition that supports similar expressions in a more generalized way?? i forgot to mention this also HallsofIvy Science Advisor Homework Helper I'm not sure what you mean. I was referring to the general fact that, from the definition of "continuous", if xn is a sequence of numbers converging to a and f is a function continuous at a, then $$\lim_{n \rightarrow \infnty} f(x_n)= f(\lim_{n\rightarrow \infty} x_n)= f(a)$$ I'm not sure what you mean. I was referring to the general fact that, from the definition of "continuous", if xn is a sequence of numbers converging to a and f is a function continuous at a, then $$\lim_{n \rightarrow \infnty} f(x_n)= f(\lim_{n\rightarrow \infty} x_n)= f(a)$$ yeah this is what i am asking. But what i want to know is if there is a theorem that states this, what you wrote. Or how do we know that this is so? Maybe the squeeze theorem? glenn As Halls indicated, it's essentially the definition of continuity. The definition of continuity for a function f of one real variable defined on an interval (a,b) is for any x in (a,b), $$\lim_{y \rightarrow x} f(y) = f(x)$$ (ie. the limit exists and is equal to f(x)) As Halls indicated, it's essentially the definition of continuity. The definition of continuity for a function f of one real variable defined on an interval (a,b) is for any x in (a,b), $$\lim_{y \rightarrow x} f(y) = f(x)$$ (ie. the limit exists and is equal to f(x)) Yeah, i know the definition of continuity, i was just wondering if there is a specific theorem that states this, as i have not encountered one on my calculus book. However, i do understand it now. Many thanks to all of you.
2022-05-24 20:59:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313399791717529, "perplexity": 293.8629923930177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00028.warc.gz"}
https://www.imrpress.com/journal/RCM/23/11/10.31083/j.rcm2311384/htm
NULL Section All sections Countries | Regions Countries | Regions Article Types Article Types Year Volume Issue Pages IMR Press / RCM / Volume 23 / Issue 11 / DOI: 10.31083/j.rcm2311384 Open Access Review Coronary Stent Fracture Causing Myocardial Infarction: Case Report and Review of Literature Show Less 1 Department of Internal Medicine II, University of Ulm, 89081 Ulm, Germany *Correspondence: mirjam.kessler@uniklinik-ulm.de (Mirjam Keßler) Rev. Cardiovasc. Med. 2022, 23(11), 384; https://doi.org/10.31083/j.rcm2311384 Submitted: 8 September 2022 | Revised: 22 September 2022 | Accepted: 26 September 2022 | Published: 16 November 2022 (This article belongs to the Special Issue Myocardial infarction: unsolved issues and future options) This is an open access article under the CC BY 4.0 license. Abstract Coronary stent fracture (SF) is a potential cause of stent failure increasing the risk for in-stent restenosis, stent thrombosis, target lesion revascularization and major adverse cardiac events. Overall incidence of SF ranges from $<$1.0% up to 18.6% and SF can be found in up to 60% of failed devices. Advanced imaging techniques have improved the detection of SF. However, defining the optimal therapeutic approach towards these complex lesions is challenging. This review summarizes the most important publications on the topic of SF and discusses current insights into pathophysiology, diagnostic tools, classification and therapeutic management. Furthermore, two illustrative cases of SF leading to myocardial infarction are presented, which demonstrate typical SF risk factors such as vessel angulation and hinge motion, stenting in the right coronary artery, use of long stents and multiple stent layers. Keywords stent fracture coronary stent fracture acute coronary syndrome 1. Introduction The evolution of percutaneous coronary intervention (PCI) to treat coronary stenosis and acute coronary syndrome is a success story. Stent technology has greatly improved since the first implantation of a stainless steel wire-mesh stent in a human coronary artery by Sigwart and Puel in 1986 [1]. Ground-breaking bio-mechanical advances have led to the development of first generation bare metal stents (BMS) and later drug-eluting stents (DES). While BMS addressed the issues of flow-limiting dissection, recoil and restenosis, the broad use of DES further decreased rates of in-stent restenosis (ISR), stent thrombosis and target lesion revascularization (TLR) [2]. However, stent failure including stent fracture has remained a hazard for patients and a potential challenge for interventional cardiologists. Initially being thought to have a generally benign clinical course, SF has been linked to adverse results such as stent failure and major adverse cardiac events (MACE) and has therefore raised increasing awareness among interventionalists [3, 4]. In this review, we describe two illustrating cases of stent fractures followed by an analysis of current literature on the pathomechanism, diagnostic tools, classification and therapeutic management of coronary stent fractures. 2. Methods We performed a systematic literature research using the scientific databases PubMed and Cochrane. Search terms were “stent fracture”, “coronary stent fracture”, “drug-eluting stent fracture”, “DES fracture”, “stent thrombosis”, “DES thrombosis”. 3. Case Report Patient 1, a 62-year-old male, had undergone percutaneous coronary intervention (PCI) of a high-grade stenosis of the proximal right coronary artery (RCA) with implantation of one everolimus-eluting stent (EES) (4.0 $\times{}$ 15 mm, Xience Xpedition, Abbott Vascular, Santa Clara, CA, USA). Recurrent ISR of the proximal RCA had led to Re-PCI with implantation of one sirolimus-eluting stent (SES) (4.0 $\times{}$ 30 mm, Orsiro, Biotronik AG, Berlin, Germany) and one EES (4.0 $\times{}$ 28 mm, Xience V, Abbott Vascular, Santa Clara, CA, USA). Afterwards the patient had been asymptomatic. After two years, the patient had undergone another PCI on admission due to inferior ST-Segment-Elevation myocardial infarction (STEMI). Emergent coronary angiography (CAG) had identified a gap within the previously implanted DES in the proximal RCA, indicative of a complete stent fracture (type IV) (Fig. 1A). Discontinuous TIMI-II-flow had been observed in the RCA. One platinum-chromium alloyed EES stent (3.5 $\times{}$ 20 mm, Promus PREMIER, Boston Scientific, Marlborough, MA, USA) had been deployed. ECG abnormalities had eventually returned to normal and the patient had turned asymptomatic. Fig. 1. Coronary angiogram of the right coronary artery of patient 1. Initial complete SF of a Xience EES, that had lead to STEMI (A), was initially treated with deployment of 1 Promus PREMIER DES and later stabilized by a second EES and 1 Onyx ZES (B). However, recurrent type-IV-SF with TIMI-0-flow was observed in a setting of NSTEMI (C). Short-term follow-up CAG had shown persistent excessive motion at the hinge point of the RCA within the Promus PREMIER EES indicating strong mechanical strain. At the initial SF site of the RCA, one Onyx zotarolimus-eluting stent (ZES) (4.0 $\times{}$ 22 mm, Medtronic, Minneapolis, MN, USA) had been deployed to further stabilize the Promus PREMIER EES. Final CAG had shown an optimal final result of the RCA (Fig. 1B). At the latest presentation, 4 years after the initial PCI, the patient was again referred to CAG due to Non-ST-Segment-Elevation myocardial infarction (NSTEMI). The CAG again showed SF of all deployed stent layers in the proximal RCA (Fig. 1C). Further coronary intervention was deferred and the patient was transferred to cardiac surgery for single bypass surgery. Patient 2, a 73-year-old male, had undergone coronary artery bypass graft (CABG) surgery with use of the left internal mammary artery (LIMA) more than 20 years prior to the index admission. With progression of the coronary artery disease 3 Promus PREMIER EES had been implanted via the native left main artery (LM) in the left anterior descending artery (LAD) segments 5, 6 and 7. One of the respective EES (3.0 $\times{}$ 16 mm) had been deployed at a high-grade de-novo stenosis of the inserting region of the LIMA-ad-LAD-bypass in segment 7. After a good short-term result, 6 months later an increase of exercise-induced dyspnea had occurred. CAG had again been performed showing ISR of the LIMA-ad-LAD inserting region and a paclitaxel-coated-balloon had been applied (SeQuent Please, B. Braun, Melsungen, Germany) accessing via the LM. 18 months later, the patient had again presented with unstable angina pectoris. CAG had shown recurrent ISR in segment 7 and 1 Promus PREMIER EES (2.5 $\times{}$ 20 mm) had been implanted, again using the LM as access. One month later, the patient had been re-admitted due to angina pectoris. CAG had shown a gap within the previously implanted DES in the insertion region of the LIMA-ad-LAD-bypass, suggesting type-IV-SF (Fig. 2A). Re-PCI and deployment of 2 ZES (2.5 $\times{}$ 18 mm, Resolute Integrity, Medtronic, Minneapolis, MN, USA) had been performed via the LM to stabilize the fracture (Fig. 2B). Fig. 2. Coronary angiogram of the left anterior descending artery of patient 2. A type-IV-SF in the inserting region of the LIMA-ad-LAD-bypass (A) that had previously been treated with implantation of 2 Promus Premier EES and had led to unstable angina pectoris was repaired by deployment of 2 Resolute Integrity ZES (B). However, re-SF occurred only weeks later and further attempts of intervention remained unsuccessful (C). At the latest admission the patient presented with NSTEMI. He was transferred to CAG and re-SF of the medial LAD (type IV) with distal TIMI-0-flow and prominent hinge motion was found. Catheterization of the fracture using a BMW guide wire (Abbott, Chicago, IL, USA) through the LM was unsuccessful (Fig. 2C). Further attempts of PCI were not carried out. The patient was commenced with dual antiplatelet therapy. 4. Review 4.1 Incidence of Stent Fractures Stent fractures are common in the field of peripheral vascular interventions and used to be unrecognized in coronary arteries [5]. Chowdhury and Ramos first described a fracture of a coronary BMS in a saphenous vein graft in 2002 [6]. BMS fracture is a rare finding [3, 7, 8], which might be explained by a stabilizing effect of greater neointimal proliferation [9, 10] but also a more difficult diagnosis of fracture due to lower radiopacity [11]. The incidence of SF became more considerable with the introduction of DES. Sianos et al. [12] were the first to report SF in DES in two cases involving sirolimus-eluting stents. In their meta-analysis of 8 studies assessing SF, Chakravarty et al. [13] report rates of SF ranging between 0.8% and 8.4% with a mean incidence of 4.0%. Notably, all but one SF in their analysis occurred with SES. Similar results were obtained by other studies with an incidence of $<$1.0% up to 18.6% [10, 11, 14, 15, 16, 17, 18, 19, 20]. An analysis with new generation DES by Schochlow et al. [21] showed incidental detection of SF in 8% immediately after implantation and almost 60% in the setting of device failure. The largest multicenter study has been carried out by Kan et al. [22] reporting an incidence of SF in 12.3% of the 6555 involved patients, 22.0% of stents and 17.2% of vessels. Limiting to clinical studies however is the varying definition of SF as well as an incomplete angiographic follow-up with the potential to miss out especially on late occurring SF. Accordingly, the highest incidence of SF has been reported in a post-mortem analysis, where Nakazawa et al. [23] found evidence of DES fracture in 29% of patients. 4.2 Pathomechanism The mechanistic culprit behind SF is material fatigue. Biomechanical demands of coronary stents are high. Important factors include vascular biocompatibility, resistance to corrosion, high elasticity and plasticity for expansion, rigidity at body temperature for the maintenance of dilatation and resistance to elastic recoil as well as radiopacity to allow X-ray tracking [24]. While the most broadly used stent backbone material used to be stainless steel, modern 2nd generation DES are preferably fabricated with a cobalt-chromium (Co-Cr) alloy, typically L-605 and MP35N. Radial strength is supported by hoop elements, which are linked by connectors. The latter provide longitudinal stability and their design varies between stent types. In an experimental approach, Ormiston et al. [25] showed highest susceptibility to fracture in less flexible stents and stents with three connectors between hoops. Accordingly, most studies have found the relatively inflexible sirolimus-eluting stent (SES) to be the most susceptible to SF [5, 3, 11, 13, 20, 23]. In 10 million cycles of repetitive bending no fracture occurred in the more flexible Element (Boston Scientific, Natick, Massachusetts, MA, USA), Promus (Bostin Scientific, Natick, Massachusetts, MA, USA) and Integrity (Medtronic, Santa Rosa, CA, USA) stents [25]. From their analysis of the American Food and Drug Administration’s (FDA) Manufacturer and User Facility Experience Database (MAUDE) Omar et al. [26] also report the highest SF rate in Cypher stents, followed by Xience and Promus stents. But not only the infrastructure of the stent can make a difference. DES are coated with a polymer that controls the release of an antiproliferative drug. Especially in first generation DES, hypersensitivity reactions of the surrounding endothel induced by the polymer with consecutive inflammation has been described. The inflammatory process can then cause late stent malapposition with modification of the mechanical integrity of the device. The results are endothelial hinge points, excessive motion and torsion and finally stent thrombosis or SF [10, 27, 28]. The frequency of polymer-associated inflammation appears to be lower in 2nd generation DES with lowest reported SF rates in everolimus-eluting stents [29]. Further technical improvements have been made to increase flexibility and fracture resistance. Platinum-chromium alloys were recently introduced, allowing for thinner stent struts and higher radiopacity without decreasing radial strength [30, 31, 32, 33]. Kuramitsu et al. [34] observed SF in 1.7% of lesions and 2.2% of patients treated with platinum-chromium alloyed everolimus-eluting stents and found a numerically higher incidence of clinically-driven TLR compared to non-SF-lesions. To date, there is no data comparing SF rates in platinum-chromium stents with previous models. However, though not reporting specifically on SF, studies have shown comparable rates of stent thrombosis between cobalt-chromium and platinum-chromium alloy with a significantly lower rate of target lesion failure in platinum-alloyed stents [35]. No stent thrombosis was reported after treatment of small vessels and long lesions, both known as SF-susceptible [36]. A different and promising approach was made with the development of bioresorbable scaffold stents. The latest models, which consist of poly-l-lactic acid (PLLA), degrade over time and were therefore thought to decrease rates of late stent failure. However, bioresorbable PLLA stents have proven inferior compared to non-degradable metallic stent platforms due to increased rates of target lesion restenosis and more frequent target vessel myocardial infarction [37]. Combining the mechanical advantages of non-resorbable metallic platforms and the biodegradability of resorbable stent material bioresorbable metallic stents are currently being advanced. The magnesium-alloyed Magmaris stent (Biotronik AG, Berlin, Germany) has shown safety and efficacy in early registries [38], however randomized controlled trials have not been carried out yet. No data is currently available regarding the use of scaffolds in SF predilection lesions. It is thinkable however, that while the scaffold technology may reduce the risk of fracture in angular and hinge point lesions their rather soft structure might make them prone to breakage or dismantling especially in heavily calcified lesions. Moreover, an essential issue with magnesium-based implants is their rapid degradation and corrosion in aqueous environments like body fluid. A more stable zink-silver (Zn-Ag) alloy has shown promising results in a porcine model with treatment of iliofemoral arteries [39]. Characteristics of the treated lesions are another important factor in the pathomechanism of SF. Numerous studies have shown, that stenting in the RCA is an independent predictor of SF [5, 16, 17, 40, 41, 42, 43, 44]. In their meta-analysis Chakravarty et al. [13] estimated 56.4% of SF to occur in the RCA followed by 30.4% in the left anterior descending artery (LAD). SF was least frequent in left main artery lesions [13]. Omar et al. [26] reported 47.7% of SF in the RCA, followed by the LAD. The most likely explanation for this clustering of SF events is the tortuosity and contortion of the RCA, leading to higher mechanical force and earlier material fatigue [3, 17, 20]. Ino et al. [20] showed a higher degree of hinge motion in the RCA or left circumflex artery (LCX) than in the LAD (31.0° $\pm{}$ 13.1° vs. 22.8° $\pm{}$ 4.9°). Stenting across angular or hinge regions is a risk factor for SF regardless of the vessel. Park et al. [44] found a more than 6-fold higher risk of SF when stenting across an angle of $>$45°. Popma et al. [45], while finding no significant difference in SF incidence between RCA and LAD, reported cyclic angulation changes of 32.3° $\pm{}$ 15.2° in patients with SF. Shaikh et al. [11] demonstrated a 14-fold risk increase for SF when stenting across a bend of $>$75° (Odds Ratio 13.8). Kuramitsu et al. [15] reported an Odds Ratio for SF prediction of 14.6 by hinge motion as defined by an at least 16° difference between systole and diastole. And in a study by Park et al. [16] 79% of SFs occurred at hinge points, either adjacent to edges of overlapping stents or at angles $>$45°. But not only does coronary anatomy have an effect on stent mechanics. Vice versa the presence of a stent can significantly alter vascular geometry. Especially stents with low conformability, i.e., small amount of longitudinal flexibility after deployment, can create maldistribution of force during the cardiac cycle predisposing for SF [46]. Other lesional risk factors include ostial [15, 47] and bifurcational lesions [21, 48] as well as plaque calcification [15, 21, 49]. Finally, procedural details can also contribute towards SF. Stent length plays a significant mechanistic role [13]. In the study by Park et al. [16], SF occurred in stents with a mean length of 48.3 mm. Omar et al. [26] report from the MAUDE database that 65.2% of SF happened in stents longer than 30 mm. Overlapping stents are also a risk factor, since they can change vascular angulation and potentially create new hinge points. Omar et al. [26] discuss that half of the SF events in the RCA and LAD registered in the MAUDE database involved lesions with overlapping stents. Stent overlap was also significantly associated with SF in the large meta-analysis by Chakravarty et al. [13]. Finally, stent overexpansion can exceed material capacity and induce strut distortion causing early fatigue and SF [5, 12, 21, 50]. Omar et al. [26] report a mean postdilation pressure of 18 atm (IQR 16–20) in fractured stents. However, Lee et al. [51] showed the occurrence of stent thrombosis in only 0.2% of 1037 patients with intravascular ultrasound (IVUS) guided postdilation with a mean pressure of 18.7 $\pm{}$ 4.1 atm. SF risk from postdilation is likely to vary between different lesions and might be higher in calcified or tortuous lesions [5, 12]. Risk factors of SF are summarized in Fig. 3. Fig. 3. Risk factors, clinical consequences and therapeutic strategies for stent fracture. DAPT, dual antiplatelet therapy. When a stent fractures, the new lesion predisposes for ischemic events. The altered stent geometry, mechanical shear stress and impaired contact of the antiproliferative drug to the endothelium can cause neoproliferation of endothelial tissue and smooth muscle cells, intimal hyperplasia and alteration of hemodynamic factors [13, 15, 52, 53]. Furthermore, complete SF is associated with the formation of coronary aneurysm [10, 45]. As a side note, occasional case reports have been published showing SF as a complication of post-stenting pyogenic infections and mycotic coronary pseudoaneurysm [54, 55]. Furthermore, an infected SF-induced coronary artery aneurysm has been described as a highly probable origin for formation of abscesses and sepsis with Staphylococcus aureus [56, 57]. Most SF are reported to occur several months after initial stenting [10, 13]. These events are likely due to the described pathomechanisms and risk factors of material fatigue. However, some SF have been reported only days after stent deployment [5, 50, 58] and are likely to have occurred in high-risk lesions (calcification, angle, overexpansion). All the above risk factors and mechanisms show, that SF can be seen as a form of “patient-prosthesis mismatch” that requires further improvements in engineering and precision medicine [59]. 4.3 Diagnosis and Classification Most studies reporting on SF have based their diagnosis on fluoroscopy [5, 15, 22, 60]. Ino et al. [20] discussed, that contrast injection might mask SF lesions and implanted stents should be assessed with and without contrast agents. However, the low spatial resolution of 300 $\mathrm{\mu}$m limits the diagnostic abilities of fluoroscopy, especially in newer stent models with thin struts. Therefore, additional diagnostic modalities have been introduced, which significantly improved imaging. To enhance visibility of stent struts, high-resolution cine-angiography technologies such as StentBoost (Philips Healthcare, Best, Netherlands) can be helpful [61, 62] and even diagnosed SF in one case, where evidence by intravascular ultrasound (IVUS) was lacking [63]. In a large prospective study by Biscaglia et al. [64] enhanced stent visualization techniques proved highly effective and safe in detection of SF during the index PCI. Use of digitally enhanced fluoroscopic imaging can lower the incidence of TLR and MACE [65]. Davlouros et al. [66] suggested a staged approach for high-risk patients with routine flat panel digital detector cinefluoroscopy and invasive assessment in case of pathologic initial findings. IVUS has been used successfully to visualize SF due to its higher spatial resolution (150–200 $\mathrm{\mu}$m) [5, 10, 13, 15, 19, 67]. Yamada et al. [68] reported superiority of IVUS compared to angiography to detect SF. Optical coherence tomography (OCT) with a spatial resolution of 10–15 $\mathrm{\mu}$m has seen a rise in use for the differential diagnosis of stent failure [50, 69, 70, 71, 72]. OCT can reproduce the complex spatial stent configuration with high precision and reproducibility [73]. Most recently, Schochlow et al. [21] have demonstrated an unexpectedly high prevalence in OCT-diagnosed SF in the setting of elective control examinations. Since many SF events remain asymptomatic non-invasive diagnostic options are useful. Computer tomography angiography (CTA) has been used as a reliable tool. Hecht et al. [48] have reported SF in 28% of patients with ISR as assessed by CTA. They defined evidence of SF with the following criteria: partial or complete (circumferential) gap or “crush” pattern and reduction of Hounsfield units $<$300 proving the absence of metal. CTA can be superior to fluoroscopic angiography for the detection of SF [74, 75, 76]. However, due to formation of severe artifacts some patients and stents are more suitable for follow-up by CTA than others: Carbone et al. [77] suggest patient selection according to stent diameter, stent material and type as well as heart rate and rhythm. Depending on the diagnostic tool that is used, numerous classifications of SF have been suggested. The first categorizations of SF based on fluoroscopy were done with self-expanding nitinol stents which were implanted in the superficial femoral artery. Scheinert et al. [78] described minor SF as single-strut fracture, moderate SF as fracture of more than one strut and severe SF as complete separation of segments. This classification was later adapted for coronary stents by Lee et al. [5], Shaikh et al. [11] and Kim et al. [43]. Allie et al. [79] suggested four types of SF with type I being single strut fracture, type II being multiple strut fractures at different sites, type III being complete transverse linear fracture without displacement and type IV being complete stent displacement. Jaff et al. [80] introduced an additional type V with formation of a gap between stent fragments. A similar classification was used for coronary stents by Nakazawa et al. [23] in their pathological study. Using IVUS, Doi et al. [10] differentiate between partial SF (absence of stent strut across at least one third of the stent) and complete SF (evidence of at least two fragments separated by an image slice with no visible struts). Schochlow et al. [21] reported four OCT-patterns of SF, with pattern 1 being a single stacked strut and pattern 4 being stent transection with or without gap formation. Finally, Hecht et al. [48] distinguished between partial and complete SF using CTA. 4.4 Clinical Presentation Stent-related adverse events are emerging as a significant issue for the interventional cardiologist. In a recent individual patient pooled study analysis, Madhavan et al. [60] showed an incidence of very late stent-related events of 2% per year with all stent types, without an evident plateau over time. SF is one proposed mechanism of stent failure and is known as a major risk factor for ISR, stent thrombosis, TLR and MACE [11, 13, 15, 40, 42, 81, 82, 83]. From their meta-analysis, Chakravarty et al. [13] report a significantly higher risk for ISR (38% vs. 8.2%, p $<$ 0.001) and TLR (17% vs. 5.6%, p $<$ 0.001) in lesions with fractured stents. In the large study by Kan et al. [22], SF increased the incidence of ISR, TLR and stent thrombosis more than three-fold. Kashiwagi et al. [83] showed a 5-fold higher rate of ISR in SF lesions compared to non-SF lesions. And Ohya et al. [82] described a significantly increased risk for clinically driven and all-cause TLR. Clinical presentation seems to be associated with the extent of SF. SF was seen in 20% of asymptomatic control group devices in a study by Schochlow et al. [21], suggesting SF as a common phenomenon often without clinical implications. Lee et al. [19] showed that only patients with SF grade III and IV were admitted with acute myocardial infarction. No cardiac deaths occurred in their study. In their post mortem analysis Nakazawa et al. [23] found adverse pathologic findings such as thrombosis and restenosis at the SF site in 67% of type V fractures. No significant impact on pathologic findings was seen in type I to type IV SF [23]. While not reporting on the respective grade of SF, Park et al. [17] did not find a significant difference in severity of angina pectoris or incidence of acute coronary syndrome in patients with SF compared to a matched control group. And in their large multicenter study, Kan et al. [22] did not find a difference in mortality between SF and non-SF patients. On the other hand, reports of SF leading to STEMI have been published with BMS [7] as well as 1st generation, cobalt-chromium alloyed DES [84, 85]. In a study with 2nd generation DES STEMI occurred in 15.8% of patients with SF [21]. Kuramitsu et al. [15] described that the risk for myocardial infarction was more than 12-fold higher in SF compared to non-SF patients. Chhatriwalla et al. [86] reported that 12% of patients with SF presented with STEMI or stent thrombosis and 19% with unstable angina or NSTEMI. Omar et al. [26] found that one third of patients with SF presented with acute coronary syndrome. Ohya et al. [82] report a significantly higher risk for myocardial infarction and very late stent thrombosis in SF patients. Heterogeneity of the available data might again be due to the varying definition and different diagnostic modalities in SF studies. Undoubtedly, however, SF has a hazardous potential and interventional cardiologists should be familiar with therapeutic strategies to prevent adverse outcome. 4.5 Therapeutic Options Despite the clinical experience with risk factors of SF, management of SF remains challenging and poorly researched. To date, no randomized controlled trials have been carried out to suggest an optimal treatment. Options for management of SF patients are drug therapy, re-stenting, balloon angioplasty and, in some cases, coronary bypass grafting [87]. Omar et al. [26] reported from the MAUDE database that half of patients re-admitted with STEMI or stent thrombosis due to SF were treated with DES, 23% with medical therapy alone, 13% with ballon angioplasty and 8% with surgery. While treatment of myocardial infarction due to SF is mainly interventional, there is an ongoing debate how asymptomatic SF lesions should be managed. Since the clinical course of SF, especially minor types with single strut fractures, are often benign and asymptomatic and pose a low risk for adverse cardiac events the cost-benefit ratio of re-intervention is often doubted. Therefore, Adlakha et al. [88] proposed to leave asymptomatic SF-related restenosis without treatment and reserve intervention for symptomatic patients. Lee et al. [19] advocated treatment of SF with continuation of dual antiplatelet therapy irrespective of symptoms and suggested re-intervention in symptomatic or asymptomatic ISR with $>$70% stenosis or symptomatic ISR with 50–70% stenosis, which shows positive results in physiological stress test. None of the conservatively treated patients in their study had significantly aggravated restenosis during follow-up nor did cardiac death occur [19]. Ino et al. [20] reported similar results with no adverse outcomes in SF patients without significant restenosis who were treated with dual antiplatelet therapy. Park et al. [17] only performed re-intervention in SF patients with ISR $>$70% and achieved excellent results without adverse events. SF Patients without ISR or with acceptable fractional flow reserve were treated conservatively with dual antiplatelet therapy and no patient required TLR during a follow-up period of median 30.5 months [17]. Different strategies are needed for patients presenting with SF and myocardial infarction. Most reports of STEMI due to SF have been treated by re-intervention and stent deployment [7, 84]. However, stenting in mal-apposed stent struts results in a double layer of metal, leading to an increased risk of thrombogenicity, ISR and recurring SF. Case reports have been published using “plain old balloon” [85] or drug-coated balloon (DCB) angioplasty only in a setting of STEMI achieving good short-term results [87]. A balloon only approach to prevent SF in de-novo lesions or stent failure including re-SF could be feasible. The BASKET-SMALL-2-trial has shown non-inferiority of drug-coated balloon (DCB) application compared to DES implantation in small de-novo coronary lesions [89]. DCB was also shown to be non-inferior to 2nd generation DES for treatment of DES ISR [90, 91]. However, specifically in SF DCB application alone could not achieve lower rates of re-ISR and TLR compared to DES implantation [92]. The authors proposed maintained mechanical stress at an SF site as the mechanism of re-re-ISR regardless of the used device [92]. Coronary artery bypass grafting can be seen as the “last resort” in recurrent SF or SF with heavy ISR or stent thrombosis that makes the vessel inaccessible for intervention wires. 5. Conclusions SF is a frequent complication following DES implantation. Minor SF lesions are usually asymptomatic and the risk of ISR, TLR or MACE is low. These lesions can be treated with antiplatelet therapy alone. Major SF however still present a challenge to the interventionalist, as they can be difficult to manage and pose a high risk of re-ISR, re-TLR and adverse cardiac events. Many lesion-specific and procedural risk factors are known that contribute towards material failure and SF. These risk factors are well illustrated by the two cases that we reported above: both patients were stented in areas with sharp angles and marked cyclic motion of the vessels, patient 1 in the RCA (a risk factor itself) and patient 2 at a bypass graft insertion. At least patient 1 was treated with an SES and long stents up to 30 mm were used. And due to recurrent SF and ISR, both patients were provided with multiple stent layers, which amplified mechanical forces and risk of re-SF even further. However, the presentation of both patients with symptomatic acute coronary syndrome has put the interventionalists into a predicament: indeed, in the setting of myocardial infarction efforts of revascularisation are urgently necessary. The risk of recurrent SF due to placement of multiple stent layers was accounted for by use of a Promus PREMIER EES since the platinum chromium platform of Promus Stents was favoured in regard to flexibility compared to cobalt chromium alloys. Further stabilization of the persistent hinge point of the RCA was attempted by implantation of a Resolute Onyx ZES due to putative advantages related to the stent’s construction of a single strand of platinum iridium wire. And in patient 2, after an unsuccessful attempt of DCB application, two Resolute Integrity ZES, consisting of a single cobalt strand, were used to stabilize the hinge point at the insertion of a LIMA-ad-LAD bypass. However, these attempts could not protect the patients from recurrent SF leading to myocardial infarction. With regard to the discussed literature, a balloon-only approach could have been considered a valid alternative strategy in these two patients since it is at least non-inferior to DES implantation. Since the relevance of coronary interventions will further rise with an ageing population, the number of stent-related complications is also likely to increase. Importantly, elderly patients often combine multiple risk factors for stent failure and SF, such as heavy calcification, vessel tortuosity, presence of vessel grafts, diabetes, renal failure or poor adherence to antiplatelet therapy [93]. Our experience again shows, that once SF has occurred a downward spiral of re-interventions can result. Therefore, the main focus should be on prevention of SF. This can likely be achieved through careful evaluation of the necessity of stent implantation especially in high-risk lesions and consideration of balloon-only strategies if appropriate, adequate lesion preparation by pre-dilation, lithotripsy or application of scoring balloons as well as use of shorter, more flexible stents and avoidance of too aggressive postdilation. Enhanced visualization techniques can be helpful for early SF detection during the index PCI especially in high risk lesions where they can function as a gate-keeper for further IVUS or OCT assessment. If ISR is detected, further assessment by IVUS or OCT should be carried out to elucidate the cause of device failure including SF. And if an SF lesion is verified re-stenting should only be performed after careful consideration of alternative strategies, such as balloon-only or dual antiplatelet therapy. Future technological developments will hopefully further reduce the incidence of SF and more scientific appreciation of this topic will provide us with evidence based treatment options for these high-risk patients. Abbreviations BMS, Bare metal stent; CABG, Coronary artery bypass graft; CAG, Coronary angiography; DES, Drug-eluting stent; ISR, In-stent restenosis; IVUS, Intravascular ultrasound; LAD, Left anterior descending artery; LCX, Left circumflex artery; LIMA, left internal mammary artery; NSTEMI, Non-ST-segment-elevation myocardial infarction; OCT, Optical coherence tomography; PCI, Percutaneous coronary intervention; RCA, Right coronary artery; SF, Stent Fracture; STEMI, ST-segment-elevation myocardial infarction; TLR, Target-lesion revascularization. Availability of Data and Materials Data and materials are available on request. Author Contributions MG has performed literature research and has drafted and written the manuscript. WR and MK have provided support in writing and drafting and have critically revised the manuscript. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript. Ethics Approval and Consent to Participate Patients of the case report consented to anonymous publication of their age, sex, past and present medical history and coronary angiograms. Due to local regulations of the University of Ulm, no ethical approval is required for this retrospective case report. The case report complies with the principles of the 1975 declaration of Helsinki. Acknowledgment Not applicable. Funding This research received no external funding. Conflict of Interest The authors declare no conflict of interest. Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Share
2022-11-30 00:08:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 26, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5375591516494751, "perplexity": 9386.737244548838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00749.warc.gz"}
http://www.gbhatnagar.com/2018/11/
## Friday, November 16, 2018 ### A bibasic Heine transformation formula While studying chapter 1 of Andrews and Berndt's Lost Notebook, Part II, I stumbled upon a bibasic Heine's transformation. A special case is Heine's 1847 transformation. Other special cases include an identity of Ramanujan (c. 1919), and  a 1966 transformation formula of Andrews. Eventually, I realized that it follows from a Fundamental Lemma given by Andrews in 1966. Still, I'm happy to have rediscovered it. Using this formula one can find many identities proximal to Ramanujan's own $_2\phi_1$ transformations. And of course, the multiple series extensions (some in this paper, and others appearing in another paper) are all new. Here is a preprint. Here is a video of a talk I presented at the Alladi 60 Conference. March 17-21, 2016. Update (November 10, 2018). The multi-variable version has been accepted for publication in the Ramanujan Journal. This has been made open access. It is now available online, even though the volume and page number has not been decided yet. The title is: Heine's method and $A_n$ to $A_m$ transformation formulas. Here is a reprint. -- UPDATE (Feb 11, 2016). This has been published. Reference (perhaps to be modified later): A bibasic Heine transformation formula and Ramanujan's $_2\phi_1$ transformations, in Analytic Number Theory, Modular Forms and q-Hypergeometric Series, In honor of Krishna Alladi's 60th Birthday, University of Florida, Gainesville, Mar 2016,  G. E. Andrews and F. G. Garvan (eds.), 99-122 (2017) The book is available here. The front matter from the Springer site. -- UPDATE (June 16, 2016).  The paper has been accepted to appear in: Proceedings of the Alladi 60 conference held in Gainesville, FL. (Mar 2016), K. Alladi, G. E. Andrews and F. G. Garvan (eds.)
2022-09-24 16:32:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7908709049224854, "perplexity": 2190.4273198681954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00788.warc.gz"}
http://secolinsky.com/blogs/gre_problems_13_14,2/
The writer must be conscious of the separateness that exists between himself and his reader. The reader's only access to the mind of the writer is through the words on the page. ## GRE Math Practice Test Problems ### By Maurice Ticas Studying for the exam I've encountered two problems. The first is stated as follows: If $$f$$ is a continuously differentiable real-valued function defined on the open interval $$(-1,4)$$ such that $$f(3)=5$$ and $$f'(x)\geq-1$$ for all $$x$$, what's the greatest value of $$f(0)$$? The second I thought was difficult too and worth sharing: Suppose $$g$$ is a continuous real-valued function such that $$3x^5+96=\int_c^x g(t) dt$$ for each $$x \in \mathbb{R}$$, where $$c$$ is a constant. What is the value of $$c$$? What approach do you have to answer the two problems?
2018-02-23 12:06:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098859786987305, "perplexity": 153.59752467569595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00724.warc.gz"}
https://lists.openwall.net/netdev/2014/10/22/125
lists.openwall.net lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  PHC Open Source and information security mailing list archives Date: Wed, 22 Oct 2014 22:33:39 +0200 From: Kristian Evensen <kristian.evensen@...il.com> To: Hagen Paul Pfeifer <hagen@...u.net> Cc: David Miller <davem@...emloft.net>, Network Development <netdev@...r.kernel.org> Subject: Re: [PATCH net-next] tcp: Add TCP_FREEZE socket option Hi, I am very sorry for not explaining the scenario/use-case properly. Freeze-TCP is mostly targeted at TCP connections established through mobile broadband networks. One example scenario is that of when a user moves outside of an area with LTE coverage. The mobile broadband connection will then be downgraded to 2G/3G and this process takes 10-15 seconds in the networks I have been able to measure. During this handover, the modem/device will in most cases report that it is still connected to LTE. So just looking at the state of the link is not good enough, as it will appear to be working fine (except for no data coming through it). The device does not change IP address, so TCP connections will resume normal operation as soon as the network connection is re-established and packet is retransmitted. However, because of the large "idle" period, this can take another 10-15 seconds. On Wed, Oct 22, 2014 at 9:50 PM, Hagen Paul Pfeifer <hagen@...u.net> wrote: > At least better. But what userspace daemon would configure this? > Likely NetworkManager and friends. But at what conditions? Yes, that would be my suggestion for tools too. The conditions would depend on the kind of network, available information and so on. > In a NATed scenario there is no gain because IP addreses change and > the connection is lost anyway. For the signal strength thing there > might be an advantage but it has costs: > > a) how long did you freeze the connection? What if NetworkManager > stops? The connection hang \infty > b) is it not better to inform the upper layer - the application - that > something happen with the link? > > I mean when the application experience disruptions, the application > can decide what it do: reconnect, reconnect and resend or inform the > user. This possibility is now lost/hidden. Maybe it is no problem - > maybe it is for some applications. This is the main reason why I went with a socket option. While I worked on this patch I wrote a small daemon for testing purposes. This daemon analyses data exported from a mobile broadband modem (QMI), looks at total interface throughput and then multicasts a netlink message when it determines that a handover might happen. This message is only a hint and then it is up to the application developer to decide what to do. Another solution would be a hybrid, the module will works as I described and the socket option will be used as an opt-in > > Do you have considered to bring this to the IETF (TCPM WG)? > Yes, I am currently considering it, or if I should look into different solutions before bringing it up for discussion. The ideal solution would be if there was a way to force a retransmit when the handover period is over, but that opens a whole net set of problems, potential security problems and changes TCP semantics a bit. An advantage of Freeze-TCP is that it works fine with what we have today.
2023-01-30 08:22:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38462433218955994, "perplexity": 5959.110823507351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00011.warc.gz"}
http://gmatclub.com/forum/if-the-measures-of-the-angle-in-a-triangle-are-in-the-ratio-126305.html?fl=similar
Find all School-related info fast with the new School-Specific MBA Forum It is currently 03 May 2015, 17:05 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # If the measures of the angle in a triangle are in the ratio Author Message TAGS: Intern Joined: 29 Sep 2011 Posts: 16 Followers: 1 Kudos [?]: 14 [0], given: 0 If the measures of the angle in a triangle are in the ratio [#permalink]  19 Jan 2012, 16:25 1 This post was BOOKMARKED 00:00 Difficulty: 35% (medium) Question Stats: 54% (01:40) correct 46% (00:35) wrong based on 81 sessions If the measures of the angle in a triangle are in the ratio of 1:2:3, what is the ratio of the smallest side of the triangle to the largest side? A. 1:2 B. 1:3 C. 1:5 D. 2:3 E. 2:5 [Reveal] Spoiler: OA Math Expert Joined: 02 Sep 2009 Posts: 27170 Followers: 4226 Kudos [?]: 40945 [0], given: 5576 Re: Geometry [#permalink]  19 Jan 2012, 17:08 Expert's post Splendidgirl666 wrote: Hi, can someone help with this one please? If the measures of the angle in a triangle are in the ratio of 1:2:3, what is the ratio of the smallest side of the triangle to the largest side? 1. 1:2 2. 1:3 3. 1:5 4. 2:3 5. 2:5 The measures of the angle in a triangle are in the ratio of 1:2:3 --> $$x+2x+3x=180$$ --> $$6x=180$$ --> $$x=30$$, $$2x=60$$ and $$3x=90$$. We have a right triangle where the angles are 30°, 60°, and 90°. Attachment: Math_Tri_Right3060.png [ 3.64 KiB | Viewed 1804 times ] This is one of the 'standard' triangles you should be able recognize on sight. A fact you should commit to memory is: The sides are always in the ratio $$1:\sqrt{3}:2$$. Notice that the smallest side (1) is opposite the smallest angle (30°), and the longest side (2) is opposite the largest angle (90°). So the ratio of the smallest side of the triangle to the largest side is 1/2. Check Triangles chapter of Math Book for more on this topic: math-triangles-87197.html Hope it helps. _________________ Manager Joined: 25 Oct 2013 Posts: 173 Followers: 0 Kudos [?]: 30 [0], given: 56 Re: If the measures of the angle in a triangle are in the ratio [#permalink]  07 Mar 2014, 02:14 If angles are in ratio of 1:2:3 the triangle becomes a 30-60-90 triangle. ratio of sides is 1:$$\sqrt{3}$$:2 hence answer is 1:2. _________________ Click on Kudos if you liked the post! Practice makes Perfect. Re: If the measures of the angle in a triangle are in the ratio   [#permalink] 07 Mar 2014, 02:14 Similar topics Replies Last post Similar Topics: 20 In isosceles triangle RST what is the measure of angle R? 15 20 Mar 2012, 08:16 In isosceles triangle PQR, if the measure of angle P is 80 , 2 06 Oct 2008, 13:13 1 In a certain triangle, the ratio of the smallest angle to 5 19 Mar 2008, 13:34 In an isosceles triangle RST what is the measure of angle R? 7 30 Sep 2006, 18:51 In isosceles triangle PQR, if the measure of angle P is 80, 2 04 Mar 2006, 11:39 Display posts from previous: Sort by
2015-05-04 01:05:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5280859470367432, "perplexity": 2147.247613478375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430452285957.31/warc/CC-MAIN-20150501035125-00039-ip-10-235-10-82.ec2.internal.warc.gz"}
https://data.mendeley.com/datasets/pvzwfr8fcj
# Validity of pseudo first order (PFO) approximation of binary adsorption kinetics Published: 05-11-2019| Version 2 | DOI: 10.17632/pvzwfr8fcj.2 Contributor: Olga Jakšić ## Description ADmix2isLin.m is a MATLAB function that gives the outcome 1 if modeling of binary adsorption kinetics by using the pseudo first order (PFO) approximation is appropriate, zero otherwise. Input parameters are parameters of reaction rate equation: ka1 ka2 kd1 kd2 No1 No2 M where ka refers to rate constant of adsorption, kd refers to rate constant of desorption No refers to the overall number of adsorbate molecules in the system M is the number of adsorption centers on the surface Reaction rate equations are N1'=ka1*No1*(M-N1-N2)-kd1*N1 N2'=ka2*No2*(M-N1-N2)-kd2*N2 The solutions to RRE are N1=Ne1+Nt11*exp(-t/tau1)+Nt12*exp(-t/tau2) N2=Ne2+Nt21*exp(-t/tau1)+Nt22*exp(-t/tau2) Steady state values Ne1 and Ne2 are outcome of the function STEADY_mix2lin.m Transient amplitudes Nt,ij are the outcome of the function TRANSmix2lin.m Time constants are the outcome of the function TAUmix2lin.m ## Steps to reproduce All functions are independent. The function ADmix2isLin.m is obtained by training an artificial neural network to recognize if the solution to linear system of reaction rate equations (RRE) is in accord with the solution to matrix Riccati differential equations (MRDE). The underlined theory is explained in the linked paper. RRE set is N1'=ka1*No1*(M-N1-N2)-kd1*N1 N2'=ka2*No2*(M-N1-N2)-kd2*N2 MRDE set is N1'=ka1*(No1-N1)*(M-N1-N2)-kd1*N1 N2'=ka2*(No2-N2)*(M-N1-N2)-kd2*N2 The input for TAUmix2lin.m is ( ka,kd,No ) where ka, kd and No are columns of two elements, the first one being placed up and the second one down The input for TRANSmix2lin.m and for STEADY_mix2lin.m is ( ka,kd,No,M ) The input for ADmix2isLin(x1) is x1=[ka1 ka2 kd1 kd2 No1 No2 M]
2021-03-08 06:22:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8170875906944275, "perplexity": 4498.50353035989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00473.warc.gz"}
http://openstudy.com/updates/514bd0a4e4b05e69bfacf452
## UmmmHELP Group Title ind the value of x given that OP ll NQ one year ago one year ago triangle(OMP) is similar to triangle(NMQ). so,$\frac{OM}{NM}=\frac{MP}{MQ}=\frac{OP}{NQ}$ so,$\frac{27}{18}=\frac{20+x}{20}$
2014-09-01 14:11:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316866159439087, "perplexity": 8979.35240426778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919066.8/warc/CC-MAIN-20140901014519-00321-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/ok-corral-local-versus-non-local-qm.157177/page-10
# OK Corral: Local versus non-local QM paw ....There is not a single shred of evidence, direct, or indirect, that a wavefunction physically exists.... Have I missed something here? Aren't there solutions of the Schroedinger equation that predict accurately the orbitals of the hydrogen atom, the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling EM? Aren't DeBroglie waves wavefunctions? Don't they accurately predict the various interferance patterns of electrons? I think the evidence is almost overwhelming. Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that.If it walks like a duck and quacks like a duck....... JesseM Juao Magueijo’s article “Plan B for the cosmos” (Scientific American, Jan. 2001, p.47) reads: Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch. It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact It is clear from the above quote that the early universe was in thermal equilibrium. That means that there was enough time for the EM field of each particle to reach all other particles (it only takes light one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m but this time is hardly enough to bring such a sphere of gas at an almost perfect thermal equilibrium). A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism. Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogenous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogenous. So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself. The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me this would imply at at the event of the joining-up, this would mean photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR, it just suggested that if the universe had reached equilibrium this would mean that knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense. ueit said: I also disagree that “the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it”. We don’t have a theory to describe the big-bang so I don’t see why we should assume that it was a non-deterministic phenomena rather than a deterministic one. If QM is deterministic after all I don’t see where a stochastic big-bang could come from. I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featurless", you can't extrapolate the later state of the universe from some sort of description of the singularity itself--this doesn't really mean GR is non-deterministic, you could just consider the singularity to not be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical. JesseM said: I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process. ueit said: Yes, because this is a case where Newtonian theory applies well (small mass density). I’m not accustomed with GR formalism but I bet that the difference between the predictions of the two theories is very small. Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth. If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there? ueit said: In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation the objects are “pulled towards other object's present positions”. You're talking as though the only reason Newtonian gravity could fail to be a good approximation is because of the retarded vs. current position issue! But there are all kinds of ways in which GR departs wildly from Newtonian gravity which have nothing to do with this issue, like the prediction that sufficiently massive objects can form black holes, or the prediction of gravitational time dilation. And the fact is that the orbit of a given planet can be approximated well by ignoring the other planets altogether (or only including Jupiter), so obviously the issue of the Earth being attracted to the current vs. retarded position of Mars is going to have little effect on our predictions. ueit said: The article you linked from John Baez’s site claims that uniform accelerated motion is extrapolated by GR as well. Well, the wikipedia article says: In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk). So either one is wrong or we're misunderstanding what "uniform acceleration" means...is it possible that Baez was only talking about uniform acceleration caused by gravity as opposed to other forces, and that gravity only causes uniform acceleration in an orbit situation which also has spherical/cylindrical symmetry? I don't know the answer, this might be another question to ask on the relativity forum...in any case, I'm pretty sure that the situation you envisioned where Mars is deflected from its orbit by a passing body would not qualify as either "uniform acceleration" or "spherically/cylindrically symmetric". ueit said: EM extrapolates uniform motion, GR uniform accelerated motion. I’m not a mathematician so I have no idea if a mechanism able to extrapolate a generic accelerated motion should necessarily be as complex or so difficult to simulate on a computer as you imply. You are, of course, free to express an opinion but at this point I don’t think you’ve put forward a compelling argument. You're right that I don't have a rigorous argument, but I'm just using the following intuition--if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past. Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two. ueit said: If what you are saying is true then we should expect Newtonian gravity to miserably fail when dealing with a non-uniform accelerated motion, like a planet in an elliptical orbit, right? No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue. Last edited: JesseM As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults it's own model of the entire universe. But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions. As I argued in my last post, ueit's argument about thermal equilibrium in the early universe establishing that all past light cones merge and become identical at some point doesn't make sense. As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults it's own model of the entire universe. In effect, each particle carries some hidden state $\vec{h}$ which corresponds to a complete list of the results of 'wavefunction collapses'. It's not exactly what I propose. Take the case of gravity in a Newtonian framework. Each object "knows" where all other objects are, instantaneously. It then acts as if it's doing all the calculations, applying the inverse square law. General relativity explains this apparently non-local behavior through a local mechanism where the instantaneous position of each body in the system is extrapolated from the past state. That past state is "inferred" from the space curvature around the object. By analogy, we might think that the EPR source "infers" the past state of the detectors from the EM field around it, extrapolates the future detector orientation and generates a pair of entangled particles with a suitable spin. DrChinese Gold Member DrC said: If there is a wavefunction and it collapses why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct, or indirect, that a wavefunction physically exists or wavefunction collapse occurs. This is a fallacious argument. Just because something something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill. Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these are with entangled particles. Hmmm. Gee, is this a strained explanation or what? And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that? It is sorta like invoking the phases of the moon to explain why there are more murders during a full moon, and not being willing to accept that there are no fewer murders at other times. Or do we use this as an explanation only when it suits us? If this isn't ad hoc science, what is? Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these are with entangled particles. Hmmm. Gee, is this a strained explanation or what? That's simple. Even if each particle "looks" for a suitable detector orientation before emission, only for entangled particles we have a set of supplementary conditions (conservation of angular momentum, same emission time) that enable us to observe the correlations. In order to release a pair of entangled particles both detectors must be in a suitable state, that's not the case for a "normal" particle. And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that? I put Malus' law by hand without any particular reason other than reproduce QM's prediction. I'm getting tired of pointing out that the burden of proof is on you. You make the strong claim that no local-realistic mechanism can reproduce QM's prediction. On the other hand I don't claim that my hypothesis is true or even likely. I only claim that it is possible. To give you an example, von Newman's proof against the existence of hidden-variable theories is wrong even if no such theory is provided. It was wrong even before Bohm published his interpretation and will remain wrong even if BM is falsified. So, asking me to provide evidence for the local-realistic mechanism I propose is a red-herring. If this isn't ad hoc science, what is? It certainly is ad-hoc but so what? Your bold claim regarding Bell's theorem is still proven false. DrChinese Gold Member 1. In order to release a pair of entangled particles both detectors must be in a suitable state, that's not the case for a "normal" particle. 2. I put Malus' law by hand without any particular reason other than reproduce QM's prediction. 3. Your bold claim regarding Bell's theorem is still proven false. 1. :rofl: 2. :rofl: 3. Still agreed to by virtually every scientist in the field. JesseM The question of what is or is not a valid loophole in Bell's theorem should not be a matter of opinion, and it also should not be affected by how ridiculous or implausible a theory based on the loophole would have to be. For example, everyone agrees the "conspiracy in initial conditions" is a logically valid loophole, even though virtually everyone also agrees that it's not worth taking seriously as a real possibility. If it wasn't for the light cone objection, I'd say that ueit had pointed out another valid loophole, even though I personally wouldn't take it seriously because of the separate objection of the need for ridiculously complicated laws of physics to "extrapolate" the future states of nonlinear systems with a huge number of interacting parts like the human brain. But I do think the light cone objection shows that ueit's idea doesn't work even as a logical possibility. If he wanted to argue that each particle has, in effect, not just a record of everything in its past light cone, but a record of the state of the entire universe immediately after the Big Bang (or at the singularity, if you imagine the singularity itself has 'hidden variables' which determine future states of the universe), then this would be a logically valid loophole, although I would see it as just a version of the "conspiracy" loophole (since each particle's 'record' of the entire universe's past state can't really be explained dynamically, it would seem to be part of the initial conditions). Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogenous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogenous. So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself. The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me this would imply at at the event of the joining-up, this would mean photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR, it just suggested that if the universe had reached equilibrium this would mean that knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense. OK, I think I understand your point. The CMB isotropy does not require the whole early universe to be in thermal equilibrium. But, does the evidence we have require the opposite, that the whole universe was not in equilibrium? If not, my hypothesis is still consistent with extant data. I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featurless", you can't extrapolate the later state of the universe from some sort of description of the singularity itself--this doesn't really mean GR is non-deterministic, you could just consider the singularity to not be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical. I don't think your argument applies in this case. For example, the pre-big bang universe might have been a Planck-sized "molecule" of an exotic type, that produced all particles in a deterministic manner. Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth. Forget about that example. Take Pluto's orbit or the Earth-Moon-Sun system. In both cases the acceleration felt by each object is non-uniform (the distance between Pluto and Sun ranges from 4.3 to 7.3 billion km, during a Sun eclipse the force acting on the Moon differs significantly from the case of a Moon eclipse). However, both systems are well described by Newtonian gravity hence the retardation effect is almost null. I think the main reason is that, quantitatively, the gravitational radiation is extremely small. The Wikipedia article you've linked says that Earth loses about 300 joules as gravitational radiation from a total of 2.7 x 10^33 joules. If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there? I'll do that. You're right that I don't have a rigorous argument, but I'm just using the following intuition--if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past. Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two. If my analogy with gravity stands (all kinds of motions are well extrapolated in the small mass density regime), the difference in complexity should be about the same as between the Newtonian inverse square law and GR. No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue. See my other examples above. JesseM, I've started a new thread on "Special & General Relativity" forum named "General Relativity vs Newtonian Mechanics". NateTG Homework Helper Have I missed something here? Aren't there solutions of the Schroedinger equation that predict accurately the orbitals of the hydrogen atom, the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling EM? Aren't DeBroglie waves wavefunctions? Don't they accurately predict the various interferance patterns of electrons? I think the evidence is almost overwhelming. Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that.If it walks like a duck and quacks like a duck....... Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and, instead simply predicts probabilities of experimental results. In other words, as long as there are prediction equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one. NateTG Homework Helper But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions. In a sense it's a 'small conspiracy' since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that insures correlation. JesseM In a sense it's a 'small conspiracy' since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that insures correlation. Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory. NateTG Homework Helper Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory. BM is explicitly non-local because it requires synchronization. The 'instantaneous communication' aspect can be handled by anticipation since BM is deterministic. Now, let's suppose that we can assign a synchronization value to each space-time in the universe which has the properties that: (1) the synchronization value is a continuous function of space-time (2) if space-time a is in the history of space-time b then the synchronization value of a is less than the synchronization value of b. Now, we should be able to apply BM to 'sheets' of space-time with a fixed synchronization value rather than instants. Moreover for flat regions of space-time, these sheets should correspond to 'instants' so the predictions should align with experimental results. Of course, it's not necessarily possible to have a suitable synchronization value. For example, in time-travel scenarios it's clearly impossible because of condition (2), but EPR isn't really a paradox in those scenarios either. JesseM, It seems I was wrong about the GR being capable to extrapolate generic accelerations. I thought that the very small energy lost by radiation would not have a detectable effect on planetary orbits. This is true, but there are other effects, like Mercury's perihelion advance. You said: Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical. We know that GR alone cannot describe the big-bang as it doesn't provide a mechanism by which the different particles are created. So, if the pre-big-bang universe was not a null-sized object, but with a structure of some sort, and if it existed long enough in that state for a light signal to travel along the whole thing, would this ensure identical past light cones? Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and, instead simply predicts probabilities of experimental results. In other words, as long as there are prediction equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one. I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given. NateTG Homework Helper I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given. Well, ultimately, it comes down to what 'simplest' means. And that requires some sort of arbitrary notions. I don't know... it's easy to specify a wavefunction; you just write down some equations, and then without any complex further assumptions, you can talk about decoherence and so on to show that humans and their experiments are structures in the wavefunction. But how do you specify the collection of humans and their experiments, without deriving it from something more basic? I think any theory that's anthropocentric like that is bound to violate Occam. NateTG Homework Helper We have one theory which says that: 1. We can predict experimental results using some method X 2. There are things that are not observable used in X. 3. These unobservable things have physical reality. And another theory that says: 1. We can predict experimental results using the same method X. 2. There are things that are not observable used in X Even considering that 'physical reality' is a poorly defined notion, it seems like the latter theory is simpler. The crucial difference here being that in the former theory, 1 is explained by 2 and 3, whereas in the latter theory, 1 is an assumption that comes from nowhere. Occam is bothered by complex assumptions, not complex conclusions. Once you've explained something, you can cross it off your list of baggage. Also, the latter theory isn't complete; either the unobservable things exist or they don't, and you have to pick one. Last edited: 1. We can predict experimental results using QED. 2. The 4-potential $$A^{\mu}$$ is unobservable. Surely we don't have to make a choice, but rely on experiment ? Last edited: I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given. Actually, it is quite possible that you can do without a wavefunction (I guess Occam would be happy) In http://www.arxiv.org/abs/quant-ph/0509044 , the Klein-Gordon-Maxwell electrodynamics is discussed, the unitary gauge is chosen (meaning the wavefunction is real), and it is proven that one can eliminate the wavefunction from the equations and formulate the Cauchy problem for the 4-potential of electromagnetic field. That means that if you know the 4-potential and its time derivatives at some moment in time, you can calculate them for any moment in time, or, in other words, the 4-potential evolves independently.
2020-08-08 21:20:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6358566880226135, "perplexity": 540.4466451652081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738351.71/warc/CC-MAIN-20200808194923-20200808224923-00372.warc.gz"}
https://proofwiki.org/wiki/Definition:Constructed_Semantics/Instance_3/Factor_Principle
Definition:Constructed Semantics/Instance 3/Factor Principle It has been suggested that this page or section be renamed. One may discuss this suggestion on the talk page. Theorem The Factor Principle: $\left({p \implies q}\right) \implies \left({\left({r \lor p}\right) \implies \left ({r \lor q}\right)}\right)$ Proof By the definitional abbreviation for the conditional: $\mathbf A \implies \mathbf B =_{\text{def}} \neg \mathbf A \lor \mathbf B$ the Factor Principle can be written as: $\neg \left({\neg p \lor q}\right) \lor \left({\neg \left({r \lor p}\right) \lor \left ({r \lor q}\right)}\right)$ This evaluates as follows: $\begin{array}{|ccccc|c|cccccccc|} \hline \neg & (\neg & p & \lor & q) & \lor & (\neg & (r & \lor & p) & \lor & (r & \lor & q)) \\ \hline 0 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 2 & 0 & 0 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 2 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 & 0 & 0 & 2 & 2 & 1 & 0 & 2 & 0 & 0 \\ 0 & 2 & 2 & 0 & 0 & 0 & 0 & 2 & 2 & 2 & 0 & 2 & 0 & 0 \\ 0 & 2 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 2 & 0 & 2 & 2 & 1 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \\ 0 & 2 & 0 & 0 & 1 & 0 & 2 & 1 & 0 & 0 & 2 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 0 & 2 & 2 & 1 & 0 & 0 & 1 & 2 & 2 & 0 & 1 & 1 & 1 \\ 0 & 2 & 0 & 0 & 1 & 0 & 2 & 2 & 0 & 0 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 2 & 2 & 1 & 0 & 2 & 2 & 1 \\ 2 & 0 & 2 & 2 & 1 & 0 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 1 \\ 0 & 2 & 0 & 2 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ 0 & 1 & 1 & 2 & 2 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 2 \\ 2 & 0 & 2 & 0 & 2 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 0 & 2 \\ 0 & 2 & 0 & 2 & 2 & 0 & 2 & 1 & 0 & 0 & 2 & 1 & 2 & 2 \\ 0 & 1 & 1 & 2 & 2 & 0 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 2 \\ 2 & 0 & 2 & 0 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 2 \\ 0 & 2 & 0 & 2 & 2 & 0 & 2 & 2 & 0 & 0 & 2 & 2 & 2 & 2 \\ 0 & 1 & 1 & 2 & 2 & 0 & 0 & 2 & 2 & 1 & 0 & 2 & 2 & 2 \\ 2 & 0 & 2 & 0 & 2 & 0 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 2 \\ \hline \end{array}$ $\blacksquare$
2019-05-20 17:02:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7261276245117188, "perplexity": 2414.0375518408623}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00186.warc.gz"}
https://www.nist.gov/ncnr/spin-filters/spin-filter-instruments/ng7-30m-sans
An official website of the United States government The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site. The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely. NG7 30m SANS Share NSF: Small Angle Neutron Scattering (NG7 30m SANS) Neutron spin filters can be applied to measurements requiring different field orientations. The schematic shown above illustrates the use of NSFs on the 30m SANS and CHRNS Very Small Angle Neutron Scattering (VSANS) instruments. In this example, the NSF is the analyzer for a polarized beam measurement with a horizontal field. For more details, please select the provided links for general {Gentile05, Gentile00} and specific {Chen09, Krycka09} applications. All SANS publications using NSFs can be found here. Powerpoint slide provided for presentation use: Neutron Spin Filters on SANS Specifications for NSFs on 30m SANS • 3He cell analyzer only (under low vacuum) • 3He polarization flipping capability • Standard wavelengths: $${\lambda}$$ = 5 $${\unicode{x212B}}$$ or 7.5 $${\unicode{x212B}}$$ • Measurable Q range: 0.015 $${\unicode{x212B}}$$-1 - 0.12 $${\unicode{x212B}}$$-1 • Sample field: H $${\leq}$$ 1.6 T • Four cross-section polarization correction • 3He transmission: $${\leq}$$54% (for desired state) • Flipping ratio: $${\leq}$$ 90 • Manifold sample holder (up to 3 samples simul.) Contacts Magnetic Media Created May 16, 2018, Updated November 19, 2019
2020-01-18 16:46:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3137010633945465, "perplexity": 9673.330037574839}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593295.11/warc/CC-MAIN-20200118164132-20200118192132-00465.warc.gz"}
http://geevi.github.io/2014/hardness-of-coloring.html
### Hardness of Coloring This is the third part, of a series of blog posts, on the impossibility of finding efficient algorithms for certain problems. In the first, we saw that for sudoku and many other puzzles, there is a single explanation for our inability to find polynomial time algorithms. The explanation is that, any one problem (say sudoku) that is NP-Complete, does not have a polynomial time algorithm. This is commonly known as the P $\neq$ NP assumption. If assuming this, a certain problem does not have a polynomial time algorithm, then we say that it is hard. When a problem has a polynomial time algorithm, we say it is easy. In the second post, we saw that for the $3$-SAT problem, even a relaxed version of the problem, of getting an approximation factor better than $7/8$, is hard. We also saw a very silly polynomial time algorithm, which gives a $7/8$ approximation factor. Such pair of results are called optimal, since we know the exact value of the approximation factor below which the problem is easy and above which the problem is hard. The purpose of this post is to explain some recent developments in the hardness results for a particular problem called coloring (to which I have contributed). It is a very common problem, for which we do not have optimal results yet. #### Graph Coloring A graph is an object like the uncolored figure above. It consists of a set of vertices, which are the round things and a set of edges, which are the lines connecting the round things. The colored figure above is a $3$-coloring of the graph on the left, because for every edge, the vertices at the end points have different color and only $3$ colors (red, blue and green) are used. The graph coloring problem is, when given a graph (an arbitrary figure like the one on the left), find a coloring (assignment of colors to the vertices, such that the end points of every edge has different colors), using the least number of colors. In particular, we will be looking at the following question: Given a $3$-colorable graph, can you find a $C$-coloring? The best polynomial time algorithm known for this problem, only guarantee $C = n^{0.199\cdots}$, where $n$ is the number of vertices. It is known that finding a $4$-coloring is hard. We want to know where exactly between $n^{0.199\cdots}$ and $4$, the transition from easy to hard happens. ##### Khot’s Unique Games Conjecture (UGC) Khot observed that PCP with $2$ random queries and unique checks, will imply more hardness of approximation results. ##### Our Results In joint work with Irit Dinur, Prahladh Harsha, Srikanth Srinivasan, we showed that assuming a version of UGC, $C=2^{poly(\log \log n)}$ is hard, an improvement over $C=poly(\log \log n)$ hardness result of Dinur and Shinkar, using similar assumption. Puzzle: $N$ switches each of which can take $3$ positions, control a bulb which glows as red, blue and green. Changing the position of all the switches changes the color of the bulb. Show that the bulb is controlled by exactly one switch. Even if the number of colors the bulbs can take is $100$, for $N=1000$ this is still (al most) true. We show that even if the switches where not allowed to move independently, such statements are still true. #### Hypergraph Coloring $r$-Uniform Hypergraph: A vertex set $V$ and a collection of subsets of $V$ of size $r$. Assign colors to vertices such that no set is monochromatic.Graphs corresponds to the setting of $r=2$. Known algorithms only guarantee $C=n^\alpha$ for some $\alpha < 1$. The state of the art results showed that $C= poly( \log n)$ is hard. ##### Our Results In joint work with Venkat Guruswami, Johan Hastad, Prahladh Harsha and Srikanth Srinivasan, we show $C > > poly(\log n)$ is also hard. Subsequent to our work, Khot and Saket showed $C=2^{(\log n)^{1/19}}$ is hard. Though their result was for $12$-uniform hypergraphs. I further observed that by combining their methods with ours, the same hardness can be obtained for $4$-uniform hypergraphs~\cite{Varma2014}. Hence we improved the hardness results from $C=poly(\log n)$ to $C=2^{(\log n)^{1/19}}$ for the case of $4$-colorable $4$-uniform hypergraphs. #### Covering Problems Covering problems are a generalization of the coloring problem. Motive for studying them is to develop techniques which might later be adapted to graph coloring. An instance of the problem consists of a hypergraph $G=(V,E)$ along with a predicate $P \subseteq {0,1}^r$ and a literal function $L:E\rightarrow {0,1}^r$. An assignment $f:V \rightarrow {0,1}$ covers an edge $e\in E$, if $f|_e \oplus L(e) \in P$. A cover for an instance is a set of assignments such that every edge is covered by one of the assignments. The covering problem is, given a $2$-coverable instance, find a $C$-covering? Theorem: $G$ has a cover of size $C$ with the not-all-equal predicate ${0,1}^r \setminus { \overline 0, \overline 1}$ iff it is $2^C$-colorable. Some predicates like $3$-SAT which has an oddness property (for every $x\in {0,1}^r, x\in P \vee \bar x \in P$). Then $C=2$ is easy, since any assignment and its complement covers the instance. Dinur and Kol asked the question whether the covering problem is hard for all non-odd predicates for any constant $C$. Assuming a slightly modified form of UGC, they proceeded to show that if a non-odd predicate has a pairwise independent distribution in its support, then this is indeed the case. ##### Our Results In joint work with Amey Bhangale and Prahladh Harsha, we show - hardness for all non-odd predicates and for all constants $C$ (making same assumptions as Dinur and Kol), - NP-hardness under some mild conditions on the predicate for $C= \log \log n$, - improved hardness of $C=\log \log n$ (Dinur and Kol proved $\log \log \log n$.).
2017-08-17 23:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8201850056648254, "perplexity": 468.54587566893855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104172.67/warc/CC-MAIN-20170817225858-20170818005858-00564.warc.gz"}
https://snakesonaplane.readthedocs.io/en/latest/
# Snakes on a Plane Conda meets Cargo. SOAP lets you easily maintain Conda environments for individual projects. Soap is configured in soap.toml. SOAP always looks for this file in the root of the git repository it was called from. It can also be configured in the tool.soap table of pyproject.toml. Specify environments with a name and a Conda environment YAML file: [envs] dev = "devtools/conda-envs/test_env.yml" user = "devtools/conda-envs/user_env.yml" docs = "devtools/conda-envs/docs_env.yml" Then run commands in an environment with soap run: soap run --env docs "sphinx-build -n auto docs docs/_build/html" SOAP will check that the environment matches the specification with every call, so if you pull in an update to docs_env.yml and run soap run ... again, your environment will be updated. This won’t necessarily update dependencies if the spec hasn’t changed; to do this, run soap update: soap update You can also define your own aliases for commands. For simple commands, define the command as a value in the aliases table: [aliases] greet = "echo 'hello world'" To configure an alias, define a table instead of a string: [aliases.docs] cmd = "sphinx-build -j auto docs docs/_build/html" chdir = true # Run the command in the git repository root directory env = "docs" # Use the docs environment by default description = "Build the docs with Sphinx" # Description for --help In either case, the alias becomes a sub-command: soap greet The environment used by an alias can be defined in the TOML file, but it can also be overridden on the command line: soap docs --env user SOAP will always check that the environment is correct before running aliases, just like for soap run! ## Acknowledgements Project based on the Computational Molecular Science Python Cookiecutter version 1.6. soap Snakes on a Plane: Conda meets Cargo
2023-03-28 12:36:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4358365833759308, "perplexity": 9584.084953549253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00066.warc.gz"}
https://gamedev.stackexchange.com/questions/81709/the-xna-redistributable-isnt-in-the-prerequisites-list-how-do-i-include-it-whe
# The XNA redistributable isn't in the prerequisites list; how do I include it when publishing? So I created a small project in XNA 4.0, and wanted to try and publish it, and so I did. Yet when attempting to install it on a different computer it didn't seem to work. I've read and studied about how the process of publishing works, and I saw someone in a different thread asking the same question as me, then I realized that inside the prerequisites list which is in PROJECT -> PROPERTIES -> Publish, the XNA redistributable wasn't there therefore it wasn't checked, and when attempting to install it on a different computer they needed to install redistributable on their own to make it work. Any idea why isn't the redistributable in the list? Maybe this isn't even the problem and if that's case any ideas of what IS the problem? Thanks ahead.
2020-02-19 06:21:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30661270022392273, "perplexity": 788.5576900422328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144058.43/warc/CC-MAIN-20200219061325-20200219091325-00334.warc.gz"}
https://mathoverflow.net/questions/222997/whats-wrong-with-my-understanding-of-the-scheme-textisome-lambda-e-lam
# What's wrong with my understanding of the scheme $\text{Isom}(E_\lambda, E_{\lambda'})$? Let $\mathcal{M}_{1,1}$ be the moduli stack of elliptic curves (over the complex numbers). Define $$\begin{eqnarray*} X &:=& \Bbb{A}^1_{\lambda} - \{0,1\}\\ X' &=& \Bbb{A}^1_{\lambda'} - \{0,1\}.\end{eqnarray*}$$ There are morphisms $X \to \mathcal{M}_{1,1}$ and $X' \to \mathcal{M}_{1,1}$ given by the families of curves $$\begin{eqnarray*} E_\lambda := V(y^2 - x(x-1)(x-\lambda)) \\ E_{\lambda' }:= V(y^2 - x(x-1)(x-\lambda')). \end{eqnarray*}$$ By results of Grothendieck, we know that the fiber product $\text{Isom}(E_{\lambda}, E_{\lambda'}) := X \times_{\mathcal{M}_{1,1}} X'$ is a scheme. Its $T$-points are given by $$\text{Isom}(E_{\lambda}, E_{\lambda'})(T) = (T \to X ,T \to X', {E_{\lambda}}_T \stackrel{\simeq}{\to} {E_{\lambda'}}_T ) .$$ The isomorphism above between ${E_{\lambda}}_T$ and ${E_{\lambda'}}_T$ is a $T$-isomorphism. My goal is to try and understand why $X \to \mathcal{M}_{1,1}$ is not étale. To do this, it is enough to show that $\text{Isom}(E_{\lambda},E_{\lambda'}) \to X$ is not \'{e}tale. Since the automorphisms of any elliptic curve in Legendre form are given by $y \mapsto cy$ and $x \mapsto ax +b$, I can see that the scheme $\text{Isom}(E_{\lambda},E_{\lambda'})$ is given by the following conditions in $\Bbb{A}^5:$ $$\text{Isom}(E_{\lambda},E_{\lambda'}) = \operatorname{Spec} \frac{\Bbb{C}[\lambda, \frac{1}{\lambda}, \frac{1}{1-\lambda},\lambda', \frac{1}{\lambda'}, \frac{1}{1-\lambda'},a,\frac{1}{a},b,c]}{(j(\lambda) - j(\lambda'), f_1,f_2,f_3,f_4)}.$$ The polynomials $f_1,f_2,f_3,f_4$ are obtained from equating the coefficients of the relation $$x(x-1)(x-\lambda') = \frac{(ax+b)(ax+b-1)(ax+b-\lambda)}{c^2}.$$ Explicitly, they are given by: $$\begin{eqnarray*} f_1 &=& a^3 - c^2 \\ f_2 &=& 3a^2b - a^2 \lambda - a^2 + a^3(\lambda' + 1) \\ f_3 &=& 3ab^2 - 2ab\lambda - 2ab + a\lambda -a^3\lambda'\\ f_4 &=& b^3 - b^2\lambda - b^2 + b\lambda. \end{eqnarray*}$$ Now if I compute the fiber of the map $\text{Isom}(E_{\lambda}, E_{\lambda'}) \to X$ over the $\lambda = -1$ ($j = 1728$), I get the non-reduced scheme $$\text{Isom}(E_{\lambda}, E_{\lambda'})_{-1} = \operatorname{Spec} \frac{\Bbb{C}[\lambda', \frac{1}{\lambda'}, \frac{1}{1-\lambda'},a,\frac{1}{a}, b,c]}{ \left((2 \lambda'-1)^2 (\lambda'+1)^2 (\lambda'-2)^2,f_1,f_2',f_3',f_4'\right)}$$ where $$\begin{eqnarray*} f_1 &=& a^3 - c^2 \\ f_2' &=& 3b +a (\lambda'+1) \\ f_3' &=& 3b^2 - a^2\lambda' - 1\\ f_4' &=& b^3 - b. \end{eqnarray*}$$ Hence $X \to \mathcal{M}_{1,1}$ is ramified. However: On the other hand, I have computed the cardinality of the fiber $\text{Isom}(E_{\lambda},E_{\lambda'}) \to X$ to always be 12. Indeed consider the $\Bbb{C}$-point of $X$ corresponding to $\lambda = -1$. There are three possibilities for $\lambda'$, namely $-1,2,1/2$. An elliptic curve with $j$-invariant 1728 has automorphism group of order 4, and so the fiber over $-1$ has cardinality $3\times 4 = 12$. The story is the same for the other values of $\lambda$. My question is: Why am I always getting 12? I am not taking into account some non-reduced issue here? I am also confused because in my head, the fiber cardinality should jump for a ramified morphism. Edit I was wrong previously. The fiber over $\lambda = -1$ is reduced, as Macaulay2 tells me (using the command isNormal) that the same ring with coefficients in $\Bbb{Q}$ is normal, hence reduced. Tensoring with $\Bbb{C}$ over $\Bbb{Q}$ still preserves reducedness (since $\Bbb{Q}$ is perfect). The key point is that the element $(2\lambda'-1)(\lambda'+1)(\lambda'-2)$ is already in the ideal $(f_1,f_2',f_3',f_4')$ (also confirmed by Macaulay2). • Forget about the messy equations. Etaleness of a map to the DM moduli stack says exactly that formal fibers of the family are the universal deformations. In this case it is formally smooth of relative dimension 1, so it is equivalent to check if the first-order deformation of each geometric fiber is nontrivial. Bringing in $X'$ is a red herring; forget about it. But why do you think it is not etale? After all, every elliptic curve over a $\mathbf{Z}[1/2]$-scheme does acquire "Legendre form" over an etale cover. Do you know the size of the automorphism group of a "Legendre structure"? – nfdc23 Nov 8 '15 at 11:00 • @nfdc23 I'm very much a beginner in deformation theory - Why would having nontrivial first order deformations imply that the formal fibers are universal deformations? Here, do you mean "nontrivial first order deformations" inside $E_\lambda$? Is that the same as $E_\lambda$ not being isotrivial? – Will Chen Nov 8 '15 at 13:39 • @oxeimon: I meant it as I wrote it: if the deformation theory is "rigid" (no nontrivial automorphisms as deformations) and formally smooth of relative dimension 1 then a formal deformation over a complete dvr that is nontrivial to first order must be a universal deformation. This is a basic exercise. I don't claim any relation to isotriviality, which cannot be detected at the infinitesimal level. (I'm not sure what you mean by "inside $E_{\lambda}$.) – nfdc23 Nov 8 '15 at 16:10 • @BenLim How do you know it is non-reduced? I think your first relation (involving $\lambda'$) is actually in the ideal generated by $f_1',\dots,f_4'$, in which case it actually is reduced. – Charles Rezk Nov 9 '15 at 21:43 • @CharlesRezk I think you're right. Macaulay2 says that $(2\lambda'-1)(\lambda'+1)(\lambda'-2)$ is in that ideal. – Ben Lim Nov 9 '15 at 22:23 Let $Leg: \mathbb P^1-\{0,1,\infty\}\to \mathcal M_{1,1}$ be the Legendre map. (This associates to $\lambda$ the elliptic curve given by $y^2 = x(x-1)(x-\lambda)$.) 1. The coarse moduli space map $j:\mathcal M_{1,1}\to \mathbb A^1$ is of degree $1/2$. 2. The morphism $\mathbb P^1-\{0,1,\infty\}\to \mathbb A^1$ is of degree $6$. (Indeed, given a $j$-invariant, there are precisely 6 possibilities for the $\lambda$-invariant of that curve: $\lambda$, $1/\lambda$, $1-\lambda$, $1/(1-\lambda)$, $\lambda/(1-\lambda)$ and $(1-\lambda)/\lambda$.) In other words, the degree of $j\circ L$ is $6$. It follows that the degree of $L$ is the degree of $j\circ L$ divided by the degree of $j$. This gives $6\times 2 = 12$. • @BenLim Concerning degrees of maps of stacks: consider $G$ a finite group. What would you say the degree of $\{pt\} \to BG$ is? And what about $BG\to \{pt\}$? – Ariyan Javanpeykar Nov 8 '15 at 20:24 • @BenLim Concerning your confusion: The Legendre map is etale. An elliptic curve in Legendre form has rational $2$-torsion, and any isomorphism of an elliptic curve respecting all of the $2$-torsion points has to be $\pm 1$. A proof of this can be found in Katz-Mazur. (Compare this to the fact that an automorphism of an elliptic curve respecting all of the $n$-torsion points ($n>2$) has to be trivial.) – Ariyan Javanpeykar Nov 8 '15 at 20:27 • Ok. However, if it indeed is \'{e}tale, then why am I getting that the fiber of $\text{Isom}(E_{\lambda}, E_{\lambda'}) \to X$ over $\lambda = -1$ is non-reduced? – Ben Lim Nov 8 '15 at 20:28 • @BenLim (Here's me trying to explaining degrees.) The map $\{pt\} \to [\{pt\}/G]$ is of degree $\# G$. The composition with the coarse map $[\{pt\}/G] \to \{pt\}$ is the identity. So this means that the "degree" of $BG \to \{pt\}$ is $1/\# G$. You could think of the "point" of $BG$ as a $1/\# G$-th point. There are some other answers on Mathoverflow explaining this. Look up "groupoid cardinality" for instance. – Ariyan Javanpeykar Nov 8 '15 at 20:31 • @BenLim I can't see where your mistake is in your computation at the moment. In any case, there has to be a mistake because $Isom_S(E,E^\prime)\to S$ is a finite unramified morphism of schemes, whenever $S$ is a scheme and $E$ and $E^\prime$ are elliptic curves over $S$. (Note that $Isom_S(E,E^\prime)$ is the sheaf of isomorphisms respecting the zero section.) Are you sure you are computing the scheme-theoretic fibre correctly? – Ariyan Javanpeykar Nov 8 '15 at 20:43
2019-10-24 06:04:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270352721214294, "perplexity": 210.27716602256655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987841291.79/warc/CC-MAIN-20191024040131-20191024063631-00512.warc.gz"}
http://math.stackexchange.com/questions/259310/proving-that-if-gx-is-injective-and-gfx-is-injective-then-fx-is
# Proving that if $g(x)$ is injective, and $g(f(x))$ is injective, then $f(x)$ is injective Conjecture: If $g(x)$ is injective, and $g(f(x))$ is injective, then $f(x)$ is injective How can I prove that conjecture formally? Thanks! - I think you should specify domains and codomains of f and g. – Moritzplatz Dec 15 '12 at 15:03 ## 1 Answer Let $f(a)=f(b)$. Hence $g(f(a))=g(f(b))$. Since $gf$ is injective. Therefore $a=b$ - Very nice Amr. Thanks! – pie Dec 15 '12 at 15:02 Yes I didnt see that – Amr Dec 15 '12 at 15:04 Note that this works even if $g$ is not assumed to be injective. – Santiago Canez Dec 15 '12 at 15:07 @pie you can accept this answer. – leo Dec 15 '12 at 15:14 If $f(x)=x^2$, then $g\circ f$ will not be injective (on $\mathbb{R}$). – Benji Dec 15 '12 at 15:20
2016-05-28 00:09:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9934766888618469, "perplexity": 380.3210314107336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277286.54/warc/CC-MAIN-20160524002117-00233-ip-10-185-217-139.ec2.internal.warc.gz"}
https://wwrenderer-staging.libretexts.org/render-api?sourceFilePath=Library/Utah/Calculus_II/set3_Transcendental_Functions/set3_pr5.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&answersSubmitted=0&showSummary=1&displayMode=MathJax&language=en&outputFormat=nosubmit
a) Let $\displaystyle | x | < \frac{\pi}{2}$ and $y=\left(\cos^{-1}x\right)^{3}$. Then the derivative $D_{x}y$ = . b) Let $\displaystyle |x|<\frac{\pi}{2}$ and $z=\ln\left(\sec x + \tan x \right)$. Then the derivative $D_{x}z$ = . You can earn partial credit on this problem.
2023-04-01 19:24:57
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960790634155273, "perplexity": 374.3567052572343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00754.warc.gz"}
https://www.physicsforums.com/threads/1-1-2-1-3-1-n-80-if-u.17344/
# 1+1/2+1/3+ +1/n<80 if u.=> 1. Mar 29, 2004 ### quddusaliquddus 1+1/2+1/3+...+1/n<80 if u.......=> prove: 1 + 1/2 + 1/3 + .. + 1/n<80 if you eliminate terms with 9 as a digit in its denominater? Generalise to deleting other digits in denom.? 2. Mar 29, 2004 ### quddusaliquddus Anyone out there who can help?.....please?...... 3. Mar 30, 2004 ### Janitor No matter how big n is? Can't guide you on this, but was your source reliable? Do you mean that 1/19, for instance, is excluded from the sum because it has a 9 in the 19? 4. Mar 30, 2004 ### quddusaliquddus yes. that is what was meant im afraid. 5. Mar 30, 2004 ### Zurtex Hmm, been thinking about this now, because you need to proove that this sum is always less than 80 is not the same as: $$\sum_{n=1}^{\infty}{ \left( \frac{1}{n} \right)} - \sum_{n=1}^{\infty} \left( \frac{1}{10n-1} \right)$$ Last edited: Mar 30, 2004 6. Mar 30, 2004 ### matt grime The only problem there is that you are asuming that the sums converge so that you can rearrange the summation. I think a possible proof goes along the lines: take the fisrt 8 terms: 1+1/2+...+1/8 < 8 then take the valid terms with two digits in the denominator: 1/10+1/11+...+1/88 < 72/10 72 possible terms, all less than or equal to 1/10 now it would be nice, but I've not counted if there were 72*9 terms with 3 digits in the denominator in the sum cos we can replace them with 72*9/100 Then we can bound above by a gp if this continues and get a bound of 8/(1-9/10) = 80 Last edited: Mar 30, 2004 7. Mar 30, 2004 ### Chen Zurtex, that's only for the terms 1/9, 1/19, 1/29, etc. What about 1/90, 1/91, 1/92, etc? 8. Mar 30, 2004 ### uart I've got an idea that just might work with this, especially as you're only after an upper bound. Say you just look at how many terms in the sum are excluded (and hence how many are included) for each "decade" (eg 1..9, 10..99, 100,999, etc) and then place an upper bound on each "decade" sum as the number of terms included in each "decade" times the maximum sum term within that decade. For example, in the "decade" from 1000 to 9999 there are only 5832 terms so the an upper bound would be 5.832 PS: Not sure if "decade" is really the best word to describe the intervals I'm using but I hope everyone knows what I mean. :) Last edited: Mar 30, 2004 9. Mar 30, 2004 ### matt grime I think that if you've counted correctly, you've done some of the leg work for my proof above (that appeared lafter fist bein posted) as 5000 is about 72*18 as I'd require 10. Mar 30, 2004 ### uart Just running with that idea a bit more. You can easily make an acurrate sum of the first few "decades" without invoking any upper bounds. This will reduce the size of the overall upper bound you come up with. But eventually you'll need to just bound each "decade" as described above. Here's my rough calc of the number of terms exluded in each decade. You need to make some sort of series out of this and get a closed form expression to use in the infinite sum. Should be do-able I think. 1..9 : 1 = 1 of 9 excluded 10..99 : 8*1 + 10 = 18 of 90 100..999 : 8*(1+18) + 100 = 252 of 900 1000..9999 : 8*(1+18+252) + 1000=3168 of 9000 etc Last edited: Mar 30, 2004 11. Mar 30, 2004 ### uart Yeah Matt, I didn't see your post until after I posted mine, but I think we are both looking at pretty much the same idea. :) 12. Mar 31, 2004 ### uart OK I've got this one fully sorted now. It's actually much easier to count the "not contain 9's" direct rather than count the "contain 9's" as I did above. The number of numbers that don't contain the digit "9" in each power of 10 range is quite easy to track as follows. 0..9 : 9 numbers (*see note) 10..99 : 9*8 numbers 100..999 : 9*8*8 numbers 1000..9999 : 9*8^3 etc Note: I included zero in the first group just to help make the pattern consistant. So the upper bound to the sum of all the terms that don't contain "9" is simply : UB = 9 * ( 1 + 0.8 + 0.8^2 + 0.8^3 + ....) = 45 As I suggested earlier this upper bound can be made a bit tighter by just manually summing some of the early terms in the series and postponing the upper bound sum a little. For example if you sum all the terms up to 1/8 then you can immediately reduce the upper bound by 9 - Sum(1/1, 1/2, ... 1/8) which reduces the upperbound to under 39 13. Mar 31, 2004 ### quddusaliquddus Im sorry i dont understand. Does this prove that by throwing out all these terms with 9 as denom, the UB of the original sum (1 + 1/2 + 1/3 + ... + 1/n) is 80? 14. Mar 31, 2004 ### matt grime No, that it is bounded above by 80 for any n. 15. Mar 31, 2004 ### quddusaliquddus what about using the formula for 1/1 + 1/2 + ... + 1/n (n<>infinity), and substracting the formulae for 1/9+1/19+1/29+... + 1/90 + 1/91 + ... + 1/99 + 1/109 + ...? 16. Mar 31, 2004 ### matt grime Which formula are these? I know of none for how to sum the first n reciprocals nor do I know a formula for the sum of all the reciprocals of natural numbers less than n which have at least one 9 in their decimal representation 17. Mar 31, 2004 ### quddusaliquddus i think while working on another problem, i cam across the first sum you just mentioned - unfortunately i've lost it. I have never come across the second one though. Sorry-its not too helpful 18. Mar 31, 2004 ### NateTG Chiming in here: Of the first $$n$$ numbers, approximately $$n * (\frac{9}{10})^{\log_{10}(n)}$$ do not contain $$9$$: 1:1 10:9 (missing 9) 100:81 (missing 9 19 29 39 49 59 69 79 89 90 91 92 93 94 95 96 97 98 99) 1000:729 (missing 19*9 (each of the non-9 leading digits incl. 0) + 100 = 271) and so on. Now: $$\log_2(2n)\geq \sum_{i=1}^{n}\frac{1}{n}\geq \log_2(n)$$ since $$\frac{1}{2} \leq \sum_{i=2^{n-1}}^{2^n}\frac{1}{n} \leq 1$$ (Consider that there are $$2^{n-1}$$ terms each of which is less than or equal to $$\frac{1}{2^{n-1}}$$ and greater than or equal to $$\frac{1}{2^{n}}$$) For convenience, let's define $$f:\mathbb{N} \rightarrow \{0,1\}$$ with $$f(x)=1$$ iff $$x$$ contains 9 Then for $$n$$ a power of 10 we have: $$\sum_{i=1}^{n}\frac{f(i)}{n} \geq \sum_{i=1}^{\log_{10}n} \frac{1}{9^i} \sum_{j=1}^{\frac{n}{10^i}} \frac{1}{n}> \sum_{i=1}^{\log_{10}n} \frac{1}{9^i} (\log_2{n}- i log_2(10))$$ Which is a fairily tight approximation. If you simplify it, you might be able to get your proof. 19. Mar 31, 2004 ### quddusaliquddus ....Digesting...... lemme take all that in. 20. Mar 31, 2004 ### uart Yes it proves that the sum can be no more than 45, so therefore it can also be no more than 80. You can show it is no more than 39 by just doing a tiny bit more work and you can even show that it is no more than 26.7 by exactly summing the terms below 10^4 and using the bounding function for terms 10^4 and greater. But you dont have a formula for these, that's the whole point of finding a "bound". When you cant figure out how to sum something exactly then the next best thing is to try and find something easier to sum but which you at least know is always as large or larger than the original thing. That is, you cant find the sum but you can say with certainty that is is not greater than X (eg 80 or whatever the case may be). The bounding approximation that I'm using is 1/1 + 1/1 + 1/1 +...1/1 for all no_digit_nine numbers less than ten, 1/10 + 1/10 + 1/10 + ...1/10 for all no_nine_digit numbers between ten and 99 etc etc. You see how it works, it's much easier than the original sum and is always at least as big or bigger, so it leads to a bound.
2017-04-26 12:11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7925446033477783, "perplexity": 764.3110999161225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121305.61/warc/CC-MAIN-20170423031201-00474-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.quatomic.com/composer/learning/probability-density-timeevolution/
# Exercise 3 In this exercise you will investigate the time evolution of superposition in a harmonic potential. Open the file "Exercise 3 - probability density timeevolution.flow". Then a node diagram appears with many of the same parts as in the previous exercises. There is also a field called "Time evolution", which is just a "pre-loop". Inside this you see the following: • Time Evolution: The wave function is time-developed using the time-dependent Schr\"{o}dinger equation. • Position Plot: The current wave function is displayed. Start the time development by clicking on the green play button at the top left. Observe the development of the wavefunction. 1. How does evolution change if the sign is change or we add an imaginary coefficient ? 2. Maintain the wave function as an equal linear combination of the two lower energy eigenvalues. If you change the angular frequency of the potential (here called 'a')what happens to; the potential wave function? the magnitude of the fluctuation of $\langle x\rangle$? and the time dynamics? 3. Try to include more states in your linear combination. Can you get $\langle x\rangle$ to be static even if the wave function develops in time? 4. Try to include eg. the 5 lowest eigenstates $c_n$, with $n = 0,1,2,3,4$. Select the coefficients as $c_n = \lambda n / \sqrt {n!}$, Where $\lambda = 0.5$ is a suitable value. Is there anything special about the resulting wave function? Note, composer itself ensures that the coefficients are normalized - the above $| c_n |^2$ corresponds (after correct normalization) to a Poisson distribution, and the wave function is in practice what is called a coherent state. What happens if $\lambda$ doubles? 5. You can also get Composer to calculate the integral over $| \psi (x, t) |^2$. Enter the file "Exercise 3 - probability integral timeevolution.flow" and run the program. Try to include more coefficients in the linear combinations and see if you have a good intuition about the dynamics.
2020-01-17 12:51:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6220638155937195, "perplexity": 601.5596935403943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589560.16/warc/CC-MAIN-20200117123339-20200117151339-00246.warc.gz"}
https://scicomp.stackexchange.com/tags/heuristics/hot
# Tag Info 5 The convergence of classical iterative solvers for linear systems is determined by the spectral radius of the iteration matrix, $\rho(\mathbf{G})$. For a general linear system, it is difficult to determine an optimal (or even good) SOR parameter due to the difficulty in determining the spectral radius of the iteration matrix. Below I have included many ... 5 You will typically get the most significant steps forward if you can concisely state a problem in terms of mathematics. When you have a concise formulation in terms of what the free variables are what the objective function is what the constraints are then the next step is to find algorithms that are well-suited to the problem at hand (e.g., can you show ... 5 The following paper S. D. Prestwich, "Local search and backtracking vs non-systematic backtracking," in AAAI 2001 Fall Symp. Uncertainty Computation, 2001. Alternative link to a PDF. has a thorough comparison of local search vs. backtracking-like algorithm. In the introduction, it even features a question: "What is the essential difference between local ... 2 Doing a fast search in GitHub you can find A MATLAB implementation; A Python implementation; and A C implementation. 2 Established methodologies for benchmarking optimization software can be found in publications such as Benchmarking Optimization Software with Performance Profiles, Benchmarking Derivative-Free Optimization Algorithms, and Derivative-free optimization: a review of algorithms and comparison of software implementations. Generally speaking, algorithms are ... 1 Yes, you are optimizing a knapsack problem. The objects, or "items" in most knapsack problem (KP) definitions, in your case is a set $S=\{s_{00}, s_{01}, ..,s_{0n}, s_{10}, .. s_{kn}\}$, which contains composite keyword-bid pairs, so $s_{ij}$ denotes an object labeled "keyword $i$ bid $j$". The reason you should think of your items as composite this way ... 1 The algorithm described by Rashedi et al in the paper you mentioned is not strictly a gravitational search algorithm. In their equation (7), the magnitude of the force of attraction is inversely proportional to the distance between agents. Rashedi et al claim that they get better results with equation (7) instead of the (classically) correct inverse square ... 1 The running time of the algorithm is the sum of its subcomponents. Thus, the complexity is the asymptotically worst case complexity of the subcomponents. Experimentally, you just run the algorithms at different problem sizes (or numbers of constraints, or both) and plot/fit the resulting data. You need to make sure that you run the problem out to a large ... Only top voted, non community-wiki answers of a minimum length are eligible
2021-09-24 13:12:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7024301290512085, "perplexity": 659.813377967198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00214.warc.gz"}
https://plainmath.net/91574/how-can-we-draw-a-triangle-give-one-of-i
# How can we draw a triangle give one of its vertex and the orthocentre and circumcentre? How can we draw a triangle give one of its vertex and the orthocentre and circumcentre? You can still ask an expert for help • Questions are typically answered in as fast as 30 minutes Solve your problem for the price of one coffee • Math expert for every subject • Pay only if we can solve it Emaidedip6g How can we draw a triangle give one of its vertex and the orthocentre and circumcentre?
2022-11-30 06:57:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6405251622200012, "perplexity": 916.8859806110418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00068.warc.gz"}
https://ccjou.wordpress.com/2016/03/31/%e6%af%8f%e9%80%b1%e5%95%8f%e9%a1%8c-april-4-2016/
## 每週問題 April 4, 2016 $A^2=0$,則 $A$ 的最大秩是多少? Let $A$ be an $n\times n$ matrix and $A^2=0$. What is the maximum value of $\hbox{rank}A$? This entry was posted in pow 向量空間, 每週問題 and tagged , , . Bookmark the permalink.
2016-10-24 03:19:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6944364905357361, "perplexity": 3321.2687911799617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00178-ip-10-171-6-4.ec2.internal.warc.gz"}
https://proxies123.com/tag/density/
## SEO Keyword Density Issue – Webmasters Stack Exchange I have a website that has a keyword density of 8% and 4% for my keywords, but I only used the keyword once. The website doesn’t have a lot of actual text. Does this high keyword density hurt my site’s SEO even though I only used it once? I checked my keyword density using the SEO Review Tools density checker. ## mp.mathematical physics – Diagonalization of the generalized 1-particle density matrix Let $$mathscr{H}$$ be a complex separable Hilbert space and $$mathscr{F}$$ be the corresponding fermionic Fock space generated by $$mathscr{H}$$. Let $$rho: mathscr{L}(mathscr{F}) to mathbb{C}$$ be a bounded linear functional on all bounded operators of $$mathscr{F}$$ with $$rho(I)=1$$ and $$rho(A^*)=rho(A)^*$$, and define the 1-particle density matrix (1-pdm) by the unique bounded self-adjoint $$Gamma: mathscr{H}oplus mathscr{H}^* to mathscr{H}oplus mathscr{H}^*$$ such that $$langle x|Gamma yrangle = rho((c^*(y_1)+c(y_2))(c(x_1)+c^*(x_2)))$$ where $$x=x_1 oplus bar{x}_2$$ and $$y=y_1 oplus bar{y}_2$$ (I use the notation $$bar{x} (cdot) = langle x|cdotrangle$$) and $$c,c^*$$ are the annihilation/creation operators. In references V. Bach (Generalized Hartree-Fock theory and the Hubbard model)(Theorem 2.3) and J.P Solovej (Many Body Quantum Mechanics)(9.6 Lemma and 9.9 Theorem), the authors claim that (under suitable conditions) $$Gamma$$ is diagonalizable by a Bogoliubov transform $$W:mathscr{H}oplus mathscr{H}^* to mathscr{H}oplus mathscr{H}^*$$ so that $$W^* Gamma W = operatorname{diag}{(lambda_1,…,1-lambda_1,…)}$$. The main idea of the proof is that $$Gamma$$ is diagonalizable by an orthonormal basis, and that if $$xoplus bar{y}$$ is an eigenvector with eigenvalue $$lambda$$, then $$yoplus bar{x}$$ is an eigenvector with eigenvalue $$1-lambda$$. The proof is fine when $$lambdane 1/2$$, since the 2 eigenvectors are orthonormal to each other. However, if $$lambda=1/2$$, then things become a little more difficult. J.P Solove solves this in the case where the eigenspace of $$lambda =1/2$$ is even-dimensional, but as far as I know, I can’t understand why would it be. Question. Is there something I’m forgetting? If not, is there a way or are there references that complete the proof? ## simplifying expressions – Calculate density of states for 1D system The density of states for a 1D system can be written as: $$Dleft(omegaright)=frac{L}{pi}frac{1}{domegaleft(kright)/dk}$$ I have some expressions for $$omegaleft(kright)$$: ``````w1(k_) = Sqrt((k1 (m1 + m2)/(m1*m2))*(1 - Sqrt(1 - (2*(1 - Cos(k*a)) m1*m2)/((m1 + m2)^2)))); w2(k_) = Sqrt((k1 (m1 + m2)/(m1*m2))*(1 + Sqrt(1 - (2*(1 - Cos(k*a)) m1*m2)/((m1 + m2)^2)))); `````` I need to calculate $$Dleft(omegaright)$$ analytically eliminating $$k$$ but I can only do that if I simplify the expression for w1,2(k) by assigning values to k1,m1,m2,a: ``````k1 = 1; a = 1; m1 = 1; m2 = 2; w1(k_) = Sqrt((k1 (m1 + m2)/(m1*m2))*(1 - Sqrt(1 - (2*(1 - Cos(k*a)) m1*m2)/((m1 + m2)^2)))); w2(k_) = Sqrt((k1 (m1 + m2)/(m1*m2))*(1 + Sqrt(1 - (2*(1 - Cos(k*a)) m1*m2)/((m1 + m2)^2)))); sol = Solve(p == D(w2(k), k) && w == w2(k), {p}, {k}); Simplify((p /. sol((2)))^-1) `````` Output: ``````(2 Sqrt((-3 + 2 w^2)^2))/Sqrt(6 - 11 w^2 + 6 w^4 - w^6) `````` Is it possible to do it for the general case? ## mg.metric geometry – Fast way to generate random points in 2D according to a density function I’m looking for a fast way to generate random points in 2D according to a given 2D density function. For instance something like this: Right now I’m using a modified version of “Poisson disc” method I found here: https://www.jasondavies.com/poisson-disc/ To make this fast, it relies on a grid on fixed interval. In my case, I don’t know the right size of the grid cells since the density changes over the plane. (To generate this image I used no grid at all and it’s pretty slow) Is there a “right way” to generalize this algorithm for this need? Maybe a different approach altogether? ## calculus – Some question about proving \$displaystylelimsup_{ntoinfty}|cos{n}|=1\$ by using density of \${a+bpi|a,binmathbb{Z}}\$ I have seen Proving \$displaystylelimsup_{ntoinfty}cos{n}=1\$ using \${a+bpi|a,binmathbb{Z}}\$ is dense and got this question. Hagen von Eitzen gave the solution as following: Pick an integer $$n$$. By density of $$Bbb Z+piBbb Z$$, there exist $$a_n,b_ninBbb Z$$ with $$frac 1{n+1}. If $$a_m=a_n$$, then $$|b_npi-b_mpi|<1$$, which implies $$b_n=b_m$$ and ultimately $$n=m$$. We conclude that $$|a_n|to infty$$. As $$cos|2a_n|=cos 2a_n=cos(2a_n+2pi b_n)>cosfrac 2nto 1,$$ the desired result follows. I was wondering why $$|a_n|to infty.$$ Could someone give more details about it? –Moreover, is $$|a_n|$$ increasing to $$infty?$$ ## unity – Controlling noise density I’m trying to fake terrain blending by using an opacity mask on my road mesh to reveal the grass underneath. I’m currently multiplying some Perlin noise by a Rectangle node displaying my texture at 80% width which is giving me the following: Obviously this doesn’t look great for reasons that should be apparent. The blotches are unnaturally spaced and they abruptly end with a sharp line at the end of the texture. What I would like to do is generate noise that looks more along the lines of this: Is there a way to apply some sort of gradient to my noise to achieve this? I know I can just use the texture itself but I would like to be able to adjust properties like the extent of the noise during runtime. ## neutral density – Do I still need filters for photographing landscape under an eclipse, if I’m not zooming in on the sun itself? Context: I find myself due to be in the path of the upcoming annular solar eclipse. This was unplanned and I am unable to get a proper filter delivered on time. The general consensus on the internet appears to be that photographing an eclipse requires a solar filter, or at least a 16 stop, ND 100,000 filter, e.g.: https://www.bhphotovideo.com/explora/photography/tips-and-solutions/how-to-photograph-a-solar-eclipse But it seems people frequently take photographs under a normal sun without apparently needing such extreme levels of filtering. Perhaps such advice is geared towards people trying to fill the frame with the sun using a super telephoto lens? It feels intuitive that aiming a telephoto lens directly at the sun would be more dangerous, like starting fires with a magnifying class. So what I am wondering is, if I were to take a landscape photo with a mild telephoto lens (e.g. a 85mm), with an eclipsing sun in a corner, do I still need the recommended protection? Or would a 10 stop, ND 1000 filter be sufficient? Would this be different with 35mm or wider lens? ## neutral density – What is the best option for fitting an ND filter to an 82mm lens? I’d recommend the square filters you’ve found. A great many filter types are made in the 100mm size, which is the size I’d recommend for this. The most popular system for this is the Cokin system, but there are competing systems. HiTech makes them, too, for instance, as does Lee, as you’ve found. The basic idea is that you buy a filter holder and then as many adapter rings as you need for the lens filter sizes you want to use the filter system with. One nice thing about such a system is that you can get graduated ND filters which you can then slide up and down in the holder to position the boundary line where you want it. The system also allows easy rotation. The Cokin system is much broader than this, offering such things as filters that fade from one color to another, but these sort of filters aren’t very useful in the digital world, IMHO, where a gradient overlay in a photo editor can achieve the same effect. Graduated ND is still quite useful today, though. ## physics – Simulating Gas Density and Pressure in a 2D World I’m building a small spaceship simulation app that looks a lot like a game for an upcoming talk I’m giving where I use this sample app to teach the F# programming language. This small app is something like FTL meets Oxygen Not Included where you have a top-down 2D grid of tiles (similar to an old RPG) where each tile has its own mixture of gasses – right now oxygen and carbon dioxide, but potentially others. I’ve got a few things I’m trying to simulate: 1. When new gasses are added to a tile by something like a vent or a life support system, that gas should expand to neighboring tiles if possible 2. When a pressure changes (e.g. opening a door to another area of the ship or a hull breach), air should flow from the high pressure tile to the low pressure tile next to it. Given this, and given that some gasses naturally sift to the top of others, I’m trying to figure out a small set of simple rules to govern this behavior. Previously I had all gasses equalizing with their neighbors and no concept of pressure, but that made it very difficult to treat scenarios like hull ruptures, so I’m looking for something a bit more realistic without getting complex or hyper-accurate. For example, given tile A with 15g oxygen and 6g CO2 and neighboring tile B of 3g oxygen and 1g CO2, some air should clearly flow from A to B. However, what flows? Is it the lightest gasses? The heaviest gasses? A random or representative sampling of gasses in A? Are there any relevant physics principles I should be aware of? Note: I posted here instead of in physics because I don’t care extremely about nuanced accuracy, just something simple and believable ## Finding the cumulative distribution function and the probability density function of a set X which is not a set A material point M moves at a constant angular velocity around a circle with the centre at (0,0) and radius 1 (uniform angle distribution). Let X be the distance of point M from point (1,0). Find the cumulative distribution function and the probability density function of X. What I know so far With some help of geometry you’ll get $$X = d(x)=sqrt{2-2x}$$ as the function which describes the distance between (1,0) and M Then we got $$F_Xleft(tright)=Pleft(Xle tright)=left{ begin{array}{lr} 0 & t<0\ frac{t^2}{4} & 0≤t≤2\ 1 & t>1\ end{array} right.$$ It’s very easy to check this, simply $$sqrt{2-2x} ≤ t$$ and you’ll see that $$frac{2-t^2}{2} = x$$ But that’s the strange point here: P is defined by $$Pleft(xright)=frac{1-x}{2}$$ (that’s why for t=0 P is equal to 0, for t=2 P is equal to 1, and if you go below 0 or above 1 the value cannot change to something lower than 0/higher than 1 because P is a probability measure). But why is P(x) defined in such a way? And also, why do we calculate here with$$frac{2-t^2}{2} = x$$ and not $$frac{2-t^2}{2}le x$$?
2020-07-14 09:53:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 48, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6059901714324951, "perplexity": 1249.778847732625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149819.59/warc/CC-MAIN-20200714083206-20200714113206-00418.warc.gz"}
https://plainmath.net/8846/pic-of-triangle-find-the-measure
# [Pic of triangle] Find the measure. Question [Pic of triangle] Find the measure. 2020-11-23 The two tick marks on the sides of the triangle mean that those two sides are congruent. A triangle with two congruent sides is called an isosceles triangle. The base angles of an isosceles triangle are congruent. The base angles are the angles opposite of the congruent sides. The base angles of the triangle are then x∘ and 13∘ so x=13. ### Relevant Questions In any triangle ABC, E and D are interior points of AC and BC,respectively. AF bisects angle CAD and BF bisects angle CBE. Prove that measures (AEB) + measure (ADB) = 2x measure of (AFB). Find the perimeter of the triangle. To find: The value of measure LM. Triangle DEF is shown below. What is the measure of $$\displaystyle∠{D}$$? [Pic of triangle] Find the volume of the pyramid. Write your answer as a fraction or mixed number. As depicted in the applet, Albertine finds herself in a very odd contraption. She sits in a reclining chair, in front of a large, compressed spring. The spring is compressed 5.00 m from its equilibrium position, and a glass sits 19.8m from her outstretched foot. a)Assuming that Albertine's mass is 60.0kg , what is $$\displaystyle\mu_{{k}}$$, the coefficient of kinetic friction between the chair and the waxed floor? Use $$\displaystyle{g}={9.80}\frac{{m}}{{s}^{{2}}}$$ for the magnitude of the acceleration due to gravity. Assume that the value of k found in Part A has three significant figures. Note that if you did not assume that k has three significant figures, it would be impossible to get three significant figures for $$\displaystyle\mu_{{k}}$$, since the length scale along the bottom of the applet does not allow you to measure distances to that accuracy with different values of k. The student engineer of a campus radio station wishes to verify the effectivencess of the lightning rod on the antenna mast. The unknown resistance $$\displaystyle{R}_{{x}}$$ is between points C and E. Point E is a "true ground", but is inaccessible for direct measurement because the stratum in which it is located is several meters below Earth's surface. Two identical rods are driven into the ground at A and B, introducing an unknown resistance $$\displaystyle{R}_{{y}}$$. The procedure for finding the unknown resistance $$\displaystyle{R}_{{x}}$$ is as follows. Measure resistance $$\displaystyle{R}_{{1}}$$ between points A and B. Then connect A and B with a heavy conducting wire and measure resistance $$\displaystyle{R}_{{2}}$$ between points A and C.Derive a formula for $$\displaystyle{R}_{{x}}$$ in terms of the observable resistances $$\displaystyle{R}_{{1}}$$ and $$\displaystyle{R}_{{2}}$$. A satisfactory ground resistance would be $$\displaystyle{R}_{{x}}{<}{2.0}$$ Ohms. Is the grounding of the station adequate if measurments give $$\displaystyle{R}_{{1}}={13}{O}{h}{m}{s}$$ and R_2=6.0 Ohms? The measure of the supplement of an angle is $$\displaystyle{40}^{{\circ}}$$ more than three times the measure of the original angle. Find the measure of the angles. Instructions: Use the statement: " Let the original angle be x " to begin modeling the working of this question. a) Write algebraic expression in terms of x for the following: I) $$40^{\circ}$$ more than three times the measure of the original angle II) The measure of the Supplement angle in terms of the original angle, x b) Write an algebraic equation in x equating I) and II) in a) c) Hence solve the algebraic equation in b) and find the measure of the angles. factor in determining the usefulness of an examination as a measure of demonstrated ability is the amount of spread that occurs in the grades. If the spread or variation of examination scores is very small, it usually means that the examination was either too hard or too easy. However, if the variance of scores is moderately large, then there is a definite difference in scores between "better," "average," and "poorer" students. A group of attorneys in a Midwest state has been given the task of making up this year's bar examination for the state. The examination has 500 total possible points, and from the history of past examinations, it is known that a standard deviation of around 60 points is desirable. Of course, too large or too small a standard deviation is not good. The attorneys want to test their examination to see how good it is. A preliminary version of the examination (with slight modifications to protect the integrity of the real examination) is given to a random sample of 20 newly graduated law students. Their scores give a sample standard deviation of 70 points. Using a 0.01 level of significance, test the claim that the population standard deviation for the new examination is 60 against the claim that the population standard deviation is different from 60. (a) What is the level of significance? State the null and alternate hypotheses. $$H_{0}:\sigma=60$$ $$H_{1}:\sigma\ <\ 60H_{0}:\sigma\ >\ 60$$ $$H_{1}:\sigma=60H_{0}:\sigma=60$$ $$H_{1}:\sigma\ >\ 60H_{0}:\sigma=60$$ $$H_{1}:\sigma\ \neq\ 60$$ (b) Find the value of the chi-square statistic for the sample. (Round your answer to two decimal places.) What are the degrees of freedom? What assumptions are you making about the original distribution? We assume a binomial population distribution.We assume a exponential population distribution. We assume a normal population distribution.We assume a uniform population distribution. If $$\displaystyle{m}∠{1}+{m}∠{5}={100}$$, find the measure of $$\displaystyle∠{2}$$. $$\displaystyle{A}{\left(\frac{{1}}{{3}}\right)}{2}{\left(\frac{{4}}{{7}}\right)}={\left(\frac{{8}}{{7}}\right)}{8}$$ $$\displaystyle{m}∠{2}=□=\frac{{1}}{{9}}$$ Type the correct answer, then press Enter.
2021-06-15 10:15:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7749343514442444, "perplexity": 538.3003161492741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00143.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/2/lesson/2.1.3/problem/2-40
### Home > A2C > Chapter 2 > Lesson 2.1.3 > Problem2-40 2-40. Find the slope between points A and B. $\frac{6-5}{-2-3}=-\frac{1}{5}$ Now find the slope between points B and C. $\frac{7-6}{-5+2}=-\frac{1}{3}$
2020-08-04 03:20:21
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4197344481945038, "perplexity": 6426.439157626909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735851.15/warc/CC-MAIN-20200804014340-20200804044340-00098.warc.gz"}