https://depth-first.com/articles/2010/11/01/chemcell-easily-convert-names-and-cas-numbers-to-chemical-structures-in-excel/

# ChemCell - Easily Convert Names and CAS Numbers to Chemical Structures in Excel
Chemical databases often start as a list of names or Chemical Abstracts Service (CAS) Registry numbers contained in an Excel spreadsheet. But as more and more expectations get placed on these ad hoc datasets, a point inevitably comes when the assignment of chemical structures becomes necessary. Whether for the purpose of performing substructure search, generating structure images, clustering, or assigning molecular weight, generating chemical structures from common names and CAS Numbers can be a major problem. Given the task of doing so for hundreds of structures, many organizations resort to manual data entry. But what if there were an inexpensive, quick alternative? This article discusses one solution.
## ChemCell
ChemCell is a macro that enables Microsoft Excel to convert columns of chemical names and CAS Numbers into SMILES strings. A poster I gave at the 4th annual Collaborative Drug Discovery (CDD) Community Meeting describes ChemCell in more detail.
## Using ChemCell
To generate a SMILES string for a name contained in cell A4, click in any empty table cell and use this formula:
=getSMILES(A4)
getSMILES works just like any other Excel formula: it can be pasted down every row in a column, the resulting values can be sorted, and other calculations can be based off of it.
## How it Works
ChemCell uses the Chemical Structure Lookup Service (CSLS), a web service created to provide structural information based on chemical names. By invoking the getSMILES function, your spreadsheet calls CSLS and parses the result.
Although it's possible to use PubChem for one-off structure lookups based on CAS Number and/or name, the CSLS Web API is implemented in a way that makes it easy to expose this functionality through Excel.
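Since the original VBA macro isn't reproduced here, the sketch below shows the same kind of lookup in Python. It is a minimal illustration only: it uses the NCI/CADD Chemical Identifier Resolver URL pattern as a stand-in, since the exact CSLS endpoint and response format that ChemCell parses are not shown in this article.

```python
import urllib.parse
import urllib.request

def get_smiles(identifier: str) -> str:
    """Resolve a chemical name or CAS number to a SMILES string.

    Illustrative stand-in: queries the NCI/CADD Chemical Identifier
    Resolver rather than the CSLS endpoint ChemCell actually calls.
    """
    url = ("https://cactus.nci.nih.gov/chemical/structure/"
           + urllib.parse.quote(identifier) + "/smiles")
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8").strip()

print(get_smiles("benzene"))   # e.g. 'c1ccccc1'
print(get_smiles("50-00-0"))   # CAS number for formaldehyde, e.g. 'C=O'
```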
## Limitations
ChemCell's recall and accuracy were tested against a random sample of 1,000 name/structure pairings found in the ChEBI 3-star dataset. Rate of recall was found to be 70% (structures found) with 76% accuracy (exact matches). Most mismatches were due to unassigned stereochemistry in CSLS that was assigned in ChEBI. In other words, agreement between ChEBI and CSLS in terms of molecular atom connectivity was high.
## Just the Beginning
Converting names and CAS numbers to structures is but one possible use of the underlying ChemCell software. The core system could be used for a number of purposes, including generation of Standard InChI Keys (currently supported), returning structure images, calculating logP, finding molecular weight, assigning IUPAC names, and a number of other capabilities.
As more cheminformatics Web services like CSLS start to pop up, they could be integrated through Excel by making some very simple changes to the ChemCell code.
## Conclusions
ChemCell is a very small piece of software that exposes cheminformatics Web services through the familiar and ubiquitous interface of Microsoft Excel. Although the initial proof of concept succeeds relatively well in assigning structures to arbitrary names and CAS Numbers, the underlying approach could be adapted to expose a number of other interesting cheminformatics services.
https://pos.sissa.it/396/354/

Volume 396 - The 38th International Symposium on Lattice Field Theory (LATTICE2021) - Oral presentation
Investigation of the Perturbative Expansion of Moments of Heavy Quark Correlators for $N_f=0$
L. Chimirri* and R. Sommer
Full text: pdf
Pre-published on: May 16, 2022
Abstract
The QCD coupling is a necessary input in the computation of many observables, and the parametric error on input parameters can be a dominant source of uncertainty. The coupling can be extracted by comparing high-order perturbative computations with lattice-evaluated moments of mesonic two-point functions with heavy quarks, which provide a high energy scale for perturbation theory. The truncation of the perturbative series is an important systematic uncertainty.

We report on our attempt to study this issue by measuring pseudo-scalar two-point functions in volumes of $L=2\, \text{fm}$ with twisted-mass Wilson fermions in the quenched approximation. We use full twist, the non-perturbative clover term and lattice spacings down to $a=0.015\,\text{fm}$ to tame the sizable discretization effects. Our preliminary results indicate that either higher-order perturbative corrections or the continuum limit are not under sufficient control, despite our small lattice spacings and quark masses extending beyond $2\,m_{\text{charm}}$.
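For readers outside the lattice community: the moments referred to are conventionally the time moments of the heavy-quark correlator (this is the standard definition from the literature, not a formula taken from this proceedings contribution),

$$G_n = \sum_{t} t^{\,n}\, G(t),$$

where $G(t)$ is the zero-momentum pseudo-scalar two-point function. For a heavy quark of mass $m_h$, the low moments are dominated by short time separations $t \sim 1/m_h$, which is why they can be computed in perturbation theory and compared with the lattice to extract the coupling.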
DOI: https://doi.org/10.22323/1.396.0354
https://electronics.stackexchange.com/questions/206521/how-to-use-external-st-link-to-debug-program-stm32f103-mcu?noredirect=1

# How to use external ST-Link to debug/program STM32F103 MCU?
I'm using an STM32F103 MCU for my own project and want to use the ST-Link of an STM32F411 Nucleo board for external debugging/programming purposes.
I've set the CN2 jumpers OFF, and my actual question is about the pinout of the SWD connector (CN2). Here is how I proceed:
• PIN 1 (of the SWD connector) is VDD_Target
• PIN 2 is SWCLK
• PIN 3 is GND
• PIN 4 is SWDIO
• PIN 5 is NRST
• PIN 6 is SWO
To the best of my knowledge, I don't need to use all of the pins above. So far, I've connected:
• PIN 2 to PIN 37 (or PA14) in MCU
• PIN 3 to GND
• PIN 4 to PIN 34 (or PA13)
• PIN 5 to PIN 7 (RESET) of the target MCU.
I'm not sure if I should connect the SWO pin, as it's marked "reserved" (why?). Also, I'm supplying 3.3 V to the VIN pin of the MCU, which means I don't need to connect VDD (PIN 1 of the ST-Link).
Please refer to the table of SWD connector pins in the official datasheet, and to the general pinout configuration of the MCU.
I've tested almost everything with an oscilloscope and a tester, and everything seems okay. What else am I missing here? Should I do anything with the BOOT0 or BOOT1 pins?
First of all, you are right: if your board already has a supply voltage source, you do not have to connect the ST-Link's VDD pin.
The second thing I would recommend is to open the STM32F411 Nucleo board's reference manual and look at the schematics, especially the part where the ST-Link is connected to the controller on the board.
Per ST Microelectronics' design, SWCLK, SWDIO, NRST and SWO (and GND, of course) are connected to the target MCU. The additional SWO pin is used for debug purposes: you can access data printed with the printf function through this pin using the ST-Link Utility.
> The Printf via SWO Viewer displays the printf data sent from the target through SWO.
So I can recommend connecting SWO as well; it can be useful later. Connect your MCU to the ST-Link just like they have connected the Nucleo's MCU to it.
As for boot configuration, there are three selectable options; the easiest is to stick with the Main Flash and tie BOOT0 to GND, but I do not know your requirements, so it is up to you to choose.
• While you can "get away with" not connecting the VDD pin, it isn't there to be a supply, but rather there to detect the target's supply voltage (see how it is connected to an analog input). A more sophisticated debug interface could keep its lines low without that, and only raise them to the corresponding supply level, supporting multiple target voltages. The reset line is not normally needed (unlike with many other SWD implementations). It is key to recovering from bad loads and firmwares which disable the SWD pins, but can be manually manipulated as well. – Chris Stratton Dec 17 '15 at 13:52
• On an STM32F1xx board the main reason for making BOOT0 externally select-able would be if there's a desire to use the factory ROM UART (etc) bootloader. On the '103 this does not support USB as it does on many of ST's later parts. If one desired a USB bootloader on the '103 it has to be in the main flash memory, so the BOOT0 pin isn't useful for that. – Chris Stratton Dec 17 '15 at 13:56
• My BOOT0 and BOOT1 are set to zero to make sure the program will be written to Flash memory. Apart from that, I'll try using the ST-Link Utility on a Windows machine. I'm currently using Mac OS, but I'm not sure if my problem is OS-related. – baqx0r Dec 17 '15 at 14:43
• I have used an STM32F407 Discovery Kit's ST-Link to program an STM32F303. I have simply connected the (2-5 pins) Kit's SWD connector to my board SWCLK, GND, SWDIO, NRST, SWO (just like it is done on the Discovery and Nucleo) and it worked with ST-Link Utility. The boot0 pin is tied to GND through a 10k resistor, based on the STM32F303 hardware reference manual. – Bence Kaulics Dec 17 '15 at 15:26
First of all thanks to everyone for their contribution.
After two sleepless nights of struggle, I found the issue. The problem was in the pin connections on my custom board: I thought that, on my MCU, pin 9 (VDDA) was connected internally with pins 24, 36 and 48, and pin 8 (VSSA) with pins 23, 35 and 47, but that is not so.
I needed to supply 3.3 V and GND to VDDA and VSSA as well, and then the ST-Link started working.
Solution method: I used the Maple Mini schematics to understand the connections of the STM32F103. It turned out that they short-circuited VDDA with VDD1, VDD2 & VDD3, and VSSA with VSS1, VSS2 and VSS3. I think I should've understood this from the naming VSSA, as it's not VSS0 or VSS4.
https://zbmath.org/?q=an:29.0223.05&format=complete

zbMATH — the first resource for mathematics
Derivation of the formula for the sum of positive integer powers of the sequence of natural numbers in independent form. (Ableitung der Formel für die Summe der positiven, ganzen Potenzen der natürlichen Zahlenreihe in independenter Form.) (Czech) JFM 29.0223.05
MSC:
11B57 Farey sequences; the sequences $1^k, 2^k, \dots$
Keywords:
Sums of powers of integers
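For context (not part of the zbMATH record): the classical closed form for these sums is Faulhaber's formula, which in modern notation, with Bernoulli numbers $B_j$ and the convention $B_1 = +\frac{1}{2}$, reads

$$\sum_{i=1}^{n} i^k = \frac{1}{k+1} \sum_{j=0}^{k} \binom{k+1}{j} B_j\, n^{k+1-j};$$

e.g. $k=1$ gives $n(n+1)/2$ and $k=2$ gives $n(n+1)(2n+1)/6$.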
https://abhijeetkrishnan.me/first-research-paper/

My First Publication
I submitted my first research paper to the 16th AIIDE conference on May 28th, 2020 at 7:40 PM GMT. Nearly a year's worth of work had gone into it. I would like to share the research journey which culminated in the final publication.
I began work in Spring 2019, when my advisor offered me the project after their grant for it was funded by the LAS. The original whitepaper proposed modelling user mental models using a rule-based model (think Prolog statements) and attempting to "align" them to a correct model, or to understand how they differ. The choice of a rule-based model was somewhat arbitrary: it was motivated by my advisor's prior experience with such models, as well as the belief that a rule-based model would give the user models some explanatory power about the user. Since we are a rather games-focused lab, we were going to use games and players as our test bed.
The value to LAS from this project was in having a way for their intelligence officers to learn software faster, since the results from our research could inform ways to accelerate their usage of internal software, provide shortcuts for oft-used operations and generally speed up the learning process. The potential research results mentioned in the white paper were general software interaction principles which could inform UI/UX design. The result I was hoping to get out of it, and my initial research direction, was a way to generate levels for games based on the learned player model, which could uniquely challenge the player.
Going into the project, I found myself clueless about a lot of things. I did not know how to model a player’s mental model computationally (or what prior work existed). I did not know enough about rule-based systems I could use as the language of the player model. I had only a passing familiarity with different logic systems (e.g., epistemic, first-order, temporal) as a result of past AI courses I had taken. To remedy this, I began to trawl through the literature. My initial lit reviews led me to the fields of student modelling and intelligent tutoring systems, which I found a lot of relevant material in.
I was fortunate to have several professors and PhD students in adjacent labs in my department who had published in these fields. Samiha Marwan explained to me her research on knowledge components, which are a common way to model student learning in computer-assisted education research. Dr. Noboru Matsuda had developed SimStudent, a computational student model that could be "trained" like a student. This was very much like what I wanted to accomplish, and it used a rule-based representation for the artificial student model. From the paper, I discovered that it used an older rule-learning system called FOIL, which could generalize over first-order logic statements provided to it, mimicking human induction. I could not find a working version of this system, so I tried to look for alternatives.
I also separately set up my test bed. I used a game called Laserverse, a block-based puzzle game developed in PuzzleScript which centres around bouncing lasers off mirrors to open doors to the goal. The game was developed by UG researchers in the POEM Lab during a Summer REU project, which culminated in a publication. I had previously worked to develop an ASP-based level generator for the game, so I was familiar with it. The authors had provided extensive documentation and were available in person for help. I had to fiddle a bit with the PuzzleScript source code in order to log player actions taken in-game.
My foray into student modelling was very interesting, but ultimately did not yield anything useful. I learned a good bit about how student models in the earliest ITSs worked. However, there was no work that I could find which used a logic-based representation of a student model. Student modelling did make me refine my thinking about the project, however. In particular, the idea of constraint-based student models made me question if I could identify a player’s actions as right or wrong, at least in the context of a puzzle game. This helped me course correct later in the project. I also toyed with coming up with a logic-based student model of my own, since my advisor has worked on Ceptre, which is a linear logic-based language designed to model game rules. However, the feeling that “something like this must already exist” led me to keep looking for existing solutions. I also investigated some psychology papers for insight into how players form mental models of games they play.
I ran into the field of inductive logic programming (ILP). This field is concerned with developing methods to induce logical rules from sets of ground truth statements written in first-order logic. My biggest stumbling block here was a general unfamiliarity with the field. I found a very useful textbook on ILP which helped me navigate the ILP papers I had to read.
ILP attempts to solve the problem of generalization from examples (i.e., machine learning). Given a collection of positive and negative examples of an (as yet unknown) rule, can you generate a hypothesis which best fits this rule? Say we were given $(x,y)$ pairs $(1,2), (2, 4), (3, 8), (4, 16)$, would we be able to come up with the general rule $y = 2^x$? The general problem of forming a hypothesis based on training examples is well known in machine learning, but ILP attempts to work with training examples and hypotheses presented in symbolic forms, such as predicate logic.
The most modern ILP system I could find was Metagol, written by Andrew Cropper for his PhD thesis. The system runs on SWI-Prolog and was easy to use. It takes in positive and negative examples of a rule as input, along with some predicate definitions and optional background knowledge, and returns a logic program which best describes the examples provided. As a concrete example, here is the code provided in the README of the Metagol project:
:- use_module('metagol').
%% metagol settings
body_pred(mother/2).
body_pred(father/2).
%% background knowledge
mother(ann,amy).
mother(ann,andy).
mother(amy,amelia).
mother(linda,gavin).
father(steve,amy).
father(steve,andy).
father(gavin,amelia).
father(andy,spongebob).
%% metarules
metarule([P,Q],[P,A,B],[[Q,A,B]]).
metarule([P,Q,R],[P,A,B],[[Q,A,B],[R,A,B]]).
metarule([P,Q,R],[P,A,B],[[Q,A,C],[R,C,B]]).
:-
%% positive examples
Pos = [
grandparent(ann,amelia),
grandparent(steve,amelia),
grandparent(ann,spongebob),
grandparent(steve,spongebob),
grandparent(linda,amelia)
],
%% negative examples
Neg = [
grandparent(amy,amelia)
],
learn(Pos,Neg).
This produces the output:
% clauses: 1
% clauses: 2
% clauses: 3
grandparent(A,B):-grandparent_1(A,C),grandparent_1(C,B).
grandparent_1(A,B):-mother(A,B).
grandparent_1(A,B):-father(A,B).
A novelty here is the metarules, a concept introduced by Andrew Cropper for his PhD thesis. They are higher-order templates that define how predicates can be combined. They constrain the sort of hypotheses that can be learned, and a set of simple metarules can be used to learn a wide variety of useful hypotheses.
I worked with ILP for a long time in attempting to get a solution for my player modelling problem. I even contributed a few examples from a DeepMind paper on ILP to the Metagol repo. However, I found that Metagol required some very contrived metarules in order to learn simple rules, like movement. I needed to almost provide knowledge of the rule to be learned in the form of metarules (which Andrew explained to me was the biggest limitation of MIL). I also encountered problems with my test bed since I would need a Prolog-based representation of Laserverse to work with Metagol. My prior experience in building a logic-based level generator for the game had taught me that working with logic is very cumbersome and fiddly, since expressing mechanics for the game in logic is hard and debugging them when things aren’t working is even harder (and annoying, given the lack of sophisticated development tools). I considered the possibility of using a simple game already written in Prolog as a test bed instead.
I tried a lot to get Metagol to learn the mechanics of the game. As an aid, I tried to break down the problem I was solving more formally. I realized it involved recording player actions and the resulting changes in game state, taking this sequence of $S_0 - A_1 - S_1 - A_2 - …$ and attempting to learn rules about the player - which actions they take in certain states, long-term patterns in their actions and so on. It slowly began to dawn on me that this problem was fundamentally unsupervised, whereas ILP required labelled examples of what matched the rule and what did not. Although there were ILP approaches which could do unsupervised learning, I began to look for alternative approaches, bolstered by the thought that since the problem felt so tractable, there must be some prior work on it.
I did attempt a lot to solve it myself, but could not come up with a satisfactory solution, mostly because I was unable to find a way to detect long-term patterns in a sequence. I investigated some data mining approaches but could not find any applicable technique. I finally hit the jackpot when I stumbled upon action model learning (AML), a field dedicated to solving exactly what my problem entailed. AML is situated in the automated planning community and is a method to reconstruct a planning domain from examples of action traces. A planning domain is a rule-based description of an environment (usually in PDDL), and an action trace is exactly the $S_0 - A_1 - S_1 - A_2 - …$ sequence I’d earlier described.
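To make the problem concrete, here is a deliberately naive sketch of action model learning over propositional states — my own illustration, not the algorithm used by FAMA or any published system: preconditions are estimated as facts that hold in every observed pre-state, and effects as facts consistently gained or lost.

```python
from collections import defaultdict

def learn_action_models(traces):
    """Naive action model learning from (state, action, next_state) triples.

    States are sets of ground facts (strings). For each action, candidate
    preconditions are the intersection of all observed pre-states; add and
    delete effects are the facts gained and lost across observations.
    Illustrative only - not the FAMA or Blackout algorithm.
    """
    pre, add, delete = {}, defaultdict(set), defaultdict(set)
    for state, action, next_state in traces:
        state, next_state = frozenset(state), frozenset(next_state)
        pre[action] = state if action not in pre else pre[action] & state
        add[action] |= next_state - state
        delete[action] |= state - next_state
    return {a: {"pre": set(pre[a]), "add": set(add[a]), "del": set(delete[a])}
            for a in pre}

# Tiny Sokoban-like trace: the player pushes a box one cell to the right.
traces = [
    ({"at(p,1)", "box(2)", "clear(3)"}, "push_right",
     {"at(p,2)", "box(3)", "clear(1)"}),
]
print(learn_action_models(traces))
```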
One limitation I found was that AML systems try only to learn the preconditions and effects of each action, rather than any rules about long-term trends in the actions taken. In the context of games, they would only learn the mechanics of an action, rather than deduce which sequence of steps the player tends to perform when confronted with a specific obstacle. Given this limitation, I decided to reframe the problem as learning a player's mental model of a game's mechanics, and to leave the long-term pattern learning for future work.
I found FAMA to be the most modern AML system, and corresponded with the authors for help getting it working (I ran into memory overflow issues frequently, and they were very helpful in their responses). I did encounter papers for older AML systems, but could not easily find their source code online (although I managed to find some by emailing the respective authors when writing the paper for my written prelim). I identified Sokoban as an alternative to Laserverse as a block-based puzzle game, which fortunately also had a PDDL representation. I also attempted to understand the workings of FAMA from the paper and found that it compiles the AML problem into one of planning. It uses a cleverly constructed planning domain whose solution plan, once found, can be turned into a solution to the AML problem. I had much more difficulty understanding the workings of older AML systems like ARMS and SLAF, and since I wasn't able to use them, did not pursue the endeavour.
I spent the next few weeks reading the FAMA paper, pursuing some promising leads mentioned there (e.g., human-aware planning), and trying to learn action models using it. I realized during this time that the learned action models themselves were meaningless as a player model, even if they were learned using action traces from human players. The learning process of FAMA is nothing like how a human learns, and so we have no evidence for the learned player model resembling a human’s knowledge of the mechanics in any way. Basically, there was no quantitative metric for accuracy or precision, nor did we have a ground truth - the player’s actual knowledge of the game’s mechanics represented in PDDL.
In addition, we didn’t have anything useful we would be able to learn from the PDDL player model. I was able to adapt an evaluation metric from the FAMA paper to produce a “proficiency score”, which measured how well a player knew the game’s mechanics (which again wasn’t calibrated against any ground truth). I also speculated that the player model of the game’s mechanics could be used as an actual game by supplying it as the input domain to a planner. This had some artistic merit.
I attempted to find a way to calibrate these scores, or to find evidence that the learned player models resembled a human player model, but any solution would require a user study. I found it difficult to organize one due to the COVID-19 pandemic hitting at the time, so opted to mention that we would do so in a future work.
Separately, an undergraduate researcher I was working with came up with a simplified AML algorithm (dubbed Blackout) to learn actions from play traces. He used simple principles to verify that the learned predicates matched the play traces and used some novel “invariant”-based heuristics to eliminate spurious predicates. The algorithm shares similarities with the earliest works on AML, and the invariant-based heuristics do not have a formal proof for their correctness. However, quantitative measures of Blackout’s accuracy and precision found it comparable to FAMA, and Blackout far exceeded it in terms of performance, so we believed we had a publishable result.
I reframed the problem further to present the use of AML as a novel, domain-agnostic player modelling technique which could learn a rule-based player model, which described the player’s knowledge of a game’s mechanics. I described the use of AML to do player modelling and described the results of experiments comparing FAMA with Blackout. This became the final draft of the paper which I intended to submit for publication.
My advisor provided a lot of valuable advice in rewriting the paper for publication. First was the problem of length - my paper exceeded the page limit of 6 + 1 by quite a bit. They helped me identify the key findings of my paper and discard the irrelevant bits. They also taught me to foreground the positive results of my work more - I had been very critical of the lack of experimental correlation of the learned mental models with actual players, had mentioned a slew of other shortcomings (games are difficult to represent in PDDL, real-time games are probably impossible to represent, etc.), and had devoted a lot of page space to them.
The reviewer feedback was nerve-wracking to receive but was in the end very helpful. The feedback seemed mostly positive, although there were concerns regarding the correlation of player models to actual humans (which I expected). One reviewer’s comments made me think about the problem much more generally, leading to my rewrite of it for the written prelim. Overall, I was satisfied with the reviewer comments, and felt that I provided a good rebuttal. I was slightly disappointed with having a poster presentation instead of an oral, however.
The actual presentation during AIIDE was uneventful. A digital poster presentation is not the best way to interact with people, and I would have been much more comfortable with the person physically in front of me while presenting. I was also not able to network and socialize as well as I would have liked. A few people did stop by the Discord channel dedicated to my paper to ask questions, and at least one seemed to want to incorporate my work in their own research.
The main thing to do to further this work is develop a robust evaluation metric for measuring the accuracy of a learned player model. I have suggested one in my written paper and will try implementing it when possible. I believe AML methods can be augmented with other features derived from psychological insights of mental model formation to better align them to an actual player. I am working on methods to interpret RL-models trained on board games, so the results from this work could be repurposed to “interpret” a neural network-based player model into a rule-based model.
The journey was very instructive, and I am happy to have published my first paper. I found that the difficulty of research lies in dealing with a lot of unknown pieces while simultaneously trying to combine them into something useful. I have not developed a better sense of any particular problem space (since I'm not continuing work in player modelling or AML), but I believe my research skills and process have been sharpened. Specifically, I believe an evaluation-first approach to solving a problem is very useful, and that starting to write your paper from the very first day leads to more productivity.
https://forum.effectivealtruism.org/users/kit/replies

# All of Kit's Comments + Replies
## Getting a feel for changes of karma and controversy in the EA Forum over time
Thanks for this.
Without having the data, it seems the controversy graph could be driven substantially by posts which get exactly zero downvotes.
Almost all posts get at least one vote (magnitude >= 1), and balance>=0, so magnitude^balance >=1. Since the controversy graph goes below 1, I assume you are including the handling which sets controversy to zero if there are zero downvotes, per the Reddit code you linked to.
e.g. if a post has 50 upvotes:
• 0 downvotes --> controversy 0 (not 1.00)
• 1 downvote --> controversy 1.08
• 2 downvotes --> controve... (read more)
FJehn (8mo): That's a valid point. Here's the controversy graph if you exclude all posts that don't have any downvotes. Overall trend seems to be similar though. And it makes me even more interested what happened in 2018 that sparked so much controversy ^^
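For reference, here is the score under discussion, transcribed into Python from the open-source Reddit sort being referred to (the zero-downvote special case is the handling mentioned above):

```python
def controversy(ups: int, downs: int) -> float:
    """Reddit-style controversy score, as discussed in this thread.

    Returns 0 when there are no downvotes (or no upvotes), which is why
    the controversy graph can take values below 1.
    """
    if downs <= 0 or ups <= 0:
        return 0.0
    magnitude = ups + downs
    balance = downs / ups if ups > downs else ups / downs
    return magnitude ** balance

print(controversy(50, 0))  # 0.0
print(controversy(50, 1))  # ~1.08, matching the example above
```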
People from 80k, Founders Pledge and GWWC have already replied with corrections.
Milan_Griffes (8mo): Those weren't corrections... The statements I make in the original post are largely about what an org is focusing on, not what it is formally tracking.
(I downvoted this because a large fraction of the basic facts about what organisations are doing appear to be incorrect. See other comments. Mostly I think it's unfortunate to have incorrect things stated as fact in posts, but going on to draw conclusions from incorrect facts also seems unhelpful.)
Milan_Griffes (8mo): Could you give some examples of the basic facts I stated that appear incorrect?
## Which effective altruism projects look disingenuous?
I'm totally not a mod, but I thought I'd highlight the "Is it true? Is it necessary? Is it kind?" test. I think it's right in general, but especially important here. The Forum team seems to have listed basically this too: "Writing that is accurate, kind, and relevant to the discussion at hand."
I'm also excited to highlight another piece of their guidance "When you disagree with someone, approach it with curiosity: try to work out why they think what they think, and what you can learn from each other." On this:
• Figuring out what someone thinks usually involv... (read more)
Aaron Gertler (1y): I agree with everything that Kit has said here. This post might have been in sufficient violation of the Forum's rules to remove (being slightly inaccurate and slightly unkind), but I'm leaving it up (without asking the author to consider changes, as I typically would -- see following comment) because I think Kit's comment suitably addresses my concerns. EA orgs aren't run by angels. Any community where money changes hands will attract people who want to deceive others, with or without good intentions. But it's really good to reach out to people before accusing them of deception; they could be making an honest error, you could be making an honest error, or the issue could simply be a difference of opinion within a moral gray area. We're working in a field with many complex questions (moral and logistical), and the best first reaction to confusion is communication.
## The Risk of Concentrating Wealth in a Single Asset
I think this is the best intro to investing for altruists that I've seen published. The investment concepts it covers are the most important ones, and the application to altruists seems right.
(For context: I used to work as a trader, which is somewhat but not very relevant, and have thought about this kind of thing a bit.)
MichaelDickens (1y): Thank you, I appreciate the positive feedback, especially from someone as knowledgeable as you!
## GiveDirectly plans a cash transfer response to COVID-19 in US
I would guess that the decision of which GiveDirectly programme to support† is dominated by the principle you noted:
> the dollar going further overseas.
Maybe GiveDirectly will, in this case, be able to serve people in the US who are in comparable need to people in extreme poverty. That seems unlikely to me, but it seems like the main thing to figure out. I think your 'criteria' question is most relevant to checking this.
† Of course, I think the most important decision tends to be deciding which problem you aim to help solve, which would precede the question of whether and which cash transfers to fund.
## GiveDirectly plans a cash transfer response to COVID-19 in US
The donation page and mailing list update loosely suggest that donations are project-specific by default. Likewise, GiveWell says:
> GiveDirectly has told us that donations driven by GiveWell's recommendation are used for standard cash transfers (other than some grant funding from Good Ventures and cases where donors have specified a different use of the funds).
(See the donation page for what the alternatives to standard cash transfers are.)
If funding for different GiveDirectly projects is sufficiently separate, your donation would pretty much just incr... (read more)
warrenjordan (2y): Thank you for the clarification. Confused about this one as I have not donated directly to GiveDirectly - I thought that if I were to donate $100 for standard cash transfer, some % of that goes directly to recipients. They state 89% [https://www.givewell.org/charities/give-directly#footnote145_d3encfl] for specific African countries. I would hope there would be some comparable % for standard cash transfers to US recipients. What questions come to mind for you? Some that I think of...

• What is the criteria for someone to receive this benefit? What does that vetting process look like?
• What would coverage look like?
• How do they ensure that the funds will actually benefit the recipients and where do they draw those margins?

## Concerning the Recent 2019-Novel Coronavirus Outbreak

[Comment not relevant] [This comment is no longer endorsed by its author]

## A small observation about the value of having kids

For the record, I wouldn't describe having children to 'impart positive values and competence to their descendants' as a 'common thought' in effective altruism, at least any time recently. I've been involved in the community in London for three years and in Berkeley for a year, and don't recall ever having an in-person conversation about having children to promote values etc. I've seen it discussed maybe twice on the internet over those years.

Additionally: this seems like an ok state of affairs to me. Having childre... (read more)

## Assumptions about the far future and cause priority

In the '2% RGDP growth' view, the plateau is already here, since exponential RGDP growth is probably subexponential utility growth. (I reckon this is a good example of confusion caused by using 'plateau' to mean 'subexponential' :) ) In the 'accelerating view', it seems that whether there is exponential utility growth in the long term comes down to the same intuitions about whether things keep accelerating forever that are discussed in other threads.

Jc_Mourrat (2y): Ok, but note that this depends crucially on whether you decide that your utility looks more like log(GDP), or more like (GDP)^0.1, say. I don't know how we can be confident that it is one and not the other.

## Assumptions about the far future and cause priority

Thanks!

> In my understanding, [a confident focus on extinction risk] relies crucially on the assumption that the utility of the future cannot have exponential growth in the long term

I wanted to say thanks for spelling that out. It seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive:

> and will instead essentially reach a plateau.

The idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. I think this will... (read more)

Jc_Mourrat (2y): Thanks for your detailed and kind comments! It's true that naming this a "plateau" is not very accurate. It was my attempt to make the reader's life a bit easier by using a notion that is relatively easier to grasp in the main text (with some math details in a footnote for those who want more precision). About the growth rate, mathematically a function is fully described by its growth rate (and initial condition), and here the crux is whether or not the growth rate will go to zero relatively quickly, so it seems like a useful concept to me. (When you refer to footnote 15, that can make sense, but I wonder if you were meaning footnote 5 instead.) I agree with all the other things you say.
I may be overly worried about our community becoming more and more focused on one particular cause area, possibly because of a handful of disappointing personal experiences. One of the main goals of this post was to make people more aware of the fact that current recommendations are based in an important way on a certain belief about the trajectory of the far future, and maybe I should have focused on that goal only, instead of trying to do several things at once and not doing them all very well :-)

## Updated Climate Change Problem Profile

> I'm curious to know what you think the difference is. Both problems require greenhouse gas emissions to be halted.

I agree that both mainline and extreme scenarios are helped by reducing greenhouse gas emissions, but there are other things one can do about climate change, and the most effective actions might turn out to be things which are specific to either mainline or extreme risks. To take examples from that link:

• Developing drought-resistant crops could mitigate some of the worst effects of mainline scenarios, but might help little in extreme scen ... (read more)

## Updated Climate Change Problem Profile

Thanks for this. I found it interesting to think about. Here are my main comments.

Mainline and extreme risks: I think it would be better to analyse mainline risks and extreme risks separately.

• Depending on whether or not you put substantial weight on future people, one type of risk may be much more important than the other. The extreme risks appear to pose a much larger existential threat than mainline risks, so if you value future generations the extreme risks may be much more important to focus on. The opposite may be true for people who apply high pure ti ... (read more)

mchr3k (2y): I'm curious to know what you think the difference is. Both problems require greenhouse gas emissions to be halted. The neglectedness guidelines focus on the level of existing funding. I argue that this is an insufficient view - that if you have two problems which require $100 or $200 of total funding to solve completely, and they both have $50 of funding today, they are not equally neglected. The denominator matters - the $200 problem is much further from being solved. Perhaps I'm proposing a slightly different framework - but it's definitely not one divorced from the notion of caring about the good done per effort put in. I just don't believe that climate change is really at saturation point for the level of effort. Fair point about use of language. I'll try and address this in a future edit.

## Summary of my academic paper "Effective Altruism and Systemic Change"

Regarding increasing marginal returns (IMR), which seem to be the primary contribution of this paper and not obviously addressed by replies to other types of systemic change objections: perhaps rather than 'Are IMR commonly found in cause areas?', I would ask 'where are IMR found?' and, for the purposes of testing the critique, 'in which cases are relevant actors not already aware of those IMR?' This is because I expect the prevalence of IMR to vary substantially between areas. (I see that you also call for concrete examples i... (read more)
## Are we living at the most influential time in history?

> Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.

That seems like a sufficiently precise definition. Whether there are any interventions in that category seems like an open question. (Maybe it is a lot more narrow than Will's intention.)
Just one quick thought pushing a bit in the other direction: But perhaps this example is quite relevant? To put it crudely, perhaps we can get away with keeping the value "do the most good" stable. This seems more analogous to "maximize profits" than to any specification of value that refers to a specific content of "doing good" (e.g., food aid to country X, or "abolish factory farming", or "reduce existential risk"). More generally, the crucial point seems to be: the content and specifics of values might change, but some of this change might be something we endorse. And perhaps there's a positive correlation between the likelihood of a change in values and how likely we'd be to agree with it upon reflection. [Exploring this fully seems quite complex both in terms of metaethics and empirical considerations.] Are we living at the most influential time in history? Thanks! I hadn't seen the Cotton-Barratt piece before. Extinction risk reduction punts on the question of which future problems are most important to solve, but not how best to tackle the problem of extinction risk specifically. Building capacity for future extinction risk reduction work punts on how best to tackle the problem of extinction risk specifically, but not the question of which future problems are most important to solve. They seem to do more/less punting than one another along different dimensions, so, depending on one's definition of ... (read more) Are we living at the most influential time in history? I agree that, among other things, discussion of mechanisms for sending resources to the future would needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable. However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds with which the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread m... (read more) Are we living at the most influential time in history? This was very thought-provoking. I expect I'll come back to it a number of times. I suspect that how the model works depends a lot on exactly how this definition is interpreted: a time t is more influential (from a longtermist perspective) than a time t iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t rather than to a longtermist altruist living at t. In particular, I think you intend direct work to include extinction risk reduction,... (read more) How I see it: Extinction risk reduction (and other type of "direct work") affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a "punting to future generations that live in hingey times" component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential ... (read more) 7Stefan_Schubert2yI agree that it seems important to get more clarity over the direct work vs buck-passing/punting distinction. Building capacity for future extinction risk reduction work may be seen as more "meta"/"buck-passing/"punting" still. 
There has been an interesting discussion on direct vs meta-level work to reduce existential risk; see Toby Ord [https://www.fhi.ox.ac.uk/the-timing-of-labour-aimed-at-reducing-existential-risk/] and Owen Cotton-Barratt [https://www.fhi.ox.ac.uk/wp-content/uploads/Allocating-risk-mitigation.pdf]. Are we living at the most influential time in history? Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of maxcentury most leveraged. However, what we want in order to make decisions is something more akin to maxleverage of century . The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on expected ranking and low on expected value. I am unclear on whether a flat prior makes sense for expectancy, but it seems more reasonable than for proba... (read more) While I agree with you that is not that action relevant, it is what Will is analyzing in the post, and think that William Kiely's suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment: Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact). I do think that the focus on is t... (read more) Key points from The Dead Hand, David E. Hoffman Thanks! Here are some places you might start. (People who have done deeper dives into nuclear risk might have more informed views on what resources would be useful.) • Baum et al., 2018, A Model For The Probability Of Nuclear War makes use of a more comprehensive list of (possible) close calls than I've seen elsewhere. • FLI's timeline of close calls is a more (less?) fun display, which links on to more detailed sources. Note that many of the sources are advocacy groups, and they have a certain spin. • Picking a few case studies that seemed important and ... (read more) I recently started to feel that celebrating Petrov was a bad choice: he just happened to be in the right place in the right time, and as you say, there were many false positives at the time. Petrov's actions were important, but they provide no lessons to those who aspire to reduce x-risk. A better example might be Carl Sagan, who (if I'm correct) researched nuclear winter and succesfully advocated against nuclear weapons by conveying the risk of nuclear winter. This seemed to have contributed to Gorbachov's conviction to mitigate nuclear war risk. This stor ... (read more) How urgent are extreme climate change risks? Open Phil (then GiveWell Labs) explored climate change pretty early on in their history, including the nearer-term humanitarian effects. Giving What We Can also compared climate change efforts to health interventions. (Each page is a summary page which links to other pages going into more detail.) 1spanrucker2yThanks Kit! I look forward to reading them. Cluster Headache Frequency Follows a Long-Tail Distribution I'm very excited to see people doing empirical work on what things we care about are in fact dominated by their extremes. At least after adjusting for survey issues, statements like The bottom 90% accounts for 30% of incidents seem to be a substantial improvement on theoretical arguments about properties of distributions. (Personal views only.) 
## Cluster Headache Frequency Follows a Long-Tail Distribution

I'm less optimistic about the use of surveys on whether people think tryptamines will/did work:

• 'And do they work?' doesn't seem like a question that will be accurately answered by asking people whether it worked for them. (Reversion to the mean being my main concern.)
• Non-users are asked whether tryptamines 'could be effective for treating your cluster headaches', which could be interpreted as a judgement on whether it works for anyone or whether it will work for them (for which the correct answer seems to be 'maybe'). Users are asked whether it worked for them specifically. Directly computing the difference between these answers doesn't seem meaningful.

## Debrief: "cash prizes for the best arguments against psychedelics"

Huh. The winning response, one of the six early responses, also engages explicitly with the arguments in the main post in its section 1.2 and section 2. This one discussed things mentioned in the post without explicitly referring to the post. This one summarises the long-term-focused arguments in the post and then argues against them. I worry I'm missing something here. Dismissing these responses as 'cached arguments' seemed stretched already, but the factual claim made to back that decision up, that 'None of these engaged with the pro-p... (read more)

Milan_Griffes (2y): Thanks, I think I overstated this in the OP (added a disclaimer noting this). I still think there's a thing here but probably not to the degree I was holding. In particular it felt strange that there wasn't much engagement with the trauma argument or the moral uncertainty / moral hedging argument ("psychedelics are plausibly promising under both longtermist & short-termist views, so the case for psychedelics is more robust overall."). There was also basically no engagement with the studies I pointed to. All of this felt strange (and still feels strange), though I now think I was too strong in the OP.

## Debrief: "cash prizes for the best arguments against psychedelics"

I also came to note that the request was for 'the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour'. However, I also wrote one of the six responses referred to, and I contest the claim that

> None of these engaged with the pro-psychedelic arguments I made in the main post

The majority of my response explicitly discusses the weakness of the argumentation in the main post for the asserted effect on the long-term future. To highlight a single sentence which seems to make this clear, I say: I don't se ... (read more)

## How bad would nuclear winter caused by a US-Russia nuclear exchange be?

On the specific questions you're asking about whether empirical data from the Kuwaiti oil field destruction is taken into account: it seems that the answer to each is simply 'yes'. The post says that the data used is adapted from Toon et al. (2007), which projects how much smoke would reach the stratosphere specifically.
(2007), which projects how much smoke would reach the stratosphere specifically. The paper explicitly considers that event and what the model would predict about it: "Much interest in plume rise was directed at the Kuwaiti oil fires set by Iraqi forces in 1991. Small (1991) estimated that oil well fires produce energy a…" […]

bengold (2y): I think the issue is not the energy source/density; the issue is the amount of particles in the atmosphere. Sagan/TTAPS is on record saying that the amount of particles was of the same magnitude in the Kuwait fires as in their model. In addition, at least in their simulations, the burning of oil/gas deposits within cities (in gas stations, cars, etc.) is what produced the most particles, and particles of the right mass to rise and do the most damage by "self-lofting" into the upper layers; hence his predictions. Also, the nuclear mushroom is completely irrelevant: it contributes negligible amounts of particles to the atmosphere. It is not surprising that some smoke is thrown up by the blast, but to get "nuclear winter" from the model/simulations the main source is the fires and the proposition that the particles will "self-loft" and rise and rise and rise. Yet it seems that the fires did not produce any "self-lofting", and, as far as I recall, they also did not block as much of the sun's energy as proposed. Note that it is not that they were a little off; they were completely wrong. Moreover, it was probably politically motivated (it is for a good cause, so it is OK to inflate, inflate and inflate), but we should be really skeptical.

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

On your general point about paying attention to political biases, I agree that's worthwhile. A quibble related to that which might matter to you: the Wikipedia article you're quoting seems to attribute the incorrect predictions to TTAPS, but I could only trace them to Sagan specifically. I could be missing something due to dead/inaccessible links.

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

There are a whole bunch of things I love about this work. Among other things:
• An end-to-end model of nuclear winter risk! I'm really excited about this.
• The quantitative discussions of many details and how they interact are very insightful. e.g. ones which were novel for me included how exactly smoke causes agriculture loss, and roughly where the critical thresholds for agricultural collapse might be. The concrete estimates for the difference in smoke production between counterforce and countervalue, which I knew the sign of but not the magnitude, are f… […]

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

I have one material issue with the model structure, which I think may reverse your bottom line. The scenario "full-scale countervalue attack against Russia" has a median smoke estimate of 60 Tg and a scenario probability of 0.27 x 0.36 = ~0.1. This means the probability of total smoke exceeding 60 Tg has to be >5%, but "Total smoke generated by a US-Russia nuclear exchange" calculates a probability of only 0.35% for >60 Tg smoke. What seems to be going on is that the model incorporates estimated smoke from each countervalue targeting scenario as {scenario pr… […]

How many people would be killed as a direct result of a US-Russia nuclear exchange?

Agreed.
The discussion of the likelihood of countervalue targeting throughout this piece seems very important if countervalue strikes would typically produce considerably more soot than counterforce strikes. In particular, the idea that any countervalue component of a second strike would likely be small seems important and is new to me. I really hope the post is right that any countervalue targeting is moderately unlikely even in a second strike for the countries with the largest arsenals. That one 'point blank' line in the 2010 NPR was certainly surprising to me. On the other hand, I'm not compelled by most of the arguments as applied to second strikes specifically.

Would US and Russian nuclear forces survive a first strike?

This is fascinating, especially with details like the different survivability of US and Russian SLBMs. My main takeaway is that counterforce is really not that effective, so it remains hard to see why it would be worth engaging in a first strike. I'd be interested to hear if you ever attempt to quantify the risk that cyber, hypersonic, drone and other technologies (appear to) change this, or if this has been attempted by someone already. Relatedly: "If improvements in technology allowed either country to reliably locate and destroy those targets, they would…" […]

Would US and Russian nuclear forces survive a first strike?

Quibbles/queries: The one significant thing I was confused about was why the upper bound survivability for stationary, land-based ICBMs is only 25%. It looks like these estimates are specifically for cases where a rapid second strike (which could theoretically achieve survivability of up to 100%) is not attempted. Do you intend to be taking a position on whether a rapid second strike is likely? It seems like you are using these numbers in some places, e.g. when talking about 'Countervalue targeting by Russia in the US' in your third post, when you might be… […]

Luisa_Rodriguez (2y): Good catch! I wasn't considering the fact that a countervalue attack might be 'launched on warning,' rather than after a first strike had already destroyed a portion of a country's nuclear arsenal. I've updated the Guesstimate model in the third post [https://forum.effectivealtruism.org/posts/FfxrwBdBDCg9YTh69/how-many-people-would-be-killed-as-a-direct-result-of-a-us#2tGJcdtb5eoHyKYeP] to reflect that full-scale countervalue targeting could get a bit deadlier than I originally accounted for, and I'll make sure my post on nuclear winter takes this into account as well. I don't have the bandwidth to update all of the figures in the post immediately, but I should be able to do so soon.

Luisa_Rodriguez (2y): Right, again! Thanks for flagging this. Here's an updated version of my calculations [https://docs.google.com/spreadsheets/d/1lk-L00RjnZsI_Jwxcpmj_er1pu0PEa2qrAAh2HWoELA/edit?usp=sharing]. I now find that somewhere between ~990 and 1,500 US nuclear weapons would survive a counterforce first strike by Russia. I'll update the post to reflect these changes soon.

Which nuclear wars should worry us most?

This series (#2, #3) has quickly become the most interesting-to-me content on the Forum in a long time. Thanks very much. If you have written or do write about how future changes in arsenals may change your conclusions about what scenarios to pay the most attention to, I'd be interested in hearing about it. In case relevant to others, I found your spreadsheet with raw figures more insightful than the discrete system in the post.
To what extent do you think the survey you use for the probabilities of particular nuclear scenarios is a reliable source? (I previously di… […]

Luisa_Rodriguez (2y): Also, I see you've left some great feedback on posts 2 and 3. I'll be replying to those comments shortly.

Luisa_Rodriguez (2y): Hi Kit, thanks for your comments; I'm glad to hear you're enjoying the series! I haven't written about this yet, but I'll consider working it in as I continue to explore the topic in the next few months. I'll update this thread if I do. I'll be sharing a post on the probability of a US-Russia nuclear war soon. It talks a little bit about the relative merits and weaknesses of some of these probability estimates.

Cash prizes for the best arguments against psychedelics being an EA cause area

"effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor"

Let's go. Upside 1: effect from boosting efficacy of current long-termist labor. Adding optimistic numbers to what I already said:
• Let's say EAs contribute $50m† of resources per successful drug being rolled out across most of the US (mainly contributing to research and advocacy). We ignore costs paid by everyone else.
• This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn't trade off aga… […]
Milan_Griffes (3y): An EA contribution of far less than $50m would be leveraged. The $2.4bn estimate doesn't apply well to psychedelics, because there's no cost of drug discovery here (the drugs in question have already been discovered). As a data point, MAPS [https://maps.org/] has shepherded MDMA through the three phases of the FDA approval process with a total spend of ~$30m. The current most important question for legal MDMA & psilocybin rollout in the US is not when, but at what quality. We're at a point where the FDA is likely (>50% chance) going to reschedule these drugs within the next 5 years (both have received breakthrough therapy designation [https://www.fda.gov/patients/fast-track-breakthrough-therapy-accelerated-approval-priority-review/breakthrough-therapy] from the FDA). Many aspects of how FDA rescheduling goes are currently undetermined (insurance, price, off-label prescription, setting in which the drugs can be used). A savvy research agenda + advocacy work could tip these factors in a substantially more favorable direction than would happen counterfactually. Doing research & advocacy in this area scales fairly linearly (most study designs I've seen cost between $50k-$1m, advocates can be funded for a year for $60-90k). From the OP: "So somewhere between 34.1% - 65.4% of SSC readers report having a relevant mental health issue (depending on how much overlap there is between the reports of anxiety & reports of depression)." I think SSC readers are an appropriate comparison class for long-term-focused EAs. That said, I agree with the thrust of this part of your argument. There just aren't very many people working on long-termist stuff at present. Once all of these people are supported by a comfortable salary, it's not clear that further spend on them is leveraged (i.e. not clear that there's a mechanism for converting more money to more research product for the present set of researchers, once they're receiving a comfortable salary). So perhaps the argume… […]

Cash prizes for the best arguments against psychedelics being an EA cause area

"Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved." Pointing out that there are two upsides is helpful, but I had just made this claim: "The math for [the bold part] seems really unlikely to work out." It would be helpful if you could agree with or contest that claim before we move on to the other upside. - Rationality projects: I don't care to arbitrate what counts as EA. I'm going to steer c… […]

Milan_Griffes (3y): Isn't much of the present discussion about "what counts as EA?" Maybe I'm getting hung up on semantics. The question I most care about here is: "what topics should EAs dedicate research capacity & capital to?" Does that seem like a worthwhile question?

Milan_Griffes (3y): Right. I'm saying that the math we should care about is:
• effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor + effect from short-termist benefits
I think that math is likely to work out. Given your priors, we've been discounting "effect from short-termist benefits" to 0. So the math is then:
• effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
And I think that is also likely to work out, though the case is somewhat weaker when we discount short-termist benefits to 0.
(I also disagree with discounting short-termist benefits to 0, but that doesn't feel like the crux of our present disagreement.)

Cash prizes for the best arguments against psychedelics being an EA cause area

I'm not arguing against trying to compare things. I was saying that the comparison wasn't informative. Comparing dissimilar effects is valuable when done well, but comparing d-values of different effects from different interventions tells you very little.

Milan_Griffes (3y): Probably the crux here is that I think rationality training & the psychedelic experience can achieve similar kinds of behavior change (e.g. less energy spent on negative self-talk & unhelpful personal narratives) such that their effect sizes can be compared. Whereas you think that rationality training & the psychedelic experience are different enough that believable comparison isn't possible. Does that sound right to you?

Cash prizes for the best arguments against psychedelics being an EA cause area

To explicitly separate out two issues that seem to be getting conflated:
• Long-term-focused EAs should make use of the best mental health care available, which would make them more effective.
• Some long-term-focused EAs should invest in making mental health care better, so that other long-term-focused EAs can have better mental health care and be more effective.
The former seems very likely true. The latter seems very likely false. You would need the additional cost of researching, advocating for and implementing a specific new treatment (here, psilocybin) acros… […]

Milan_Griffes (3y): fwiw I think negative self-talk (a kind of emotional block) & unhelpful personal narratives are big parts of the subjective experience of depression. Comparing dissimilar effects is a core part of EA-style analysis, right?

Milan_Griffes (3y): Does this mean you think that projects like CFAR [https://rationality.org/] & Paradigm Academy [http://paradigmacademy.co/] shouldn't be associated with the EA plank? Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved. (See section 3(a) of the OP.) The marginal value of each additional value-aligned + capable long-termist is probably quite high.

Cash prizes for the best arguments against psychedelics being an EA cause area

I believe you when you say that psychedelic experiences have an effect of some (unknown) size on emotional blocks & unhelpful personal narratives, and that this would change workers' effectiveness by some (unknown) amount. However, even assuming that the unknown quantities are probably positive, this doesn't tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training. Nonetheless, I think your arguments should be either compelling or something of a wake-up call for some readers. For example, if a… […]

Milan_Griffes (3y): Got it. (And thanks for factoring in kindness!) There hasn't been very much research on psychedelics for "well" people yet, largely because under our current academic research regime, it's hard to organize academic RCTs for drug effects that don't address pathologies. The below isn't quite apples-to-apples, but perhaps it's helpful as a jumping-off point. CFAR's 2015 longitudinal study [https://rationality.org/studies/2015-longitudinal-study] found: […] Carhart-Harris et al.
2018 [https://www.enthea.net/files/carhart-harris_et_al_2017_psilo_trial.pdf], a study of psilocybin therapy for treatment-resistant depression, found: […] Not apples-to-apples, because a population of people with treatment-resistant depression is clearly different from a population of CFAR workshop participants. But both address a question something like "how happy are you with your life?" Even if you add a steep discount to the Carhart-Harris 2018 [https://www.enthea.net/files/carhart-harris_et_al_2017_psilo_trial.pdf] effect, the effect size would still be comparable to the CFAR effect size – let's assume that 90% of the treatment effect is an artifact of the study due to selection effects, small study size, and factors specific to having treatment-resistant depression. Assuming a 90% discount, psilocybin would still have an adjusted Cohen's d = 0.14 (6 months after treatment), roughly in the ballpark of the CFAR workshop effect (d = 0.17).

Cash prizes for the best arguments against psychedelics being an EA cause area

Boring answer warning! The best argument against most things being 'an EA cause area'† is simply that there is insufficient evidence in favour of the thing being a top priority. I think future generations probably matter morally, so the information in sections 3(a), 3(b) and 4 matters most to me. I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is. There is info about what a causal mechanism might be, but analysis of the strength is also needed. (For example, you say that psychedelic in… […]

Milan_Griffes (3y): Curious for your take on this part of the OP: […]

Why isn't GV psychedelics grantmaking housed under Open Phil?

As an aside, I wouldn't say that any Good Ventures things are 'housed under Open Phil'. I'd rather say that Open Phil makes recommendations to Good Ventures. i.e. Open Phil is a partner to Good Ventures, not a subsidiary. Technically, I've therefore answered a different question to the one you asked: I've answered the question 'why aren't these grants on the Open Phil website'.

Why isn't GV psychedelics grantmaking housed under Open Phil?

Answer by Kit, May 06, 2019: From Good Ventures' grantmaking approach page: "In 2018, Good Ventures funded $164 million in grants recommended by the Open Philanthropy Project, including $74 million to GiveWell's top charities, standout charities, and incubation grants. (These grants generally appear in both the Good Ventures and Open Philanthropy Project grants databases.) Good Ventures makes a small number of grants in additional areas of interest to the foundation. Such grants totaled around $19 million in 2018. Check out Our Portfolio and Grants Database to learn more about the g…" […]
Milan_Griffes (3y): Thanks! I just flipped through the Good Ventures grants database [http://www.goodventures.org/our-portfolio] & spot-checked ~30 of their 2018 grants. Every grant I checked was made under the aegis of Open Phil, except for the aforementioned psychedelic grants & these grants to Alzheimer's research: 1 [http://www.goodventures.org/our-portfolio/grants/university-of-southern-california-research-on-microbiome-and-alzheimers], 2 [http://www.goodventures.org/our-portfolio/grants/northwestern-university-research-on-microbiome-and-alzheimers], 3 [http://www.goodventures.org/our-portfolio/grants/brigham-and-womens-hospital-development-of-a-blood-test-for-alzheimers], 4 [http://www.goodventures.org/our-portfolio/grants/university-of-chicago-research-on-microbiome-and-alzheimers]. The same question comes up for the Alzheimer's grants – seems like they could be neatly placed in Open Phil's other scientific research portfolio [https://www.openphilanthropy.org/focus-area/scientific-research/other-scientific-research-areas], but weren't.
BenMillwood (3y): There's an unanswered question here of why Good Ventures makes grants that OpenPhil doesn't recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don't find it that surprising that they do so. People like to do more than one thing?
Legal psychedelic retreats launching in Jamaica
I figured the OP was suggesting that people go to the retreat? (or maybe be generically supportive of the broader project of running retreats)
Not sure where this is going; doesn't immediately seem like it counters what I said about your comparison to specific fundraising + analysis posts, or about why readers might be confused as to why this is here.
Milan_Griffes (3y): Can you point me to the place(s) where the OP is suggesting people go on these retreats? Perhaps this is the part you have in mind: […] Or maybe this is more a subtextual thing you're picking up on?
Milan_Griffes (3y): I'm not sure where it's going either :-) You drew a distinction between the comparison posts I linked to & the OP. I was confused by the distinction you were drawing. I asked for clarification.
Legal psychedelic retreats launching in Jamaica
Right. The stuff about psychedelics as Cause X was maybe a bit of a red herring. You probably know how to sell your business much better than I do, but something which I think is undervalued in general is simply opening your pitch with why exactly you think someone should care about your thing. I actually hadn't considered creative problem-solving or career choice as reasons to go on this retreat.
My earlier comment was a reply to the challenge of 'how this post is substantively different from previous content like...' and this now seems fairly obvious, so I probably have little more useful to say :)
Aaron_Nesmith-Beck (3y): Fair enough! I probably should have pointed out those reasons in the original post (although I did link to the paper on psychedelics and creative problem-solving). I probably also unconsciously assumed those reasons are more obvious to most people than they are, because I'm thinking about this stuff all the time.
| 2021-11-29 21:47:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3579607307910919, "perplexity": 2428.1451317418723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00080.warc.gz"}
https://mathspp.com/blog/til/archives_month:feb_2022 | TIL (Today I Learned)
The TIL series of articles contains very short articles documenting something I learned “today”.
TIL #034 – multi-channel transposed convolution
Today I learned about multi-channel transposed convolutions.
TIL #033 – transposed convolution
Today I learned about the transposed convolution transformation in CNNs.
TIL #032 – t-SNE for dimensionality reduction
Today I learned about t-SNE for dimensionality reduction.
TIL #031 – understanding SVG viewBox
Today I understood how the viewBox of SVGs really works.
Today I learned that True is equal to 1 and False is equal to 0.
Today I learned how to disassemble Python code with the module dis.
Today I learned how to use the package rich by Will McGugan. | 2023-02-02 14:36:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2887984812259674, "perplexity": 9406.447698726195}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00494.warc.gz"} |
https://www.atmos-chem-phys.net/19/5753/2019/ | Atmospheric Chemistry and Physics – an interactive open-access journal of the European Geosciences Union
Atmos. Chem. Phys., 19, 5753–5769, 2019
https://doi.org/10.5194/acp-19-5753-2019
Research article | 02 May 2019
# Rapid ice aggregation process revealed through triple-wavelength Doppler spectrum radar analysis
Andrew I. Barrett1,2, Christopher D. Westbrook1, John C. Nicol1, and Thorwald H. M. Stein1
• 2Institute for Meteorology and Climate Research, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Abstract
We have identified a region of an ice cloud where a sharp transition of dual-wavelength ratio occurs at a fixed height for longer than 20 min. In this paper we provide evidence that rapid aggregation of ice particles occurred in this region, creating large particles. This evidence comes from triple-wavelength Doppler spectrum radar data that were fortuitously being collected. Through quantitative comparison of the Doppler spectra from the three radars we are able to estimate the ice particle size distribution (of particles larger than 0.75 mm) at different heights in the cloud. This allows us to investigate the evolution of the ice particle size distribution and determine whether the evolution is consistent with aggregation, riming or vapour deposition. The newly developed method allows us to isolate the signal from the larger (non-Rayleigh scattering) particles in the distribution. Therefore, a particle size distribution retrieval is possible in areas of the cloud where the dual-wavelength ratio method would fail because the bulk dual-wavelength ratio value is too close to zero.
The ice particles grow rapidly from a maximum size of 0.75 to 5 mm while falling less than 500 m in under 10 min. This rapid growth is shown to agree well with theoretical estimates of aggregation, with aggregation efficiency being approximately 0.7, and is inconsistent with other growth processes, e.g. growth by vapour deposition or riming. The aggregation occurs in the middle of the cloud and is not present throughout the entire lifetime of the cloud. However, the layer of rapid aggregation is very well defined at a constant height, where the temperature is −15 °C, and lasts for at least 20 min (approximate horizontal distance: 24 km). Immediately above this layer, the radar Doppler spectrum is bi-modal, which signals the formation of new small ice particles at that height. We suggest that these newly formed particles, at approximately −15 °C, grow dendritic arms, enabling them to easily interlock and accelerate the aggregation process. The large estimated aggregation efficiency in this cloud is consistent with recent laboratory studies for dendrites at this temperature.
1 Introduction
Ice microphysical processes are an important part of cloud and precipitation formation; most surface precipitation begins as ice particles. However, numerical models, of either weather or climate, have difficulty in accurately simulating ice cloud. For example, the CMIP5 models have regional cloud ice water paths that differ from observations by factors of 2–10. This challenge arises partly because observations of ice particles are sparse and partly because the processes controlling the formation and evolution of ice particles, such as aggregation, are poorly understood and crudely parameterized in most models.
Additionally, measuring the number and size of ice particles within clouds is challenging. The two main methods, in situ aircraft observations and active remote sensing observations, both have their deficiencies. First, active remote sensing instruments, such as radar and lidar, are good at measuring bulk scattering quantities, such as radar reflectivity. However, converting these bulk quantities to cloud microphysical properties requires numerous assumptions (e.g. the shape of individual hydrometeors and the particle size distribution). In contrast, aircraft observations measure the size and number of ice particles directly, but only within a small sample volume, at a single height at any given time and during sporadic case studies. Furthermore, ice particle size distributions have been shown to be biased as a result of shattering of ice particles on aircraft-mounted instrument inlets, which results in an artificially increased concentration of small ice crystals.
Nevertheless, cloud microphysical observations, and in particular particle size distributions, are important for many applications. One important application is the better understanding of processes that occur within clouds. For example, size distributions measured from aircraft have been used to study aggregation in cirrus clouds. Furthermore, the size distribution itself affects the relative importance of vapour deposition, riming and aggregation for ice particle growth. Vapour deposition and evaporation rates are proportional to the first moment of the particle size distribution, while riming is related to higher moments (the product of projected area and fall speed), and aggregation rates depend on the breadth of the particle size distribution through the difference in fall speeds. Thus the shape and breadth of the particle size distribution are an important control on the relative importance of the growth processes involved. Another important application is to provide observations with which numerical models can be evaluated and their parameterizations improved.
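To make the moment dependence concrete, the following minimal sketch (an illustration added here, not taken from the paper; the intercept and slope values are arbitrary) computes the low-order moments of an inverse-exponential size distribution of the kind fitted later in this study:

```python
import math
import numpy as np

# Illustrative inverse-exponential PSD: dN/dD = N0 * exp(-lam * D).
# Deposition/evaporation rates scale with the 1st moment; riming depends
# on higher moments; aggregation depends on the breadth (set by 1/lam).
D = np.linspace(1e-5, 2e-2, 4000)    # diameter grid [m]
N0, lam = 1.0e6, 1500.0              # intercept [m^-4], slope [m^-1] (assumed)
n = N0 * np.exp(-lam * D)

for k in range(4):
    numeric = np.trapz(n * D**k, D)
    analytic = N0 * math.factorial(k) / lam**(k + 1)  # exact for this PSD
    print(f"M{k}: numeric {numeric:.3e}, analytic {analytic:.3e}")
```

Because each process weights the distribution differently, changing the slope parameter alone shifts the balance between deposition, riming and aggregation, which is the point made above.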
In this paper, we report radar observations of one cloud system, where large vertical gradients in cloud microphysical properties were observed at a fixed height for at least 20 min. By exploring the radar data beyond the standard bulk quantities and exploiting observations from multiple radars together with their Doppler spectra, we are able to estimate the size distribution of particles at different heights and therefore diagnose the most likely process for the rapid but consistent changes in cloud properties with height. The changes of cloud microphysical properties with height apparently result from rapid aggregation of ice particles. These observations were made using three co-located, vertically pointing radars at different frequencies (3, 35 and 94 GHz).
Analysis of the radar Doppler spectra has previously been performed for the onset of drizzle in stratiform clouds, and the application of multi-frequency Doppler spectra has been used to determine the rain size distribution. For the ice phase, the three different frequencies have been used simultaneously to categorize rimed and unrimed particles from surface- and aircraft-based radar observations. However, this is the first attempt to retrieve ice particle size distributions from multi-frequency Doppler spectrum observations. These retrievals are then used to evaluate the microphysical processes active within the clouds.
The aggregation process can be characterized by the aggregation kernel k (Mitchell, 1988, Eq. 9):
$$k=\frac{\pi}{4}\,E_{\mathrm{agg}}\,\left(D_{1}+D_{2}\right)^{2}\,\left|v\left(D_{1}\right)-v\left(D_{2}\right)\right|,\qquad(1)$$
where D1 and D2 are the diameters of the two potentially aggregating particles and v(D) is the fall velocity of the particle. The aggregation efficiency of ice particles (Eagg; the probability that two particles experiencing a "close approach" will collide and stick together) is typically low, although a large range of values has been reported and understanding of how aggregation efficiency varies with environmental parameters is still sparse. Eagg has previously been found to depend on both the particle habit and the temperature at which the collisions occur. An increase in the aggregation efficiency at about −15 °C has been reported in several laboratory studies. One such study, in which small particles were drawn past a large stationary ice target, showed a weak temperature dependence of Eagg, with a broad peak around −12 °C and maximum values of 0.1–0.2. Another study used a 10 m tall cloud chamber containing large concentrations of small ice particles settling under gravity and reported a much sharper peak of Eagg around −15 °C, with values of 0.4–0.9, but a best estimate below 0.2 at other temperatures. A further study found aggregation efficiencies of 0.3–0.85 for planar snow crystals drawn past a cylindrical target, depending on the particle size. It has also been reported that both the maximum dimension of ice aggregates and the probability of seeing aggregates increase at around −15 °C, which was linked to the preferred formation of dendritic particles at this temperature. This is supported by other studies showing a larger Eagg in the presence of dendritic particles: one found an Eagg of around 0.55 for clouds dominated by dendrites at the cloud top but much lower values, around 0.07, when dendrites were not present, and low Eagg values of 0.09 were also found for tropical anvil clouds where dendritic particles were not present, at temperatures of −3 to −11 °C. In the early stage of aggregation, the aggregates have been reported to consist of a small number of dendritic particles. These studies seem to suggest that dendrites, which typically form at around −15 °C, can significantly increase the aggregation efficiency because the dendritic branches interlock with other particles, whereas the aggregation efficiency is much lower when dendritic particles are not present. In this study, retrievals from radar observations will be used to estimate the aggregation efficiency, which will be compared with the laboratory-derived values.
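As a quick numerical illustration of Eq. (1), the sketch below evaluates the kernel for one pair of particle sizes. The power-law fall-speed coefficients are placeholder values of roughly the right magnitude for unrimed aggregates, not values taken from this paper:

```python
import numpy as np

def fall_speed(D):
    """Assumed power-law fall speed v = a * D**b for unrimed aggregates.

    The coefficients are illustrative stand-ins (SI units: D in metres,
    v in m/s), not values from the paper.
    """
    return 11.7 * D ** 0.41

def aggregation_kernel(D1, D2, E_agg):
    """Aggregation kernel k of Eq. (1) (Mitchell, 1988)."""
    sweep_out_area = np.pi / 4.0 * (D1 + D2) ** 2     # geometric cross-section
    delta_v = np.abs(fall_speed(D1) - fall_speed(D2))  # relative fall speed
    return E_agg * sweep_out_area * delta_v

# Example: a 1 mm and a 3 mm particle, using the aggregation efficiency
# of ~0.7 estimated later in this paper.
print(f"k = {aggregation_kernel(1e-3, 3e-3, 0.7):.2e} m^3 s^-1")
```

Note that the kernel vanishes when the two fall speeds are equal, which is why a broad size distribution aggregates much faster than a narrow one.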
A previous modelling study showed that the assumed particle size distribution is the single largest sensitivity in the model physics for mixed-phase altocumulus clouds. The importance of correctly simulating the ice particle size distribution has been shown in several other studies. Therefore, understanding and correctly implementing the aggregation process in numerical models of cloud physics is important for simulating the overall development of the cloud system.
This paper is organized with an overview of the instruments and data in Sect. 2, an overview of the case study in Sect. 3, and details about the retrieval in Sect. 4. Section 5 details the cloud properties retrieved and their uncertainties, and Sect. 6 summarizes the evidence for aggregation, with conclusions drawn in Sect. 7.
2 Data and methods
We use data from three co-located radars at the Chilbolton Observatory in Hampshire, southern England, on the afternoon of 17 April 2014. The radars operate at frequencies of 3 GHz (9.75 cm wavelength, 25 m antenna, 0.28° beamwidth; Goddard et al., 1994a), 35 GHz (8.58 mm wavelength, 2.4 m antenna, 0.25° beamwidth; Illingworth et al., 2007) and 94 GHz (3.19 mm wavelength, 0.46 m antenna, 0.5° beamwidth; Eastment, 1999). The 35 and 94 GHz cloud radars are situated immediately next to one another, whereas the 3 GHz radar is sited less than 50 m away (Fig. 2). The sampling of the three radars was synchronized to within 0.1 s, and full pulse-to-pulse power and phase measurements were recorded. For the 3 GHz radar, Doppler spectra were calculated every second and incoherently averaged over 10 s. For the 35 and 94 GHz cloud radars, spectra were calculated every 0.11 and 0.08 s respectively and again incoherently averaged over 10 s. Assuming typical wind speeds of 20 m s−1 aloft, the averaged spectra correspond to a 200 m section of cloud. Ground clutter was removed from the spectra by masking returns with near-zero velocity. Noise levels were estimated from measurements beyond the range of meteorological echoes (>10 km) and subtracted from the individual spectra prior to averaging. The data from each radar were interpolated onto a common range and velocity grid (60 m range by 0.0195 m s−1 velocity).
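The preprocessing chain just described can be summarised schematically as below. This is a reconstruction for illustration only; the array shapes, variable names and the use of simple linear interpolation are my assumptions, not the authors' processing code:

```python
import numpy as np

def preprocess(spectra, far_gate_mask, v_native, v_common):
    """Sketch of the spectral preprocessing described in the text.

    spectra:       (n_times, n_gates, n_velocities) linear-power spectra
    far_gate_mask: boolean mask of gates beyond meteorological echoes
                   (>10 km), used to estimate the noise floor
    v_native, v_common: native and common velocity grids [m s-1]
    """
    noise = spectra[:, far_gate_mask, :].mean()    # noise floor estimate
    cleaned = np.clip(spectra - noise, 0.0, None)  # subtract, floor at zero
    averaged = cleaned.mean(axis=0)                # 10 s incoherent average
    # interpolate each range gate onto the common velocity grid
    return np.stack([np.interp(v_common, v_native, s, left=0.0, right=0.0)
                     for s in averaged])
```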
Figure 1. Dual-wavelength ratios as a function of ice particle diameter for the three pairs of radar frequencies used in this study. Dual-wavelength ratios from the scattering model are shown with solid lines. For comparison, mean dual-wavelength ratios of unrimed aggregates within 250 µm wide diameter bins, taken from a published aggregate scattering dataset, are shown by points for four different aggregate-monomer types.
Figure 2. A photograph of the three co-located radars at the Chilbolton Observatory, Hampshire, England. From left to right: the 3 GHz CAMRa radar, 94 GHz radar and 35 GHz radar.
Because of the large antenna, it is necessary to apply a near-field correction to the 3 GHz data at heights below about 6 km (Sekelsky, 2002). This correction factor was derived empirically by comparing 3 GHz reflectivity profiles against those measured by the 35 GHz instrument (which has a much smaller antenna) in a number of Rayleigh-scattering ice clouds. The magnitude of the correction was 1 dB at 5 km, rising to 3 dB at 3 km.
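A minimal way to apply such an empirical correction is to interpolate it in height; the anchor points below come from the values quoted in the text, while the piecewise-linear form, the zero correction above 6 km, and the sign convention (adding the correction to the measured 3 GHz reflectivity, since near-field effects reduce the measured power) are my assumptions:

```python
import numpy as np

# Near-field correction for the 3 GHz antenna, anchored to the quoted
# values (~3 dB at 3 km, ~1 dB at 5 km).
h_ref_km = np.array([3.0, 5.0, 6.0])
corr_dB = np.array([3.0, 1.0, 0.0])

def corrected_z3(z3_dBZ, height_km):
    """Add the empirical near-field correction to measured 3 GHz reflectivity."""
    return z3_dBZ + np.interp(height_km, h_ref_km, corr_dB)
```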
Table 1. A summary of the terminology used throughout this paper, where "F" denotes the radar frequency.
## 2.1 Data quality, calibration and attenuation correction
The multi-wavelength approach allows us to measure the diameter of ice particles that are comparable in size to the shortest radar wavelength or larger (e.g. Kneifel et al., 2015, 2016). For ice particles comparable in size to the radar wavelength, non-Rayleigh scattering becomes important. For suitably large particles, it becomes possible to size the particles based on the different radar returns at different wavelengths.
In contrast to the bulk retrieval that makes a single retrieval for particles of all fall velocities together, the Doppler spectrum approach allows for retrievals of particle size and number concentration to be made separately on particles of distinct fall velocities. We can use the multi-wavelength approach to determine the representative particle size from the “spectral dual-wavelength ratio” (sDWR; i.e. the difference in reflectivity of particles within a small range of fall velocities; see Table 1 for a full summary of radar quantities used in the paper) but can additionally separate the particles based on their fall velocity, allowing us to retrieve the ice particle size distribution.
A correction to the velocities measured by the radar is also applied. Unfortunately, the radars were not all pointing precisely vertically for this case (as determined by biases in the mean Doppler velocity in the Rayleigh-scattering part of the cloud), and initial testing suggested that there was a large sensitivity to the velocity offsets in the spectra (see Sect. 6.1). The 3 GHz radar was pointing vertically, but after analysing the data, the 35 and 94 GHz radars were determined to be off zenith by approximately 0.2° and 0.15° respectively, in opposing directions. These offsets were determined by assessing the mean Doppler-velocity differences between the three radars as a function of height. The correlation of these velocity differences with the atmospheric wind profile (determined from ECMWF forecast fields) enabled an estimation of the pointing-angle errors.
The mispointing of the radars is small and likely does not result in a substantial mismatch in sample volume given the 10 s integration time. However, this small mispointing means that the radar detects a small component of the horizontal wind in addition to the fall velocities of the ice particles. Although the pointing angle error is small, the horizontal wind component detected is of the order of a few centimetres per second, which is large enough to affect our comparison of the Doppler spectra from the three radars. Therefore, we have made a correction to the velocity measurements for the 35 and 94 GHz radars to ensure that the spectra are well aligned and can be compared. This correction is important because even a small shift in velocity can substantially affect the estimates of sDWR. In practice, the correction applied is +0.0585 m s−1 (three velocity bins) for the 35 GHz radar and −0.0390 m s−1 (two velocity bins) for the 94 GHz radar throughout the cloud layer. This correction is imperfect; however, we do not have independent measurements of the horizontal wind speed with sufficient accuracy and a high enough vertical resolution to make a reliable height-dependent correction, or indeed any direct measurement of the mispointing. Radiosonde data and the ECMWF model output show that the horizontal wind speed was near-constant with height throughout the cloud layer on this day, and inspection of many individual Doppler spectra indicates that our simple correction aligns the spectra very well in this case.
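Since the corrections are whole multiples of the common velocity-bin width, they amount to shifting each spectrum by an integer number of bins. A sketch of that operation (assuming spectra stored on the common 0.0195 m s−1 grid; zeroing the wrapped-around bins is my choice):

```python
import numpy as np

DV = 0.0195  # common velocity-bin width [m s-1], from the text

def align_spectrum(spec, offset_ms):
    """Shift a spectrum by a whole number of velocity bins.

    Corresponds to the constant corrections quoted above: +0.0585 m s-1
    (+3 bins) for 35 GHz and -0.0390 m s-1 (-2 bins) for 94 GHz.
    """
    nbins = int(round(offset_ms / DV))
    shifted = np.roll(spec, nbins)
    if nbins > 0:
        shifted[:nbins] = 0.0   # zero the wrapped-around bins
    elif nbins < 0:
        shifted[nbins:] = 0.0
    return shifted
```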
To reduce the noise in the spectra, each individual spectrum has been smoothed in velocity space by averaging over a 0.0585 m s−1 window, which equates to three velocity bins. We mask out regions where significant turbulence is present because the vertical air motions are large and vary on small timescales and space scales compared to the particle fall velocities that we are trying to measure. Near the cloud base, there is a layer of substantial turbulence caused by sublimation of ice particles as they fall into subsaturated air, and this leads to destabilization of the atmosphere in this layer. In this turbulent layer, the implicit assumption that measurements at a specific velocity are of a single particle size is invalid. Hence, we identify regions where turbulence alters the spectra by calculating the contribution of turbulence to spectral width using O'Connor et al. (2005, Eqs. 10–15). Points where the velocity variance from turbulence exceeds a threshold value of 10−3 m2 s−2 are not considered when performing our retrievals. This threshold value was chosen such that all affected regions were suitably masked and, in the remaining data, the width of the Doppler spectrum was determined by microphysical rather than turbulent contributions. Additionally, any points in the spectra that are more than 20 dB below the spectral peak are removed in order to minimize the impact of noise.
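A schematic version of this smoothing and masking step is given below; it assumes the turbulent velocity variance has already been estimated following O'Connor et al. (2005), which is not reproduced here, and smoothing directly in dB is a simplification on my part:

```python
import numpy as np

def smooth_and_mask(spec_dB, turb_variance, peak_cut_dB=20.0,
                    turb_threshold=1e-3):
    """Smooth over a 3-bin (0.0585 m s-1) window and mask unusable points.

    turb_variance: turbulent velocity variance [m2 s-2] for this gate,
    assumed pre-computed following O'Connor et al. (2005).
    """
    smoothed = np.convolve(spec_dB, np.ones(3) / 3.0, mode="same")
    bad = smoothed < smoothed.max() - peak_cut_dB   # noise-dominated tails
    if turb_variance > turb_threshold:              # turbulence-broadened gate
        bad[:] = True
    return np.where(bad, np.nan, smoothed)
```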
3 The case – 17 April 2014
Figure 3a shows the radar reflectivity measured at Chilbolton for the thick stratiform ice cloud observed on 17 April 2014. This cloud formed in north-westerly flow, ahead of a cold front. The surface cold front reached Chilbolton at about 18:00 UTC. The front was not associated with any surface precipitation at Chilbolton and only very light precipitation across some other parts of southern England.
The evolution of the cloud reflectivity and the ratio of 35 and 94 GHz reflectivity are shown in Fig. 3. The cloud top height was approximately 9 km, where 35 GHz reflectivity values are around −15 dBZ, increasing to 19 dBZ at approximately 4 km altitude near the cloud base. The temperature at the cloud top was −45 °C, and the freezing level was at about 2.7 km. Throughout most of the cloud the DWR values are below 1 dB. However, at around 4 km altitude there is a rapid increase in DWR with decreasing height, which indicates an increase in particle size such that the backscattered return at 94 GHz is no longer from Rayleigh scattering. The region of these large DWR values is consistent in height (onset at 4.5 km altitude; Fig. 3c) and is evident for at least 35 min. The largest DWR values occur at around 16:15 UTC, with peak values reaching 7 dB. The profile of DWR values at 16:15 UTC is shown in Fig. 3c.
Radar data from a larger portion of the same cloud were analysed in an earlier study. Earlier in the day (before 15:40 UTC), the cloud did not show this sharp transition to high DWR values around 4.5 km. That study also used the triple-frequency radar data to determine that the cloud contained primarily aggregate snowflakes, consistent with the scattering model (lines in Fig. 1). The scattering properties of unrimed aggregates shown in Fig. 1 (points) are also consistent with the observations and give very similar characteristics to the scattering model. We focus on the time from 15:45 to 16:20 UTC, where there are dual-wavelength ratios of up to 8 dB below 4.5 km (Fig. 3b and c).
Figure 3. Overview of the cloud structure on 17 April 2014 showing (a) the 35 GHz radar reflectivity and (b) the ratio of 35 GHz reflectivity to 94 GHz reflectivity throughout the sampling period. (c) The vertical profile of DWR at 16:15 UTC.
We attempt to understand what causes the rapid change of cloud properties with height during this period of substantial DWR35∕94. Looking at the spectral reflectivity at each height (sZ35; Fig. 4a) together with the spectral dual-wavelength ratio (sDWR35∕94; Fig. 4b) reveals the changes of the cloud properties with altitude. From these data, the origin of the large changes across the sharp transition can be identified. At 5.4 km, there is an increase in the signal coming from slow-falling particles (0.4–0.6 m s−1; Fig. 4a). At this height, only the fastest-falling particles have sDWR35∕94 > 1 dB. At 4.5 km, the reflectivity and spectral reflectivity of the slow-falling particles have increased. The sDWR35∕94 increases up to 8 dB for the fastest-falling particles, and by 4 km the increase in sDWR35∕94 is seen for the majority of particles. Interestingly, the fall velocity of these particles does not increase as the particles grow larger and produce large sDWR35∕94 values.
Figure 4. Height profile of (a) spectral reflectivity at 35 GHz (sZ35) and (b) spectral dual-wavelength ratio (sDWR35∕94) recorded at 16:15 UTC. Temperatures from the ECMWF model at 16:00 UTC are shown every 1 km and at 5.3 km, where the small-particle mode is first evident.
Figure 5. Illustration of the retrieval method and the retrieved size distribution at three heights at 16:15 UTC. Panels (a)–(c) are just above the layer of secondary ice nucleation, (d)–(f) are within that layer and (g)–(i) are below this layer, where the dual-wavelength differences are largest. Panels (a), (d) and (g) show the 3, 35 and 94 GHz spectra at that height. Panels (b), (e) and (h) show the distribution of sDWR35∕94 data points within a window around the central time (90 s by 300 m), with the black line denoting the median power difference for each velocity bin. Panels (c), (f) and (i) show the retrieved ice particle size distribution, with the colour of the line relating to the velocity of the data used to determine that data point. The grey shaded region marks particle diameters smaller than 0.75 mm, where there is no reliable information available to size the ice particles. The higher-altitude plots are from earlier times to account for an approximately 1 m s−1 fall velocity of the ice particles.
4 Retrieval of the ice particle size distribution
To retrieve the ice particle size distribution from the cross-calibrated and velocity-matched Doppler spectra (see Sect. 2.1) at three wavelengths, we use the method described below. The method is illustrated at three separate heights in Fig. 5. The following is calculated for each individual velocity bin, within each radar range gate and at all times:
1. Calculate the spectral dual-wavelength ratio (sDWR = sZ35 − sZ94). This is simply calculated as the difference between the spectral reflectivity (sZ) at 35 and 94 GHz (Fig. 5a, d and g).
2. Determine the particle diameter D from sDWR. The relationship between particle diameter and particle DWR from the scattering model (Fig. 1) is used to convert the sDWR value to particle diameter. We use this scattering model based on its good agreement with observational data for this case; other scattering models may be more appropriate for different cases.
3. Calculate the mass m of an ice particle with diameter D, assuming the mass–size relationship m = 0.0185 D^1.9 for all ice particles. Use of this mass–diameter relationship is supported by earlier analysis of this cloud, which found that the fractal dimension of snowflakes on this day was 1.9, and hence the exponent of 1.9 is appropriate; other mass–diameter relationships may be more appropriate for different cases.
4. Determine the radar reflectivity of a single ice particle with diameter D and mass m using the scattering model.
5. Determine the total number of particles within the velocity bin. This is calculated by dividing the total spectral reflectivity sZ by the single-particle reflectivity calculated in the previous step.
The size and number of ice particles within the velocity bin are now known. The particle size distribution can be estimated by performing this same process for each velocity bin, as sketched below.
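For concreteness, steps 1–5 for a single velocity bin can be written as follows. This is an illustrative reconstruction, not the authors' code: the lookup tables stand in for the aggregate scattering model (with roughly the right qualitative shape, a DWR saturating near 9 dB and reaching ~1 dB at 0.75 mm), and the single-particle reflectivity is in arbitrary units:

```python
import numpy as np

# Placeholder scattering-model lookup tables (assumed, not from the paper).
D_tab = np.linspace(0.5e-3, 5e-3, 200)                     # diameter [m]
dwr_tab = 9.0 * (1.0 - np.exp(-(D_tab / 2.2e-3) ** 2))     # DWR(D) [dB]
z1_tab = 1e-2 * (0.0185 * D_tab ** 1.9) ** 2               # per-particle Z

def retrieve_bin(sz35_dB, sz94_dB):
    """Steps 1-5 for one velocity bin (sketch; reflectivity units arbitrary)."""
    sdwr = sz35_dB - sz94_dB                  # 1. spectral DWR
    if sdwr < 1.0:
        return None                           # too small to size (<0.75 mm)
    if sdwr > 6.0:
        return None                           # switch to the 3/35 GHz pair
    D = np.interp(sdwr, dwr_tab, D_tab)       # 2. diameter from DWR(D)
    m = 0.0185 * D ** 1.9                     # 3. mass-size relation
    z_single = np.interp(D, D_tab, z1_tab)    # 4. single-particle reflectivity
    n = 10.0 ** (sz35_dB / 10.0) / z_single   # 5. number of particles in bin
    return D, m, n
```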
Up to this point, we have determined the diameter D of the ice particles within each velocity bin and also the particle size distribution dN∕dV (where dN is the concentration of ice particles with velocity between V and V+dV). We can convert dN∕dV to the ice particle size distribution dN∕dD (the concentration of ice particles with diameter between D and D+dD). This is the common way to express a particle size distribution that is independent of the measuring sample interval (dD or dV). To do so, we need to know the relationship between the velocity bin width dV and the diameter bin width dD. To determine this, we use a 300 m by 90 s window (five range gates by nine individual averaged spectra) centred on the current radar pixel and compute the power-law fit to the measured Doppler velocity and retrieved diameter values, of the form V = cD^d. A power-law relationship is used because it is both easily differentiable and common in microphysical scaling relationships (e.g. Locatelli and Hobbs, 1974). We use the differential of this power-law fit to compute dV∕dD, and hence the diameter bin width corresponding to each velocity bin. The size distribution is then calculated as
$$\frac{\mathrm{d}N}{\mathrm{d}D}=\frac{\mathrm{d}N}{\mathrm{d}V}\,\frac{\mathrm{d}V}{\mathrm{d}D}.\qquad(2)$$
There is a relatively large sensitivity of the retrieved size distribution to the power-law fit, but this is primarily in terms of the number concentration rather than the diameter of the particles or the shape of the size distribution (see Sect. 6.1 for a complete sensitivity analysis).
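The conversion of Eq. (2) then needs only the fitted power-law coefficient and exponent. A sketch, assuming the (V, D) pairs from the 300 m by 90 s window have already been gathered; the unweighted log–log least-squares fit is my assumption, as the paper does not specify the fitting method:

```python
import numpy as np

def dN_dD_from_dN_dV(V, D, dN_dV):
    """Apply Eq. (2) using a local power-law fit V = c * D**d.

    V, D:   Doppler velocities [m s-1] and retrieved diameters [m]
    dN_dV:  size distribution per velocity interval
    """
    d, log_c = np.polyfit(np.log(D), np.log(V), 1)  # fit in log-log space
    c = np.exp(log_c)
    dV_dD = c * d * D ** (d - 1.0)                  # derivative of power law
    return dN_dV * dV_dD
```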
Retrieval of the size and number concentration of ice particles is only possible for particles larger than about 0.75 mm in diameter (corresponding to an sDWR35∕94 of about 1 dB; Fig. 1). For smaller particles, sZ is very similar at all three radar frequencies, and differences are not easily distinguished from noise in the spectra. For particles larger than about 3 mm in diameter, the sDWR35∕94 saturates at about 8–9 dB (Fig. 1) as a result of the fractal geometry of the aggregates (see Stein et al., 2015), and therefore retrieval of particle diameter from sDWR35∕94 is no longer possible. Therefore, where sDWR35∕94 is larger than 6 dB, the diameter and number concentration are retrieved using sDWR3∕35 instead, following the same method as above. This pair of frequencies does not saturate until significantly larger particle diameters and therefore has greater sensitivity to changes in diameter for larger particles than the 35/94 GHz pair. We do not use the 3/35 GHz pair for the full range of particle diameters because the 3 GHz spectra are affected more by noise than the 35 GHz spectra, which negatively impacts the retrieval of particle sizes when the DWR is small. It would be equally valid to calculate the size and number concentration of the larger particles using the 3/94 GHz pair instead, and this indeed enables a consistency check that the retrieval works well and that the input Doppler spectra are well matched.
5 Retrieved cloud properties and validation
Throughout most of the cloud, the 35/94 GHz dual-wavelength ratio (DWR35∕94) is near zero (<1 dB; Fig. 3b), implying that the ice particles are relatively small and are still in the Rayleigh-scattering regime at 94 GHz (maximum diameter 0.75 mm). DWR only exceeds 2 dB after 15:45 UTC and between 4.3 km and the cloud base.
From 16:00 to 16:20 UTC, there is a sharp transition from DWR35∕94 < 1 dB at 4.5 km to peak DWR35∕94 values at 4 km, with a maximum DWR35∕94 of 8 dB. The altitude of this sharp transition is consistent after 16:02 UTC, with the largest DWR35∕94 values present after 16:10 UTC. There is also evidence of this transition layer as early as 15:45 UTC.
More detail can be seen by examining the Doppler spectra for the different radars at a few fixed heights in detail. The Doppler spectra measured at 5.89, 4.81 and 4.15 km (Fig. 5a, d and g) show three spectra with rather different shapes. At 4.15 km, the spectra have only a single mode, but throughout most of the velocity range, sZ35 is much greater than sZ94. The sDWR35∕94 reaches 8 dB (Fig. 5h), and the largest particles are sized at around 5 mm. The retrieved size distribution is approximately inverse exponential (Fig. 5i).
At 5.89 km (Fig. 5a–c), in contrast, the spectra for all three radars are very similar with a single peak; all sDWR35∕94 values are below 1 dB (Fig. 5b). The small sDWR35∕94 values mean that it is not possible to reliably size the ice particles here, other than to say that they are all smaller than 0.75 mm.
About 1 km lower in the cloud, at 4.81 km (Fig. 5d–f), the mean velocity and reflectivity have both increased, but there is also a bi-modal structure to the spectra captured at both frequencies. This second mode is related to newly formed, small ice particles that are falling more slowly than the majority of older, larger ice particles. Furthermore, at 4.81 km, larger and faster-falling particles are present than at 5.89 km. The largest sDWR35∕94 values now approach 4 dB (Fig. 5e), and particles larger than 0.75 mm are present, with the largest retrieved diameter being 1.2 mm. The size distribution (Fig. 5f) of the reliably sized particles (those larger than 0.75 mm, outside the grey region of the plot) is inverse exponential.
The consistent and narrow range of heights over which this rapid change in size occurs is just below the region, around 5.4 km, where new particles are seen and the Doppler spectra are bi-modal (Fig. 5d). These new particles fall slowly, which suggests that they are small and are formed at this level. The particles begin to fall faster as they grow in size. Particles forming around −15 °C would initially grow as dendrites. As these particles grow, the sDWR35∕94 starts to increase for the larger (faster-falling) particles, which we take to be aggregates. This increase in sDWR35∕94 implies an acceleration of the aggregation process at this height.
The reduction of the size distribution slope between 4.81 and 4.15 km remains consistent for at least 30 min from 15:45 UTC onwards but is not present earlier in the cloud's lifetime. The observations shown in Fig. 5 are similar throughout this time period, which explains the sharp increase in DWR between 4.8 and 4.1 km (Fig. 3) during this period.
## 5.1 Evolution and validation of retrieved size distributions
To evaluate how accurate the retrieved ice particle size distributions are, we would ideally like to compare them against in situ data. However, in situ observations are not available for this case. Therefore, we evaluate the retrievals against other retrieval methods.
Figure 6. Time–height plots of Λ, the slope of the ice particle size distribution derived from (a) the multi-wavelength Doppler spectrum method and (b) the dual-wavelength ratio (DWR) method. The grey regions mark areas of the cloud where no retrieval of Λ was possible. See text for details. Panel (c) shows a profile of values averaged over 2 min centred on 16:15 UTC. The grey lines show the expected changes in Λ for three different values of aggregation efficiency (1.0, 0.7, 0.2), assuming that the ice particle size distribution at 4.5 km evolves due to aggregation alone.
By fitting an inverse-exponential curve to the retrieved particle size distribution data from our Doppler spectrum method, we can estimate the slope of the size distribution, Λ in dN∕dD = N0 exp(−ΛD) (Fig. 6a). As a means of verification, we also calculate the slope of a purely inverse-exponential size distribution fitted to match the DWR35∕94 values only (Fig. 6b). There is excellent agreement between the two methods in the regions where the size distribution is broader and less steep. Figure 6c shows a 2 min average of Λ, which again shows the excellent agreement throughout the whole profile, particularly at the height of the rapid change of Λ between 5 and 4 km. The only region of disagreement is just below 4 km, where the spectral method suggests an even broader size distribution than the DWR method. This could be evidence that the inverse-exponential approximation is not appropriate in this region, because DWR35∕94 was almost saturated at 8–9 dB. However, both methods agree that there is a rapid increase in ice particle size as the particles fall from 4.5 to 3.6 km and a broadening of the ice particle size distribution. In the next section, we present evidence that this rapid change occurs as a result of aggregation and not through vapour deposition or riming.
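A minimal version of the Λ fit, restricted (as in the retrieval) to reliably sized particles larger than 0.75 mm, is sketched below; the unweighted least-squares fit in log space is an assumption, since the paper does not state its fitting details:

```python
import numpy as np

def fit_slope(D, dN_dD):
    """Fit dN/dD = N0 * exp(-Lambda * D) to reliably sized particles.

    D:     retrieved diameters [m]
    dN_dD: retrieved size distribution values
    Returns (Lambda [m-1], N0).
    """
    ok = (D > 0.75e-3) & (dN_dD > 0)                  # usable points only
    slope, intercept = np.polyfit(D[ok], np.log(dN_dD[ok]), 1)
    return -slope, np.exp(intercept)
```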
The spectral method developed here is more sensitive to the presence of a few large particles than the DWR method. With the spectral method, the influence of a few non-Rayleigh scatterers can be seen in the spectra before the reflectivity of the individual scatterers is large enough to contribute significantly to the total reflectivity (which is a weighted average of sDWR over all particles). Therefore, the retrieved particle size distributions higher in the cloud are more reliable with the spectral method than with the DWR method, because we are able to isolate the signal from the larger particles in the distribution. However, the spectral method is sensitive to noise in the spectra; hence, when the overall signal becomes weak and noise contributes more significantly, the retrieved particle size distributions are also noisy.
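The quantity at the heart of this comparison is the per-bin (spectral) dual-wavelength ratio. A minimal sketch of its computation, assuming two Doppler spectra already interpolated onto a common velocity grid and expressed in dB (hypothetical inputs, not the processing chain used in the paper):

```python
import numpy as np

def spectral_dwr(sz35_dB, sz94_dB, noise_floor_dB=-40.0):
    """Per-velocity-bin sDWR35/94 (dB) from 35 and 94 GHz Doppler
    spectra on a common velocity grid.  Bins near the noise floor
    are masked, since noise dominates there."""
    ok = (sz35_dB > noise_floor_dB) & (sz94_dB > noise_floor_dB)
    sdwr = np.full_like(sz35_dB, np.nan)
    sdwr[ok] = sz35_dB[ok] - sz94_dB[ok]   # ~0 dB for Rayleigh scatterers
    return sdwr                            # > 0 dB flags large particles
```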
# 6 Evidence for rapid aggregation of dendrites
In this section we examine whether the changes in particle size and size distribution could be explained by processes other than aggregation. Specifically we address whether vapour deposition or riming could lead to the observed changes.
Ice particles grow from smaller than 0.75 mm in diameter (DWR < 1 dB), above this transition layer, to larger than 5 mm by the time they reach 4 km (Fig. 3c). Mean radar Doppler velocities just above this transition layer are 1–1.2 m s−1 (Fig. 5d), indicating that, on average, ice particles will take 400–500 s to fall from 4.5 to 4 km, although the largest particles responsible for the large DWR values will fall faster than the average particle.
The growth of ice particles by vapour deposition cannot produce large ice particles quickly enough to match our observations. Calculations using the vapour deposition growth equation are presented to demonstrate this. The equations used were
$\frac{\mathrm{d}m}{\mathrm{d}t}=\frac{4\pi\,C\,SS_i\,F}{\left(\frac{L_\mathrm{s}}{R_\mathrm{v}T}-1\right)\frac{L_\mathrm{s}}{KT}+\frac{R_\mathrm{v}T}{e_\mathrm{si}(T)\,D}},\qquad\text{(3)}$
$m=0.0185\,D^{1.9},\qquad\text{(4)}$
where the rate of change of particle mass m with time t is a function of the ice particle capacitance C (assumed to be $D/4$ here, where D is the particle diameter), the supersaturation with respect to ice $SS_i$ and the ventilation coefficient $F=0.65+0.44\times 0.6^{0.33}\,Re^{0.5}$. Re is the Reynolds number, $Re=\rho D V(D)/\mu$, calculated from the air density ρ, particle diameter D, terminal velocity V(D) and dynamic viscosity of air μ. Terms in the denominator of Eq. (3) are the latent heat of sublimation $L_\mathrm{s}$, the specific gas constant for vapour $R_\mathrm{v}$, temperature T, thermal conductivity of air K, saturated vapour pressure over ice $e_\mathrm{si}$ and the diffusivity of water vapour in air (the D in the denominator); Eq. (4) is the mass–size relationship.
These calculations, for a liquid-saturated atmosphere at −10 °C, show that typical ice particles would, at their absolute fastest, take over 40 min (2534 s) to grow from 0.75 to 5 mm in diameter. Similarly, earlier calculations show that it takes over 30 min to grow a particle of 3 mm through vapour deposition. We can therefore rule out pure vapour deposition as the source of the largest particles, which develop in less than 10 min.
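To illustrate the magnitude of this calculation, Eqs. (3) and (4) can be integrated numerically. The sketch below is our own, with approximate constants for −10 °C at liquid saturation and an assumed fall speed of 1 m s−1 in the ventilation term; it reproduces only the order of magnitude (thousands of seconds), which is the point: far longer than the roughly 500 s fall time through the layer.

```python
import numpy as np

# Approximate constants for -10 degC; assumed values, not taken from the paper
T, Ls, Rv = 263.15, 2.83e6, 461.5      # K, J kg-1, J kg-1 K-1
K, Dv, esi = 0.023, 2.2e-5, 260.0      # W m-1 K-1, m2 s-1, Pa
SSi = 0.10                             # supersaturation wrt ice at liquid saturation
rho, mu, V = 1.0, 1.7e-5, 1.0          # air density, viscosity, assumed fall speed

def mass(D):  return 0.0185 * D**1.9                 # Eq. (4), SI units
def diam(m):  return (m / 0.0185) ** (1.0 / 1.9)

def dmdt(m):                                          # Eq. (3)
    D = diam(m)
    Re = rho * D * V / mu
    F = 0.65 + 0.44 * 0.6**0.33 * Re**0.5             # ventilation coefficient
    denom = (Ls / (Rv * T) - 1) * Ls / (K * T) + Rv * T / (esi * Dv)
    return 4 * np.pi * (D / 4) * SSi * F / denom      # capacitance C = D/4

m, t, dt = mass(0.75e-3), 0.0, 1.0
while diam(m) < 5.0e-3:                               # grow 0.75 mm -> 5 mm
    m += dmdt(m) * dt                                 # forward Euler step
    t += dt
print(f"deposition growth time: {t:.0f} s")           # thousands of seconds
```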
Riming of the ice particles by collecting liquid water is another possible explanation; however, there is no evidence of significant supercooled liquid water present at this height. There were no strong backscatter returns in the lidar measurements (not shown) which would indicate the presence of liquid droplets, and the liquid water path measured by the microwave radiometer is below the noise level of the instrument (about 20 g m−2) throughout the observation period. Furthermore, the triple-frequency analysis of the scattering models does not show agreement with the expected triple-wavelength signature of rimed particles but rather with that of aggregate snow crystals.
The sharp and consistent transition of cloud properties with height after 15:45 UTC is therefore likely a result of aggregation. The first indication that aggregation is the most important process in this part of the cloud is the continual decrease in Λ (the slope of the ice particle size distribution) with height down from the top of the transition layer. This change with height indicates that there are more large particles and fewer small particles as the particle size distribution evolves, consistent with aggregation:
$\frac{\mathrm{d}\Lambda}{\mathrm{d}z}=\frac{\Lambda}{b\,\chi_\mathrm{f}}\frac{\mathrm{d}\chi_\mathrm{f}}{\mathrm{d}z}\left[1-\frac{2\,\Gamma(b+\delta+1)\,\Gamma(b+d+1)}{\Gamma(\delta+1)\,\Gamma(2b+d+1)}\right]-\frac{\pi\,E_\mathrm{agg}\,I_\mathrm{l}\,\chi_\mathrm{f}\,\Lambda^{b+d-1}}{4abc\,\Gamma(b+d+1)\,\Gamma(2b+d+1)}.\qquad\text{(5)}$
We calculated the expected change of Λ with height using Eq. (5) for several different values of the aggregation efficiency (Eagg). In Eq. (5), a and b are the constants in the mass–diameter relationship $m=aD^b$; c and d are the constants in the fall velocity–diameter relationship $V=cD^d$; δ=1.0; Γ is the gamma function; χf is the snow flux in kg m−2 s−1; and $I_\mathrm{l}$ is calculated from Eq. (20) of the original derivation and depends on b and d; in our calculations it takes the value 11.524. These calculations assume that aggregation and vapour deposition together are the primary processes affecting the evolution of the size distribution and that changes to the total mass are due only to vapour deposition or sublimation, not to the accretion of liquid drops.
To estimate the aggregation efficiency in this part of the cloud, we need to know the slope of the particle size distribution at the top of the layer and the vertical profile of snow flux. The Λ value is estimated from the retrieved size distribution. The snow flux profile is estimated from the retrieved particle diameters, which are converted to mass, multiplied by the measured Doppler velocity and then integrated across the spectra. Using the retrieved profile of size distribution properties and the snow flux profile at 16:15 UTC as input, the expected changes of Λ with height for Eagg values of 0.2, 0.7 and 1.0 are shown in Fig. 6c. The evolution of Λ between 4.5 and 4.0 km altitude, as calculated by either the Doppler spectrum method or the simpler DWR method, is consistent with the theoretical evolution for an Eagg of around 0.7. The value Eagg = 0.2 reported in earlier work cannot reproduce the observed broadening of the size distribution through this shallow layer of cloud and leads to Λ being overestimated by almost an order of magnitude at 3.5 km. Eagg = 0.7 is at the higher end of values reported in the literature. However, cloud chamber experiments found $0.4<E_\text{agg}<0.9$ at −15 °C, whereas for all other temperatures sampled the best estimate was Eagg ≤ 0.2. Similarly, it has been reported that Eagg values greater than unity were required for small particles to obtain a good fit to observed aggregation within tropical cirrus anvils. Our results are consistent with the high values of Eagg at −15 °C but do not support Eagg < 0.2. This suggests that free-fall experiments in the 10 m cloud chamber may be more representative of natural aggregation in the atmosphere than stationary-target experiments. One speculation is that the higher Eagg at −15 °C arises because the dendritic branches of the crystals are able to interlock, which can increase Eagg by at least a factor of 3. Increased aggregation efficiency in the presence of dendritic crystals also agrees with earlier observations. Our observations are consistent with these hypotheses.
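For orientation, the theoretical curves in Fig. 6c can be mimicked with a heavily simplified integration of Eq. (5): assuming a snow flux that is constant with height (so the first term vanishes), the aggregation term is stepped downward with forward Euler. All parameter values below are illustrative placeholders, not the retrieved profile (only b = 1.9 is fixed by the mass–size law used in the paper).

```python
import numpy as np
from scipy.special import gamma

a_, b_, c_, d_ = 0.0185, 1.9, 10.0, 0.4   # m = a D^b, V = c D^d (c, d assumed)
I_l  = 11.524                             # value quoted in the text
chi  = 2.0e-4                             # snow flux, kg m-2 s-1 (assumed)

def shrink_rate(lam, e_agg):
    """Magnitude of the aggregation term of Eq. (5), per metre of fall."""
    return (np.pi * e_agg * I_l * chi * lam ** (b_ + d_ - 1)
            / (4 * a_ * b_ * c_ * gamma(b_ + d_ + 1) * gamma(2 * b_ + d_ + 1)))

dz = 10.0                                  # m, downward step
for e_agg in (0.2, 0.7, 1.0):
    lam = 4.0e3                            # Lambda at 4.5 km (assumed), m^-1
    for _ in np.arange(4500.0, 3600.0, -dz):
        lam -= shrink_rate(lam, e_agg) * dz    # Lambda decreases downwards
    print(e_agg, round(lam))               # larger E_agg -> much smaller Lambda
```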
Further evidence to support the hypothesis of rapid aggregation in this part of the cloud is seen in the vertical profiles of snow flux and number flux (Fig. 7). These quantities have been calculated by determining the number and total mass of ice particles at each height and in each velocity bin from the Doppler spectrum retrieval. The mass (or number) in each velocity bin is then multiplied by the Doppler velocity measured by the radars in order to determine the flux. Only flux values for particles > 0.75 mm in diameter are shown because the number of smaller particles cannot be reliably estimated with this combination of radars. The coherent structures seen in time and height (Fig. 7a and b) give confidence in the reliability of our retrievals. The vertical profiles of snow flux and number flux (Fig. 7c) also support our rapid aggregation hypothesis because the decrease in number flux from 4.5 km downwards is substantially larger than the decrease in snow flux over the same heights. A decrease in number (flux) relative to mass (flux) is exactly what is expected from aggregation. The overall decrease in snow flux with height could be explained by sublimation of the ice particles in subsaturated air (included in our calculations in Eq. 5) or through some process by which large particles become significantly smaller (e.g. collisional breakup; not included in Eq. 5). Nevertheless, these properties also support rapid aggregation in this part of the cloud.
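A sketch of the flux computation described above (our own, with hypothetical per-bin retrieval outputs): for each Doppler velocity bin, the retrieved number concentration, the per-particle mass from the mass–size law and the measured velocity are combined and summed over the reliably sized bins.

```python
import numpy as np

def fluxes(v_bins, n_per_bin, d_per_bin, d_min=0.75e-3):
    """Snow flux (kg m-2 s-1) and number flux (m-2 s-1) from a
    Doppler-spectrum retrieval: v_bins are Doppler velocities (m s-1),
    n_per_bin number concentrations (m-3), d_per_bin diameters (m)."""
    m_per_bin = 0.0185 * d_per_bin ** 1.9       # mass-size law, SI units
    ok = d_per_bin > d_min                      # only reliably sized particles
    snow_flux   = np.sum(n_per_bin[ok] * m_per_bin[ok] * v_bins[ok])
    number_flux = np.sum(n_per_bin[ok] * v_bins[ok])
    return snow_flux, number_flux
```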
Figure 7. Time–height plots of the retrieved quantities of (a) snow flux and (b) number flux. These quantities are calculated for particles with retrieved diameter > 0.75 mm only and therefore underestimate the true snow and number fluxes. Panel (c) shows the profiles of these two quantities retrieved at 16:15 UTC.
## 6.1 Sensitivity to uncertainties in the retrieval
The retrieval of the properties of the ice particle size distribution is naturally sensitive to uncertainties in the input quantities. To determine the extent to which our retrieval is sensitive to these uncertainties, the retrieval has been repeated with a range of different assumptions. The sensitivity analysis looks at three different aspects: (1) the impact of improperly aligning the Doppler spectra from the radars along the velocity axis, (2) the impact of improperly aligning the Doppler spectra based on reflectivity or calibration errors, and (3) the impact of using a different mass–diameter relationship in the retrieval. The details of the different sensitivity tests are given in Table 2.
Figure 8. The ice particle size distribution at 4.15 km (equivalent to Fig. 5i) under various uncertainty assumptions. See Table 2 for details of the uncertainties included in these retrievals.
Figure 9. Panels (a)–(d) show the vertical profile of Λ at 16:15 UTC (equivalent to Fig. 6c) under various uncertainty assumptions. Panel (e) shows the mean (solid line) and range (shaded region) as a function of height for all individual retrievals shown in panels (a)–(d). The mean is calculated from the base-10 logarithm of the plotted values. The unperturbed retrieval is plotted in panels (a)–(d) in black for comparison. The theoretical curves for changes of Λ with height due to aggregation, starting from 4.5 km altitude, for Eagg values of 1.0, 0.7 and 0.2 are shown in panel (e), as in Fig. 6c.
Figure 10. Vertical profiles of c and d from the power-law fits to the velocity and diameter retrievals between 16:14 and 16:16 UTC. The mean (solid) and the spread, approximated by the standard deviation of the c and d values in time at each height (dashed), are shown.
Table 2. Details of the changes to the retrieval input or parameters for the sensitivity testing.
Figure 8 shows how the retrieved ice particle size distribution at 4.15 km altitude and at 16:15 UTC varies under the different uncertainty assumptions. There are some large variations in the maximum ice particle diameters retrieved – in particular for uncertainties related to changing the velocity (aspect 1; blue lines) and also in the number concentration retrieved at a particular diameter, which can vary by an order of magnitude. However, the overall character of the size distribution is usually unchanged, and when the characteristic slope of the size distribution Λ is calculated, it is largely insensitive to the uncertainties.
This insensitivity of Λ to these uncertainties can be seen in Fig. 9. In Fig. 9a–d, the vertical profile of Λ at 16:15 UTC is shown for each of the different uncertainties. This can be compared with Fig. 6c, and the retrieved Λ profile from the unperturbed set-up is plotted in black in Fig. 9a–d. Although there is some variation in Λ for the different uncertainty assumptions, the vertical profile continues to show rapid decreases in Λ with height down from 4.5 km, consistent with large aggregation efficiency values (Fig. 9e). The largest deviation is seen for the uncertainty where Z35 and sZ35 are reduced by 1 dB. This change results in larger Λ values at all heights due to a reduction of DWR35∕94 by 1 dB. The lower sDWR results in the retrieved particle diameters being smaller such that the largest particles have lower number concentrations and therefore a steeper slope is diagnosed. Nevertheless, the change of Λ with height for this uncertainty is also consistent with rapid aggregation. Therefore we conclude that none of the uncertainties assessed substantially change the conclusion that aggregation is likely the dominant mechanism for changing the ice particle size distribution from 4.5 km downwards between 15:45 and 16:20 UTC.
The estimation of the aggregation efficiency value depends largely on the mass–size and velocity–size relationships used, because these control the values a, b, c and d, which are the main terms in Eq. (5) determining the change of Λ with height. b and d also contribute substantially to the value of $I_\mathrm{l}$. These values are, however, relatively well constrained. First, Eq. (5) is totally insensitive to a because it appears only once and is cancelled out by its contribution to the mass flux χf. Second, b = 1.9 is known for this case. Therefore no sensitivity exists to the choice of mass–size relationship. c and d have been estimated from the power-law fits as part of the retrieval process and are quite well constrained within the aggregation region (Fig. 10).
# 7 Conclusions
We have shown that radar Doppler spectrum data from three co-located, vertically pointing radars at frequencies of 3, 35 and 94 GHz can produce estimates of the ice particle size distribution and can be used to identify and explore processes such as aggregation. Differences in radar reflectivity between the frequencies provide evidence of particles large enough to fall outside the Rayleigh-scattering regime. Using the Doppler spectra from the three radars, we can determine the size and estimate the number of these ice particles.
In the case presented in this paper, we identify a region where the 35 to 94 GHz dual-wavelength ratio (DWR) increases rapidly with decreasing height, indicative that large ice particles are forming quickly. We have ruled out vapour deposition as the cause of these large particles because that process is too slow. Similarly the rapid growth is not a result of riming because there was no evidence of significant liquid water. We therefore argue that these large particles, up to 5 mm in diameter, are a result of aggregation. Our observations are consistent with theoretical calculations of ice particle size distribution evolution resulting purely from aggregation. In this case an aggregation efficiency around 0.7 fits the observations.
Aggregation as the cause of the rapid growth of ice particles is supported by the consistent and narrow range of heights over which this change occurs. The rapid aggregation occurs just below the region where the Doppler spectra are bi-modal, indicating the presence of small, newly formed ice particles. It appears that the small ice particles forming at approximately 5.3 km (−15.4 °C) and appearing clearly in the Doppler spectra at 4.8 km (Fig. 5d) grow into dendritic ice particles at temperatures around −15 °C and either aggregate with other similarly formed particles or initiate aggregation with pre-existing ice particles falling through this layer. The aggregation initiated by these particles is then evident in the large particles present at 4.1 km, which could not have been formed by vapour deposition or riming.
These observations of rapid aggregation at temperatures around −15 °C add support to cloud chamber studies, which also suggest that aggregation at around −15 °C is much more efficient than at other temperatures. The resulting changes to the ice particle size distribution through this aggregation process strongly affect many microphysical process rates (e.g. vapour deposition, sedimentation velocity and further aggregation), and therefore failure to capture these aggregation regions in models can lead to significant errors in the simulated cloud fields.
This multi-wavelength Doppler spectrum technique shows the ability to determine the size distribution of ice particles in large portions of ice clouds simultaneously. Previously, ice particle sizes have been determined using ice particle sizing instruments attached to aircraft, which suffer from two issues: small sample sizes, and shattering of large ice particles on the instrument inlet, which puts many small fragments into the sample volume and leads to unreliable estimates of both large and small ice particle concentrations. Therefore further studies of cloud microphysical structure and processes using this method are encouraged.
For the benefit of future studies, we give some advice here for achieving the best results. To achieve reliable, quantitative results from this method, the radars need to be pointed vertically very precisely. We find that mispointing by 0.2° is sufficient to introduce a non-negligible contribution from the horizontal wind into the Doppler spectra, which adds extra challenges when comparing the spectra from the three radars. Ideally the three radars should also have the same beamwidth; spectral broadening increases for wider beams and again makes comparing spectra from different radars more challenging, especially in the tails of the spectra where the largest DWR values are expected. Despite these challenges, we have shown that this technique enables the generation of ice particle size distributions from remote sensing data. We were unable to make reliable retrievals in regions of strong turbulence (e.g. due to instability created by sublimation) because the assumption that the spectra were unchanged over the 10 s averaging window was violated. Although no low-level clouds were present on this day, the technique of cross-calibrating the radars near the cloud top enables the retrieval to be performed even when supercooled-liquid or liquid clouds or rain partially attenuate the radar signal at lower levels. Retrievals of this type have the potential to benefit the cloud microphysics community both through statistical sampling of clouds and by aiding studies of individual processes such as the aggregation process detailed in this paper. Further studies comparing the retrieved size distributions against data obtained from aircraft are currently being performed.
One weakness of our current experimental set-up is that we can only size particles larger than 0.75 mm. Particles smaller than 0.75 mm are in the Rayleigh-scattering regime at all three wavelengths, and therefore their size cannot be determined. The addition of an extra, shorter wavelength (e.g. at frequencies of 150 or 220 GHz, as has been advocated in the literature) would enable sizing of particles down to approximately 0.45 or 0.3 mm (for 150 and 220 GHz respectively). Such observations would provide a unique opportunity for increasing our understanding of cloud microphysics, both statistically and through process studies, as demonstrated in this paper.
Data availability
The radar data used in this paper can be accessed at the Centre for Environmental Data Archival (http://www.ceda.ac.uk/) or by contacting the authors.
Author contributions
AB performed most of the data analysis, analysed the radar data and wrote the main part of the text. CW conceived and led the research project, contributed code for the Westbrook scattering model, and assisted with the radar data analysis. JN created the pre-processing code for the radar data. TS provided code and experience from previous work with the data. All authors discussed the scientific findings and contributed to the final paper.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
This research was funded by the Natural Environment Research Council, grant NE/K012444/1. We are grateful to the staff at the Chilbolton Facility for Atmospheric and Radio Research for operating and maintaining the radars and, in particular, Alan Doo, Allister Mallett, Chris Walden, John Bradford and Darcy Ladd for their assistance in collecting the triple-wavelength measurements.
The article processing charges for this open-access publication were covered by a Research Centre of the Helmholtz Association.
Review statement
This paper was edited by Timothy Garrett and reviewed by Paul Connolly and three anonymous referees.
References
Barrett, A. I., Hogan, R. J., and Forbes, R. M.: Why are mixed-phase altocumulus clouds poorly predicted by large-scale models? Part 1. Physical processes, J. Geophys. Res.-Atmos., 122, 9903–9926, 2017.
Battaglia, A., Westbrook, C. D., Kneifel, S., Kollias, P., Humpage, N., Löhnert, U., Tyynelä, J., and Petty, G. W.: G band atmospheric radars: new frontiers in cloud physics, Atmos. Meas. Tech., 7, 1527–1546, https://doi.org/10.5194/amt-7-1527-2014, 2014.
Brown, P. R. A. and Francis, P. N.: Improved Measurements of the Ice Water Content in Cirrus Using a Total-Water Probe, J. Atmos. Ocean. Tech., 12, 410–414, 1995.
Chase, R. J., Finlon, J. A., Borque, P., McFarquhar, G. M., Nesbitt, S. W., Tanelli, S., Sy, O. O., Durden, S. L., and Poellot, M. R.: Evaluation of Triple-Frequency Radar Retrieval of Snowfall Properties Using Coincident Airborne In Situ Observations During OLYMPEX, Geophys. Res. Lett., 45, 5752–5760, https://doi.org/10.1029/2018GL077997, 2018.
Connolly, P. J., Emersic, C., and Field, P. R.: A laboratory investigation into the aggregation efficiency of small ice crystals, Atmos. Chem. Phys., 12, 2055–2076, https://doi.org/10.5194/acp-12-2055-2012, 2012.
Eastment, J.: A fully coherent 94 GHz radar for the characterisation of clouds, in: 29th International Conference on Radar Meteorology, Montreal, Canada, 12–16 July 1999, 442–445, 1999.
Field, P. and Heymsfield, A.: Importance of snow to global precipitation, Geophys. Res. Lett., 42, 9512–9520, 2015.
Field, P., Heymsfield, A., and Bansemer, A.: A test of ice self-collection kernels using aircraft data, J. Atmos. Sci., 63, 651–666, 2006.
Field, P. R., Hogan, R. J., Brown, P. R. A., Illingworth, A. J., Choularton, T. W., and Cotton, R. J.: Parametrization of ice-particle size distributions for mid-latitude stratiform cloud, Q. J. Roy. Meteor. Soc., 131, 1997–2017, 2005.
Fukuta, N. and Takahashi, T.: The growth of atmospheric ice crystals: A summary of findings in vertical supercooled cloud tunnel studies, J. Atmos. Sci., 56, 1963–1979, 1999.
Goddard, J., Eastment, J. D., and Thurai, M.: The Chilbolton Advanced Meteorological Radar: A tool for multidisciplinary atmospheric research, Electron. Commun. Eng., 6, 77–86, 1994a.
Goddard, J., Tan, J., and Thurai, M.: Technique for calibration of meteorological radars using differential phase, Electron. Lett., 30, 166–167, 1994b.
Harrington, J. Y., Reisin, T., Cotton, W. R., and Kreidenweis, S. M.: Cloud resolving simulations of Arctic stratus, Part II: Transition-season clouds, Atmos. Res., 51, 45–75, 1999.
Hobbs, P. V., Chang, S., and Locatelli, J. D.: The dimensions and aggregation of ice crystals in natural clouds, J. Geophys. Res., 79, 2199–2206, 1974.
Hosler, C. and Hallgren, R.: The aggregation of small ice crystals, Discuss. Faraday Soc., 30, 200–207, 1960.
Illingworth, A. J., Hogan, R. J., O'Connor, E. J., Bouniol, D., Brooks, M. E., Delanoë, J., Donovan, D. P., Gaussiat, N., Goddard, J. W. F., Haeffelin, M., Baltink, H. K., Krasnov, O. A., Pelon, J., Piriou, J.-M., Protat, A., Russchenberg, H. W. J., Seifert, A., Tompkins, A. M., van Zadelhoff, G.-J., Vinit, F., Willen, U., Wilson, D. R., and Wrench, C. L.: Cloudnet - continuous evaluation of cloud profiles in seven operational models using ground-based observations, B. Am. Meteorol. Soc., 88, 885–898, 2007.
Keith, W. and Saunders, C.: The collection efficiency of a cylindrical target for ice crystals, Atmos. Res., 23, 83–95, 1989.
Kneifel, S., Kulie, M., and Bennartz, R.: A triple-frequency approach to retrieve microphysical snowfall parameters, J. Geophys. Res.-Atmos., 116, D11203, https://doi.org/10.1029/2010JD015430, 2011.
Kneifel, S., Lerber, A., Tiira, J., Moisseev, D., Kollias, P., and Leinonen, J.: Observed relations between snowfall microphysics and triple-frequency radar measurements, J. Geophys. Res.-Atmos., 120, 6034–6055, 2015.
Kneifel, S., Kollias, P., Battaglia, A., Leinonen, J., Maahn, M., Kalesse, H., and Tridon, F.: First observations of triple-frequency radar Doppler spectra in snowfall: Interpretation and applications, Geophys. Res. Lett., 43, 2225–2233, 2016.
Kollias, P., Rémillard, J., Luke, E., and Szyrmer, W.: Cloud radar Doppler spectra in drizzling stratiform clouds: 1. Forward modeling and remote sensing applications, J. Geophys. Res.-Atmos., 116, D13201, https://doi.org/10.1029/2010JD015237, 2011a.
Kollias, P., Szyrmer, W., Rémillard, J., and Luke, E.: Cloud radar Doppler spectra in drizzling stratiform clouds: 2. Observations and microphysical modeling of drizzle evolution, J. Geophys. Res.-Atmos., 116, D13203, https://doi.org/10.1029/2010JD015238, 2011b.
Korolev, A., Emery, E., Strapp, J., Cober, S., Isaac, G., Wasey, M., and Marcotte, D.: Small ice particles in tropospheric clouds: Fact or artifact? Airborne Icing Instrumentation Evaluation Experiment, B. Am. Meteorol. Soc., 92, 967–973, 2011.
Kulie, M. S., Hiley, M. J., Bennartz, R., Kneifel, S., and Tanelli, S.: Triple-frequency radar reflectivity signatures of snow: Observations and comparisons with theoretical ice particle scattering models, J. Appl. Meteorol. Clim., 53, 1080–1098, 2014.
Leinonen, J. and Moisseev, D.: What do triple-frequency radar signatures reveal about aggregate snowflakes?, J. Geophys. Res.-Atmos., 120, 229–239, 2015.
Leinonen, J., Lebsock, M. D., Tanelli, S., Sy, O. O., Dolan, B., Chase, R. J., Finlon, J. A., von Lerber, A., and Moisseev, D.: Retrieval of snowflake microphysical properties from multifrequency radar observations, Atmos. Meas. Tech., 11, 5471–5488, https://doi.org/10.5194/amt-11-5471-2018, 2018.
Li, J.-L., Waliser, D., Chen, W.-T., Guan, B., Kubar, T., Stephens, G., Ma, H.-Y., Deng, M., Donner, L., Seman, C., and Horowitz, L.: An observationally based evaluation of cloud ice water in CMIP3 and CMIP5 GCMs and contemporary reanalyses using contemporary satellite data, J. Geophys. Res.-Atmos., 117, D16105, https://doi.org/10.1029/2012JD017640, 2012.
Locatelli, J. D. and Hobbs, P. V.: Fall speeds and masses of solid precipitation particles, J. Geophys. Res., 79, 2185–2197, 1974.
Mitchell, D., Huggins, A., and Grubisic, V.: A new snow growth model with application to radar precipitation estimates, Atmos. Res., 82, 2–18, 2006.
Mitchell, D. L.: Evolution of Snow-Size Spectra in Cyclonic Storms. Part I: Snow Growth by Vapor Deposition and Aggregation, J. Atmos. Sci., 45, 3431–3451, 1988.
Moisseev, D. N., Lautaportti, S., Tyynela, J., and Lim, S.: Dual-polarization radar signatures in snowstorms: Role of snowflake aggregation, J. Geophys. Res.-Atmos., 120, 12644–12655, 2015.
Morrison, H. and Pinto, J.: Intercomparison of bulk cloud microphysics schemes in mesoscale simulations of springtime Arctic mixed-phase stratiform clouds, Mon. Weather Rev., 134, 1880–1900, 2006.
O'Connor, E. J., Hogan, R. J., and Illingworth, A. J.: Retrieving stratocumulus drizzle parameters using Doppler radar and lidar, J. Appl. Meteorol., 44, 14–27, 2005.
Pinto, J. O.: Autumnal mixed-phase cloudy boundary layers in the arctic, J. Atmos. Sci., 55, 2016–2038, 1998.
Pruppacher, H. R. and Klett, J. D.: Microphysics of clouds and precipitation, D. Reidel Publishing Company, Boston, USA, 1978.
Sekelsky, S. M.: Near-field reflectivity and antenna boresight gain corrections for millimeter-wave atmospheric radars, J. Atmos. Ocean. Tech., 19, 468–477, 2002.
Solomon, A., Morrison, H., Persson, O., Shupe, M. D., and Bao, J.-W.: Investigation of microphysical parameterizations of snow and ice in Arctic clouds during M-PACE through model-observation comparisons, Mon. Weather Rev., 137, 3110–3128, 2009.
Stein, T. H., Westbrook, C. D., and Nicol, J.: Fractal geometry of aggregate snowflakes revealed by triple-wavelength radar measurements, Geophys. Res. Lett., 42, 176–183, 2015.
Takahashi, T., Endoh, T., Wakahama, G., and Fukuta, N.: Vapor diffusional growth of free-falling snow crystals between −3 and −23 °C, J. Meteorol. Soc. Jpn., 69, 15–30, 1991.
Tridon, F. and Battaglia, A.: Dual-frequency radar Doppler spectral retrieval of rain drop size distributions and entangled dynamics variables, J. Geophys. Res.-Atmos., 120, 5585–5601, 2015.
Tridon, F., Battaglia, A., Luke, E., and Kollias, P.: Rain retrieval from dual-frequency radar Doppler spectra: validation and potential for a midlatitude precipitating case-study, Q. J. Roy. Meteor. Soc., 143, 1364–1380, 2017.
Westbrook, C., Ball, R., and Field, P.: Radar scattering by aggregate snowflakes, Q. J. Roy. Meteor. Soc., 132, 897–914, 2006.
Westbrook, C., Ball, R., and Field, P.: Corrigendum: Radar scattering by aggregate snowflakes, Q. J. Roy. Meteor. Soc., 134, 547–548, 2008a.
Westbrook, C. D. and Illingworth, A. J.: Testing the influence of small crystals on ice size spectra using Doppler lidar observations, Geophys. Res. Lett., 36, L12810, https://doi.org/10.1029/2009GL038186, 2009.
Westbrook, C. D., Hogan, R. J., and Illingworth, A. J.: The Capacitance of Pristine Ice Crystals and Aggregate Snowflakes, J. Atmos. Sci., 65, 206–219, 2008b.
Based on an analysis of reflectivity differences, Rayleigh scattering is assumed where the 3 GHz reflectivity is below 5 dBZ, and the absolute difference between the 3 and 94 GHz velocity measurements is less than 0.025 m s−1. Measurements were also excluded where the 3 GHz reflectivity was less than 10 dBZ to avoid effects of residual ground clutter. | 2019-05-20 01:30:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6854313611984253, "perplexity": 2175.2640705559957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255251.1/warc/CC-MAIN-20190520001706-20190520023706-00509.warc.gz"} |
http://mathhelpforum.com/algebra/9666-equation.html | 1. ## equation
$a^2(x-1)-9=a(3x-6)$
$x*a(a-3)=a^2-6a+9$
$x*a(a-3)=(a-3)^2$
$x=\frac{(a-3)^2}{a(a-3)}$
Now... I am sure that the solution is not defined for a=0, but can I reduce the fraction to:
$x=\frac{(a-3)}{a}$,
without any notes, or do I have to state that the reduction of the fraction is only possible for $a$ not equal to 3? Or in other words, is a=3 also a solution?
2. Originally Posted by riki_haj
$a^2(x-1)-9=a(3x-6)$
$x*a(a-3)=a^2-6a+9$
$x*a(a-3)=(a-3)^2$
$x=\frac{(a-3)^2}{a(a-3)}$
Now... I am sure that the solution is not defined for a=0, but can I reduce the fraction to:
$x=\frac{(a-3)}{a}$,
without any notes, or do I have to state that the reduction of the fraction is only possible for $a$ not equal to 3? Or in other words, is a=3 also a solution?
Look at the original equation: when a=3 both sides reduce to $9x-18$, so the equation is an identity and every x is a solution; when a=0 it reads $-9=0$, which is inconsistent.
So the closed-form solution $x=\frac{(a-3)}{a}$ only makes sense for a not equal to 0 or 3, and under that assumption you can do the cancellation.
But you should say so.
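A quick symbolic check of the two special cases (a minimal sketch using sympy; the variable names are ours):

```python
import sympy as sp

a, x = sp.symbols('a x')
expr = a**2 * (x - 1) - 9 - a * (3 * x - 6)   # original equation as expr = 0

print(sp.solve(expr.subs(a, 0), x))   # []  : reduces to -9 = 0, inconsistent
print(sp.expand(expr.subs(a, 3)))     # 0   : identity, every x is a solution
print(sp.solve(expr, x))              # [(a - 3)/a] for generic a (a != 0, 3)
```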
RonL | 2017-02-26 22:34:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067654013633728, "perplexity": 609.3524282414458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00569-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/140340/calculating-angle-of-human-joint-beyond-180%c2%b0-in-3d | # Calculating angle of human joint beyond 180° in 3D
I'm having some trouble calculating the angle of an human joint in 3D using the Microsoft Kinect.
Here's an example of the angle of the elbow (using the shoulder and wrist joint):
Image of example
Calculating angles between 0° and 180° is no problem, but when the person hyperextends his elbow my calculation returns 170° instead of 190°.
The calculation I'm using is as follows:
1. $d = b - a$
2. $e = b - c$
Where a, b and c are 3D-points and d and e are 3D-vectors.
My question is: How can I calculate the angle between $d$ and $e$ where the angle is between 0° and 360°?
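One standard resolution (a minimal sketch, not from the thread; all names are ours): the dot product alone can only give angles in [0°, 180°], so a reference normal $n$, e.g. fixed by the torso orientation, is needed to decide on which side of 180° the joint is, via a signed atan2 angle:

```python
import numpy as np

def joint_angle_360(a, b, c, n):
    """Angle at joint b between d = b - a and e = b - c, in [0, 360).
    n is a reference normal (e.g. from the torso orientation) that
    defines which bending direction counts as hyperextension."""
    d, e = b - a, b - c
    n_hat = n / np.linalg.norm(n)
    signed = np.degrees(np.arctan2(np.dot(np.cross(d, e), n_hat),
                                   np.dot(d, e)))
    return signed % 360.0                  # map (-180, 180] onto [0, 360)

# Hypothetical shoulder, elbow, wrist positions (metres), elbow hyperextended
shoulder = np.array([0.0, 0.0, 0.0])
elbow    = np.array([0.3, 0.0, 0.0])
wrist    = np.array([0.6, 0.05, 0.0])
normal   = np.array([0.0, 0.0, 1.0])       # assumed body-plane normal
print(joint_angle_360(shoulder, elbow, wrist, normal))  # ~190 deg, not ~170
```

With the wrist deviated to the other side of the normal, the same function returns about 170°, which is exactly the distinction the plain dot-product formula loses.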
What exactly are $a, b, c, d$ and $e$? – Daan Michiels May 3 '12 at 11:12
@Daan Michiels: $a$, $b$ and $c$ are 3D-points. $d$ and $e$ are 3D-vectors. – Jeroen Corsius May 3 '12 at 11:17
I am doing the same task, can you provide some example project? – Ewerton Nov 7 '12 at 22:32 | 2015-05-24 07:23:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8362840414047241, "perplexity": 1573.4635598434008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927844.14/warc/CC-MAIN-20150521113207-00186-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.springerprofessional.de/large-scale-scientific-computing/17699972 | ## About this Book
This book constitutes revised papers from the 12th International Conference on Large-Scale Scientific Computing, LSSC 2019, held in Sozopol, Bulgaria, in June 2019. The 70 papers presented in this volume were carefully reviewed and selected from 81 submissions. The book also contains two invited talks. The papers were organized in topical sections named as follows: control and optimization of dynamical systems; meshfree and particle methods; fractional diffusion problems: numerical methods, algorithms and applications; pore scale flow and transport simulation; tensors based algorithms and structures in optimization and applications; HPC and big data: algorithms and applications; large-scale models: numerical methods, parallel computations and applications; Monte Carlo algorithms: innovative applications in conjunction with other methods; application of metaheuristics to large-scale problems; large scale machine learning: multiscale algorithms and performance guarantees; and contributed papers.
## Table of Contents
### Correction to: Impact of Data Assimilation on Short-Term Precipitation Forecasts Using WRF-ARW Model
In the originally published version of chapter 30 an acknowledgement was missing. This has been corrected.
Evgeni Vladimirov, Reneta Dimitrova, Ventsislav Danchovski
### First-Order System Least Squares Finite-Elements for Singularly Perturbed Reaction-Diffusion Equations
We propose a new first-order-system least squares (FOSLS) finite-element discretization for singularly perturbed reaction-diffusion equations. Solutions to such problems feature layer phenomena, and are ubiquitous in many areas of applied mathematics and modelling. There is a long history of the development of specialized numerical schemes for their accurate numerical approximation. We follow a well-established practice of employing a priori layer-adapted meshes, but with a novel finite-element method that yields a symmetric formulation while also inducing a so-called “balanced” norm. We prove continuity and coercivity of the FOSLS weak form, present a suitable piecewise uniform mesh, and report on the results of numerical experiments that demonstrate the accuracy and robustness of the method.
### Big Data Analysis: Theory and Applications
With the continuous improvement of data processing and storage capabilities, the Big Data era has entered the public sight. Under such circumstances, the generation of massive data has greatly facilitated the development of data mining algorithms. This paper describes the status of data mining and presents three of our works: optimization-based data mining, intelligent knowledge, and the intelligence quotient of Artificial Intelligence. Besides that, we also introduce some applications that have emerged in the context of big data. Furthermore, this paper indicates three potential directions for future research in big data analysis.
Yong Shi, Pei Quan
### Solutions to the Hamilton-Jacobi Equation for Bolza Problems with State Constraints and Discontinuous Time Dependent Data
This paper concerns the characterization of the value function associated with a state constrained Bolza problem in which the data are allowed to be discontinuous w.r.t. the time variable on a set of zero measure and have everywhere left and right limits. Using techniques coming from viability theory and nonsmooth analysis, we provide a characterization of the value function as the unique solution to the Hamilton-Jacobi equation, in a generalized sense which employs the lower Dini derivative and the proximal normal vectors.
Julien Bernis, Piernicola Bettiol
### Optimal Control Problem of a Metronomic Chemotherapy
In this paper we consider a metronomic chemotherapy model which is optimally controlled over the expected future lifetime of the particular patient. Under certain assumptions concerning the distribution of the future lifetime of the patient, it can easily be transformed into a purely deterministic optimal control problem with infinite horizon. To solve the latter, the open source software package OCMat was used. Solutions to optimal control problems with $$L_2$$- and regularized $$L_1$$-objective functionals have been compared.
Dieter Grass, Valeriya Lykina
### Asymptotic Behaviour of Controllability Gramians and Convexity of Small-Time Reachable Sets
We study the convexity of reachable sets for a nonlinear control-affine system on a small time interval under integral constraints on the control variables. The convexity of a reachable set for a nonlinear control system under integral constraints was proved by B. Polyak under the assumption that the linearization of the system is controllable and the $$L_2$$ norms of the controls are bounded from above by a sufficiently small number. Using this result we propose sufficient conditions for the convexity of reachable sets of a control-affine system on a small time interval. These conditions are based on estimates for the asymptotics of the minimal eigenvalue of the controllability Gramian of the system linearization, which depends on a small parameter (the length of the interval). A procedure for calculating the estimates using the expansion of the Gramian into a series in powers of the small parameter is described, and some illustrative examples are presented.
Mikhail Gusev
### On the Regularity of Mayer-Type Affine Optimal Control Problems
The paper presents a sufficient condition for strong metric sub-regularity (SMsR) of the system of first-order optimality conditions (optimality system) for a Mayer-type optimal control problem with dynamics affine with respect to the control. The SMsR property at a reference solution means that any solution of the optimality system subjected to "small" disturbances that is close enough to the reference one lies at a distance from it at most proportional to the size of the disturbance. The property is well understood for problems satisfying certain coercivity conditions, which, however, are not fulfilled for affine problems.
Nikolai P. Osmolovskii, Vladimir M. Veliov
### Mesh-Hardened Finite Element Analysis Through a Generalized Moving Least-Squares Approximation of Variational Problems
In most finite element methods the mesh is used to both represent the domain and to define the finite element basis. As a result the quality of such methods is tied to the quality of the mesh and may suffer when the latter deteriorates. This paper formulates an alternative approach, which separates the discretization of the domain, i.e., the meshing, from the discretization of the PDE. The latter is accomplished by extending the Generalized Moving Least-Squares (GMLS) regression technique to approximation of bilinear forms and using the mesh only for the integration of the GMLS polynomial basis. Our approach yields a non-conforming discretization of the weak equations that can be handled by standard discontinuous Galerkin or interior penalty terms.
P. Bochev, N. Trask, P. Kuberry, M. Perego
### An Adaptive LOOCV-Based Algorithm for Solving Elliptic PDEs via RBF Collocation
We present a new adaptive scheme for solving elliptic partial differential equations (PDEs) through a radial basis function (RBF) collocation method. Our adaptive algorithm is meshless and it is characterized by the use of an error indicator, which depends on a leave-one-out cross validation (LOOCV) technique. This approach allows us to locate the areas that need to be refined, also including the chance to add or remove adaptively any points. The algorithm turns out to be flexible and effective by means of a good interaction between error indicator and refinement procedure. Numerical experiments point out the performance of our scheme.
R. Cavoretto, A. De Rossi
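The LOOCV indicator mentioned in this abstract has a well-known closed form for RBF interpolation (Rippa's formula): the leave-one-out error at centre k equals $c_k/(A^{-1})_{kk}$, with c the coefficient vector and A the kernel matrix. A minimal sketch with a Gaussian kernel, our own illustration rather than the authors' scheme:

```python
import numpy as np

def loocv_errors(centers, values, eps=2.0):
    """Leave-one-out errors for Gaussian RBF interpolation,
    e_k = c_k / (A^{-1})_{kk} (Rippa's formula)."""
    r2 = np.sum((centers[:, None, :] - centers[None, :, :])**2, axis=-1)
    A = np.exp(-(eps**2) * r2)            # Gaussian kernel matrix
    A_inv = np.linalg.inv(A)
    coeffs = A_inv @ values
    return coeffs / np.diag(A_inv)        # one error estimate per centre

pts = np.random.rand(50, 2)               # scattered nodes in the unit square
f = np.sin(4 * pts[:, 0]) * pts[:, 1]     # sampled test function
e = loocv_errors(pts, f)
refine_near = pts[np.argsort(-np.abs(e))[:5]]   # where to add points next
```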
### Adaptive Refinement Techniques for RBF-PU Collocation
We propose new adaptive refinement techniques for solving Poisson problems via a collocation radial basis function partition of unity (RBF-PU) method. As the construction of an adaptive RBF-PU method is still an open problem, we present two algorithms based on different error indicators and refinement strategies that turn out to be particularly suited for a RBF-PU scheme. More precisely, the first algorithm is characterized by an error estimator based on the comparison of two collocation solutions evaluated on a coarser set and a finer one, while the second one depends on an error estimate that is obtained by a comparison between the global collocation solution and the associated local RBF interpolant. Numerical results support our study and show the effectiveness of our algorithms.
R. Cavoretto, A. De Rossi
### A Second Order Time Accurate Finite Volume Scheme for the Time-Fractional Diffusion Wave Equation on General Nonconforming Meshes
SUSHI (Scheme Using Stabilization and Hybrid Interfaces) is a finite volume method that was first developed to approximate heterogeneous and anisotropic diffusion problems. It has later been applied to approximate several types of partial differential equations. The main feature of SUSHI is that the control volumes need only be assumed to be polyhedral. Further, a consistent and stable discrete gradient is developed. In this note, we establish a second-order time-accurate implicit scheme for the TFDWE (Time-Fractional Diffusion-Wave Equation). The space discretization is based on the use of SUSHI, whereas the time discretization is performed using a uniform mesh. The scheme is based on the use of an equivalent system of two low-order equations. We sketch the proof of the convergence of the stated scheme. The convergence is unconditional. This work is an improvement of [3], in which a first-order scheme, whose convergence is conditional, was established.
### Identification of a Time-Dependent Right-Hand Side of an Unsteady Equation with a Fractional Power of an Elliptic Operator
An inverse problem of identifying the right-hand side is considered for an unsteady equation with a fractional power of the elliptic operator. We consider the case when the time-dependent right-hand side is unknown. The redefinition (additional information) is associated with the known solution at an internal point (points) of the computational domain. The computational algorithm is based on a special decomposition of the solution of the unsteady problem during a transition from the previous time level to the next one. The related auxiliary problems are direct boundary value problems for stationary equations with fractional powers of elliptic operators. Some features of the proposed computational algorithm are demonstrated by the results of numerical experiments for a model 2D inverse problem.
Petr N. Vabishchevich
### Computational Identification of Adsorption and Desorption Parameters for Pore Scale Transport in Random Porous Media
Reactive flow at the pore scale in random porous media is considered at low Peclet and Damköhler numbers, and the computational identification of unknown adsorption and desorption rates is discussed. The reactive transport is governed by steady-state Stokes equations, coupled with a convection-diffusion equation for species transport. The surface reactions, namely adsorption and desorption, are accounted for via a Robin boundary condition. Finite element approximation in space and implicit time discretization are exploited. The measured concentration of the species at the outlet of the domain is provided to carry out the identification procedure. The impact of noise in the measurement on the parameter identification procedure is studied. A stochastic parameter identification approach is adopted. Computational results demonstrating the potential of the considered parameter identification approaches are presented.
Vasiliy V. Grigoriev, Oleg Iliev, Petr N. Vabishchevich
### Weighted Time-Semidiscretization Quasilinearization Method for Solving Rihards’ Equation
This paper concerns an efficient $$\sigma$$-weighted ($$0<\sigma <1$$) time-semidiscretization quasilinearization technique for the numerical solution of Richards' equation. We solve the classical and a new $$\alpha$$ time-fractional ($$0<\alpha <1$$) equation, which models anomalous diffusion in porous media. A high-order approximation of the $$\alpha =2(1-\sigma )$$ fractional derivative is applied. Numerical comparison results are discussed.
Miglena N. Koleva, Lubin G. Vulkov
### On the Problem of Decoupling Multivariate Polynomials
In this paper we address the application properties of the decoupling multivariate polynomials problem algorithm proposed in [2]. By numerous examples we demonstrate that this algorithm, unfortunately, fails to provide a solution in some cases. Therefore we empirically determine the application scope of this algorithm and show that it is connected with the uniqueness conditions of the CP-decomposition (Canonical Polyadic Decomposition). We also investigate the approximation properties of this algorithm and show that it is capable of construction the best low-rank polynomial approximation provided that the CP-decomposition is unique.
Stanislav Morozov, Dmitry A. Zheltkov, Nikolai Zamarashkin
### Model Order Reduction Algorithms in the Design of Electric Machines
Although model order reduction techniques based on searching for a solution in a low-rank subspace are well researched for the case of linear differential equations, it is still questionable whether such model order reduction techniques would work well for nonlinear PDEs. In this work, model order reduction via the POD-DEIM (Proper Orthogonal Decomposition via Discrete Empirical Interpolation) method is applied to a particular nonlinear parametric PDE that is used for the modeling of electric machines. The idea of the POD-DEIM algorithm is to use statistical data about 'typical solutions' that correspond to 'typical' parameter values to approximate solutions for other parameter values. Practical application of POD-DEIM to this particular PDE met a number of difficulties, and several improvements to the selection of the initial approximation, the selection of interpolation nodes, the selection of the interpolation basis and the handling of moving physical entities were made to make the method work. These improvements, along with some numerical experiments, are presented.
Sergey Petrov
### Discrete Vortices and Their Generalizations for Scattering Problems
In the numerical solution of the boundary integral equations of electrodynamics, the problem is reduced to a system of linear algebraic equations with a dense matrix. A significant increase in the number of cells of the partition used for the solution can be achieved by applying special methods for compressing dense matrices and fast matrix algorithms. At the same time, extending the classes of wave scattering problems that can be solved requires progress in the boundary integral equations method itself. The problem of electromagnetic diffraction in a piecewise homogeneous medium that can consist of domains with different dielectric properties and can contain ideally conducting inclusions in the form of solid objects and screens is considered in the present work. The proposed numerical method for solving boundary hypersingular integral equations is based on ideas borrowed from the vortex frame method, widely used in computational aerodynamics.
Alexey Setukha
### Nonnegative Tensor Train Factorizations and Some Applications
Nowadays, as the amount of available data grows, the problem of managing information becomes more difficult. In many applications data can be represented as a multidimensional array. However, in the big data case, as well as when we aim to discover some structure in the data, we are often interested in constructing low-rank tensor approximations, for instance, using the tensor train (TT) decomposition. If the original data is nonnegative, we may want to guarantee that an approximant keeps this property. Nonnegative tensor train factorization is an utterly nontrivial task when we cannot afford to see each data element, because doing so may be too expensive in the case of big data. A natural solution is to build tensor trains with all carriages (cores) nonnegative. This means that skeleton decompositions (approximations) have to be constructed to be nonnegative. Nonnegative factorizations can be used as models for recovering suitable structures in data, e.g., in machine learning and image processing tasks. In this work we suggest a new method for nonnegative tensor train factorizations, estimate its accuracy and give numerical results for different problems.
Elena Shcherbakova, Eugene Tyrtyshnikov
### Low Rank Structures in Solving Electromagnetic Problems
Hypersingular integral equations are applied in various areas of applied mathematics and engineering. The paper presents a method for solving the problem of diffraction of an electromagnetic wave on a perfectly conducting object of complex form. In order to solve the diffraction problem with large wave numbers using the method of integral equations, it is necessary to calculate a large dense matrix. In order to solve the integral equation, the author used low-rank approximations of large dense matrices. The low-rank approximation method allows multiplying a matrix of size $$N\times N$$ by a vector of size N in $$\mathcal {O}(N\log (N))$$ operations instead of $$\mathcal {O}(N^2)$$. An iterative method (GMRES) is used to solve the system with a large dense matrix represented in a low-rank format, using fast matrix-vector multiplication. In the case of a large wave number, the matrix becomes ill-conditioned; therefore, it is necessary to use a preconditioner to solve a system with such a matrix. A preconditioner is constructed using the uncompressed matrix blocks of the low-rank matrix representation in order to reduce the number of iterations in the GMRES method. The preconditioner is a sparse matrix. The MUMPS package is used in order to solve the system with this sparse matrix on high-performance computing systems.
Stanislav Stavtsev
### Tensors in Modelling Multi-particle Interactions
In this work we present recent results on the application of low-rank tensor decompositions to the modelling of aggregation kinetics taking into account multi-particle collisions (of three or more particles). Such kinetics can be described by a system of nonlinear differential equations whose right-hand side requires $$N^D$$ operations for its straightforward evaluation, where N is the number of particle size classes and D is the number of particles colliding simultaneously. This complexity can be significantly reduced by applying low-rank tensor decompositions (either Tensor Train or Canonical Polyadic) to accelerate the evaluation of the sums and convolutions in the right-hand side. Based on this drastic reduction of the complexity of evaluating the right-hand side, we further utilize a standard second-order Runge-Kutta time integration scheme and demonstrate that our approach allows us to obtain numerical solutions of the studied equations with very high accuracy in modest time. We also show preliminary results on the parallel scalability of the novel approach and conclude that it can be efficiently utilized on supercomputers.
Daniil A. Stefonishin, Sergey A. Matveev, Dmitry A. Zheltkov
### Tensorisation in the Solution of Smoluchowski Type Equations
We investigate the structure of the non-linear operator featured in the Smoluchowski-type system of ordinary differential equations, and find a way to express it algebraically in terms of the parameters of the problem and a few auxiliary tensors, describing, in a sense, the “shape” of the system. We find compact representations of these auxiliary tensors in terms of a Tensor Train decomposition. Provided the parameters admit a compact representation in this format as well, this allows us to rather straightforwardly reuse standard numerical algorithms for a wide range of associated problems, obtaining $$O(\log N)$$ asymptotic complexity.
Ivan Timokhin
### On Tensor-Train Ranks of Tensorized Polynomials
Discretization followed by tensorization (mapping from low-dimensional to high-dimensional data) can be used to construct low-parametric approximations of functions. For example, a function f defined on [0, 1] may be mapped to a d-dimensional tensor $$A \in \mathbb {R}^{b\times \dots \times b}$$ with elements $$A(i_1,\dots ,i_d) = f(i_1b^{-1} + \dots + i_db^{-d})$$, $$i_k \in \{0,\dots ,b-1\}$$. The tensor A can now be compressed using one of the tensor formats, e.g. tensor train format. It has been noticed in practice that approximate TT-ranks of tensorizations of degree-n polynomials grow very slowly with respect to n, while the only known bound for them is $$n+1$$. In this paper we try to explain the observed effect. New bounds of the described TT-ranks are proved and shown experimentally to quite successfully capture the observed distribution of ranks.
Lev Vysotsky
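A small numerical experiment in the spirit of the abstract (not the paper's code) reproduces the setup: tensorize a degree-n polynomial and inspect the ranks of the unfoldings of the resulting tensor, which bound the TT-ranks from above.

```python
# Tensorize f on [0, 1] into a b x ... x b tensor and print unfolding ranks.
# For a degree-n polynomial these stay <= n + 1.
import numpy as np
from itertools import product

b, d, n = 2, 10, 5                           # base, dimensions, degree
coeffs = np.arange(1.0, n + 2)               # some degree-n polynomial
f = lambda x: np.polyval(coeffs, x)

# A(i1,...,id) = f(i1*b^-1 + ... + id*b^-d)
points = np.array([sum(idx[k] * b ** -(k + 1) for k in range(d))
                   for idx in product(range(b), repeat=d)])
A = f(points).reshape([b] * d)

for k in range(1, d):                        # k-th unfolding: b^k x b^(d-k)
    r = np.linalg.matrix_rank(A.reshape(b ** k, -1), tol=1e-8)
    print(k, r)                              # stays <= n + 1 = 6
```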
### Global Optimization Algorithms Using Tensor Trains
The global optimization problem arises in a huge number of applications, including parameter estimation of different models, molecular biology, drug design and many others. There are several types of methods for this problem: deterministic, stochastic, heuristic and metaheuristic. Deterministic methods guarantee that the found solution is the global optimum, but their complexity allows them to be used only for problems of relatively small dimensionality, with a simple objective functional and optimization domain. Non-deterministic methods are based on simple models of stochastic, physical, biological and other processes. In practice such methods are often much faster than direct methods, but for most of them there is no proof of such fast convergence even in simple cases. In this paper we consider a global optimization method based on the tensor train decomposition. The method is non-deterministic and exploits the tensor structure of the objective functional. Theoretical results proving its fast convergence to the global optimum in some simple cases are provided.
Dmitry A. Zheltkov, Alexander Osinsky
### Application of the Global Optimization Methods for Solving the Parameter Estimation Problem in Mathematical Immunology
Mathematical modeling is widely used in modern immunology. The availability of biologically meaningful and detailed mathematical models permits studying the complex interactions between the components of a biological system and predicting the outcome of therapeutic interventions. However, the incomplete theoretical understanding of immune mechanisms leads to uncertainty in model structure and the need for model identification. This process is iterative, and each step requires data-based model calibration. When the model is highly detailed, a considerable part of the model parameters cannot be measured experimentally or found in the literature, so one has to solve the parameter estimation problem. In the maximum likelihood framework, parameter estimation leads to a minimization problem for a least-squares functional when the observational errors are normally distributed. In this work we present different computational approaches to the global optimization problem arising in parameter estimation. We consider two high-dimensional mathematical models of HIV (human immunodeficiency virus) infection dynamics as examples. The ODE (ordinary differential equations) and DDE (delay differential equations) versions of the models were studied. For these models we solved the parameter estimation problem using a number of numerical global optimization techniques, including the optimization method based on the tensor-train (TT) decomposition. The comparative analysis of the obtained results showed that the TT-based optimization technique is in the leading group of the methods ranked according to their performance in parameter estimation for the ODE and DDE versions of both models.
V. V. Zheltkova, Dmitry A. Zheltkov, G. A. Bocharov, Eugene Tyrtyshnikov
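A toy sketch of the inner calibration step such methods must solve: least-squares fitting of ODE parameters to noisy observations. The two-state model below is hypothetical and far simpler than the paper's HIV models.

```python
# Least-squares ODE parameter estimation for a made-up two-state model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, beta, delta):
    T, V = y                                   # target cells, virus
    return [-beta * T * V, beta * T * V - delta * V]

t_obs = np.linspace(0, 10, 25)
y0, true_p = [1.0, 1e-3], (3.0, 1.0)
sol = solve_ivp(rhs, (0, 10), y0, t_eval=t_obs, args=true_p)
rng = np.random.default_rng(1)
data = sol.y[1] + 0.01 * rng.standard_normal(t_obs.size)  # observe V only

def residuals(p):
    s = solve_ivp(rhs, (0, 10), y0, t_eval=t_obs, args=tuple(p))
    return s.y[1] - data

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=(0, 10))
print(fit.x)                                   # close to (3.0, 1.0)
```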
### Statistical Moments of the Vertical Distribution of Air Pollution over Bulgaria
The air quality has a key impact on the quality of life and human health. The atmospheric composition fields are formed as a result of a complex interaction of processes with different temporal and spatial scales, from global through regional to a chain of local scales. The earth surface heterogeneities, including complex terrain, have a very significant impact on the atmospheric dynamics and hence on the formation of the air pollution pattern. The incredible diversity of dynamic processes, the complex chemical transformations of the compounds and the complex emission configuration together lead to the formation of a complex vertical structure of the atmospheric composition. A detailed analysis of this vertical structure with its temporal/spatial variability, jointly with the atmospheric dynamics characteristics, can significantly enrich the knowledge about the processes and mechanisms which form air pollution, including near the earth surface. The present paper presents some results of a study which aims at performing a reliable, comprehensive and detailed analysis of the 3D structure of the atmospheric composition fields and its connection with the processes which lead to their formation. The numerical simulations of the vertical structure of atmospheric composition fields over Bulgaria have been performed using the US EPA Model-3 system as a modelling tool for 3D simulations, and the system's nesting capabilities were applied for downscaling the simulations from 81 km to 9 km grid resolution over Bulgaria. The national emission inventory was used as emission input for Bulgaria, while outside the country the emissions are from the TNO high resolution inventory.
### Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication
With the increased usage of deep neural networks, their structures have naturally evolved, increasing in size and complexity. With currently used networks often containing millions of parameters and hundreds of layers, there have been many attempts to leverage the capabilities of various high-performance computing architectures. Most approaches focus either on using parameter servers or a fixed communication network, or on exploiting particular capabilities of specific computational resources. However, few experiments have been made under relaxed communication consistency requirements and with a dynamic, adaptive way of exchanging information. Gossip communication is a peer-to-peer communication approach that can minimize the overall data traffic between computational agents by providing a weaker guarantee on data consistency: eventual consistency. In this paper, we present a framework for gossip-based communication, suitable for heterogeneous computing resources, and apply it to the problem of parallel deep learning using artificial neural networks. We present different approaches to gossip-based communication in a heterogeneous computing environment consisting of CPUs and MIC-based co-processors, and implement gossiping via both shared and distributed memory. We also provide a simplistic approach to load balancing in a heterogeneous computing environment that proves efficient for parallel deep neural network training. Further, we explore several approaches to communication exchange and resource allocation for parallel deep learning on heterogeneous computing resources, and evaluate their effect on the convergence of the distributed neural network.
Dobromir Georgiev, Todor Gurov
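A minimal sketch of the gossip-averaging primitive such frameworks build on (parameter averaging is assumed here as the exchanged quantity; the paper's framework adds heterogeneity handling and load balancing on top): random pairwise exchanges drive all workers toward the global mean with no parameter server, only peer-to-peer traffic.

```python
# Random pairwise gossip: each exchange averages two workers' parameters.
# The global mean is preserved, and all workers converge to it.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 8, 4
params = rng.random((n_workers, dim))      # each row: one worker's parameters
target = params.mean(axis=0)

for _ in range(200):
    i, j = rng.choice(n_workers, size=2, replace=False)
    avg = 0.5 * (params[i] + params[j])    # one peer-to-peer exchange
    params[i] = params[j] = avg            # eventual consistency

print(np.abs(params - target).max())       # ~0: consensus on the mean
```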
### Process Analysis of Atmospheric Composition Fields in Urban Area (Sofia City)
The air pollution pattern is formed as a result of the interaction of different processes, so knowing the contribution of each process for different meteorological conditions and for a given emission spatial configuration and temporal profiles can be helpful for understanding the atmospheric composition and air pollutant behavior. An analysis of the contribution of the different (chemical and dynamical) processes which form the atmospheric composition in a chosen region is demonstrated in the present paper. To analyze the contribution of different dynamic and chemical processes to the air pollution formation over Sofia, the CMAQ Integrated Process Rate Analysis option was applied. The procedure allows the concentration change for each compound to be presented as a sum of the contributions of each of the processes which determine the air pollution concentration. A statistically robust ensemble of the atmospheric composition over Sofia, taking into account the two-way interactions from local to urban scale and tracking the main pathways and processes which lead to atmospheric composition formation at different scales, should be constructed in order to understand the atmospheric composition climate and air pollutant behavior. On the basis of 3D modeling tools an extensive database was created, and this data was used for different studies of the atmospheric composition, carried out with good resolution using up-to-date modeling tools and detailed and reliable input data. All the simulations were based on the US EPA (Environmental Protection Agency) Model-3 system for a 7-year period (2008 to 2014). The modeling system consists of 3 models: a meteorological pre-processor, the emission pre-processor SMOKE, and the Chemical Transport Model (CTM) CMAQ.
### Modeling of PM10 Air Pollution in Urban Environment Using MARS
In the modern world, attention is increasingly drawn to the pressing problem of atmospheric air pollution, which is a serious threat to human health. Worldwide, China, India, Indonesia and some countries in Europe, including Bulgaria, are among the most polluted. A very large number of scientific studies have been devoted to these issues, including the study, analysis and forecasting of air pollution with particulate matter PM10. In this study the PM10 concentrations in the town of Smolyan, Bulgaria are examined, and high-performance mathematical models for prediction and forecasting depending on weather conditions are developed. For this purpose, the powerful method of multivariate adaptive regression splines (MARS) is implemented. The examined data cover a period of 9 years, from 2010 to 2018, on a daily basis. As independent variables, 7 meteorological factors are used: minimum and maximum daily temperatures, wind speed and direction, atmospheric pressure, etc. Lagged PM10 and meteorological variables with a delay of 1 day are used as additional predictors, and three time variables are included to account for time. Multiple models are created with interactions between predictors up to the 4th order. The best MARS models obtained fit over 80% of the measured data. The models are used to forecast PM10 concentrations 7 days ahead. This approach could be applied for real predictions and the development of computer and mobile applications.
Snezhana G. Gocheva-Ilieva, Atanas V. Ivanov, Desislava S. Voynikova, Maya P. Stoimenova
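For orientation, a conceptual sketch of the MARS building blocks only, not the MARS algorithm itself: real MARS selects knots adaptively in a forward pass and prunes terms in a backward pass, whereas here a fixed, hypothetical hinge basis is fit by ordinary least squares.

```python
# MARS models are built from hinge functions max(0, x - t) and max(0, t - x).
# This toy fits a fixed hinge basis; the knots below are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)                       # e.g. a weather predictor
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + 0.3 * rng.standard_normal(300)

knots = [2.0, 4.0, 6.0]
basis = [np.ones_like(x)]
for t in knots:
    basis += [np.maximum(0, x - t), np.maximum(0, t - x)]
B = np.column_stack(basis)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)
r2 = 1 - ((B @ coef - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")                          # piecewise-linear fit
```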
### Application of Orthogonal Polynomials and Special Matrices to Orthogonal Arrays
Special matrices are explored in many areas of science and technology. The Krawtchouk matrix is such a matrix; it plays an important role in coding theory and in the theory of orthogonal arrays, also called fractional factorial designs in the planning of experiments and statistics. In this paper we give explicitly the Smith normal forms of the Krawtchouk matrix and its extended matrix. We also propose a computationally effective method for determining the Hamming distance distributions of an orthogonal array with given parameters. The obtained results facilitate the solving of many existence and classification problems in the theory of codes and orthogonal arrays.
Nikolai L. Manev
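A minimal sketch of the central object, the standard binary Krawtchouk matrix (the paper's Smith-normal-form results are not reproduced here):

```python
# Binary Krawtchouk matrix of order n,
#   K[i, j] = sum_k (-1)^k C(j, k) C(n - j, i - k),
# with the classical sanity check K @ K = 2^n * I.
import numpy as np
from math import comb

n = 4
K = np.array([[sum((-1) ** k * comb(j, k) * comb(n - j, i - k)
                   for k in range(i + 1))
               for j in range(n + 1)]
              for i in range(n + 1)])
print(K)
print(np.array_equal(K @ K, 2 ** n * np.eye(n + 1, dtype=int)))  # True
```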
### Performance Effects of Running Container-Based Open-MPI Cluster in Public Cloud
The vast majority of HPC users heavily leverage MPI middleware for their applications' needs. For almost two decades, providing an infrastructure that constantly answers the increasing demand, security standards and software interoperability, together with ineffective resource utilization, have been the challenges placing HPC administrators between the overhead of virtualization and the manual tuning of the performance peak, forcing them to find the edge of the consensus on their own. Recently, developments like Linux Containers, Infrastructure as Code and the Public Cloud have opened a new horizon for the industry by redefining engineering roles and providing tools and practices for solving some major caveats of the past. This paper presents three architectures for setting up an Open-MPI cluster in a Linux container-based environment and explains how these architectures can be implemented across private and public clouds, with the main focus on the performance effects in the public cloud.
Teodor Simchev, Emanouil Atanassov
### Impact of Data Assimilation on Short-Term Precipitation Forecasts Using WRF-ARW Model
In spite of the efforts made by the scientific community during the last decades on improving weather forecasts, the prediction of precipitation systems and fogs is still considered a difficult challenge. The main reason for the difficulties in predicting these phenomena is the complexity of their formation, involving orography dependence, spatio-temporal inhomogeneity of land use, and large-scale synoptic conditions. Remote sensing and in-situ data assimilation have been applied in a number of studies in recent years, demonstrating significant improvements of the model results. The objective of this study is to evaluate the performance of the Weather Research and Forecasting (WRF) model and to assess the improvement in short-term precipitation forecasts gained from high-resolution data assimilation of satellite and in-situ measurements. The study case is a weather phenomenon specific to the eastern parts of the Balkan Peninsula: a passing winter Mediterranean cyclone causing excessive amounts of rainfall in Bulgaria. A three-dimensional variational (3D-Var) data assimilation system is used in this study. The model results obtained with and without the data assimilation procedure are compared to demonstrate the impact of this method on the start time of precipitation and on the spatial distribution and amount of rainfall.
Evgeni Vladimirov, Reneta Dimitrova, Ventsislav Danchovski
### An Introduction and Summary of Use of Optimal Control Methods for PDE’s
In optimal control formulations of partial differential equations the aim is to find a control function that steers the solution to a desired form. A Lagrange multiplier, i.e. an adjoint variable, is introduced to handle the PDE constraint. One can reduce the problem to a two-by-two block matrix form with square blocks, for which a very efficient preconditioner, PRESB, can be applied. This method gives sharp and tight eigenvalue bounds, which hold uniformly with respect to regularization, mesh size and problem parameters, and it enables the use of the second-order, inner-product-free Chebyshev iteration method, which in turn allows implementation on parallel computers without any need for global data communications. Furthermore, the method is insensitive to round-off errors. It outperforms other earlier published methods. Implementational and spectral properties of the method, and a short survey of applications, are given.
Owe Axelsson
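For orientation, here is a sketch of how such two-by-two block systems commonly arise (the standard distributed control formulation from the literature, not necessarily the exact setting of this survey). Minimizing $$\tfrac{1}{2}\Vert y-y_d\Vert ^2+\tfrac{\beta }{2}\Vert u\Vert ^2$$ subject to $$-\varDelta y=u$$ and eliminating the control u from the discretized optimality system leaves $$\begin{pmatrix} M & K^T \\ K & -\beta ^{-1}M \end{pmatrix} \begin{pmatrix} y \\ p \end{pmatrix} = \begin{pmatrix} My_d \\ 0 \end{pmatrix},$$ with mass matrix M, stiffness matrix K and adjoint variable p; preconditioners of PRESB type target exactly this block structure.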
### Parallel BURA Based Numerical Solution of Fractional Laplacian with Pure Neumann Boundary Conditions
The study is motivated by the increased usage of the fractional Laplacian in the modeling of nonlocal problems like anomalous diffusion. We present a parallel numerical solution method for the nonlocal elliptic problem: $$-\varDelta ^\alpha u = f$$, $$0<\alpha < 1$$, $$-\partial u(x)/\partial n=g(x)$$ on $$\partial \varOmega$$, $$\varOmega \subset \mathrm{I\!R}^d$$. The Finite Element Method (FEM) is used for discretization, leading to the linear system $$A^\alpha {\mathbf{u}} = {\mathbf{f}}$$, where A is a sparse symmetric and positive semidefinite matrix. The implemented method is based on the Best Uniform Rational Approximation (BURA) of degree k, $$r_{\alpha ,k}$$, of the scalar function $$t^{\alpha }$$, $$0\le t \le 1$$. The related approximation of $$A^{-\alpha }{\mathbf{f}}$$ can be written as a linear combination of the solutions of k local problems. The latter are found using the preconditioned conjugate gradient method. The method is applicable to computational domains with general geometry. Linear finite elements on unstructured tetrahedral meshes with local refinements are used in the presented numerical tests. The behavior of the relative error, the number of Preconditioned Conjugate Gradient (PCG) iterations, and the parallel time is analyzed, varying the parameter $$\alpha \in \{0.25, 0.50, 0.75\}$$, the BURA degree $$k \in \{5, 6,\dots ,12\}$$, and the mesh size.
Gergana Bencheva, Nikola Kosturski, Yavor Vutov
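To make the "linear combination of k local problems" concrete: after a partial fraction decomposition, the BURA-based approximation takes the typical form $$A^{-\alpha }{\mathbf{f}} \approx c_0\,{\mathbf{f}} + \sum _{i=1}^{k} c_i\,(A-d_i I)^{-1}{\mathbf{f}}, \quad d_i<0,$$ so each term amounts to one PCG solve with a shifted sparse matrix. This is a sketch of the standard BURA form; the exact coefficients $$c_i$$ and poles $$d_i$$ are determined by the best uniform rational approximation $$r_{\alpha ,k}$$ of $$t^\alpha$$.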
### Bias Correcting of Selected ETCCDI Climate Indices for Projected Future Climate
Regional climate models (RCMs) have been developed and extensively applied in recent decades for dynamically downscaling coarse-resolution information from different sources, for various purposes, including past climate simulations and future climate projections. Due to systematic and random model errors, however, RCM simulations often show considerable deviations from observations. This has led to the development of a number of correction approaches, which have lately become known under the common name of bias correction. Although some criticism exists, the general view in the expert community is that the bias-corrected climate change signal is more reliable than the uncorrected one and is thus more suitable for impact assessments. In the present study, which is part of a more general work, outputs from simulations of the present-day (1961–1990) climate, as well as of the near-future scenario (2021–2050), performed with the model ALADIN-Climate, are used for the calculation of selected ETCCDI climate indices. The same subset, but based on the observational database E-OBS, is taken from the open archive ClimData and used as reference. The results of the computations, performed over the interior of the Balkan Peninsula, demonstrate the possibilities of the selected bias correction technique and its modifications.
Hristo Chervenkov, Valery Spiridonov
### Project for an Open Source GIS Tool for Visualization of Flood Risk Analysis After Mining Dam Failures
Under the DG ECHO funded project with acronym ALTER, an effort has been initiated to support the Armenian Ministry of Emergency Situations in establishing public-private partnerships to understand and address flood risks that may occur after mining dam failures. The project focuses on three pilot areas where dams and other activities present risks to local communities: the Akhtala and Teghut areas of Lori Marz along the Shamlugh river; the Vorotan Cascade and its associated dams in the Syunik region; and the Voghji river basin of the Syunik region. In our article the data collection, analysis and results of dam break modelling for the Geghi reservoir and the Geghanoush tailing dam located in the Voghji river basin are presented. All collected data from hydro-meteorological sources, together with elevation, geologic, geomorphological and land use data, have been processed so that a Flood Hazard Index (FHI) map of the studied area could be developed. This information is combined in GIS (Geographic Information System) layers. Those layers are uploaded into a specifically designed open source GIS tool in order to assist end users in the field or in the operational room to rapidly assess the risks associated with a flood occurring as a result of a dam break, and to better plan and visualize their activities.
Nina Dobrinkova, Alexander Arakelyan, Aghavni Harutyunyan, Sean Reynolds
### Open Source GIS for Civil Protection Response in Cases of Wildland Fires or Flood Events
The article describes the capabilities of open source GIS (Geographic Information Systems) and other Free and Open Source Software (FOSS) for building a desktop application that can support firefighting and volunteer groups in their reactions to wildland fires or flood events. The desktop application has two main modules by design. The first module is based on open source GIS software. The second is the server software, database and visualization environment for the application. The main goal of the tool is to visualize administrative and vulnerable objects and POIs (Points of Interest) like logistic centres for water supplies, tools and supplies, from where the firefighting and volunteer groups can take what they need for field work. The idea is to visualize detailed information about the logistic centres and what kind of equipment is stored inside. This is needed because modern ICT (Information and Communication Technologies) tools have not been implemented in the field work; the current situation is such that the groups use instructions written on paper in most cases. Our article presents a desktop application that can be used in the field and in operational rooms by firefighting and volunteer groups acting in cases of wildland fires or flood events. In our application different open source software solutions such as GeoServer, QGIS, Web App Builder, Boundless WebSDK, PostgreSQL and OpenLayers are used:

- GeoServer allows the user to display spatial information to the world;
- QGIS is a professional GIS (Geographic Information System) cross-platform application that is Free and Open Source Software (FOSS);
- Web App Builder is a plugin for QGIS that allows easy creation of web applications;
- Boundless WebSDK provides tools for easy-to-build JavaScript-based web mapping applications;
- PostgreSQL is a powerful, open source object-relational database system;
- OpenLayers is an open-source JavaScript library for displaying map data in web browsers.
Nina Dobrinkova, Stefan Stefanov
### PDE-Constrained Optimization: Matrix Structures and Preconditioners
In this paper we briefly account for the structure of the matrices arising in various optimal control problems constrained by PDEs, and for how it can be utilized when constructing preconditioners for the linear systems to be solved in the optimization framework.
Ivo Dravins, Maya Neytcheva
### One Approach to Solving Tasks in the Presence of a Free Surface Using Multiprocessor Computing Systems
The problem of the motion of a pair of vortices under a free surface for different Froude numbers and the problem of free oscillations of fluid in a rectangular container are considered. It is assumed that the liquid is weakly compressible and homogeneous. A comparative analysis is carried out against analytical and numerical solutions obtained with an incompressible approach in the authors' previous works. To solve the system of equations obtained in curvilinear coordinates with appropriate boundary and initial conditions, an explicit second-order CABARET scheme is used. A parallel version of the algorithm using Cartesian cell decomposition is also included. An evaluation of the parallelization on a supercomputing facility with distributed memory was performed. The results open the way to generalizing this approach to free-surface problems in a three-dimensional setting. The authors plan to construct an effective method for the investigation of inhomogeneous fluid flows through further development of this approach. Such explicit techniques offer the possibility of efficient use of multiprocessor systems (clusters) for solving problems which were previously dominated by models of incompressible media.
Valentin A. Gushchin, Vasilii G. Kondakov
### In Silico Study on the Structure of Novel Natural Bioactive Peptides
Antimicrobial peptides (AMPs) are an abundant and diverse group of molecules produced by many tissues and cell types in a variety of invertebrate, plant and animal species in contact with infectious microorganisms. They play a crucial role as mediators of the primary host defense against microbial invasion. The characteristics and the broad-spectrum, largely nonspecific activity of antimicrobial peptides qualify them as possible candidates for therapeutic alternatives against multi-resistant bacterial strains. AMPs occur in nature in the form of multicomponent secretory fluids that exhibit certain biological activity. For the development of biologicals with predesignated properties, separation of the individual components, their purification and activity analysis are needed. In silico experiments are designed to speed up the identification of the active components in these substances and the understanding of their structural specifics and biodynamics. Here we present the first results of a pilot in silico study on the primary structure formation of peptides newly identified in the mucus of mollusc representatives, as a prerequisite for understanding the possible role of complexation for their biological activity.
Nevena Ilieva, Peicho Petkov, Elena Lilkova, Tsveta Lazarova, Aleksandar Dolashki, Lyudmila Velkova, Pavlina Dolashka, Leandar Litov
### Sensitivity of the Simulated Heat Risk in Southeastern Europe to the RegCM Model Configuration—Preliminary Results
The spatial distribution of biometeorological conditions is a topic of many studies in different countries. One of the most important aspects of the adverse effect of weather on human beings is the consequence of too much exposure to heat. The human body can adapt to temperatures, but only to some extent. If air temperatures become too high, human beings at first feel uncomfortable, but the consequences can be a serious threat to health and even life. The main reasons for this threat are related to the lack of perspiration and to cardiovascular problems. Atmospheric numerical models for simulating heat stress are used in many studies. One of the most affected regions in the recent past, and most likely in the future as well, is Southeastern Europe, including Bulgaria. Global models have too low a resolution, but they still suggest very strong heat stress, especially at the end of the 21st century. According to other studies, results from regional meteorological models suggest similar conclusions. The current research is about the heat stress conditions in the Balkan Peninsula, evaluated from ten-year simulations. They are performed with the regional climate model RegCM. The model is run many times with different combinations of physics parameterizations of some processes. The aim is to compare the heat stress simulated by different model configurations for the Balkan Peninsula and so to reveal the dependence of the heat stress evaluation on the model configuration. That would answer the question of the sensitivity of the model to the parameterization schemes from a biometeorological point of view.
### Large-Scale Prediction of the ARS Family Inhibitors of the Oncogenic KRASG12C Mutant
The KRAS protein is a molecular switch that activates cellular processes like cell growth and differentiation. The G12C point mutation of KRAS is found in various cancer cells. It results in the accumulation of the GTP-bound active form, thus accelerating downstream signalling pathways. Recently, the ARS family of compounds was suggested as selective covalent inhibitors of KRASG12C. The most promising compound, ARS-853, has IC$$_{50}$$ = 1.6 $$\upmu$$M, which is too large for medicinal applications. We demonstrate that calculated dissociation constants K$$_\mathrm{d}$$ are proportional to the experimental IC$$_{50}$$ values and can be utilized as a measure of inhibitor potency. Using molecular modeling tools, we suggest a set of novel compounds with predicted IC$$_{50}$$ values more than an order of magnitude lower than that of ARS-853.
Anna M. Kulakova, Anna V. Popinako, Maria G. Khrenova
### Identification of Heat Conductivity in (2+1)D Equation as a Function of Time
The considered problem of identifying the time-dependent heat conductivity coefficient from over-posed boundary data belongs to a class of inverse problems. The proposed solution uses a variational approach for identifying the coefficient. The inverse problem is reformulated as a higher-order elliptic boundary-value problem for the minimization of a quadratic functional of the original equation. The resulting system consists of a well-posed fourth-order boundary-value problem for the temperature and an explicit equation for the unknown heat conductivity coefficient. The obtained boundary-value problem is solved by means of an iterative procedure, which is thoroughly validated.
Tchavdar T. Marinov, Rossitza S. Marinova
### Numerical Calculation of Deformations of Composite Material with Fiber Inclusions
In the numerical simulation of the stress-strain state of a composite material, a problem may arise that is associated with large computational complexity due to the grid resolution of a large number of inclusions. It is especially difficult to resolve elongated bodies with linear dimensions that differ by several orders of magnitude, such as fibers. In this paper, we attempt to model fibers in the form of one-dimensional lines, which can significantly reduce the computational complexity of the problem. A comparison of the results for the three-point bending of a concrete block is presented. For the numerical solution, the finite element method was applied using the FEniCS computing platform.
Petr V. Sivtsev, Djulustan Ya. Nikiforov
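A minimal legacy-FEniCS sketch of the kind of base computation involved (assumed API and material data; plain linear elasticity on a clamped block under a body force, without the paper's one-dimensional fiber model):

```python
# Linear elasticity on a block with legacy FEniCS (dolfin). The material
# parameters and boundary conditions below are assumed, illustrative values.
from fenics import *

mesh = BoxMesh(Point(0, 0, 0), Point(4, 1, 1), 40, 10, 10)
V = VectorFunctionSpace(mesh, "P", 1)

E, nu = 30e9, 0.2                          # concrete-like values (assumed)
mu = E / (2 * (1 + nu))
lam = E * nu / ((1 + nu) * (1 - 2 * nu))

def eps(u):
    return sym(grad(u))

def sigma(u):
    return lam * tr(eps(u)) * Identity(3) + 2 * mu * eps(u)

bc = DirichletBC(V, Constant((0, 0, 0)),
                 "near(x[0], 0.0) || near(x[0], 4.0)")   # clamp both ends
u, v = TrialFunction(V), TestFunction(V)
a = inner(sigma(u), eps(v)) * dx
L = dot(Constant((0, 0, -1e5)), v) * dx                  # downward load

uh = Function(V)
solve(a == L, uh, bc)
print(uh(2.0, 0.5, 0.5))                                 # midspan deflection
```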
### On the Impact of Reordering in a Hierarchical Semi-Separable Compression Solver for Fractional Diffusion Problems
The performance of a hierarchical solver for systems of linear algebraic equations arising from finite element (FEM) discretization of fractional diffusion problems is the subject of this study. We consider the integral definition of the fractional Laplacian in a bounded domain, introduced through the Riesz potential. The problem is non-local and the related FEM system has a dense matrix. We utilize the Structured Matrix Package (STRUMPACK) and its implementation of Hierarchical Semi-Separable compression in order to solve the system of linear equations. Our main aim is to improve the performance and accuracy of the method by proposing and analyzing two schemes for reordering the unknowns. The numerical tests are run on the high performance cluster AVITOHOL at IICT-BAS.
Dimitar Slavchev
### Computer Simulation of a Saline Enhanced Radio-Frequency Hepatic Ablation Process
We consider the simulation of the thermal and electrical processes involved in a radio-frequency ablation procedure. Radio-frequency ablation is a low invasive technique for the treatment of hepatic tumors, utilizing AC current to destroy unwanted tissues by heating. We simulate an ablation procedure where the needle is bipolar, i.e. no ground pad is attached. Saline solution is injected through the needle during the procedure, creating a cloud around the tip with higher electrical conductivity. This approach is safer for some patients. The mathematical model consists of three parts: dynamical, electrical, and thermal. The energy from the applied AC voltage is determined by solving the Laplace equation to find the potential distribution. After that, the electric field intensity and the current density are directly calculated. Finally, the heat transfer equation is solved to determine the temperature distribution. A 3D image of the patient's liver is obtained from a magnetic resonance imaging scan. Then the geometry of the needle is added. The CGAL library is used to obtain an unstructured mesh in the computational domain. We use the finite element method in space to obtain both the current density and the resulting temperature field. An unstructured mesh parallel solver is developed for the considered problem. The parallelization approach is based on partitioning the meshes using ParMETIS. Numerical tests show good performance of the developed parallel solver.
Yavor Vutov, Daniel Nikolov, Ivan Lirkov, Krassimir Georgiev
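In symbols, the three-stage pipeline reads, in a generic form consistent with the description above (not necessarily the paper's exact equations): solve $$\nabla \cdot (\sigma \nabla \varphi )=0$$ for the potential, compute $$\mathbf {E}=-\nabla \varphi$$ and $$\mathbf {J}=\sigma \mathbf {E}$$, and feed the Joule heat source $$q=\sigma |\nabla \varphi |^2$$ into the heat transfer equation $$\rho c\,\partial T/\partial t=\nabla \cdot (k\nabla T)+q$$, possibly augmented with perfusion terms in bioheat variants.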
### Studying the Influence of Climate Changes on European Ozone Levels
The large-scale air pollution model UNI-DEM (the Unified Danish Eulerian Model) was used together with several carefully selected climatic scenarios. It was necessary to run the model over a long time-interval (sixteen consecutive years) and to use fine resolution on a very large space domain. This caused great difficulties because it was necessary to (a) perform many runs with different input parameters, (b) use huge input files containing the needed meteorological and emission data, (c) resolve many problems related to the computational difficulties, (d) develop and apply carefully prepared parallel codes, (e) exploit efficiently the cache memories of the available computers and (f) store in a proper way huge output files for visualization and animation. It will be described how these difficult tasks have been resolved and many results related to some potentially harmful ozone levels will be presented.
Zahari Zlatev, Ivan Dimov, István Faragó, Krassimir Georgiev, Ágnes Havasi
### A Revised Wigner Function Approach for Stationary Quantum Transport
The Wigner equation describing stationary quantum transport has a singularity at the point $$k=0$$. Deterministic solution methods usually deal with the singularity by just avoiding that point in the mesh (e.g., Frensley's method). Results from such methods are known to depend strongly on the discretization and meshing parameters. We propose a revised approach which explicitly includes the point $$k=0$$ in the mesh. For this we give two equations for $$k=0$$. The first condition is an algebraic constraint which ensures that the solution of the Wigner equation has no singularity at $$k=0$$. If this condition is fulfilled, we can then derive a transport equation for $$k=0$$ as a secondary equation. The resulting system with two equations for $$k=0$$ is overdetermined, and we call it the constrained Wigner equation. We give a theoretical analysis of the overdeterminacy by relating the two equations for $$k=0$$ to boundary conditions for the sigma equation, which is the inverse Fourier transform of the Wigner equation. We show results from a prototype implementation of the constrained equation, which gives good agreement with results from the quantum transmitting boundary method. No numerical parameter fitting is needed.
Robert Kosik, Johann Cervenka, Mischa Thesberg, Hans Kosina
### Techniques for Statistical Enhancement in a 2D Multi-subband Ensemble Monte Carlo Nanodevice Simulator
Novel numerical techniques are needed in advanced simulation tools in order to accurately describe the behavior of nanoelectronic devices. In this work, two different numerical techniques for statistical enhancement are included in a 2D Multi-Subband Ensemble Monte Carlo (MS-EMC) simulator. First, using Fermi-Dirac statistics instead of Boltzmann statistics for the boundary conditions at the ohmic contacts provides a more accurate picture of the distribution function. Second, the energy-dependent weight model reduces the stochastic noise that superparticles with very high energy introduce in the device performance. We study the impact of both numerical techniques on two of the potential candidates to extend CMOS technology: Fully-Depleted Silicon-On-Insulator (FDSOI) and FinFET devices. We show that the choice of Fermi-Dirac statistics has the same impact on both the FDSOI and the FinFET, whereas the energy-dependent weight model is more significant in the FDSOI than in the FinFET because the latter has better electrostatic integrity.
Cristina Medina-Bailon, Carlos Sampedro, Jose Luis Padilla, Luca Donetti, Vihar Georgiev, Francisco Gamiz, Asen Asenov
### Efficient Stochastic Algorithms for the Sensitivity Analysis Problem in the Air Pollution Modelling
Sensitivity analysis of the results of large and complicated mathematical models is a rather tough and time-consuming task. However, it is quite an important problem as far as critical applications are concerned, and there are many such applications in the area of air pollution modelling. On the other hand, there are many natural uncertainties in the input data sets and parameters of a large-scale air pollution model. Such a model, the Danish Eulerian Model with its up-to-date high-performance implementations, is under consideration in this work. Its advanced chemical scheme (the Condensed CBM IV) takes into account a large number of chemical species and numerous reactions between them. Four efficient stochastic algorithms have been used and compared by their accuracy in studying the sensitivity of ammonia and ozone concentration results with respect to the input emission levels and some chemical reaction rate parameters. The results of our numerical experiments show that the stochastic algorithms under consideration are quite efficient for the purpose of our sensitivity studies.
Tzvetan Ostromsky, Venelin Todorov, Ivan Dimov, Zahari Zlatev
### Kinetic Monte Carlo Analysis of the Operation and Reliability of Oxide Based RRAMs
By using a stochastic simulation model based on the kinetic Monte Carlo approach, we study the physics, operation and reliability of resistive random-access memory (RRAM) devices based on oxides, including silicon-rich silica (SiO$$_x$$) and hafnium oxide – HfO$$_x$$ – a widely used transition metal oxide. The interest in RRAM technology has been increasing steadily in the last ten years, as it is widely viewed as the next generation of non-volatile memory devices. The simulation procedure describes self-consistently electronic charge and thermal transport effects in the three-dimensional (3D) space, allowing the study of the dynamics of conductive filaments responsible for switching. We focus on the study of the reliability of these devices, by specifically looking into how oxygen deficiency in the system affects the switching efficiency.
### Multi-Subband Ensemble Monte Carlo Simulator for Nanodevices in the End of the Roadmap
As the scaling of electronic devices approaches the end of the roadmap, quantum phenomena play an important role not only in the electrostatics but also in the electron transport. This work presents the capabilities of a novel implementation of a Multi-Subband Ensemble Monte Carlo (MS-EMC) simulator including quantum transport phenomena. In particular, an effective computational scheme for tunneling mechanisms (including S/D tunneling, GLM and BTBT) is shown, taking advantage of the main features of semi-classical transport models: reduced computational requirements and higher flexibility in comparison to purely full quantum codes.
Carlos Sampedro, Cristina Medina-Bailon, Luca Donetti, Jose Luis Padilla, Carlos Navarro, Carlos Marquez, Francisco Gamiz
### A Monte Carlo Evaluation of the Current and Low Frequency Current Noise at Spin-Dependent Hopping
Monte Carlo methods are convenient for modeling electron transport due to single-electron hopping. The algorithm makes it possible to incorporate the restriction that, due to Coulomb repulsion, each trap can only be occupied by a single electron. With electron spin gaining increasing attention, trap-assisted electron transport has to be generalized to include the electron spin, especially in the presence of an external magnetic field and for transport between ferromagnetic contacts. An innovative Monte Carlo method to deal with spin-dependent hopping is presented. When the electron spin is taken into account, the escape transition rates are described by transition matrices which describe the coupled spin and occupation relaxation from the trap. The transport process is represented by a cyclic repetition of consecutive electron hops from the source to a trap and from the trap to the drain. The rates depend neither on the previous hops nor on time. The method allows the evaluation of the electron current as well as the low-frequency current noise at spin-dependent hopping. Our Monte Carlo approach resolves a controversy between theoretical results found in the literature.
Viktor Sverdlov, Siegfried Selberherr
### Efficient Stochastic Approaches for Multidimensional Integrals in Bayesian Statistics
A fundamental problem in Bayesian statistics is the accurate evaluation of multidimensional integrals. A comprehensive experimental study of quasi-Monte Carlo algorithms based on the Sobol sequence combined with Matousek linear scrambling, compared with an adaptive Monte Carlo approach and a lattice rule based on generalized Fibonacci numbers, is presented. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing high-dimensional integrals. Accurate estimation of such integrals is crucial for a more accurate and reliable interpretation of results in Bayesian statistics, which is foundational in applications such as machine learning.
Venelin Todorov, Ivan Dimov
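A minimal sketch of the scrambled-Sobol ingredient, using SciPy's QMC module as a stand-in for the paper's implementation (SciPy's Sobol scrambling is a linear-matrix-scramble/digital-shift variant in the Matousek style); the test integrand is hypothetical, chosen so that its exact integral over the unit cube equals 1.

```python
# Scrambled Sobol points for a d-dimensional integral on [0, 1]^d.
import numpy as np
from scipy.stats import qmc

d = 10
f = lambda x: np.prod(1 + (x - 0.5) / np.arange(1, d + 1) ** 2, axis=1)

sampler = qmc.Sobol(d=d, scramble=True, seed=0)
x = sampler.random(2 ** 14)            # powers of two preserve balance
print(abs(f(x).mean() - 1.0))          # QMC error, typically well below MC
```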
### Parallel Multilevel Monte Carlo Algorithms for Elliptic PDEs with Random Coefficients
In this work, we developed and investigated Monte Carlo algorithms for elliptic PDEs with random coefficients. We considered groundwater flow as a model problem, where a permeability field represents the random coefficients. Computational complexity is the main challenge in uncertainty quantification methods; the computation involves generating a random coefficient field and solving partial differential equations. The permeability field was generated using the circulant embedding method. Multilevel Monte Carlo (MLMC) simulation can be based on different approximations of the partial differential equations. We developed three MLMC algorithms, based on finite volume, finite volume with renormalization, and renormalization approximation. We compared the numerical simulations and parallel performance of the MLMC algorithms for 2D and 3D problems.
Petr Zakharov, Oleg Iliev, Jan Mohring, Nikolay Shegunov
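A toy sketch of the generic MLMC telescoping estimator, $$E[Q_L] = E[Q_0] + \sum _{l=1}^{L} E[Q_l - Q_{l-1}]$$, with a hypothetical stand-in for the level-l PDE solve (not the groundwater-flow code). Note the coupling: both levels inside each correction use the same random sample, which is what makes the correction variance decay.

```python
# Generic MLMC mean estimator with a made-up level-l "solver".
import numpy as np

rng = np.random.default_rng(0)

def Q(level, z):
    h = 2.0 ** -level                  # "discretization" bias ~ h
    return np.sin(z) + h * np.cos(z)

def mlmc(n_per_level):
    est = 0.0
    for l, n in enumerate(n_per_level):
        z = rng.random(n)              # same z couples levels l and l - 1
        y = Q(l, z) if l == 0 else Q(l, z) - Q(l - 1, z)
        est += y.mean()
    return est

# Many cheap coarse samples, few expensive fine ones.
print(mlmc([40000, 4000, 400, 40]))    # ~ (1 - cos 1) up to the 2^-3 bias
```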
### Generalized Nets Model of Data Parallel Processing in Large Scale Wireless Sensor Networks
The Generalized Nets (GN) approach is an advanced way of modeling parallel processes and analyzing complex systems such as Large-scale Wireless Sensor Networks (LWSN). LWSNs such as meteorological and air quality monitoring systems can generate a large amount of data, reaching petabytes per year. Data-parallel processing of sensor data is one possible solution for reducing inter-node communication in order to save energy. At the same time, on-site parallel processing requires additional energy for the computational data processing. Therefore, the development of a realistic model of the process is critical for the optimization analysis of every large-scale sensor network. In this paper, a newly developed GN-based model of sensor-node data-parallel processing in LWSNs with cluster topology is presented. The proposed model covers all aspects of inter-node sensor data integration and the cluster-based parallel processes specific to operations on large amounts of sensor data.
Alexander Alexandrov, Vladimir Monov, Tasho Tashev
### Modeling Block Structured Project Scheduling with Resource Constraints
We propose a formal model of block-structured project scheduling with resource constraints, with the goal of designing optimization algorithms. We combine block-structured modeling of business processes with results from the project scheduling literature. Differently from standard approaches, here we focus on block-structured scheduling processes. Our main achievement is the formulation of an abstract mathematical model of block-structured resource-constrained scheduling processes. We tested the correctness and feasibility of our approach using an initial experimental prototype based on Constraint Logic Programming.
Amelia Bădică, Costin Bădică, Doina Logofătu, Ion Buligiu, Liviu Ciora
### Solving Combinatorial Puzzles with Parallel Evolutionary Algorithms
Rubik's cube is the most popular combinatorial puzzle. It is well known that solutions of combinatorial problems are generally hard to find. If 90$$^\circ$$ clockwise rotations of the cube's sides are taken as operations, they give a minimal grammar for the cube. By building formal-grammar sentences from the six operations ([L]eft, [R]ight, [T]op, [D]own, [F]ront, [B]ack), all permutations of the cube can be reached. In an evolutionary algorithm (like a genetic algorithm, for example), a set of formal-grammar sentences can be represented as the individuals of the population. Single-cut-point crossover can be efficiently applied when the individuals are strings. Replacing a randomly selected operation with another randomly selected operation can be used as an efficient mutation operator. The most important part of such global optimization is the fitness function. For better evaluation of individuals' fitness, a combination of the Euclidean and Hausdorff distances is proposed in this research. The experiments are done as a parallel program written in C++ with Open MPI.
Todor Balabanov, Stoyan Ivanov, Rumen Ketipov
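A minimal sketch of the evolutionary machinery described above. The fitness below is a placeholder: a real implementation would apply the rotation sentence to a cube state and score it with the Euclidean/Hausdorff combination from the paper, and would run in parallel over MPI.

```python
# String individuals over the six rotation operations, single-cut-point
# crossover, and random-operation mutation.
import random

OPS = "LRTDFB"                              # the six 90-degree rotations
random.seed(0)

def crossover(a, b):                        # single cut point
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(s):                              # swap one op for a random op
    i = random.randrange(len(s))
    return s[:i] + random.choice(OPS) + s[i + 1:]

def fitness(s):                             # placeholder objective
    return -abs(len(s) - 20) - sum(a == b for a, b in zip(s, s[1:]))

pop = ["".join(random.choices(OPS, k=random.randrange(5, 30)))
       for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(40)]
print(max(pop, key=fitness))
```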
### Multi-objective ACO Algorithm for WSN Layout: InterCriteria Analysis
One of the key objectives during wireless sensor network deployment is full coverage of the monitoring region with a minimal number of sensors and minimized energy consumption of the network. In this paper we apply multi-objective Ant Colony Optimization (ACO) to solve this computationally hard telecommunication problem. The number of ants is one of the key algorithm parameters in ACO, and it is important to find the optimal number of ants needed to achieve good solutions with minimal computational resources. InterCriteria Analysis is applied in order to study the influence of the number of ants on the algorithm performance.
Stefka Fidanova, Olympia Roeva
### Reachable Sets of Nonlinear Control Systems: Estimation Approaches
Dynamical control systems of a special structure, with a combined nonlinearity of quadratic and bilinear kinds present in the state velocities, are studied. Uncertainty in the initial states and in the system parameters is also assumed; it is of set-membership type, with only the bounding sets for the unknown items given. Ellipsoidal estimates of the reachable sets are derived using the special structure of the studied control system. The techniques of generalized solutions of Hamilton-Jacobi-Bellman (HJB) equations and HJB inequalities, together with previously established results of ellipsoidal calculus, are applied to find the set-valued estimates of reachable sets as the level sets of a related cost functional. Computational algorithms and related numerical examples are also given.
Tatiana F. Filippova
### Jumping Average Filter Parameter Optimization for Pulsar Signal Detection
The paper studies the parameters of a jumping-window averaging algorithm aimed at improving the signal-to-noise ratio while retaining the signal characteristics. The studies were conducted with an FPGA device for processing and detecting the real pulsar signal B0329+54.
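A minimal sketch, assuming "jumping average" means averaging over non-overlapping windows (in contrast to a sliding window); for white noise, averaging M samples per output point raises the SNR by roughly 10*log10(M) dB, provided the signal is roughly constant within a window.

```python
# Non-overlapping ("jumping") window averaging of a noisy periodic signal.
import numpy as np

rng = np.random.default_rng(0)
fs, M = 10000, 25
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 3 * t)            # stand-in periodic "pulse"
noisy = signal + 2.0 * rng.standard_normal(t.size)

usable = t.size - t.size % M
jumped = noisy[:usable].reshape(-1, M).mean(axis=1)
clean = signal[:usable].reshape(-1, M).mean(axis=1)

snr = lambda s, v: 10 * np.log10(np.mean(s ** 2) / np.mean((v - s) ** 2))
print(snr(signal, noisy), snr(clean, jumped))  # gain ~ 10*log10(25) ~ 14 dB
```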
### Precision in High Dimensional Optimisation of Global Tasks with Unknown Solutions
High-dimensional optimisation is a challenge for most of the available search methods. Solving global and constrained tasks seems to be even harder, and the exploration of tasks with unknown solutions is seen very rarely in the literature and requires more research effort. This article analyses the optimisation of high-dimensional global tasks, including constrained ones, with unknown solutions. The precision of experimental results, the possibility of becoming trapped in local sub-optima, and adaptation to unknown search spaces are reviewed and analysed.
Kalin Penev
### An Intuitionistic Fuzzy Approach to the Travelling Salesman Problem
The travelling salesman problem (TSP) is a classical problem in combinatorial optimization. Its objective is to find the cheapest route of a salesman starting from a given city, visiting all other cities only once and finally returning to the city where he started. There are different approaches for solving travelling salesman problems with crisp data, but in real life it may not be possible to obtain the delivery costs as precise quantities. To overcome this, Zadeh introduced fuzzy set concepts to deal with imprecision. There exist algorithms for the solution of this problem based on fuzzy or triangular intuitionistic fuzzy numbers (a particular case of intuitionistic fuzzy sets (IFSs)). But often the degrees of membership and non-membership for a certain element cannot be defined by exact numbers. Atanassov and Gargov in 1989 first addressed this with the concept of interval-valued intuitionistic fuzzy sets (IVIFS), which are characterized by sub-intervals of the unit interval. In this paper, a new type of TSP is formulated, in which the travelling cost from one city to another is an interval-valued intuitionistic fuzzy number (IVIFN), depending on the availability of the conveyance, the condition of the roads, etc. We propose for the first time a Hungarian algorithm for finding an optimal solution of this TSP, using the apparatus of index matrices (IMs), introduced in 1984 by Atanassov, and of IVIFSs. The example shown in this paper demonstrates the effectiveness of the algorithm. The presented approach to solving this new type of TSP can be applied to problems with imprecise parameters and can be extended to obtain optimal solutions for other types of multidimensional TSPs.
Velichka Traneva, Stoyan Tranev
### Alternatives for Neighborhood Function in Kohonen Maps
In the field of artificial intelligence, artificial neural networks are one of the most researched topics. The multilayer perceptron is the most used type of artificial neural network, but other types such as Kohonen maps, generalized nets [1] or combinations with the Kalman filter [2, 3] are also very interesting. Proposed by Teuvo Kohonen in the 1980s, self-organizing maps have applications in meteorology, oceanography, project prioritization and selection, seismic facies analysis for oil and gas exploration, failure mode and effects analysis, creation of artwork and many other areas. Self-organizing maps are very useful for visualization through dimensionality reduction of the data. Self-organizing maps use unsupervised competitive learning, and the basic idea is for the net to classify the input data into a predefined number of clusters. When the net has fewer nodes, it achieves results similar to K-means clustering. One of the components in self-organizing maps is the neighborhood function, which gives a scaling factor for the distance between one neuron and the other neurons at each step. The simplest form of a neighborhood function gives 1 for the closest nodes and 0 for all others, but the most used neighborhood function is a Gaussian function. In this research, fading cosine and exponentially regulated cosine functions are proposed as alternatives for the neighborhood function.
Iliyan Zankinski, Kolyu Kolev, Todor Balabanov
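A minimal sketch of one SOM training step with the common Gaussian neighborhood $$h(d) = \exp (-d^2/(2\sigma ^2))$$; the paper's fading-cosine and exponentially regulated cosine alternatives would simply replace h below.

```python
# One self-organizing-map update: find the best matching unit (BMU), then
# pull all nodes toward the input, scaled by the Gaussian neighborhood.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, dim = 10, 10, 3
W = rng.random((rows, cols, dim))                        # map weights
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)     # node coordinates

def train_step(x, lr=0.5, sigma=2.0):
    bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
    d2 = ((grid - np.array(bmu)) ** 2).sum(-1)           # distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                   # neighborhood scale
    W[...] += lr * h[..., None] * (x - W)

for x in rng.random((500, dim)):
    train_step(x)
```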
### A Modified Gomory-Hu Algorithm with DWDM-Oriented Technology
Optimization of the topology of computer networks based on the classical Gomory-Hu algorithm does not take the specific transfer technology into account. For WDM technology requirements this leads to a redundancy of channel capacities. To reduce the redundancy of allocated network resources, we propose a modification of the Gomory-Hu algorithm which takes the specifics of DWDM technology into account not at the final stage but already at intermediate stages of the process. The original algorithm proposed by Gomory and Hu involves the decomposition of the graph of the input network into ring subnets of different dimensions. Our modified algorithm takes the technical parameters of the DWDM technology into account for each ring during the decomposition. We illustrate our method by an example. The technique can be extended to large networks, which may lead to a significant economic effect.
Winfried Auzinger, Kvitoslava Obelovska, Roksolyana Stolyarchuk
### Adaptive Exponential Integrators for MCTDHF
We compare exponential-type integrators for the numerical time-propagation of the equations of motion arising in the multi-configuration time-dependent Hartree-Fock method for the approximation of the high-dimensional multi-particle Schrödinger equation. We find that among the most widely used integrators like Runge-Kutta, exponential splitting, exponential Runge-Kutta, exponential multistep and Lawson methods, exponential Lawson multistep methods with one predictor/corrector step provide optimal stability and accuracy at the least computational cost, taking into account that the evaluation of the nonlocal potential terms is by far the computationally most expensive part of such a calculation. Moreover, the predictor step provides an estimator for the time-stepping error at no additional cost, which enables adaptive time-stepping to reliably control the accuracy of a computation.
Winfried Auzinger, Alexander Grosz, Harald Hofstätter, Othmar Koch
### Convergence Analysis of a Finite Volume Gradient Scheme for a Linear Parabolic Equation Using Characteristic Methods
The first aim of this work is to establish a finite volume scheme using the characteristics method for non-stationary advection-diffusion equations. The second aim is to analyze the convergence order of this scheme. The finite volume method considered here was developed recently in [3] to approximate heterogeneous and anisotropic diffusion problems using a general class of nonconforming meshes. Schemes based on the finite volume method of [3] can be formulated by replacing the gradient of the exact solution by a stable and consistent discrete gradient. This work is a continuation of the previous ones [1, 2], in which we directly derived a finite volume scheme for the heat equation along with a convergence analysis.
### Implementing Mesh-Projection Schemes Using the Technology of Adaptive Mesh Refinement
Adaptive mesh refinement (AMR) is a basic method for developing efficient technologies for the numerical solution of various problems of continuum mechanics. Numerical modeling of heat and mass transfer, hydrodynamics, structural/fracture mechanics, etc. deals with nonlinear, strongly coupled processes which differ significantly in spatial and temporal scales. Modeling of multiscale correlated phenomena stimulates the application of computational technologies using irregular grids, among which octree meshes are the best known and most widely used. Using the created data structures for tree-structured meshes, we developed a dynamic load balancing algorithm aimed at applications on cluster-type parallel computing systems. The developed tools support the functionality necessary to implement various numerical models of continuum mechanics. As an example of possible applications, we discuss the constructed family of mesh projective approximations to second-order partial differential equations with variable tensor-type coefficients. The difference operator of the scheme provides energy conservation and possesses the "self-adjoint" property which is inherent to the original differential operator, e.g. in the case of a heat transfer model. We consider numerical results obtained by solving some model initial boundary value problems for parabolic equations using the developed AMR technique.
Dmitry Boykov, Sergey Grigoriev, Olga Olkhovskaya, Alexey Boldarev
### Valuation of European Options with Liquidity Shocks Switching by Fitted Finite Volume Method
In the present paper, we construct a superconvergent fitted finite volume method (FFVM) for pricing European options with switching liquidity shocks. We investigate some basic properties of the numerical solution and establish superconvergence in the maximal discrete norm. An efficient algorithm, handling the degeneracy and exponential non-linearity in the problem, is proposed. Results from various numerical experiments with different European options are provided.
Miglena N. Koleva, Lubin G. Vulkov
### Space-Time Finite Element Methods for Parabolic Initial-Boundary Value Problems with Non-smooth Solutions
We propose consistent, locally stabilized, conforming finite element schemes on completely unstructured simplicial space-time meshes for the numerical solution of parabolic initial-boundary value problems under the assumption of maximal parabolic regularity. We present new a priori discretization error estimates for low-regularity solutions, and some numerical results including results for an adaptive version of the scheme and strong scaling results.
Ulrich Langer, Andreas Schafelner
### MATLAB Implementation of Element-Based Solvers
Rahman and Valdman (2013) introduced a vectorized way to assemble finite element stiffness and mass matrices in MATLAB. Local element matrices are computed all at once by array operations and stored in multi-dimensional arrays (matrices). We build some iterative solvers on the available multi-dimensional structures, completely avoiding the use of a sparse matrix.
Leszek Marcinkowski, Jan Valdman
### The Approach to the Construction of Difference Schemes with a Consistent Approximation of the Stress-Strain State and the Energy Balance of the Medium in Cylindrical Geometry
In this paper, discrete analogs of the self-adjoint and sign-definite operations $$\mathop {\mathrm {div}}(t_\mathbf {u})$$ and $$\mathop {\mathrm {div}}(\mathop {\mathrm {tr}}(t_\mathbf {u})\delta )$$, applied to the symmetrized displacement tensor $$t_\mathbf {u}$$ and invariant under rigid rotations, are obtained on unstructured metric grids of the support operator method, on whose topological and geometric structure only minimal reasonable restrictions are imposed. These analogs serve for the modeling of force fields of elastic processes, as well as for the approximation of integrals of the form $$\int _{\varOmega }\mathrm{tr}\left( \mathrm{t}_{\mathbf {u}}^{2} \right) dV$$ and $$\int _{\varOmega }\mathrm{tr}^{2} \left( \mathrm{t}_{\mathbf {u}} \right) dV$$, sufficient to simulate the elastic energy balances of the medium, taking into account the curvature of space in the cylindrical geometry of the system.
Yury Poveshchenko, Vladimir Gasilov, Viktoriia Podryga, Yulia Sharova
### Two-Layer Completely Conservative Difference Scheme of Gas Dynamics in Eulerian Variables with Adaptive Regularization of Solution
For the equations of gas dynamics in Euler variables, a family of two-layer in time completely conservative difference schemes profiled on the space with time weights is constructed. The effective conservation of internal energy in this type of divergent difference schemes is ensured by the absence of constantly operating sources of difference origin in the internal energy equation, producing “computational” entropy (including singularities of the solution). Considerable attention in this work is paid to the methods of constructing regularizing mass, momentum and internal energy flows that do not violate the properties of complete conservatism of difference schemes of this class, to the analysis of their amplitude and admissibility of adaptive use on variable structure grids in space and on implicit layers in time. The developed type of difference schemes can be used to calculate multi-temperature processes (electron and ion temperatures), where for the available number of variables, a single balance equation for the total energy of the medium is not enough.
Orkhan Rahimly, Viktoriia Podryga, Yury Poveshchenko, Parvin Rahimly, Yulia Sharova
### Some Problems of Modeling the Impact on Gas Hydrates on the Basis of Splitting by Physical Processes
In this work we consider an equilibrium joint model of a two-component three-phase (water, gas, hydrate) filtration fluid dynamics and two-phase processes in the thawed zone with no gas hydrates, for which splitting by physical processes is performed. Algorithms for the joint calculation of hydrate-containing and thawed states of a fluid-dynamic medium are described. Examples of calculations of technogenic depressive effects near the wells on the dynamics of spatial distributions of gas hydrates’ thawing and the formation of thawed two-phase zones are given.
Parvin Rahimly, Viktoriia Podryga, Yury Poveshchenko, Orkhan Rahimly, Yulia Sharova
### Backmatter | 2021-01-25 23:32:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.524211049079895, "perplexity": 935.0284405185661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704792131.69/warc/CC-MAIN-20210125220722-20210126010722-00422.warc.gz"}
https://kocircuit.github.io/compiler/gate/index.html | # Ko
## A language for programming recursive circuits
3.2. Gates: Transformations in Go
Gates are a mechanism for implementing transformations in Go and making them accessible within (call-able from) Ko.
Implementing a transformation in Go (a gate) requires three simple steps. First, describe the argument structure of the transformation by defining a Go struct for it. Second, implement the transformation as a Play method attached to the Go argument structure (from step 1). Third, register the implementation with the Ko compiler (in Go) and build an “extended” compiler which links in the new implementation.
In the following complete example, we implement a gate which returns the maximum of two 64-bit integers and is visible from Ko as the two-argument function GoMaxInt64(x, y).
The entire implementation fits in a single .go file which we can place, say, in github.com/kocircuit/kocircuit/utils/integer/max.go.
package integer

import (
	"github.com/kocircuit/kocircuit/lang/go/eval"
	"github.com/kocircuit/kocircuit/lang/go/runtime"
)

func init() { // Register the new gate with Ko.
	eval.RegisterEvalGate(new(GoMaxInt64))
}

type GoMaxInt64 struct { // gate argument structure
	X int64 `ko:"name=x"` // The Ko field name is specified in a tag.
	Y int64 `ko:"name=y"` // There are no restrictions on field types.
}

// Play implements the gate transformation.
// Play can return any Go type as necessary.
func (g *GoMaxInt64) Play(ctx *runtime.Context) int64 {
	if g.X < g.Y {
		return g.Y
	} else {
		return g.X
	}
}
After Go package integer is linked into the compiler (described below), the transformation GoMaxInt64 will be accessible from Ko with the appropriate package import clause. For instance, the following Ko code utilizes GoMaxInt64 to return the maximum of three 64-bit integers:
import "github.com/kocircuit/kocircuit/utils/integer" as util
Max3(x, y, z) {
	return: util.GoMaxInt64(
		x: util.GoMaxInt64(x: x, y: y)
		y: util.GoMaxInt64(x: y, y: z)
	)
}
It is also possible to implement Go transformations, where one of the arguments is monadic (i.e. the transformation can be invoked with a shorthand notation by passing a single argument value and not specifying the argument name).
The following variation of GoMaxInt64 accepts a sequence of integers (the monadic argument int64s ) and an additional optional argument otherwise , determining the result if the sequence is empty:
type GoMaxInt64 struct {
	Int64s    []int64 `ko:"name=int64s,monadic"`
	Otherwise *int64  `ko:"name=otherwise"`
}

func (g *GoMaxInt64) Play(ctx *runtime.Context) int64 {
	var max int64
	if g.Otherwise != nil {
		max = *g.Otherwise
	}
	for _, n := range g.Int64s {
		if n > max {
			max = n
		}
	}
	return max
}
This new implementation of GoMaxInt64 can be invoked (in Ko) in two different ways. The standard way entails passing the arguments and specifying their names:
util.GoMaxInt64(int64s: (1, 2, 3), otherwise: 0) // or
util.GoMaxInt64(int64s: (1, 2, 3))
The latter case, where the optional otherwise argument is not passed, can also be written as util.GoMaxInt64(1, 2, 3), thereby taking advantage of the fact that int64s is defined as the monadic argument (default argument when no argument name is specified).
## Type translations
The Go types used for the fields of the gate structure, as well as for the return value of the Play method, are generally unconstrained.
The Ko compiler transparently converts Go types to their unambiguously corresponding Ko types. Primitive types (boolean, string, signed and unsigned integers and floating-point numbers) are mapped respectively. Go structures and slices are mapped to Ko structures and sequences. Go pointers correspond to optional types in Ko. Go interfaces are mapped to Ko opaque values. (This facilitates transporting Go runtime objects, which might be mutable, through the Ko immutable type system without complications.) Go channels, arrays and functional types are, for the moment, disallowed as they are not in line with (or necessary for) common protocol type systems (like Protocol Buffers or GraphQL). If their use is necessary, in cases of low-level systems programming with Ko, they can always be hidden behind a Go interface to be treated as opaque inside Ko.
## Extending the compiler
Before the Ko compiler can see and reflect on new gates, they need to be linked into the compiler. We have provided an easy way to do so.
Create a new Go binary package (which will build to the new compiler) and import the Ko compiler infrastructure package, as well as any number of Go packages containing gate implementations. For the example from this article, you would simply need:
package main

import (
	"github.com/kocircuit/kocircuit/lang/ko/cmd" // provides the Ko compiler command-line
	_ "github.com/kocircuit/kocircuit/util/integer" // links the gate into the new compiler
)

func main() {
	cmd.Execute()
}
There are also simple methods (excluded here for brevity) to embed the Ko compiler in your software (rather than using it on the command-line) and utilize it, for example, as a “scripting engine”. | 2019-01-17 18:12:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3541610836982727, "perplexity": 6544.904390741182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00050.warc.gz"}
https://qsantos.fr/2021/03/07/solving-keplers-equation-5-million-times-a-second/ |
# Solving Kepler’s Equation 5 Million Times a Second
I know it is a controversial opinion, but you might need to know where things are when running a simulation. As a bonus, it helps you in knowing what to draw on the screen.
Of course, you can just use a position vector (x, y, z) where x, y and z are the coordinates of the object. But things have the inconvenient tendency not to stay in the same place for all eternity. So we’ll want to know where that object is at the given time.
Now, the more generic approach is to run a physics simulation. We compute the acceleration of the object, update its speed depending on the acceleration and update its position depending on its speed. By the way, use Runge-Kutta 4 for this, not Euler’s method.
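To make that concrete, here is a minimal Python sketch of one RK4 step (not the post's code; the two-body example, the constant MU and the state layout are all illustrative):

import numpy as np

def rk4_step(f, t, y, h):
    # One classical Runge-Kutta 4 step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body(t, state):
    pos, vel = state[:3], state[3:]
    acc = -MU * pos / np.linalg.norm(pos) ** 3
    return np.concatenate([vel, acc])

state = np.array([7.0e6, 0.0, 0.0, 0.0, 7.5e3, 0.0])  # a low Earth orbit
state = rk4_step(two_body, 0.0, state, 10.0)          # advance 10 seconds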
That would work for things in orbit too. But we can do much better in this case. This one guy, Johannes Kepler, found simpler formulas to describe the movement of two orbiting objects (e.g. the Sun and the Earth, or the Earth and the Moon). Basically, the trajectories will be ellipses, and we can deduce the position of an object in its orbit at any given time.
We can describe the position of the object in its orbit as the angle between the direction of the periapsis (where the planet is closest to its sun) and the direction to the object, taken at the relevant focus. This is the true anomaly. Everything is an illusion, so we also like to pretend the ellipse is actually a circle and define the eccentric anomaly. But these angles are not linear with time. So we also define the mean anomaly, which works like an angle (goes from 0 to 2π) but is linear with time. Once we know the direction of the object, we will only need to know its distance, r.
And here are the formulas to get from time to the true anomaly, and how we then obtain the distance from focus (e is the eccentricity of the ellipse):
• mean anomaly (M): M = M₀ + n × (t – t₀)
• eccentric anomaly (E): M = E – e × sin E
• true anomaly (f): $\displaystyle f=2\arctan\left(\sqrt{\frac{1+e}{1-e}} \tan\frac E2\right)$
• distance from focus (r): $\displaystyle r=\frac p{1 + e \cos f}$
You won’t be calculating this in your head, but the expressions are pretty easy to translate to working code. Well, except for one. Notice how the eccentric anomaly is not expressed as a function of the mean anomaly? Well, that’s the hard part. And it’s called Kepler’s equation.
We do not know a simple way to express E as a formula depending only on e and M. Instead, we must proceed by successive approximations until we decide we are close enough. For this, we will need two things:
• a starting value E₀ for E
• a way to improve the current approximation of E
Most introductory books and online articles I have found will tell you to take E₀ = M. This has been obsolete since 1978. It works well enough, but fails in corner cases (near-parabolic orbits around periapsis). And modern starting values are close enough to limit the number of improvements you then need to make. I reviewed 21 papers, selected the 3 most relevant ones, and implemented 21 methods for picking starting values (3 papers and 5 methods for the hyperbolic case).
To improve the starting value, old references will make you do E := M + e sin E, but this is pretty naive. Most modern references will use Newton’s method, which helps you solve an equation faster as long as you know its first derivative. But you can go further by using the second derivative using Halley’s method. You can go arbitrarily far using Householder’s method.
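Putting the pieces together, here is a hedged Python sketch of the elliptic case (a simple Danby-style starter plus plain Newton, not one of the benchmarked combinations below; a, the semi-major axis, is an extra assumption, since the formulas above use the semi-latus rectum p instead):

import math

def kepler_E(M, e, tol=1e-14, max_iter=50):
    # Solve M = E - e*sin(E) for E by Newton's method (elliptic case, 0 <= e < 1).
    E = M + math.copysign(0.85 * e, math.sin(M))  # simple Danby-style starter
    for _ in range(max_iter):
        step = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E

def position_in_orbit(M, e, a):
    # True anomaly f and distance from focus r, given the mean anomaly M.
    E = kepler_E(M, e)
    f = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                         math.sqrt(1.0 - e) * math.cos(E / 2.0))
    r = a * (1.0 - e * math.cos(E))  # same as p / (1 + e*cos(f)) with p = a*(1 - e*e)
    return f, r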
Great, now I have 21 × 4 = 84 approaches to try (5 × 3 = 15 for the hyperbolic case). Which one is the most accurate? Which one is the fastest? And, more importantly, can I find a fast and accurate approach?
You can spend years studying the theoretical properties of these functions. Or, we just try them for real. It is the most efficient way to know how they actually behave on real hardware with real implementations of numbers.
I wrote a benchmark tool to test both the accuracy and the speed of each approach. It turns out that only a handful of approaches are accurate enough for my taste:
# Elliptical case
## Naive method
## Newton's method
## Halley's method
580 c 3.580e-16 gooding_10
764 c 3.926e-16 gooding_11
577 c 3.604e-16 mikkola_1
742 c 3.653e-16 mikkola_2
## Householder's third order method
606 c 3.645e-16 gooding_10
809 c 3.623e-16 gooding_11
641 c 3.935e-16 mikkola_1
764 c 3.904e-16 mikkola_2
# Hyperbolic case
## Newton's method
## Halley's method
1454 c 3.731e-15 mikkola_1
1578 c 3.725e-15 mikkola_2
## Householder's third order method
1704 c 3.618e-15 mikkola_1
1757 c 3.618e-15 mikkola_2
The first column is the number of CPU cycles per solve of Kepler’s equation. The second column is the worst relative error over the test set. The third column is a short name for the method for picking a starting value for E.
As we can see above, no approach seems to work well enough with the naive improvement, or with Newton’s method. And using third derivatives does not do much. gooding_11 and mikkola_2 are relatively slow. This leaves gooding_10 and mikkola_1. I picked mikkola_1 because the paper also treated the hyperbolic case. It comes from A cubic approximation for Kepler’s equation by Seppo Mikkola.
And after all this, I have a very robust way to know where stuff is at any time! Of course, you should not forget to take care of floating point precision issues when implementing all of this…
## Appendix: Full benchmark
# Elliptical case
## Naive method
264 c 2.500e-01 smith_1
276 c 5.303e-01 smith_2
360 c 2.000e-01 smith_3
444 c 9.701e-02 smith_4
385 c 1.667e-01 smith_5
480 c 6.495e+01 smith_6
246 c 2.500e-01 gooding_1
299 c 2.000e-01 gooding_2
356 c 1.667e-01 gooding_3
246 c 5.303e-01 gooding_4
460 c 9.701e-02 gooding_5
253 c 2.417e-01 gooding_6
325 c 7.497e-02 gooding_7
334 c 7.497e-02 gooding_7b
378 c 1.277e-01 gooding_8
423 c 7.499e-02 gooding_9
455 c 2.376e-01 gooding_10
608 c 8.020e-02 gooding_11
286 c 1.439e-01 gooding_12
448 c 4.342e-01 mikkola_1
629 c 4.426e-01 mikkola_2
## Newton's method
522 c 8.517e-03 smith_1
539 c 1.379e-02 smith_2
578 c 8.111e-03 smith_3
697 c 8.111e-03 smith_4
706 c 8.111e-03 smith_5
900 c 1.028e+02 smith_6
481 c 8.517e-03 gooding_1
583 c 8.111e-03 gooding_2
561 c 8.111e-03 gooding_3
476 c 1.379e-02 gooding_4
686 c 8.111e-03 gooding_5
537 c 3.528e-02 gooding_6
536 c 1.378e-02 gooding_7
585 c 1.378e-02 gooding_7b
702 c 2.524e-04 gooding_8
680 c 1.379e-02 gooding_9
731 c 1.117e-10 gooding_10
940 c 2.072e-08 gooding_11
528 c 5.673e-03 gooding_12
697 c 1.825e-06 mikkola_1
815 c 1.825e-06 mikkola_2
## Halley's method
395 c 1.250e-01 smith_1
419 c 2.553e-03 smith_2
488 c 6.250e-02 smith_3
596 c 1.710e-02 smith_4
557 c 4.167e-02 smith_5
989 c 5.130e+01 smith_6
429 c 1.250e-01 gooding_1
503 c 6.250e-02 gooding_2
524 c 4.167e-02 gooding_3
426 c 2.553e-03 gooding_4
587 c 1.710e-02 gooding_5
410 c 6.127e-03 gooding_6
467 c 2.548e-03 gooding_7
440 c 2.548e-03 gooding_7b
508 c 9.153e-05 gooding_8
552 c 2.550e-03 gooding_9
598 c 3.580e-16 gooding_10
821 c 3.926e-16 gooding_11
430 c 1.028e-03 gooding_12
594 c 3.604e-16 mikkola_1
742 c 3.653e-16 mikkola_2
## Householder's third order method
438 c 1.562e-02 smith_1
439 c 6.779e-04 smith_2
501 c 7.812e-03 smith_3
667 c 2.138e-03 smith_4
533 c 5.208e-03 smith_5
1053 c 6.535e+01 smith_6
453 c 1.562e-02 gooding_1
501 c 7.812e-03 gooding_2
594 c 5.208e-03 gooding_3
429 c 6.779e-04 gooding_4
595 c 2.138e-03 gooding_5
454 c 1.638e-03 gooding_6
526 c 6.736e-04 gooding_7
585 c 6.736e-04 gooding_7b
583 c 9.493e-09 gooding_8
652 c 6.741e-04 gooding_9
681 c 3.645e-16 gooding_10
846 c 3.623e-16 gooding_11
486 c 2.724e-04 gooding_12
651 c 3.935e-16 mikkola_1
752 c 3.904e-16 mikkola_2
# Hyperbolic case
## Newton's method
780 c 7.245e+25 basic
1064 c 1.421e-03 mikkola_1
1201 c 1.970e-04 mikkola_2
1014 c 8.559e-01 gooding
892 c 1.515e+19 pfelger
## Halley's method
1134 c 1.729e+17 basic
1396 c 3.731e-15 mikkola_1
1526 c 3.725e-15 mikkola_2
1265 c 9.160e+10 gooding
1237 c 1.791e+13 pfelger
## Householder's third order method
1260 c 1.323e+07 basic
1631 c 3.618e-15 mikkola_1
1595 c 3.618e-15 mikkola_2
1467 c 2.276e+17 gooding
1365 c 9.997e+01 pfelger | 2023-01-31 12:55:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.854781985282898, "perplexity": 3996.3542754130995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00188.warc.gz"} |
https://www.cut-the-knot.org/Optimization/MaxMinOfFunction.shtml | # Find the Maximum and Minimum of a Function
### Solution 1
The domain of $f$ is $x\in [0,13].$ We have
\begin{align} f(x)&=\sqrt{x+27}+\sqrt{13-x}+\sqrt{x}\\ &=\sqrt{x+27}+\sqrt{13+2\sqrt{x(13-x)}}\\ &\ge\sqrt{27}+\sqrt{13}=3\sqrt{3}+\sqrt{13}, \end{align}
with equality for $x=0.$ Therefore, the minimum of the function is $3\sqrt{3}+\sqrt{13}.$
For the maximum, the Cauchy-Schwarz inequality gives
\displaystyle\begin{align} f^2(x)&=(\sqrt{x+27}+\sqrt{13-x}+\sqrt{x})^2\\ &\le \left(\frac{1}{2}+1+\frac{1}{3}\right)[2x+(x+27)+3(13-x)]\\ &=121.\end{align}
Equality holds when $4x=9(13-x)=x+27,$ with the only solution $x=9.$ Therefore, the maximum of $f(x)$ is $f(9)=11.$
### Solution 2
Define $r=\sqrt{x+27},$ $s=\sqrt{13-x},$ $t=\sqrt{x}.$ Then $r^2+s^2=40$ and $s^2+t^2=13.$ Use Lagrange multipliers:
\displaystyle\begin{align}F(r,s,t,\lambda,\mu)&=r+s+t-\lambda(r^2+s^2)-\mu(s^2+t^2)\\ F_r&=1-2\lambda r=0\\ F_s&=1-2\lambda s-2\mu s=0\\ F_t&=1-2\mu t=0\end{align}
It follows that $\displaystyle \frac{1}{4\lambda^2}+\frac{1}{4(\lambda+\mu)^2}=40$ and $\displaystyle \frac{1}{4\mu^2}+\frac{1}{4(\lambda+\mu)^2}=13.$ Thus $\displaystyle \lambda=\frac{1}{12}$ and $\displaystyle \mu=\frac{1}{6}.$ Further, $r=6,$ $s=2,$ $t=3$ and $F_{max}=6+3+2=11,$ attained at $x=t^2=9.$
### Acknowledgment
This is a problem from a 2009 Chinese Mathematical Competition I found in the book Mathematical Olympiad in China (2009-2010): Problems and Solutions by Xiong Bin and Lee Peng Yee (World Scientific, 2013, 27-28).
Solution 2 is by Sam Walters. | 2021-09-19 13:11:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7542015314102173, "perplexity": 4370.46587857309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00524.warc.gz"}
https://www.gamedev.net/forums/topic/305729-start-an-external-programc/ | # Start an external program(C++)
## Recommended Posts
I would need some help on how to start an external program with my c++ program. I'm in windows VS 2005 c++ express, and a little code example that you can spare would help much. I just want to be able to do something like: string progdir = "c:\program files\program.exe" and then start that program/process with whatever the function is called, Execute(progdir, null) or whatever it can be :). And my c++ program shall not pause or anything strange but just continue as normal.
Take a look into ShellExecute:
HINSTANCE ShellExecute(
    HWND hwnd,
    LPCTSTR lpOperation,
    LPCTSTR lpFile,
    LPCTSTR lpParameters,
    LPCTSTR lpDirectory,
    INT nShowCmd
);

Opens or prints a specified file. Returns a value greater than 32 if successful, or an error value that is less than or equal to 32 otherwise. The return value is cast as an HINSTANCE for backward compatibility with 16-bit Windows applications. The following list gives the error values:

- 0: The operating system is out of memory or resources.
- ERROR_FILE_NOT_FOUND: The specified file was not found.
- ERROR_PATH_NOT_FOUND: The specified path was not found.
- ERROR_BAD_FORMAT: The .exe file is invalid (non-Win32® .exe or error in .exe image).
- SE_ERR_ACCESSDENIED: The operating system denied access to the specified file.
- SE_ERR_ASSOCINCOMPLETE: The file name association is incomplete or invalid.
- SE_ERR_DDEBUSY: The DDE transaction could not be completed because other DDE transactions were being processed.
- SE_ERR_DDEFAIL: The DDE transaction failed.
- SE_ERR_DDETIMEOUT: The DDE transaction could not be completed because the request timed out.
- SE_ERR_DLLNOTFOUND: The specified dynamic-link library was not found.
- SE_ERR_FNF: The specified file was not found.
- SE_ERR_NOASSOC: There is no application associated with the given file name extension.
- SE_ERR_OOM: There was not enough memory to complete the operation.
- SE_ERR_PNF: The specified path was not found.
- SE_ERR_SHARE: A sharing violation occurred.

Parameters:

- hwnd: Window handle to a parent window. This window receives any message boxes that an application produces. For example, an application may report an error by producing a message box.
- lpOperation: Address of a null-terminated string that specifies the operation to perform. The valid operation strings are "open" (opens the file specified by the lpFile parameter; the file can be an executable file, a document file, or a folder), "print" (prints the file specified by lpFile; the file should be a document file, and if it is an executable file the function opens it as if "open" had been specified), and "explore" (explores the folder specified by lpFile). This parameter can be NULL; in that case, the function opens the file specified by lpFile.
- lpFile: Address of a null-terminated string that specifies the file to open or print, or the folder to open or explore. The function can open an executable file or a document file, and can print a document file.
- lpParameters: If the lpFile parameter specifies an executable file, an address of a null-terminated string that specifies the parameters to be passed to the application. If lpFile specifies a document file, lpParameters should be NULL.
- lpDirectory: Address of a null-terminated string that specifies the default directory.
- nShowCmd: If lpFile specifies an executable file, nShowCmd specifies how the application is to be shown when it is opened (if lpFile specifies a document file, nShowCmd should be zero). This parameter can be one of the following values:
  - SW_HIDE: Hides the window and activates another window.
  - SW_MAXIMIZE: Maximizes the specified window.
  - SW_MINIMIZE: Minimizes the specified window and activates the next top-level window in the z-order.
  - SW_RESTORE: Activates and displays the window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when restoring a minimized window.
  - SW_SHOW: Activates the window and displays it in its current size and position.
  - SW_SHOWDEFAULT: Sets the show state based on the SW_ flag specified in the STARTUPINFO structure passed to the CreateProcess function by the program that started the application. An application should call ShowWindow with this flag to set the initial show state of its main window.
  - SW_SHOWMAXIMIZED: Activates the window and displays it as a maximized window.
  - SW_SHOWMINIMIZED: Activates the window and displays it as a minimized window.
  - SW_SHOWMINNOACTIVE: Displays the window as a minimized window. The active window remains active.
  - SW_SHOWNA: Displays the window in its current state. The active window remains active.
  - SW_SHOWNOACTIVATE: Displays a window in its most recent size and position. The active window remains active.
  - SW_SHOWNORMAL: Activates and displays a window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when displaying the window for the first time.

You can use this function to open or explore a shell folder. To open a folder, use either of the following calls:

ShellExecute(handle, NULL, path_to_folder, NULL, NULL, SW_SHOWNORMAL);
ShellExecute(handle, "open", path_to_folder, NULL, NULL, SW_SHOWNORMAL);

To explore a folder, use the following call:

ShellExecute(handle, "explore", path_to_folder, NULL, NULL, SW_SHOWNORMAL);

If lpOperation is NULL, the function opens the file specified by lpFile. If lpOperation is "open" or "explore", the function will attempt to open or explore the folder.

To obtain information about the application that is launched as a result of calling ShellExecute, use ShellExecuteEx.
I know that using
system("C:\\blah.exe")
will not work for you since your app will freeze until the new process is done.
- Drew
#include <windows.h>
#include <string>
#include <vector>

PROCESS_INFORMATION pi;
STARTUPINFOA si;
std::string progdir = "c:\\program files\\program.exe"; // note the escaped backslashes
ZeroMemory(&si, sizeof(si));
si.cb = sizeof(si);
ZeroMemory(&pi, sizeof(pi));
// CreateProcess may modify the command line, so pass a writable buffer
// rather than TEXT(progdir.c_str()).
std::vector<char> cmd(progdir.begin(), progdir.end());
cmd.push_back('\0');
CreateProcessA(NULL, cmd.data(), NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
| 2018-10-17 09:52:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21489840745925903, "perplexity": 4260.245099633088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511122.49/warc/CC-MAIN-20181017090419-20181017111919-00229.warc.gz"}
https://www.springerprofessional.de/complete-mappings-and-carlitz-rank/11715018 |
01.11.2016 | Issue 1/2017
# Complete mappings and Carlitz rank
Journal:
Designs, Codes and Cryptography > Issue 1/2017
Authors:
Leyla Işık, Alev Topuzoğlu, Arne Winterhof
Important notes
Communicated by C. Mitchell.
## Abstract
The well-known Chowla and Zassenhaus conjecture, proven by Cohen in 1990, states that for any $$d\ge 2$$ and any prime $$p>(d^2-3d+4)^2$$ there is no complete mapping polynomial in $$\mathbb {F}_p[x]$$ of degree d. For arbitrary finite fields $$\mathbb {F}_q$$, we give a similar result in terms of the Carlitz rank of a permutation polynomial rather than its degree. We prove that if $$n<\lfloor q/2\rfloor$$, then there is no complete mapping in $$\mathbb {F}_q[x]$$ of Carlitz rank n of small linearity. We also determine how far permutation polynomials f of Carlitz rank $$n<\lfloor q/2\rfloor$$ are from being complete, by studying value sets of $$f+x.$$ We provide examples of complete mappings if $$n=\lfloor q/2\rfloor$$, which shows that the above bound cannot be improved in general.
| 2020-08-15 20:24:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7723932266235352, "perplexity": 2618.4784253171233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439741154.98/warc/CC-MAIN-20200815184756-20200815214756-00527.warc.gz"}
https://www.physicsforums.com/threads/nested-quantifiers-help-please.855365/ | 1. Feb 2, 2016
Kingyou123
1. The problem statement, all variables and given/known data
Problem #5,
2. Relevant equations
Professor's handout, to show that this is false: Show that for some x∈X there is no way to choose y∈Y such that P(x,y) is true. That is, show ¬(∀x∃yP(x,y)), which is equivalent to ∃x∀y(¬P(x,y))
3. The attempt at a solution
So far I have applied this definition, but my professor hasn't given me an example to follow to solve this... Would I have to solve the second part?
Last edited by a moderator: Feb 3, 2016
2. Feb 2, 2016
andrewkirk
that's not correct.
The statement we want to show false is $\forall x\exists y(y^2<x+1)$, which is equivalent to $\neg\exists x\neg\Big(\exists y(y^2<x+1)\Big)$.
The negation of that is $\exists x\neg\Big(\exists y(y^2<x+1)\Big)$.
Can you find such an $x$?
3. Feb 2, 2016
Kingyou123
So if we make (y^2<x+1) true then the ¬ will make it false, correct? Like if I plug in 1 for x and y.
4. Feb 2, 2016
Kingyou123
Would this be correct?
5. Feb 2, 2016
andrewkirk
No that is not correct. Your step from the first to second line is invalid. In symbolic logic, you should always write the formal justification for each step. If you apply that discipline you will in most cases realise without assistance when you make an invalid step.
Go back to my previous post, look at the last logical proposition, and think about what value of $x$ would satisfy $\neg\Big(\exists y(y^2<x+1)\Big)$ where the domain is the real numbers. It's actually very easy.
6. Feb 3, 2016
Kingyou123
Okay, I already turned it in. So basically I just had to prove when (y^2<x+1) is false, correct? So if I set x to 2 and y to 2, I would get 4<3 therefore making the statement false. The thing that confuses me is the not symbol, so if I made the statement false it would be not false, so true?
7. Feb 3, 2016
andrewkirk
No. What you have to do is find a value of $x$ such that, for all $y$, $y^2$ is not less than $x+1$, ie that $y^2$ is more than or equal to $x+1$. You cannot choose a single $y$. The result has to hold for all $y$.
Can you think of a number that all squares of real numbers are more than or equal to (but, by the way, not all squares of complex numbers)?
8. Feb 3, 2016
haruspex
It looks right to me.
In your last step in post 2 you had ....~∃y(P(x,y)). That converts to ...∀y(~P(x,y)) to get the form in the OP.
Which form is more readily proven is another matter.
9. Feb 3, 2016
andrewkirk
The first step is invalid. It selects specific values for $x$ and $y$, which is only valid if those variables are universally quantified (using the Axiom Schema of Specification). They are existentially quantified.
Last edited: Feb 3, 2016
10. Feb 3, 2016
haruspex
Ok, I thought you were objecting to the last part.
I suspect the erroneous statement was a typo for ∃x(~∃y( etc. | 2017-08-23 02:52:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7067588567733765, "perplexity": 517.8325512747164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.82/warc/CC-MAIN-20170823020201-20170823040201-00183.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-12-exponential-functions-and-logarithmic-functions-12-4-properties-of-logarithmic-functions-12-4-exercise-set-page-809/10 | ## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
Published by Pearson
# Chapter 12 - Exponential Functions and Logarithmic Functions - 12.4 Properties of Logarithmic Functions - 12.4 Exercise Set: 10
#### Answer
$\log_{5}{25}+\log_{5}{125}$
#### Work Step by Step
Using $\log_b(xy)=\log_b x +\log_b y,$ the given logarithmic expression, $\log_{5}{(25\cdot125)},$ is equivalent to \begin{array}{l}\require{cancel} \log_{5}{25}+\log_{5}{125} .\end{array} This can be further reduced to $2+3$. Therefore, the expression is equal to $5$.
| 2018-06-20 15:47:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8533730506896973, "perplexity": 2469.154657727161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00062.warc.gz"}
https://gmatclub.com/forum/three-points-are-randomly-chosen-on-the-circumference-of-a-given-circl-289021.html |
# Three points are randomly chosen on the circumference of a given circl
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 935
Three points are randomly chosen on the circumference of a given circle
18 Feb 2019, 13:58
GMATH practice exercise (Quant Class 19)
Three points are randomly chosen on the circumference of a given circle. What is the probability that the center of the circle lies inside the triangle whose vertices are at the three points?
(A) 1/3
(B) 1/4
(C) 1/5
(D) 2/5
(E) 2/7
Hint: the problem was created based on another given in the link hidden below
https://gmatclub.com/forum/three-points-are-chosen-independently-an-at-random-on-the-circumferenc-205189.html
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
Re: Three points are randomly chosen on the circumference of a given circle
19 Feb 2019, 12:29
We will refer to the link given in the hint: https://gmatclub.com/forum/three-points ... 05189.html
FOCUS: P(center inside the triangle)
We may consider (without loss of generality) that the first point (blue) was chosen at the position shown in the first figure and that the second point (red) was chosen (without loss of generality) in the upper semicircle (red arc) also presented in the first figure.
In the second figure, we show one possibility for the first two points, to illustrate the fact that we will have a favorable scenario if and only if the third point (green) is chosen on the green arc, obtained from two lines, each one defined by the center and one of the first two points.
Considering exactly the same reasoning presented in the exercise mentioned in the hint, let´s consider the two extremal cases: when the first two points ("almost") coincide and when the first two points are ("almost") diametrically opposites. The corresponding favorable probabilities are 0 and 1/2, respectively, hence (following the same rationale presented there) our FOCUS is their average: (0+1/2)/2 = 1/4.
We follow the notations and rationale taught in the GMATH method.
Regards,
Fabio.
POST-MORTEM: it is possible to prove that the probabilities of having the first and second points coincidental or diametrically opposites are both ZERO (not approximately zero), therefore there is no trouble omitting the words "almost" presented in parentheses. A subtle consequence: the answer obtained is not approximately right, it is exactly right. (Details are out-of-GMAT´s-scope.)
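For a quick numerical sanity check of the 1/4 (illustrative Python, not part of the GMATH solution): the center lies inside the triangle exactly when no arc between consecutive points exceeds a semicircle.

import math, random

def estimate(trials=1_000_000):
    hits = 0
    for _ in range(trials):
        a, b, c = sorted(random.uniform(0.0, 2.0 * math.pi) for _ in range(3))
        arcs = (b - a, c - b, 2.0 * math.pi - (c - a))
        if max(arcs) < math.pi:  # center inside the triangle
            hits += 1
    return hits / trials

print(estimate())  # prints approximately 0.25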
| 2019-10-22 16:23:32 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9044212698936462, "perplexity": 1601.4428616941127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00020.warc.gz"}
http://www.maa.org/publications/periodicals/loci/thinking-outside-the-box-or-maybe-just-about-the-box-10?device=mobile | # Thinking Outside the Box -- or Maybe Just About the Box
Author(s):
Thomas Hern (Bowling Green State Univ.) and David Meel (Bowling Green State Univ.)
#### The second Box Problem applet
At first glance, this applet, ClosedBox2, contains many of the same components as the first Box Problem applet; however, the cut length determines the positioning of the cut so that in each case the box volume is relatively maximized.
Figure 9: The second Box Problem applet
Warning: The second Box Problem applet page, entitled ClosedBox2, is best viewed in 1024 x 768 resolution or greater and may take up to a minute to load.
In this applet there are a variety of elements that can be seen. First, the point P is no longer adjustable but is rather determined by the length of the cut defined by the segment BQ. In addition, the grey box in the lower left-hand of the applet contains a dynamic graphical depiction of the functional relationship between cut length and volume. That is, it contains a graphical depiction of
$V(l) = \left( {B - 2l} \right)\left( {{1 \over 2}A - 2l} \right)\left( {2l} \right)$
where l corresponds to $$m( \overline {BQ} )$$ and currently B = 8.5 and A = 14.0.
One element that students need to grapple with when working with this particular applet is the graphical depiction of the function. In particular, graphing $$V(l)$$ using a graphing calculator or computer algebra system yields a figure similar to the following:
Figure 10: Graph of the Box Problem Function
The graphical depiction of the function presented in the applet is a truncated version of the one in figure 10. Students will need to come to grips with the fact that the applet is only concerned with the volume of boxes that are physically constructible, whereas the graph of the function shown in figure 10 does not necessarily concern itself with the constructibility of the box. Instead, it provides a graph of the functional relationship between an independent variable l and a dependent variable V. In essence, the graph in figure 10 is less concerned with cut length and volume and more concerned with expressing the relationship for all possible values of l, irrespective of whether these values are possible cut lengths or whether those cut lengths yield appropriate volumes.
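To see this interplay numerically, here is a short illustrative Python scan of V(l) over the physically meaningful domain, using the applet's values B = 8.5 and A = 14.0 (the bound min(B/2, A/4) combines the obvious constraint with the 'hidden' one discussed next):

B, A = 8.5, 14.0

def V(l):
    return (B - 2 * l) * (A / 2 - 2 * l) * (2 * l)

upper = min(B / 2, A / 4)  # here 3.5, coming from the A/4 constraint
cuts = [upper * k / 100_000 for k in range(1, 100_000)]
best = max(cuts, key=V)
print(best, V(best))  # the relative maximiser among constructible boxes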
Too often, we see students focused on the algebraic elements of a problem without considering the physical (or mathematical) constraints on that problem. We seek in this applet to guide students to grapple with the interplay of these two seemingly disparate forces. In turn, we hope to lead them to reconcile for themselves how functional relationships that model real-world phenomena require a careful examination of the domain for which that relationship actually does model the phenomena. For instance, students might at first think that l's only restriction is that it must be less than $${1 \over 2}m( {\overline {BB'} } )$$ since a cut cannot exceed half the width of the cardboard and maintain its connectedness. However, there is another constraint: the cut length, l, cannot exceed $${1 \over 4}m( {\overline {AA'} } )$$ . This ''hidden'' constraint comes directly from the relationship of cut length and position of the cut and depends on the relationship between length and width of the rectangular piece of cardboard. | 2015-04-19 09:34:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6369549632072449, "perplexity": 409.76686483788376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246638571.67/warc/CC-MAIN-20150417045718-00058-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.learncram.com/maths/maths-formulas-for-class-10/ | # Maths Formulas For Class 10
For some Maths can be fun and for some, it can be a nightmare. Maths Formulas are difficult to memorise and we have curated a list of Maths Formulas for Class 10 just for you. You can use this as a go-to sheet whenever you want to prepare Class 10 Maths Formulas. Students can get Formulas on Algebra, Calculus, and Geometry that can be useful in your preparation and help you do your homework.
Here in this article, we have listed basic Maths formulas so that you can learn the fundamentals of Maths. Our unique way of solving Maths Problems will make you learn how the equation came into existence instead of memorizing it. Solve all the important problems and questions in Maths with the Best Maths Formulas for Class 10.
Feel free to directly use the best Maths formulas during your homework or exam preparation. You need to know the list of Class 10 formulas as they will not just be useful in your academic books but also in your day to day lives.
Remember the Maths formulas in a smart way by making use of our list. You can practise questions and answers based on these Class 10 Maths formulas. Students can download the basic Maths formulas for Class 10 as a free PDF. Candidates can use this handy learning aid to gain in-depth knowledge of the subject as per the latest CBSE syllabus.
CBSE Class 10 Maths formulas arranged according to the chapters are prepared by subject experts, and you can rely on them during your preparation. Click on the topic you wish to prepare from the list of formulas provided.
### Linear Equations
• One variable: ax + b = 0, where a ≠ 0 and a, b are real numbers
• Two variables: ax + by + c = 0, where a ≠ 0, b ≠ 0 and a, b, c are real numbers
• Three variables: ax + by + cz + d = 0, where a ≠ 0, b ≠ 0, c ≠ 0 and a, b, c, d are real numbers
### Pair of Linear Equations in two variables:
a₁x + b₁y + c₁ = 0
a₂x + b₂y + c₂ = 0
Where
• a₁, b₁, c₁, a₂, b₂, and c₂ are all real numbers, and
• a₁² + b₁² ≠ 0 and a₂² + b₂² ≠ 0
It should be noted that linear equations in two variables can also be represented in graphical form.
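A quick worked example with illustrative values: the pair x + y − 5 = 0 and x − y − 1 = 0 has the unique solution x = 3, y = 2, the point where the two corresponding lines intersect.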
### Algebra or Algebraic Equations
The standard form of Quadratic Equations
ax² + bx + c = 0 where a ≠ 0
And x = [-b ± √(b² − 4ac)]/2a
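A quick worked example with illustrative values: for x² − 5x + 6 = 0 (a = 1, b = −5, c = 6), x = [5 ± √(25 − 24)]/2 = (5 ± 1)/2, so x = 3 or x = 2.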
### Algebraic formulas:
• (a + b)² = a² + b² + 2ab
• (a − b)² = a² + b² − 2ab
• (a + b)(a − b) = a² − b²
• (x + a)(x + b) = x² + (a + b)x + ab
• (x + a)(x − b) = x² + (a − b)x − ab
• (x − a)(x + b) = x² + (b − a)x − ab
• (x − a)(x − b) = x² − (a + b)x + ab
• (a + b)³ = a³ + b³ + 3ab(a + b)
• (a − b)³ = a³ − b³ − 3ab(a − b)
• (x + y + z)² = x² + y² + z² + 2xy + 2yz + 2xz
• (x + y − z)² = x² + y² + z² + 2xy − 2yz − 2xz
• (x − y + z)² = x² + y² + z² − 2xy − 2yz + 2xz
• (x − y − z)² = x² + y² + z² − 2xy + 2yz − 2xz
• x³ + y³ + z³ − 3xyz = (x + y + z)(x² + y² + z² − xy − yz − xz)
• x² + y² = ½[(x + y)² + (x − y)²]
• (x + a)(x + b)(x + c) = x³ + (a + b + c)x² + (ab + bc + ca)x + abc
• x³ + y³ = (x + y)(x² − xy + y²)
• x³ − y³ = (x − y)(x² + xy + y²)
• x² + y² + z² − xy − yz − zx = ½[(x − y)² + (y − z)² + (z − x)²]
### Basic formulas for powers
• p^m × p^n = p^(m+n)
• p^m / p^n = p^(m−n)
• (p^m)^n = p^(mn)
• p^(−m) = 1/p^m
• p^1 = p
• p^0 = 1
### Arithmetic Progression(AP) Formulas
If a₁, a₂, a₃, a₄, a₅, a₆, … are the terms of an AP and d is the common difference between consecutive terms, then we can write the sequence as: a, a+d, a+2d, a+3d, a+4d, a+5d, …, nth term, …, where a is the first term. Now, the nth term of an arithmetic progression is given as;
nth term = a + (n-1) d
Sum of the first n terms of an Arithmetic Progression;
Sn = n/2 [2a + (n-1) d]
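A quick worked example with illustrative values: for the AP 3, 7, 11, … (here a = 3 and d = 4), the 10th term is 3 + (10 − 1) × 4 = 39, and S10 = 10/2 × [2 × 3 + (10 − 1) × 4] = 5 × 42 = 210.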
### Trigonometry Formulas For Class 10
Trigonometry maths formulas for Class 10 cover the three major functions Sine, Cosine and Tangent for a right-angled triangle. Also, in trigonometry, the sec, cosec and cot functions can be derived with the help of the sin, cos and tan formulas.
Let a right-angled triangle ABC be right-angled at the point B, with one of its acute angles equal to θ.
Trigonometry Table (standard values):
θ: 0°, 30°, 45°, 60°, 90°
sin θ: 0, 1/2, 1/√2, √3/2, 1
cos θ: 1, √3/2, 1/√2, 1/2, 0
tan θ: 0, 1/√3, 1, √3, not defined
Other Trigonometric formulas:
• sin(90° – θ) = cos θ
• cos(90° – θ) = sin θ
• tan(90° – θ) = cot θ
• cot(90° – θ) = tan θ
• sec(90° – θ) = cosecθ
• cosec(90° – θ) = secθ
• sin²θ + cos²θ = 1
• sec²θ = 1 + tan²θ for 0° ≤ θ < 90°
• cosec²θ = 1 + cot²θ for 0° < θ ≤ 90°
### Circles Formulas For Class 10
• Circumference of the circle = 2πr
• Area of the circle = πr²
• Area of a sector of angle θ = (θ/360) × πr²
• Length of an arc of a sector of angle θ = (θ/360) × 2πr
(r = radius of the circle)
### Surface Area and Volumes Formulas For Class 10
The common formulas from the surface area and volumes chapter in 10th class include the following:
• Sphere Formulas
Diameter of sphere: 2r
Circumference of sphere: 2πr
Surface area of sphere: 4πr²
Volume of sphere: (4/3)πr³
• Cylinder Formulas
Curved surface area of cylinder: 2πrh
Area of the two circular bases: 2πr²
Total surface area of cylinder: 2πrh + 2πr² = 2πr(h + r)
Volume of cylinder: πr²h
• Cone Formulas
Slant height of cone: l = √(r² + h²)
Curved surface area of cone: πrl
Total surface area of cone: πr(l + r)
Volume of cone: (1/3)πr²h
• Cuboid Formulas
Perimeter of cuboid: 4(l + b + h)
Length of the longest diagonal of a cuboid: √(l² + b² + h²)
Total surface area of cuboid: 2(l×b + b×h + l×h)
Volume of cuboid: l × b × h
Here, l = length, b = breadth and h = height. In the case of a cube, put l = b = h = a, since all sides of a cube are of equal length, to find the surface area and volume.
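A quick worked example with illustrative values: for a cylinder with r = 7 and h = 10 (taking π = 22/7), the volume is (22/7) × 7² × 10 = 1540 cubic units.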
### Statistics Formulas for Class 10
In class 10, the chapter on statistics mostly deals with finding the mean, median and mode of grouped data.
(I) The mean of the grouped data can be found by 3 methods (the standard textbook formulas, with fᵢ the frequency of the class mark xᵢ):
• Direct method: Mean = Σfᵢxᵢ / Σfᵢ
• Assumed mean method: Mean = a + (Σfᵢdᵢ / Σfᵢ), where a is the assumed mean and dᵢ = xᵢ − a
• Step deviation method: Mean = a + (Σfᵢuᵢ / Σfᵢ) × h, where uᵢ = (xᵢ − a)/h and h is the class size
### FAQs on Class 10 Maths Formulas
1. Where can I get Maths Formulas for Class 10?
You can find the list of all Maths Formulas pertaining to Class 10 from our page. In fact, all the formulas are arranged topic wise as per chapters and you can use them to score better grades in the exam.
2. How do I Learn Class 10 Maths Formulas?
Don’t try to mug up the formulas; instead, try to find the logic behind them so that they are easy for you. However, there are some formulas that are hard to derive, and you can memorise them. Practise as much as you can to understand the Class 10 Maths formulas.
3. Is there a Website that provides all Maths formulas for Class 10?
Students can make use of our website to access all the Class 10 Maths Formulas as per the topics to make your learning process effective.
Final Words
We believe that the comprehensive list of basic Maths formulas for Class 10 will make your learning effective. You can simply click on the Topics to view the Class 10 Maths formulas and aid your preparation. If you feel any formula is missing that can be added to our list do drop us a comment and we will add it to the list. | 2022-07-07 15:54:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.823172390460968, "perplexity": 2328.9054598247776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00609.warc.gz"} |
https://www.degruyter.com/view/j/anona.2019.8.issue-1/anona-2016-0164/anona-2016-0164.xml | Advances in Nonlinear Analysis
Volume 8, Issue 1
# Solutions of vectorial Hamilton–Jacobi equations are rank-one absolute minimisers in ${L}^{\mathrm{\infty }}$
Nikos Katzourakis
• Corresponding author
• Department of Mathematics and Statistics, University of Reading, Whiteknights, PO Box 220, Reading RG6 6AX, Berkshire, United Kingdom
Published Online: 2017-06-04 | DOI: https://doi.org/10.1515/anona-2016-0164
## Abstract
Given the supremal functional ${E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right)=\underset{{\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}u\right)$, defined on ${W}_{\mathrm{loc}}^{1,\mathrm{\infty }}\left(\mathrm{\Omega },{ℝ}^{N}\right)$, with ${\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega }\subseteq {ℝ}^{n}$, we identify a class of vectorial rank-one absolute minimisers by proving a statement slightly stronger than the next claim: vectorial solutions of the Hamilton–Jacobi equation $H\left(\cdot ,\mathrm{D}u\right)=c$ are rank-one absolute minimisers if they are ${C}^{1}$. Our minimality notion is a generalisation of the classical ${L}^{\mathrm{\infty }}$ variational principle of Aronsson to the vector case, and emerged in earlier work of the author. The assumptions are minimal, requiring only continuity and rank-one convexity of the level sets.
MSC 2010: 35D99; 35D40; 35J47; 35J92; 35J70; 35J99
## 1 Introduction
In this paper we are concerned with the construction of a special class of appropriately defined vectorial minimisers in calculus of variations in ${L}^{\mathrm{\infty }}$, that is, for supremal functionals of the form
$\left\{\begin{array}{cc}& {E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right):=\underset{x\in {\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(x,\mathrm{D}u\left(x\right)\right),\hfill \\ & u\in {W}_{\mathrm{loc}}^{1,\mathrm{\infty }}\left(\mathrm{\Omega },{ℝ}^{N}\right),{\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega }.\hfill \end{array}$(1.1)
In the above, $n,N\in ℕ$, $\mathrm{\Omega }\subseteq {ℝ}^{n}$ is an open set and $H:\mathrm{\Omega }×{ℝ}^{N×n}\to \left[0,\mathrm{\infty }\right)$ is a continuous function. The calculus of variations in ${L}^{\mathrm{\infty }}$ was pioneered by Aronsson in the 1960s, who studied the scalar case $N=1$ quite systematically ([2, 3, 4, 5, 6, 7]). A major difficulty associated with the study of (1.1) is that the standard minimality notion used for the respective more classical integral functional
$E\left(u,\mathrm{\Omega }\right)={\int }_{\mathrm{\Omega }}H\left(x,\mathrm{D}u\left(x\right)\right)𝑑x$
is not appropriate in the ${L}^{\mathrm{\infty }}$ case due to the lack of “locality” of (1.1). The remedy is to require minimality on all subdomains ${\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega }$, a notion now known as absolute minimality (hence the emergence of the domain as second argument of the functional). The field has seen an explosion of interest especially after the 1990s, when the development of viscosity solutions ([13, 11]) allowed the rigorous study of the non-divergence equation arising as the analogue of the Euler–Lagrange equation for (1.1) (for a pedagogical introduction to the scalar case and numerous references, we refer to [22]). In the special case of $H\left(x,P\right)$ being the Euclidean norm $|P|$ on ${ℝ}^{n}$, the respective PDE is the $\mathrm{\infty }$-Laplace equation
${\mathrm{\Delta }}_{\mathrm{\infty }}u:=\mathrm{D}u\otimes \mathrm{D}u:{\mathrm{D}}^{2}u=\sum _{i,j=1}^{n}{\mathrm{D}}_{i}u{\mathrm{D}}_{j}u{\mathrm{D}}_{ij}^{2}u=0.$(1.2)
Despite the importance for applications and the deep analytical interest of the area, the vectorial case of $N\ge 2$ remained largely unexplored until the early 2010s. In particular, not even the correct form of the respective PDE systems associated to the ${L}^{\mathrm{\infty }}$ variational problem was known. A notable exception is the early vectorial contributions [9, 8], wherein (among other deep results) ${L}^{\mathrm{\infty }}$ versions of lower semi-continuity and quasiconvexity were introduced and studied, and the existence of absolute minimisers was established in some generality with H depending on u itself but for $\mathrm{min}\left\{n,N\right\}=1$.
The author in a series of recent papers (see [18, 20, 21, 19, 26, 25, 30, 29, 28, 24]) has laid the foundations of the vectorial case and, in particular, has derived and studied the analogues of (1.2) associated to general ${L}^{\mathrm{\infty }}$ functionals (see also the joint contributions with Croce, Pisante, Pryer and Abugirda [1, 14, 31, 27]). In the model case of H being the Euclidean norm on ${ℝ}^{N×n}$ and independent of x, i.e.,
$H\left(x,P\right)=|P|={\left(\sum _{\alpha =1}^{N}\sum _{i=1}^{n}{\left({P}_{\alpha i}\right)}^{2}\right)}^{1/2},$
the respective equation is called the $\mathrm{\infty }$-Laplace PDE system and when applied to smooth maps $u:\mathrm{\Omega }\subseteq {ℝ}^{n}\to {ℝ}^{N}$ reads
${\mathrm{\Delta }}_{\mathrm{\infty }}u:=\left(\mathrm{D}u\otimes \mathrm{D}u+{|\mathrm{D}u|}^{2}{\left[\mathrm{D}u\right]}^{\perp }\otimes I\right):{\mathrm{D}}^{2}u=0.$(1.3)
Here ${\left[Du\left(x\right)\right]}^{\perp }$ is the orthogonal projection on the orthogonal complement of the range of gradient matrix $\mathrm{D}u\left(x\right)\in {ℝ}^{N×n}$:
${\left[\mathrm{D}u\right]}^{\perp }:={\mathrm{Proj}}_{{\left(R\left(\mathrm{D}u\right)\right)}^{\perp }}.$
In index form, (1.3) reads
$\sum _{\beta =1}^{N}\sum _{i,j=1}^{n}\left({\mathrm{D}}_{i}{u}_{\alpha }{\mathrm{D}}_{j}{u}_{\beta }+{|\mathrm{D}u|}^{2}{\left[\mathrm{D}u\right]}_{\alpha \beta }^{\perp }{\delta }_{ij}\right){\mathrm{D}}_{ij}^{2}{u}_{\beta }=0,\qquad \alpha =1,\mathrm{\dots },N.$
In the full vector case of (1.3), even more intriguing phenomena occur since a further difficulty, which is not present in the scalar case, is that the coefficient involving ${\left[\mathrm{D}u\right]}^{\perp }$ is discontinuous even for ${C}^{\mathrm{\infty }}$ maps; for instance, $u\left(x,y\right)={e}^{ix}-{e}^{iy}$ is a smooth $2×2$ $\mathrm{\infty }$-harmonic map near the origin and the rank of gradient is 1 on the diagonal, but it is 2 otherwise. The emergence of discontinuities is a genuine vectorial phenomenon which does not arise if $\mathrm{min}\left\{n,N\right\}=1$ (see [18, 21, 19]). For $N=1$, the scalar version (1.2) has continuous coefficients, whilst for $n=1$, (1.3) reduces to
${\mathrm{\Delta }}_{\mathrm{\infty }}u=\left({u}^{\prime }\otimes {u}^{\prime }\right){u}^{\prime \prime }+{|{u}^{\prime }|}^{2}\left(I-\frac{{u}^{\prime }}{|{u}^{\prime }|}\otimes \frac{{u}^{\prime }}{|{u}^{\prime }|}\right){u}^{\prime \prime }={|{u}^{\prime }|}^{2}{u}^{\prime \prime }.$
A problem associated with the discontinuities is that Aronsson’s notion of absolute minimisers is not appropriate in the vectorial case of rank $\mathrm{rk}\left(\mathrm{D}u\right)\ge 2$. By the perpendicularity of $\mathrm{D}u$ and ${\left[\mathrm{D}u\right]}^{\perp }$, ${\mathrm{\Delta }}_{\mathrm{\infty }}u=0$ actually consists of two independent systems, and each one is characterised in terms of the ${L}^{\mathrm{\infty }}$ norm of the gradient via different sets of variations. In [20] we proved the following variational characterisation in the class of classical solutions. A ${C}^{2}$ map $u:\mathrm{\Omega }\subseteq {ℝ}^{n}\to {ℝ}^{N}$ is a solution to
$\mathrm{D}u\otimes \mathrm{D}u:{\mathrm{D}}^{2}u=0$(1.4)
if and only if it is a rank-one absolute minimiser on Ω, namely, when for all $D⋐\mathrm{\Omega }$, all scalar functions $g\in {C}_{0}^{1}\left(D\right)$ vanishing on $\partial D$ and all directions $\xi \in {ℝ}^{N}$, u is a minimiser on D with respect to variations of the form $u+\xi g$ (see Figure 1):
${\parallel \mathrm{D}u\parallel }_{{L}^{\mathrm{\infty }}\left(D\right)}\le {\parallel \mathrm{D}u+\xi \otimes \mathrm{D}g\parallel }_{{L}^{\mathrm{\infty }}\left(D\right)}.$(1.5)
Further, if $\mathrm{rk}\left(\mathrm{D}u\right)\equiv$ const., u is a solution to
${|\mathrm{D}u|}^{2}{\left[\mathrm{D}u\right]}^{\perp }\mathrm{\Delta }u=0$(1.6)
if and only if $u\left(\mathrm{\Omega }\right)$ has $\mathrm{\infty }$-minimal area, namely, when for all $D⋐\mathrm{\Omega }$, all scalar functions $h\in {C}^{1}\left(\overline{D}\right)$ (not vanishing on $\partial D$) and all vector fields $\nu \in {C}^{1}\left(D,{ℝ}^{N}\right)$ which are normal to $u\left(\mathrm{\Omega }\right)$, u is a minimiser on D with respect to normal free variations of the form $u+h\nu$ (see Figure 2):
${\parallel \mathrm{D}u\parallel }_{{L}^{\mathrm{\infty }}\left(D\right)}\le {\parallel \mathrm{D}u+\mathrm{D}\left(h\nu \right)\parallel }_{{L}^{\mathrm{\infty }}\left(D\right)}.$(1.7)
We called a map $\mathrm{\infty }$-minimal with respect to functional ${\parallel \mathrm{D}\left(\cdot \right)\parallel }_{{L}^{\mathrm{\infty }}\left(\cdot \right)}$ when it is a rank-one absolute minimiser on Ω and $u\left(\mathrm{\Omega }\right)$ has $\mathrm{\infty }$-minimal area.
Figure 1
Figure 2
Perhaps the greatest difficulty associated to (1.1) and (1.3) is how to define and study generalised solutions since for the highly nonlinear non-divergence model system (1.3) all standard arguments based on the maximum principle or on integration by parts seem to fail. In the very recent work [30], the author proposed the theory of so-called $\mathcal{𝒟}$-solutions, which applies to general fully nonlinear PDE systems of any order, namely,
$\mathcal{ℱ}\left(\cdot ,u,\mathrm{D}u,\mathrm{\dots },{\mathrm{D}}^{p}u\right)=0,$
and allows for merely measurable solutions u to be rigorously interpreted and studied. This notion is duality-free and is based on the probabilistic representation of derivatives which do not exist classically. $\mathcal{𝒟}$-solutions have already borne substantial fruit and in [30, 29, 28, 24, 23] we have derived several existence-uniqueness, variational and regularity results. In particular, in [28], we have obtained a variational characterisation of (1.3) in the setting of general $\mathcal{𝒟}$-solutions for ${W}_{\mathrm{loc}}^{1,\mathrm{\infty }}\left(\mathrm{\Omega },{ℝ}^{N}\right)$ appropriately defined minimisers which are relevant to (1.4)–(1.7) but different.
In this paper we consider the obvious generalisation of the rank-one minimality notion of (1.5) adapted to the functional (1.1). To this end, we identify a large class of rank-one absolute minimisers: for any $c\ge 0$, every solution $u:\mathrm{\Omega }\subseteq {ℝ}^{n}\to {ℝ}^{N}$ to the vectorial Hamilton–Jacobi equation
$H\left(x,\mathrm{D}u\left(x\right)\right)=c,x\in \mathrm{\Omega },$
actually is a rank-one absolute minimiser. Namely, for any ${\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega }$, any $\varphi \in {W}_{0}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime }\right)$ and any $\xi \in {ℝ}^{N}$, we have
$\underset{x\in {\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(x,\mathrm{D}u\left(x\right)\right)\le \underset{x\in {\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(x,\mathrm{D}u\left(x\right)+\xi \otimes \mathrm{D}\varphi \left(x\right)\right).$
For the above implication to be true, we need the solutions to be in ${C}^{1}\left(\mathrm{\Omega },{ℝ}^{N}\right)$ and not just in ${W}_{\mathrm{loc}}^{1,\mathrm{\infty }}\left(\mathrm{\Omega },{ℝ}^{N}\right)$. This is not a mere technicality, since it is well known, even in the scalar case, that if we allow for only one non-differentiability point, then the strong solutions of the eikonal equation $|\mathrm{D}u|=1$ are not absolutely minimising for the ${L}^{\mathrm{\infty }}$ norm of the gradient (e.g., the cone function $x↦|x|$). However, due to regularity results which are available in the scalar case, it suffices to assume everywhere differentiability (see [12, 10]).
Our only hypothesis imposed on H is that for any $x\in \mathrm{\Omega }$, the partial function $H\left(x,\cdot \right):{ℝ}^{N×n}\to ℝ$ is rank-one level-convex. This means that for any $t\ge 0$, the sublevel sets $\left\{H\left(x,\cdot \right)\le t\right\}$ are rank-one convex sets in ${ℝ}^{N×n}$. A set $\mathcal{𝒞}\subseteq {ℝ}^{N×n}$ is called rank-one convex when for any matrices $A,B\in \mathcal{𝒞}$ with $\mathrm{rk}\left(A-B\right)\le 1$, the convex combination $\lambda A+\left(1-\lambda \right)B$ is in $\mathcal{𝒞}$ for any $0\le \lambda \le 1$. An equivalent way to phrase the rank-one level-convexity of $H\left(x,\cdot \right)$ is via the inequality
$H\left(x,\lambda A+\left(1-\lambda \right)B\right)\le \mathrm{max}\left\{H\left(x,A\right),H\left(x,B\right)\right\},$
when
$x\in \mathrm{\Omega },A,B\in {ℝ}^{N×n},\mathrm{rk}\left(A-B\right)\le 1\mathit{ }\text{and}\mathit{ }0\le \lambda \le 1.$
This convexity assumption is substantially weaker than the ${L}^{\mathrm{\infty }}$ versions of quasiconvexity which we call “BJW-quasiconvexity”, named after Barron, Jensen and Wang who introduced it in [8].
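For illustration (an example added here for concreteness; it is not part of the paper's hypothesis list): the Euclidean norm $H(x,P)=|P|$ satisfies the assumption, since each sublevel set $\{|\cdot |\le t\}$ is a ball in ${ℝ}^{N×n}$, hence convex, and every convex set is in particular rank-one convex.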
We note that Hamilton–Jacobi equations are very important for ${L}^{\mathrm{\infty }}$ variational problems and their associated PDEs. In the scalar case, ${C}^{1}$ solutions are viscosity solutions to the respective second-order equations arising in ${L}^{\mathrm{\infty }}$ (see, e.g., [22]). Heuristically, for the case of the $\mathrm{\infty }$-Laplace equation this can be seen by rewriting (1.2) as $\mathrm{D}u\cdot \mathrm{D}\left(\frac{1}{2}{|\mathrm{D}u|}^{2}\right)=0$, which reveals that solutions of $|\mathrm{D}u|=c$ are $\mathrm{\infty }$-harmonic. In the vectorial case, Hamilton–Jacobi equations give rise to certain first-order differential inclusions of the form
$\mathrm{D}u\left(x\right)\in \mathcal{𝒦}\subseteq {ℝ}^{N×n},x\in \mathrm{\Omega },$
for which the Dacorogna–Marcellini Baire category (the analytic counterpart of Gromov’s convex integration) method can be utilised to establish existence of $\mathcal{𝒟}$-solutions to the ${L}^{\mathrm{\infty }}$ systems of PDE with extra geometric properties (see [30] and [16, 15]).
The main result of the present paper is the following.
#### Theorem 1.
Let $\Omega \subseteq \mathbb{R}^{n}$ be an open set, $n,N\in \mathbb{N}$, and $H:\Omega \times \mathbb{R}^{N\times n}\to [0,\infty )$ a continuous function such that for all $x\in \Omega$, $P\mapsto H(x,P)$ is rank-one level-convex, that is, $\{H(x,\cdot )\le t\}$ is a rank-one convex set in $\mathbb{R}^{N\times n}$ for all $t\ge 0$, $x\in \Omega$. Let $u\in C^{1}(\Omega ,\mathbb{R}^{N})$ be a solution to the vectorial Hamilton–Jacobi PDE
$H(x,\mathrm{D}u(x))=c,\qquad x\in \Omega ,$
for some $c\ge 0$. Then, u is a rank-one absolute minimiser of the functional
${E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right)=\underset{x\in {\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(x,\mathrm{D}u\left(x\right)\right),{\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega },u\in {W}_{\mathrm{loc}}^{1,\mathrm{\infty }}\left(\mathrm{\Omega },{ℝ}^{N}\right).$
In addition, the following marginally stronger result holds true: for any ${\mathrm{\Omega }}^{\mathrm{\prime }}\mathrm{⋐}\mathrm{\Omega }$, any $\varphi \mathrm{\in }{W}_{\mathrm{0}}^{\mathrm{1}\mathrm{,}\mathrm{\infty }}\mathit{}\mathrm{\left(}{\mathrm{\Omega }}^{\mathrm{\prime }}\mathrm{\right)}$ and any $\xi \mathrm{\in }{\mathrm{R}}^{N}$, we have
${E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right)\le \underset{𝔹\in \mathcal{ℬ}\left(\varphi ,{\mathrm{\Omega }}^{\prime }\right)}{inf}{E}_{\mathrm{\infty }}\left(u+\xi \varphi ,𝔹\right),$
where $\mathcal{B}\mathit{}\mathrm{\left(}\varphi \mathrm{,}{\mathrm{\Omega }}^{\mathrm{\prime }}\mathrm{\right)}$ is the set of open balls centred at local extrema (maxima or minima) of ϕ inside ${\mathrm{\Omega }}^{\mathrm{\prime }}$:
$\mathcal{B}(\varphi ,\Omega ') := \big\{\, \mathbb{B}_{\rho }(x)\Subset \Omega ' : \varphi \text{ attains a local maximum or a local minimum at } x \,\big\}.$(1.8)
An immediate consequence of Theorem 1 is the following result.
#### Corollary 2.
In the setting of Theorem 1, we additionally have
$\underset{{\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}u\right)\le \underset{\rho \to 0}{lim}\left(\underset{{𝔹}_{\rho }\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}u+\xi \otimes \mathrm{D}\varphi \right)\right)$
for any $x\mathrm{\in }{\mathrm{\Omega }}^{\mathrm{\prime }}$ at which ϕ achieves a local maximum or a local minimum.
The quantity on the right-hand side above is known as the local functional at x, and in the scalar case it has been used as a substitute for the pointwise values due to its upper semi-continuity properties.
## 2 The proof of Theorem 1
Let $H,c,u,\mathrm{\Omega }$ be as in the statement and fix ${\mathrm{\Omega }}^{\prime }⋐\mathrm{\Omega }$ and a unit vector $\xi \in {ℝ}^{N}$. We introduce the following notation for the projections on $\mathrm{span}\left[\xi \right]$ and the orthogonal hyperplane $\mathrm{span}{\left[\xi \right]}^{\perp }$:
${\left[\xi \right]}^{\top }:=\xi \otimes \xi ,{\left[\xi \right]}^{\perp }:=I-\xi \otimes \xi .$
Let $\psi \in {W}_{u}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)$ be such that ${\left[\xi \right]}^{\perp }\left(\psi -u\right)\equiv 0$, that is, the projections of ψ and u on the hyperplane $\mathrm{span}{\left[\xi \right]}^{\perp }\subseteq {ℝ}^{N}$ coincide. Then $\xi \cdot \left(\psi -u\right)\in {W}_{0}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime }\right)$ and, because the scalar function $\xi \cdot \left(\psi -u\right)$ vanishes on $\partial {\mathrm{\Omega }}^{\prime }$, there exists at least one local extremum of $\xi \cdot \left(\psi -u\right)$ in ${\mathrm{\Omega }}^{\prime }$, whence the set $\mathcal{ℬ}\left(\xi \cdot \left(\psi -u\right),{\mathrm{\Omega }}^{\prime }\right)$, given by (1.8), is non-empty. Fix a ball ${𝔹}_{\rho }\left(x\right)⋐{\mathrm{\Omega }}^{\prime }$ centred at such an extremal point of $\xi \cdot \left(\psi -u\right)$.
We illustrate the idea by assuming first in addition that $\psi \in {W}_{u}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)\cap {C}^{1}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)$. In this case, the point x is a critical point of $\xi \cdot \left(\psi -u\right)$ and we have $\mathrm{D}\left(\xi \cdot \left(\psi -u\right)\right)\left(x\right)=0$. Hence,
$\mathrm{D}\left(\psi -u\right)\left(x\right)={\left[\xi \right]}^{\top }\mathrm{D}\left(\psi -u\right)\left(x\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\left(\psi -u\right)\left(x\right)$$=\xi \otimes \mathrm{D}\left(\xi \cdot \left(\psi -u\right)\right)\left(x\right)+\mathrm{D}\left({\left[\xi \right]}^{\perp }\left(\psi -u\right)\right)\left(x\right)$$=0,$
because ${\left[\xi \right]}^{\perp }\psi \equiv {\left[\xi \right]}^{\perp }u$ on ${\mathrm{\Omega }}^{\prime }$. Thus,
${E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right)=c=H\left(x,\mathrm{D}u\left(x\right)\right)=H\left(x,\xi \otimes \mathrm{D}\left(\xi \cdot u\right)\left(x\right)+{\left[\xi \right]}^{\perp }\mathrm{D}u\left(x\right)\right),$
and hence
${E}_{\mathrm{\infty }}\left(u,{\mathrm{\Omega }}^{\prime }\right)=H\left(x,\xi \otimes \mathrm{D}\left(\xi \cdot \psi \right)\left(x\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right)$$=H\left(x,\mathrm{D}\psi \left(x\right)\right)$$\le \underset{y\in {𝔹}_{\rho }\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(y,\mathrm{D}\psi \left(y\right)\right)$$={E}_{\mathrm{\infty }}\left(\psi ,{𝔹}_{\rho }\left(x\right)\right)$(2.1)
for any ${𝔹}_{\rho }\left(x\right)\subseteq \mathcal{ℬ}\left(\xi \cdot \left(\psi -u\right),{\mathrm{\Omega }}^{\prime }\right)$, whence the conclusion ensues.
Now we return to the general case of $\psi \in {W}_{u}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)$. We extend ψ by u on $\mathrm{\Omega }\setminus {\mathrm{\Omega }}^{\prime }$ and consider the sets
${\mathrm{\Omega }}_{k}:=\left\{\begin{array}{cc}\left\{x\in {\mathrm{\Omega }}^{\prime }:\mathrm{dist}\left(x,\partial {\mathrm{\Omega }}^{\prime }\right)>\frac{{d}_{0}}{k}\right\},\hfill & k\in ℕ,\hfill \\ \mathrm{\varnothing },\hfill & k=0,\hfill \end{array}$
where ${d}_{0}>0$ is a constant small enough so that ${\mathrm{\Omega }}_{1}\ne \mathrm{\varnothing }$. We set
${V}_{k}:={\mathrm{\Omega }}_{k}\setminus \overline{{\mathrm{\Omega }}_{k-1}},k\in ℕ,$(2.2)
and consider a partition of unity ${\left({\zeta }_{k}\right)}_{k=1}^{\mathrm{\infty }}\subseteq {C}_{c}^{\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime }\right)$ over ${\mathrm{\Omega }}^{\prime }$ so that
$\zeta _{k}\ge 0,\qquad \sum _{k=1}^{\mathrm{\infty }}\zeta _{k}\equiv 1\ \text{ on } \Omega ',\qquad \mathrm{supp}(\zeta _{k})\subseteq V_{k-1}\cup V_{k}\cup V_{k+1}\ \ (\text{with } V_{0}:=\varnothing ).$(2.3)
(Such a partition of unity can be easily constructed explicitly by mollifying the characteristic functions ${\chi }_{{V}_{k}}$ and rescaling them appropriately.) Let ${\left({\eta }^{\epsilon }\right)}_{\epsilon >0}$ be the standard mollifier (as, e.g., in [17]) and set
${\psi }^{\epsilon }:=\xi \left(\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\left(\left(\xi \cdot \psi \right)*{\eta }^{\epsilon /k}\right)\right)+{\left[\xi \right]}^{\perp }\psi ,\qquad 0<\epsilon <{d}_{0}.$
We claim that ${\psi }^{\epsilon }\in {W}_{u}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)\cap {C}^{1}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)$ and ${\psi }^{\epsilon }\to \psi$ in $C\left(\overline{{\mathrm{\Omega }}^{\prime }},{ℝ}^{N}\right)$ as $\epsilon \to 0$. To verify these claims, fix $l\in ℕ$. If $l\ge 2$, then by (2.2)–(2.3) and because $|\xi |=1$, we have
$\parallel {\psi }^{\epsilon }-\psi {\parallel }_{C\left(\overline{{V}_{l}}\right)}=\underset{{V}_{l}}{sup}|\xi \cdot \left(\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\left(\psi *{\eta }^{\epsilon /k}\right)-\psi \sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\right)|$$\le \underset{{V}_{l}}{sup}\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}|\psi *{\eta }^{\epsilon /k}-\psi |$$=\underset{{V}_{l}}{sup}\sum _{k=l-1}^{l+1}{\zeta }_{k}|\psi *{\eta }^{\epsilon /k}-\psi |$$\le 3\underset{k=l-1,l,l+1}{\mathrm{max}}\parallel \psi *{\eta }^{\epsilon /k}-\psi {\parallel }_{C\left(\overline{{\mathrm{\Omega }}^{\prime }}\right)},$(2.4)
whilst, for $l=1$, we similarly have
$\parallel {\psi }^{\epsilon }-\psi {\parallel }_{C\left(\overline{{V}_{1}}\right)}\le 2\underset{k=1,2}{\mathrm{max}}\parallel \psi *{\eta }^{\epsilon /k}-\psi {\parallel }_{C\left(\overline{{\mathrm{\Omega }}^{\prime }}\right)}.$(2.5)
By the standard properties of mollifiers, we have that the function
$\omega (t):=\sup_{0<\tau \le t}\,{\parallel \psi *{\eta }^{\tau }-\psi \parallel }_{C\left(\overline{{\mathrm{\Omega }}^{\prime }}\right)}$(2.6)
is an increasing continuous modulus of continuity with $\omega \left({0}^{+}\right)=0$. By (2.4)–(2.6), we have that
${\parallel {\psi }^{\epsilon }-\psi \parallel }_{C\left(\overline{{V}_{l}}\right)}\le \left\{\begin{array}{cc}3\omega \left(\frac{\epsilon }{l-1}\right),\hfill & l\ge 2,\hfill \\ 2\omega \left(\epsilon \right),\hfill & l=1.\hfill \end{array}$
Since the ${C}^{1}$ regularity of ${\psi }^{\epsilon }$ is obvious (because u, by assumption, is such and ${\left[\xi \right]}^{\perp }\psi \equiv {\left[\xi \right]}^{\perp }u$), the claim has been established.
Note now that since $\psi -u\in {W}_{0}^{1,\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime },{ℝ}^{N}\right)$, the set $\mathcal{ℬ}\left(\xi \cdot \left(\psi -u\right),{\mathrm{\Omega }}^{\prime }\right)$ given by (1.8) is non-empty because the scalar function $\xi \cdot \left(\psi -u\right)$ which vanishes on $\partial {\mathrm{\Omega }}^{\prime }$ necessarily attains an interior extremum. Fix a ball
${𝔹}_{\rho }\left({x}_{0}\right)\in \mathcal{ℬ}\left(\xi \cdot \left(\psi -u\right),{\mathrm{\Omega }}^{\prime }\right).$
Since ${\psi }^{\epsilon }-u\to \psi -u$ in $C\left(\overline{{\mathrm{\Omega }}^{\prime }},{ℝ}^{N}\right)$ as $\epsilon \to 0$, by a standard stability argument of maxima/minima of scalar-valued function under uniform convergence (see, e.g., [30]), there exists a local extremum ${x}_{\epsilon }\in {\mathrm{\Omega }}^{\prime }$ of $\xi \cdot \left({\psi }^{\epsilon }-u\right)$ such that ${x}_{\epsilon }\to {x}_{0}$ as $\epsilon \to 0$. By the differentiability of u and by choosing ε small enough, we may arrange
$\mathrm{D}\left(\xi \cdot \left({\psi }^{\epsilon }-u\right)\right)\left({x}_{\epsilon }\right)=0,\qquad |{x}_{\epsilon }-{x}_{0}|<\frac{\rho }{2}.$
Hence,
$\mathrm{D}\left({\psi }^{\epsilon }-u\right)\left({x}_{\epsilon }\right)=\xi \otimes \mathrm{D}\left(\xi \cdot \left({\psi }^{\epsilon }-u\right)\right)\left({x}_{\epsilon }\right)+\mathrm{D}\left({\left[\xi \right]}^{\perp }\left(\psi -u\right)\right)\left({x}_{\epsilon }\right)=0.$
Then, by arguing as in (2.1), we have
$\underset{{\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}u\right)\le \underset{{𝔹}_{\rho /2}\left({x}_{0}\right)}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}{\psi }^{\epsilon }\right).$(2.7)
Since
$\mathrm{D}{\psi }^{\epsilon }=\xi \otimes \left[\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\left(\left(\xi \cdot \psi \right)*{\eta }^{\epsilon /k}\right)+\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\left(\left(\mathrm{D}\left(\xi \cdot \psi \right)\right)*{\eta }^{\epsilon /k}\right)\right]+{\left[\xi \right]}^{\perp }\mathrm{D}\psi ,$
our continuity assumption and the ${W}^{1,\mathrm{\infty }}$ regularity of ψ imply that there exists a positive increasing modulus of continuity ${\omega }_{1}$ with ${\omega }_{1}\left({0}^{+}\right)=0$ such that on the ball ${𝔹}_{\rho /2}\left({x}_{0}\right)$ we have
$H\left(\cdot ,\mathrm{D}{\psi }^{\epsilon }\right)=H\left(\cdot ,\xi \otimes \left[\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\left(\left(\mathrm{D}\left(\xi \cdot \psi \right)\right)*{\eta }^{\epsilon /k}\right)+\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\left(\left(\xi \cdot \psi \right)*{\eta }^{\epsilon /k}\right)\right]+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \right)$$\le H\left(\cdot ,\xi \otimes \left[\sum _{k=1}^{\mathrm{\infty }}{\zeta }_{k}\left(\mathrm{D}\left(\xi \cdot \psi \right)*{\eta }^{\epsilon /k}\right)\right]+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \right)+{\omega }_{1}\left(|\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\left(\left(\xi \cdot \psi \right)*{\eta }^{\epsilon /k}\right)|\right)$$=:A+B.$(2.8)
By further restricting $\epsilon <\rho /2$, we may arrange
$\bigcup _{x\in {𝔹}_{\rho /2}\left({x}_{0}\right)}{𝔹}_{\epsilon }\left(x\right)\subseteq {𝔹}_{\rho }\left({x}_{0}\right),$(2.9)
and by (2.2)–(2.3), there exists $K\left(\rho \right)\in ℕ$ such that
${𝔹}_{\rho }\left({x}_{0}\right)\subseteq \bigcup _{k=1,\mathrm{\dots },K\left(\rho \right)}\overline{{V}_{k}}.$(2.10)
This implies that for any $x\in {𝔹}_{\rho }\left({x}_{0}\right)$,
$\sum _{1}^{\mathrm{\infty }}{\zeta }_{k}\left(x\right)=\sum _{1}^{K\left(\rho \right)+1}{\zeta }_{k}\left(x\right)=1,$(2.11)
forming a convex combination. We now recall for immediate use right below the following Jensen-like inequality for level-convex functions (see, e.g., [9, 8]): for any probability measure μ on an open set $U\subseteq {ℝ}^{n}$ and any μ-measurable function $f:U\subseteq {ℝ}^{n}\to \left[0,\mathrm{\infty }\right)$, we have
$\mathrm{\Phi }\left({\int }_{U}f\left(x\right)𝑑\mu \left(x\right)\right)\le \mu -\underset{x\in U}{\mathrm{ess}\mathrm{sup}}\mathrm{\Phi }\left(f\left(x\right)\right),$(2.12)
when $\mathrm{\Phi }:{ℝ}^{n}\to ℝ$ is any continuous level-convex function. Further, by our rank-one level-convexity assumption on H and if ψ is as above, for any $x\in \mathrm{\Omega }$ and $\xi \in {ℝ}^{N}$ with $|\xi |=1$, the function
$\mathrm{\Psi }\left(p\right):=H\left(x,\xi \otimes p+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right),p\in {ℝ}^{n},$(2.13)
is level-convex. Indeed, given $p,q\in {ℝ}^{n}$ and $t\ge 0$ with $\mathrm{\Psi }\left(p\right),\mathrm{\Psi }\left(q\right)\le t$, we set
$P:=\xi \otimes p+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right),Q:=\xi \otimes q+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right).$
Then, $P-Q=\xi \otimes \left(p-q\right)$, and hence $\mathrm{rk}\left(P-Q\right)\le 1$. Moreover, $H\left(x,P\right)=\mathrm{\Psi }\left(p\right)\le t$ and $H\left(x,Q\right)=\mathrm{\Psi }\left(q\right)\le t$, which gives
$\mathrm{\Psi }\left(\lambda p+\left(1-\lambda \right)q\right)=H\left(x,\lambda P+\left(1-\lambda \right)Q\right)\le t$
for any $\lambda \in \left[0,1\right]$, as desired.
Now, by using (2.2)–(2.3), (2.9)–(2.11) and the level-convexity of the function Ψ of (2.13), for any $x\in {𝔹}_{\rho /2}\left({x}_{0}\right)$, we have the estimate
$A\left(x\right)=H\left(x,\xi \otimes \left[\sum _{k=1}^{K\left(\rho \right)+1}{\zeta }_{k}\left(x\right)\left(\left(\mathrm{D}\left(\xi \cdot \psi \right)\right)*{\eta }^{\epsilon /k}\right)\left(x\right)\right]+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right)$$=\mathrm{\Psi }\left(\sum _{k=1}^{K\left(\rho \right)+1}{\zeta }_{k}\left(x\right)\left(\left(\mathrm{D}\left(\xi \cdot \psi \right)\right)*{\eta }^{\epsilon /k}\right)\left(x\right)\right)$$\le \underset{k=1,\mathrm{\dots },K\left(\rho \right)+1}{\mathrm{max}}\mathrm{\Psi }\left(\left(\left(\mathrm{D}\left(\xi \cdot \psi \right)\right)*{\eta }^{\epsilon /k}\right)\left(x\right)\right)$$=\underset{k=1,\mathrm{\dots },K\left(\rho \right)+1}{\mathrm{max}}\mathrm{\Psi }\left({\int }_{{𝔹}_{\epsilon /k}\left(x\right)}\mathrm{D}\left(\xi \cdot \psi \right)\left(y\right){\eta }^{\epsilon /k}\left(|x-y|\right)𝑑y\right).$(2.14)
Since for any x and $\epsilon ,k$, the map $\mu :={\eta }^{\epsilon /k}\left(|x-\cdot |\right){\mathcal{ℒ}}^{n}$ is a probability measure on the ball ${𝔹}_{\epsilon /k}\left(x\right)$ which is absolutely continuous with respect to the Lebesgue measure ${\mathcal{ℒ}}^{n}$, in view of (2.12), (2.14) gives
$A\left(x\right)\le \underset{k=1,\mathrm{\dots },K\left(\rho \right)+1}{\mathrm{max}}\left(\underset{y\in {𝔹}_{\epsilon /k}\left(x\right)}{\mathrm{ess}\mathrm{sup}}\mathrm{\Psi }\left(\mathrm{D}\left(\xi \cdot \psi \right)\left(y\right)\right)\right)$$=\underset{k=1,\mathrm{\dots },K\left(\rho \right)+1}{\mathrm{max}}\left(\underset{y\in {𝔹}_{\epsilon /k}\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(x,\xi \otimes \mathrm{D}\left(\xi \cdot \psi \right)\left(y\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right)\right)$$\le \underset{y\in {𝔹}_{\epsilon }\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(x,\xi \otimes \mathrm{D}\left(\xi \cdot \psi \right)\left(y\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right).$(2.15)
By the continuity of H and $\mathrm{D}u$, there exists a positive increasing modulus of continuity ${\omega }_{2}$ with ${\omega }_{2}\left({0}^{+}\right)=0$ such that
$|H\left(x,P\right)-H\left(y,Q\right)|\le {\omega }_{2}\left(|x-y|+|P-Q|\right),|\mathrm{D}u\left(x\right)-\mathrm{D}u\left(y\right)|\le {\omega }_{2}\left(|x-y|\right)$
for all $x,y\in {𝔹}_{\rho }\left({x}_{0}\right)$ and $|P|,|Q|\le {\parallel \mathrm{D}\psi \parallel }_{{L}^{\mathrm{\infty }}\left({\mathrm{\Omega }}^{\prime }\right)}+1$. By using the fact that ${\left[\xi \right]}^{\perp }\psi \equiv {\left[\xi \right]}^{\perp }u$ on ${\mathrm{\Omega }}^{\prime }$, (2.15) and the above, we have
$A\left(x\right)\le \underset{y\in {𝔹}_{\epsilon }\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(x,{\left[\xi \right]}^{\top }\mathrm{D}\psi \left(y\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(x\right)\right)$$\le \underset{y\in {𝔹}_{\epsilon }\left(x\right)}{\mathrm{ess}\mathrm{sup}}\left\{H\left(y,{\left[\xi \right]}^{\top }\mathrm{D}\psi \left(y\right)+{\left[\xi \right]}^{\perp }\mathrm{D}\psi \left(y\right)\right)+{\omega }_{2}\left(|x-y|+|{\left[\xi \right]}^{\perp }\left(\mathrm{D}\psi \left(y\right)-\mathrm{D}\psi \left(x\right)\right)|\right)\right\}$$=\underset{y\in {𝔹}_{\epsilon }\left(x\right)}{\mathrm{ess}\mathrm{sup}}\left\{H\left(y,\mathrm{D}\psi \left(y\right)\right)+{\omega }_{2}\left(|x-y|+|{\left[\xi \right]}^{\perp }\left(\mathrm{D}u\left(y\right)-\mathrm{D}u\left(x\right)\right)|\right)\right\}.$(2.16)
By (2.9), (2.16) gives
$A\left(x\right)\le \underset{y\in {𝔹}_{\epsilon }\left(x\right)}{\mathrm{ess}\mathrm{sup}}H\left(y,\mathrm{D}\psi \left(y\right)\right)+\underset{y\in {𝔹}_{\epsilon }\left(x\right)}{sup}{\omega }_{2}\left(|x-y|+|\mathrm{D}u\left(y\right)-\mathrm{D}u\left(x\right)|\right)$$\le \underset{y\in {𝔹}_{\rho }\left({x}_{0}\right)}{\mathrm{ess}\mathrm{sup}}H\left(y,\mathrm{D}\psi \left(y\right)\right)+{\omega }_{2}\left(\epsilon +{\omega }_{2}\left(\epsilon \right)\right)$(2.17)
for any $x\in {𝔹}_{\rho /2}\left({x}_{0}\right)$. We now estimate the term B of (2.8) on ${𝔹}_{\rho /2}\left({x}_{0}\right)$ as above by using (2.6):
$B={\omega }_{1}\left(|\xi \cdot \left[\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\left(\psi *{\eta }^{\epsilon /k}\right)\right]|\right)$$\le {\omega }_{1}\left(|\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\left(\psi *{\eta }^{\epsilon /k}\right)|\right)$$\le {\omega }_{1}\left(|\sum _{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\psi |+\sum _{k=1}^{K\left(\rho \right)+1}|\mathrm{D}{\zeta }_{k}||\psi *{\eta }^{\epsilon /k}-\psi |\right),$
and since ${\sum }_{k=1}^{\mathrm{\infty }}\mathrm{D}{\zeta }_{k}\equiv 0$, we get
$B\le {\omega }_{1}\left(\sum _{k=1}^{K\left(\rho \right)+1}|\mathrm{D}{\zeta }_{k}||\psi *{\eta }^{\epsilon /k}-\psi |\right)$$\le {\omega }_{1}\left(C\left(\rho \right)\underset{k=1,\mathrm{\dots },K\left(\rho \right)+1}{\mathrm{max}}\parallel \psi *{\eta }^{\epsilon /k}-\psi {\parallel }_{C\left(\overline{{\mathrm{\Omega }}^{\prime }}\right)}\right)$$\le {\omega }_{1}\left(C\left(\rho \right)\omega \left(\epsilon \right)\right)$(2.18)
on ${𝔹}_{\rho /2}\left({x}_{0}\right)$. By putting together (2.7), (2.8), (2.17) and (2.18), we have
$\underset{{\mathrm{\Omega }}^{\prime }}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}u\right)\le \underset{{𝔹}_{\rho }\left({x}_{0}\right)}{\mathrm{ess}\mathrm{sup}}H\left(\cdot ,\mathrm{D}\psi \right)+{\omega }_{1}\left(C\left(\rho \right)\omega \left(\epsilon \right)\right)+{\omega }_{2}\left(\epsilon +{\omega }_{2}\left(\epsilon \right)\right),$
and by letting $\epsilon \to 0$, the conclusion follows.
## References

[1] H. Abugirda and N. Katzourakis, Existence of $1D$ vectorial absolute minimisers in $L^{\infty}$ under minimal assumptions, Proc. Amer. Math. Soc. 145 (2017), 2567–2575.
[2] G. Aronsson, Minimization problems for the functional $\sup_x \mathcal{F}(x,f(x),f'(x))$, Ark. Mat. 6 (1965), 33–53.
[3] G. Aronsson, Minimization problems for the functional $\sup_x \mathcal{F}(x,f(x),f'(x))$ II, Ark. Mat. 6 (1966), 409–431.
[4] G. Aronsson, Extension of functions satisfying Lipschitz conditions, Ark. Mat. 6 (1967), 551–561.
[5] G. Aronsson, On the partial differential equation $u_x^2 u_{xx}+2u_x u_y u_{xy}+u_y^2 u_{yy}=0$, Ark. Mat. 7 (1968), 395–425.
[6] G. Aronsson, Minimization problems for the functional $\sup_x \mathcal{F}(x,f(x),f'(x))$ III, Ark. Mat. 7 (1969), 509–512.
[7] G. Aronsson, On certain singular solutions of the partial differential equation $u_x^2 u_{xx}+2u_x u_y u_{xy}+u_y^2 u_{yy}=0$, Manuscripta Math. 47 (1984), no. 1–3, 133–151.
[8] E. N. Barron, R. Jensen and C. Wang, Lower semicontinuity of $L^{\infty}$ functionals, Ann. Inst. H. Poincaré Anal. Non Linéaire 18 (2001), no. 4, 495–517.
[9] E. N. Barron, R. Jensen and C. Wang, The Euler equation and absolute minimizers of $L^{\infty}$ functionals, Arch. Ration. Mech. Anal. 157 (2001), 255–283.
[10] L. A. Caffarelli and M. G. Crandall, Distance functions and almost global solutions of eikonal equations, Comm. Partial Differential Equations 35 (2010), no. 3, 391–414.
[11] M. G. Crandall, A visit with the $\infty$-Laplacian, Calculus of Variations and Non-Linear Partial Differential Equations, Lecture Notes in Math. 1927, Springer, Berlin (2008), 75–122.
[12] M. G. Crandall, L. C. Evans and R. Gariepy, Optimal Lipschitz extensions and the infinity Laplacian, Calc. Var. Partial Differential Equations 13 (2001), 123–139.
[13] M. G. Crandall, H. Ishii and P.-L. Lions, User’s guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), 1–67.
[14] G. Croce, N. Katzourakis and G. Pisante, $\mathcal{D}$-solutions to the system of vectorial calculus of variations in $L^{\infty}$ via the Baire category method for the singular values, preprint (2016), http://arxiv.org/pdf/1604.04385.pdf.
[15] B. Dacorogna, Direct Methods in the Calculus of Variations, 2nd ed., Appl. Math. Sci. 78, Springer, Berlin, 2008.
[16] B. Dacorogna and P. Marcellini, Implicit Partial Differential Equations, Progr. Nonlinear Differential Equations Appl. 37, Birkhäuser, Boston, 1999.
[17] L. C. Evans, Partial Differential Equations, Grad. Stud. Math. 19, American Mathematical Society, Providence, 1998.
[18] N. Katzourakis, $L^{\infty}$-variational problems for maps and the Aronsson PDE system, J. Differential Equations 253 (2012), no. 7, 2123–2139.
[19] N. Katzourakis, Explicit $2D$ $\infty$-harmonic maps whose interfaces have junctions and corners, C. R. Math. Acad. Sci. Paris 351 (2013), 677–680.
[20] N. Katzourakis, $\infty$-minimal submanifolds, Proc. Amer. Math. Soc. 142 (2014), 2797–2811.
[21] N. Katzourakis, On the structure of $\infty$-harmonic maps, Comm. Partial Differential Equations 39 (2014), no. 11, 2091–2124.
[22] N. Katzourakis, An Introduction to Viscosity Solutions for Fully Nonlinear PDE with Applications to Calculus of Variations in $L^{\infty}$, Springer Briefs Math., Springer, Cham, 2015.
[23] N. Katzourakis, Equivalence between weak and $\mathcal{D}$-solutions for symmetric hyperbolic first order PDE systems, preprint (2015), http://arxiv.org/pdf/1507.03042.pdf.
[24] N. Katzourakis, Mollification of $\mathcal{D}$-solutions to fully nonlinear PDE systems, preprint (2015), http://arxiv.org/pdf/1508.05519.pdf.
[25] N. Katzourakis, Nonuniqueness in vector-valued calculus of variations in $L^{\infty}$ and some linear elliptic systems, Comm. Pure Appl. Anal. 14 (2015), no. 1, 313–327.
[26] N. Katzourakis, Optimal $\infty$-quasiconformal immersions, ESAIM Control Optim. Calc. Var. 21 (2015), no. 2, 561–582.
[27] N. Katzourakis and T. Pryer, On the numerical approximation of $\infty$-harmonic mappings, NoDEA Nonlinear Differential Equations Appl. 23 (2016), no. 6, 1–23.
[28] N. Katzourakis, A new characterisation of $\infty$-harmonic and p-harmonic maps via affine variations in $L^{\infty}$, Electron. J. Differential Equations 2017 (2017), no. 29, 1–19.
[29] N. Katzourakis, Absolutely minimising generalised solutions to the equations of vectorial calculus of variations in $L^{\infty}$, Calc. Var. Partial Differential Equations 56 (2017), no. 1, 1–25.
[30] N. Katzourakis, Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems, J. Differential Equations 23 (2017), 641–686.
[31] N. Katzourakis and T. Pryer, Second order $L^{\infty}$ variational problems and the $\infty$-polylaplacian, preprint (2016), http://arxiv.org/pdf/1605.07880.pdf.
Accepted: 2017-04-03
Published Online: 2017-06-04
Funding Source: Engineering and Physical Sciences Research Council
Award identifier / Grant number: EP/N017412/1
The author has been partially supported by an EPSRC grant EP/N017412/1.
Citation Information: Advances in Nonlinear Analysis, Volume 8, Issue 1, Pages 508–516, ISSN (Online) 2191-950X, ISSN (Print) 2191-9496, DOI: https://doi.org/10.1515/anona-2016-0164.
https://gamedev.stackexchange.com/questions/32653/change-the-title-name-of-a-xna-window | # Change the title name of a XNA window?
I have tried to change the title with this source:
http://msdn.microsoft.com/en-us/library/ff966436.aspx
but this isn't working!? Help appreciated!
• You can do it with code by changing the value of Window.Title but both ways described in the link you posted also worked for me. Jul 20 '12 at 13:33
Window.Title = "My new title";
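For context, a minimal sketch of where that line can go (illustrative only — this assumes the default XNA 4.0 project template, where the game class is named Game1 and derives from Microsoft.Xna.Framework.Game):

using Microsoft.Xna.Framework;

public class Game1 : Game
{
    GraphicsDeviceManager graphics;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        // Changes the text shown in the window's title bar;
        // setting it from Initialize() works as well.
        Window.Title = "My new title";
    }
}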
http://mathhelpforum.com/statistics/185501-matching-cards.html | 1. ## Matching cards.
A petrol station runs a promotion to advertise its new car wash. Each customer is given a pair of cards, each with a letter printed on it. There are a large number of different cards - each one has one of the symbols W, A, $ or H on it. Only 10% of the cards have a $ symbol, and the other three symbols are equally likely. If the two cards are both the same the customer wins a prize.
(a) What is the probability that a customer is given a W card and a H card?
(b) What is the probability that a customer wins a prize?
2. ## Re: Find the probability (help please)
There is a 30% chance of getting a W and a 30% chance of getting an H.
Put together a table of all combinations for receiving 2 cards, then find the probabilities.
3. ## Re: Find the probability (help please)
This means there is a 0.09 chance of getting a W and H (.3 x .3)
4. ## Re: Find the probability (help please)
Originally Posted by hsetima
This means there is a 0.09 chance of getting a W and H (.3 x .3)
I mean in total there is a .18 chance because it can either be W and H or H and W! | 2018-06-19 03:02:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7326574325561523, "perplexity": 676.2394687384135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861752.19/warc/CC-MAIN-20180619021643-20180619041643-00248.warc.gz"} |
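For completeness, a sketch of both parts under the usual assumption that the two cards are independent, with P(W) = P(A) = P(H) = 0.3 and P($) = 0.1:
(a) P(one W and one H) = 2 × 0.3 × 0.3 = 0.18 (W then H, or H then W).
(b) P(prize) = P(both cards match) = 0.3² + 0.3² + 0.3² + 0.1² = 0.09 + 0.09 + 0.09 + 0.01 = 0.28.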
http://paperity.org/p/75501265/fractional-and-j-fold-coloring-of-the-plane | # Fractional and j-Fold Coloring of the Plane
Discrete & Computational Geometry, Mar 2016
We present results referring to the Hadwiger–Nelson problem, which asks for the minimum number of colors needed to color the plane with no two points at distance 1 having the same color. Exoo considered a more general problem concerning graphs $G_{[a,b]}$ with $\mathbb {R}^2$ as the vertex set and two vertices adjacent if their distance is in the interval [a, b]. Exoo conjectured $\chi (G_{[a,b]}) = 7$ for sufficiently small but positive difference between a and b. We partially answer this conjecture by proving that $\chi (G_{[a,b]}) \geqslant 5$ for $b > a$. A j-fold coloring of a graph $G = (V,E)$ is an assignment of j-element sets of colors to the vertices of G, in such a way that the sets assigned to any two adjacent vertices are disjoint. The fractional chromatic number $\chi _f(G)$ is the infimum of fractions k / j over j-fold colorings of G using k colors. We generalize a method by Hochberg and O'Donnell (who proved that $\chi _f(G_{[1,1]}) \leqslant 4.36$) for the fractional coloring of graphs $G_{[a,b]}$, obtaining a bound dependent on $\frac{a}{b}$. We also present a few specific and two general methods for j-fold coloring of $G_{[a,b]}$ for small j, in particular for $G_{[1,1]}$ and $G_{[1,2]}$. The j-fold coloring for small j has strong practical motivation, especially in scheduling theory, while the graph $G_{[1,2]}$ is often used to model hidden conflicts in radio networks.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2Fs00454-016-9769-3.pdf
Jarosław Grytczuk, Konstanty Junosza-Szaniawski, Joanna Sokół, Krzysztof Węsek. Fractional and j-Fold Coloring of the Plane, Discrete & Computational Geometry, 2016, 594-609, DOI: 10.1007/s00454-016-9769-3 | 2017-08-23 08:12:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6003674864768982, "perplexity": 455.73641364883963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117911.49/warc/CC-MAIN-20170823074634-20170823094634-00404.warc.gz"} |
https://math.stackexchange.com/questions/1446408/limit-lim-n-to-infty-sum-k-1n-frackk2n2 | # limit $\lim_{n\to \infty }\sum_{k=1}^{n}\frac{k}{k^2+n^2}$ [duplicate]
How do I evaluate this?
$$\lim_{n\to \infty }\sum_{k=1}^{n}\frac{k}{k^2+n^2}$$
I tried to rewrite it as a Riemann sum so I could evaluate it as an integral, but I couldn't finish.
$$\sum_{k=1}^{n}\frac{k}{k^{2}+n^{2}}=\frac{1}{n}\sum_{k=1}^{n}\frac{k/n}{\frac{k^{2}}{n^{2}}+1},$$ which is a Riemann sum for the function $f(x)=\frac{x}{1+x^{2}}$ over $[0,1]$. Your sum therefore tends to $$\int_{0}^{1}\frac{x}{1+x^{2}}\,dx.$$ Can you go on from here?
• Thank you so much Daniel, yeah i can. – Reza Habibi Sep 22 '15 at 12:23
• Both answers come with relevant clarifications on your problem. It would only be fair and in the spirit of the community to accept one – Victor Sep 22 '15 at 13:28
Notice, $$\lim_{n\to \infty}\sum_{k=1}^{n}\frac{k}{k^2+n^2}$$ $$=\lim_{n\to \infty}\sum_{k=1}^{n}\frac{\left(\frac{k}{n}\right)\frac{1}{n}}{\left(\frac{k}{n}\right)^2+1}$$ Let $\frac{k}{n}=u\implies \lim_{n\to \infty}\frac{1}{n}=du\to 0$
then we have $$\text{upper bound of u}=\lim_{n\to \infty}\frac{k}{n}=\lim_{n\to \infty}\frac{n}{n}=1$$ $$\text{lower bound of u}=\lim_{n\to \infty}\frac{k}{n}=\lim_{n\to \infty}\frac{1}{n}=0$$ Changing summation into integration with proper limits $$\int_{0}^{1}\frac{u\ du}{u^2+1}$$ $$=\frac{1}{2}\int_{0}^{1}\frac{(2u)\ du}{u^2+1}=\frac{1}{2}\int_{0}^{1}\frac{d(u^2)}{u^2+1}$$ $$=\frac{1}{2}[\ln|u^2+1|]_{0}^{1}$$ $$=\frac{1}{2}[\ln|1+1|-\ln|0+1|]$$$$=\color{red}{\frac{1}{2}\ln 2}$$
• Which part of this is not covered by the previous answer? – Did Sep 22 '15 at 14:35
Notice that $$\frac{k}{k^2+n^2}$$ is an increasing function of $k$ on $(0,n)$, as its derivative is positive in this range.
Therefore $$\int_0^{n-1}\frac{k}{k^2+n^2}\,dk\leq\sum_1^{n-1}\frac{k}{k^2+n^2}\leq\int_1^{n}\frac{k}{k^2+n^2}\,dk.$$
Note that $$\int_0^{n-1}\frac{k}{k^2+n^2}\,dk=(1/2) \log \frac{2n^2-2n+1}{n^2}$$
So $$\lim_{n \to \infty}\int_0^{n-1}\frac{k}{k^2+n^2}\,dk=\lim_{n \to \infty}(1/2) \log \frac{2n^2-2n+1}{n^2}= 1/2 \log2$$.
Also, $$\int_1^{n}\frac{k}{k^2+n^2}\,dk=(1/2) \log \frac{2n^2}{n^2+1}$$
So $$\lim_{n \to \infty}\int_1^{n}\frac{k}{k^2+n^2}\,dk=\lim_{n \to \infty}(1/2) \log \frac{2n^2}{n^2+1}= 1/2 \log2$$.
So $$\lim_{n\to\infty} \sum_1^{n}\frac{k}{k^2+n^2}=\lim_{n\to\infty}\sum_1^{n-1}\frac{k}{k^2+n^2} + \lim_{n\to\infty}\frac{n}{n^2+n^2}=\lim_{n\to\infty}\sum_1^{n-1}\frac{k}{k^2+n^2} + 0$$
But, $$S=\lim_{n\to\infty}\sum_1^{n-1}\frac{k}{k^2+n^2} = (1/2)\log2$$, as $$(1/2)\log2\leq S\leq(1/2)\log2$$
So $$\lim_{n\to\infty} \sum_1^{n}\frac{k}{k^2+n^2}=(1/2)\log2$$ | 2019-08-24 20:23:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6969056129455566, "perplexity": 2798.7115177848664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321696.96/warc/CC-MAIN-20190824194521-20190824220521-00406.warc.gz"} |
https://greprepclub.com/forum/bd-is-parallel-to-ae-5829.html | It is currently 13 Jul 2020, 15:13
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# BD is parallel to AE.
BD is parallel to AE. [#permalink] 02 Aug 2017, 08:31
Attachment: #GREpracticequestion BD is parallel to AE..jpg (figure: triangle ACE, with B on side CA and D on side CE so that BD is parallel to AE; the segments are labelled CB = x, BA = w, CD = y, DE = z)
BD is parallel to AE.
Quantity A: xz
Quantity B: wy
A) Quantity A is greater.
B) Quantity B is greater.
C) The two quantities are equal.
D) The relationship cannot be determined from the information given.
Re: BD is parallel to AE. [#permalink] 23 Sep 2017, 20:23
Here given BD is parallel to AE,
so angle CBD = angle CAE, and angle BDC = angle AEC.
Therefore the triangles BCD and ACE are similar (Angle C is common to both triangles and by AAA triangle BCD and triangle ACE are similar)
Now it is given that side BC = x, AB = w and AC = x + w,
and side CD = y, DE = z and CE = y + z.
Now, as both triangles BCD and ACE are similar,
therefore we have
$$\frac{CB}{CA}$$ = $$\frac{CD}{CE}$$
substitute the values we get
$$\frac{x}{(x+w)}$$= $$\frac{y}{(y+z)}$$
or x(y+z) = y(x+w)
or xy + xz = xy + wy
or xz = wy. So option C.
Re: BD is parallel to AE. [#permalink] 24 Sep 2017, 07:36
Thank you.
Re: BD is parallel to AE. [#permalink] 26 Sep 2017, 17:04
I don't understand.
CA could be a different length than CE.
Can anyone explain? Thank you
Re: BD is parallel to AE. [#permalink] 26 Sep 2017, 20:48
pclawong wrote:
I don't understand.
CA could be different length than CE
Can anyone explain? Thank you
$$\frac{{CB}}{{CA}} = \frac{{CD}}{{CE}}$$ because the triangles are similar, and the ratios of their corresponding sides have to be equal.
CA= x+w (total length of CA, x & w are given)
CE = y+z (total length of CE, y & z are given)
Re: BD is parallel to AE. [#permalink] 12 Nov 2018, 18:59
pranab01 wrote: [solution quoted above — see the earlier post]
It would be better to state the reason: if two triangles are similar, the ratios of their corresponding sides are equal.
Please let me know if I am wrong.
Intern
Re: BD is parallel to AE. (09 Jan 2019, 12:41)
thank you
Director
Re: BD is parallel to AE. (11 Jan 2019, 14:53)
pclawong wrote:
I don't understand.
CA could be different length than CE
Can anyone explain? Thank you
By maintaining ratio, any length is possible.
Intern
Re: BD is parallel to AE. (09 May 2019, 04:02)
A line drawn parallel to one side of a triangle divides the other two sides proportionally. So x/w and y/z have the same ratio; hence xz and wy are equal.
Intern
Re: BD is parallel to AE. (16 May 2019, 04:02)
Hi,
This is a question of proportionality of sides of two triangles that are similar. The larger and the smaller triangles are similar due to two common angles.
Here, x/w=y/z
So, xz=wy.
Hence, option (C) is the right answer.
Active Member
Re: BD is parallel to AE. (30 Sep 2019, 06:16)
Let's say x = a*w where 'a' is the constant term of proportionality between a side of the little triangle and the corresponding side of the larger triangle.
Then, it must be also that: y = a*z.
Therefore:
x*z = (a*w)*z
w*y = w*(a*z)
The two are equal, hence C.
Manager
Re: BD is parallel to AE. (03 May 2020, 10:13)
grewhiz wrote:
Hi,
This is a question of proportionality of sides of two triangles that are similar. The larger and the smaller triangles are similar due to two common angles.
Here, x/w=y/z
So, xz=wy.
Hence, option (C) is the right answer.
Can we reliably predict this relation? I thought the only relation we can predict from the diagram is x/(x+w) = y/(y+z)
Display posts from previous: Sort by | 2020-07-13 23:13:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40639787912368774, "perplexity": 8166.01618396644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147031.78/warc/CC-MAIN-20200713225620-20200714015620-00151.warc.gz"} |
https://engineering.stackexchange.com/questions/5494/steel-selection-for-building-a-trailer | # Steel selection for building a trailer
I'm thinking about building another trailer. I've built lots of smaller trailers in the past, but this time I'd like to build myself a small tandem axle gooseneck rated for 7,500 lbs.
I am a certified welder, I have a Bachelor's degree in Physics, and I work as a software developer. I have the know-how, but I would like some input on material selection.
• Round Pipe
• Angle Iron
• I-Beam
• Rectangular Tubing
I am currently leaning towards the Rectangular Tubing as the best for support, but I cannot seem to find anything online that confirms this.
After deciding on the best material for the trailer, where would I find good charts to tell me what size and thickness I should use? Obviously, I could go overkill, but I would like to build this one smarter rather than throwing as much iron at it as I have.
Does anyone have any input? Is there a better group to post this in? I was looking for something along the lines of Industrial Engineering, but this is all that pulled up.
EDIT:
I was trying to keep this a generic question where someone could tell me something like, "Here is the formula we use, and this is how to use it..." It looks like I won't get that, though.
My heaviest load would be a tractor with a front end loader and a brush cutter on the back with a total weight of 5500 to 6500 lbs. A tandem axle trailer with two (2) 3500-lbs axles can support this load fine. I have selected axles from Southwest Wheel's torsion axle with brakes (the front axle will have brakes, but not the rear).
Trailer length will be 18-foot, and have a gooseneck configuration (it distributes the weight better and pulls smoother than a bumper trailer). For calculations, I'm going to use 7500-lb capacity.
I am looking at the structural data for square tubing using a spec sheet HERE (trying not to advertise another website, but that is where I see data). Page 21 shows data values for various sizes and thicknesses.
There is a line called Bending Factor. For an 18-foot trailer (18 x 12 = 216 inches), 3/8-inch thick 4x2 square tubing shows a Bending Factor of (x=1.03 , y=1.55).
I was using Rogue Fabrication's Calculator yesterday, where I entered the following values: Tube Shape=Square Tubing, Outside Diameter=4-in, Wall Thickness=0.1875-in, Material="Cheap seamed tube", Load=3800-lbs, Tube Length=216-in, and Hazard Factor=1, I got that my material is 1.22 times as strong as the loading conditions.
Next, I tried EasyCalculation's Beam Deflection Calculator, with values of Length=216, Width=2, Height=4, Wall Thickness=0.1875, Force=3750. It shows a deflection of about 100 inches for 2 lengths of rectangular tubing. If I use 4 lengths, that drops the force down to 7500/4=1875 per beam, and deflection down to 50 inches. Those deflection values seem really high. That is more iron than most trailers have.
The old tandem axle trailer I use now only has two (2) lengths of 4-inch angle iron (1/4-thick). It flexes a couple of inches, but not 50 inches. I must be missing something.
## How do I calculate the amount of flex a 20-ft length of material would have?
If square tubing is not best, that's fine as long as you let me know what would be better and how you selected that configuration when you comment.
• This question doesn't seem to be within the scope of this site. It is a mix of subjective opinion (every cross-section has its advantages and disadvantages, so you can't define the "best" one without knowing what is precisely meant by that) and of requesting references, neither of which are in scope. – Wasabi Sep 21 '15 at 17:12
• The shape of a member affects its area moment of inertia, which affects the loading in the member. You need to define a load, a factor of safety, a material, and then play with dimensions and shape until you find a combination that gets your load*(factor of safety) under the yield strength of the material. I put this in a comment instead of an answer because you haven't defined anything about the trailer except the generic class and that you want a tractor on it. If you want a reference try any "deformable bodies" or "machine design" text. – Chuck Sep 21 '15 at 19:33
• We aren't asking to make it as complicated as possible. We're asking to make it as correct as possible. Figuring out good design is not trivial and should not be taken lightly. – Wasabi Sep 22 '15 at 23:11
• You have 3 options - do the math and get the right parts, copy someone else's work and hope they did the math, or guess and hope it works out okay. I mean, what's the worst a 3.5 ton load hitched to a car going 55mph on a populated highway can do? Sarcasm aside, you should probably either take this seriously and do the work or find a different project. – Chuck Sep 23 '15 at 0:13
• In our office we have a general rule ... "if something moves, it's mechanical" Maybe try tagging this with Mechanical Engineering? The dynamic loads on the beams and especially the connections would become critical. Just doing a static analysis on the structure would not be enough. – NamSandStorm Sep 28 '15 at 12:59
# Here's the formula(s) we use
### Beam Bending (available on Wikipedia)
$$EI\frac{d^4\,\delta y}{d\,x^4}=q(x)$$ $$I=\int (y-\bar y)^2 dA$$ $$\bar y= \frac1{A}\int y \;dA$$
$$\sigma_{max} = y_{max}\,E\,\left(\frac{d^2\,\delta y}{d\,x^2}\right)_{max}$$
Where $A$ is the cross-sectional area of the beam, $y$ is the position along the axis in the direction of the beam loading, $\delta y$ is the deflection in the direction of loading, $E$ is the modulus of elasticity (search matweb to get a value for your material), and finally $q(x)$ is the load per distance function.
### Here's how to use them
For a rectangular tube with height $H$, width $W$ and thickness $t$ we have:
$$\bar y=0$$ $$I=W\int_{-\frac{H}2}^{\frac{H}2}y^2\;dy-(W-2t)\int_{-\frac{H}2+t}^{\frac{H}2-t}y^2\;dy=\frac{H^3W-(H-2t)^3(W-2t)}{12}$$
For $H=4\,in\,,\quad W=2\,in\,,\quad t=.1875\,in$
$$I\approx4.2\,in^4$$
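As a quick numerical check of that value, here is a one-line R sketch (the helper function and its name are mine, not the answer's):

```r
# Area moment of inertia of a rectangular tube, from the formula above
I_tube <- function(H, W, t) (H^3 * W - (H - 2 * t)^3 * (W - 2 * t)) / 12
I_tube(H = 4, W = 2, t = 0.1875)   # ~4.22 in^4, matching the value above
```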
Now there's beam loading, this is likely where you ran into difficulty. First lets take a look at a cantilever beam:
Here there are just two points that are loaded, the support and the tip. Think of a diving board scenario. We'll say the support is at $x=0$ and the load $F$ is at $x=L$ $$q(x)= -\delta(x)F+\delta(x-L)F$$
$$EI\frac{d^4\,\delta y}{d\,x^4}=q(x)$$
$$EI\frac{d^3\,\delta y}{d\,x^3}=\int_{0^-}^xq(x) dx = F$$
This is basically saying that there's a constant shearing stress in the beam the whole way.
$$EI\frac{d^2\,\delta y}{d\,x^2}=\int F dx +C=F(x-L)$$
This expression is for the bending moment in the beam. We know that the free end must have a bending moment of zero so we set the integration constant to accommodate that.
$$\frac{d\,\delta y}{d\,x}=\frac1{EI}\int F(x-L) dx +C=\frac{F}{EI}(\frac12 x^2-Lx)$$
This represents the slope of the deflected beam. Here we know that the slope must be zero at the support so we've set the integration constant accordingly.
$$\delta y=\frac{F}{EI}\int \frac12 x^2-Lx \; dx +C=\frac{F}{EI}(\frac16 x^3-\frac{L}2 x^2)$$
Here we know the deflection is zero at the support so we set the integration constant accordingly. Now if we just want to look at the deflection at the end we plug in $x=L$
$$\delta y=-\frac{FL^3}{3EI}$$
This corresponds to the equation on last website in your post.
From matweb for medium alloy steel we have $E=30\,000\,ksi$ So plugging in:
$$\delta y= -\frac{3.750\,klb \, (216\,in)^3}{3 \cdot 30000\,ksi \cdot 4.2\,in^4}\approx -100\,in$$
This is exactly what the online calculator produced. However, if you tried to load a beam like this it would permanently deform. An 18 foot lever arm is really long and will bend the snot out of a 4in thin wall beam with only moderate difficulty. The issue is that a trailer is not a cantilever beam.
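For reference, that tip deflection is easy to reproduce in a couple of lines (R used for illustration; units as in the text):

```r
# delta_y = -F L^3 / (3 E I) for a tip-loaded cantilever
F <- 3.750   # klb
L <- 216     # in
E <- 30000   # ksi
I <- 4.2     # in^4
-F * L^3 / (3 * E * I)   # about -100 in, as the online calculator gave
```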
So let's take a look at a more reasonable loading scenario. Let's model the axles as point loads located $40\,in$ and $80\,in$ from the end of the trailer, the $7500\,lbf$ load as distributed over the rearward $18\,ft$, and the gooseneck support an additional $5\,ft$ in front of that.
Now some of our loads aren't known yet, but we can figure out some of them in the process. Some of them we can't though, so let's add an additional constraint. The weight distribution will be split between the axles according to the variable $\alpha$
$$F_{axles}=F_{rear} \frac1{\alpha}=F_{front}\frac1{(1-\alpha)}$$
Now we have:
$$q(x)=-\frac{F}{L}H(L-x)+F_{axles}\left(\alpha\,\delta(x-x_{rear})+(1-\alpha)\,\delta(x-x_{front})\right)+(F-F_{axles})\,\delta(x-x_{goose})$$
Integrating:
$$EI\frac{d^3\,\delta y}{d\,x^3}=\begin{cases} -\frac{F}{L}x & x\leq x_{rear} \\ -\frac{F}{L}x+F_{axles}\alpha & x_{rear} \lt x\leq x_{front} \\ -\frac{F}{L}x+F_{axles} & x_{front} \lt x\leq L \\ F_{axles}-F & L \leq x \end{cases}$$
Then integrating again:
$$EI\frac{d^2\,\delta y}{d\,x^2}=\begin{cases} -\frac{F}{2L}x^2 & x\leq x_{rear} \\ -\frac{F}{2L}x^2+F_{axles}\alpha (x-x_{rear}) & x_{rear} \leq x\leq x_{front} \\ -\frac{F}{2L}x^2+F_{axles} (x-(1-\alpha)x_{front}-\alpha x_{rear}) & x_{front} \leq x\leq L \\ (F_{axles}-F) (x-x_{goose}) & L \leq x \end{cases}$$
Note that this bending moment must be continuous, and both ends must be zero, as there is no bending moment applied at the ends (they are free to rotate). This leads to an additional constraint that can be used to find $F_{axles}$:
$$F_{axles}=F\frac{x_{goose}-\frac{L}2}{x_{goose}-(1-\alpha)x_{front}-\alpha x_{rear}}$$
However, to keep the expressions shorter let's leave $F_{axles}$ in the expressions.
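As a quick sanity check of that expression (R used for illustration; the geometry is from the text, and $\alpha = 0.5$ is my assumed even split between the axles):

```r
F <- 7500; L <- 216                      # lbf, in
x_rear <- 40; x_front <- 80              # axle positions
x_goose <- L + 60                        # gooseneck support 5 ft past the bed
alpha <- 0.5                             # assumed axle weight split
F_axles <- F * (x_goose - L / 2) /
  (x_goose - (1 - alpha) * x_front - alpha * x_rear)
c(axles = F_axles, gooseneck = F - F_axles)   # ~5833 lbf and ~1667 lbf
```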
Now the slope will be:
$$\frac{d\,\delta y}{d\,x}=\frac1{EI}\begin{cases} -\frac{F}{6L}x^3+C_1 & x\leq x_{rear} \\ -\frac{F}{6L}x^3+F_{axles}\alpha \frac12 ( x-x_{rear})^2+C_1 & x_{rear} \leq x\leq x_{front} \\ -\frac{F}{6L}x^3+F_{axles}(\alpha \frac12 (x-x_{rear})^2+(1-\alpha)\frac12(x-x_{front})^2)+C_1 & x_{front} \leq x\leq L \\ (F_{axles}-F)\frac12 (x-x_{goose})^2 +C_2 & L \leq x \end{cases}$$
And at this point I moved over to a numerical solution. I integrated again and found values for all the constants such that both the slope and displacement were continuous and the displacement at the goose and the rear axle were zero. The resulting deflection had a maximum at about 2 inches. But I used the full load and I should have used half the load giving 1 inch. That sounds about right to me.
Note that the peak bending moment is $9\, kN \,m$, which when multiplied by half our height and divided by our area moment gives a peak stress of $38\,ksi$; this is about 13% of the yield strength of the medium alloy steel on matweb. You might think this would be sufficient; however, this is only for a static trailer, not one moving and bumping around.
The acceleration forces on a trailer could easily triple the load over short periods. Additionally, the bumps in the road will cycle the loading making it not the yield strength you want to look at but the fatigue strength at the appropriate number of cycles you'd like the trailer to last. The fatigue strength may be as low as 10% of the yield strength, so I would want a minimum load factor of about 30 (3/10%), then add a factor of safety of 2 and your beams need to be about 60 times stronger than would be required to just meet your yield stress in a static load scenario. In short, I would go with bigger beams.
• +1 for the spectacular answer. I hope @jp2code realizes the effort involved in making "just a trailer". – Chuck Sep 30 '15 at 15:36
• @Chuck, I doubt any gooseneck trailer manufacturers have used these calculations. This is a spectacular answer that I may eventually accept, but I would like to know how manufacturers determine what size material they need when building a trailer for a given load range. – jp2code Sep 30 '15 at 17:15
• @jp2code it's this or guesswork. – Chuck Sep 30 '15 at 17:57
• @jp2code Like most problems, once you solve the problem once you can make a tool to re-do all the calculations when your numbers change. So, no they don't go through this for every trailer design. They made a tool to do it for them. Then they probably verify their design with an FEA analysis. I doubt any gooseneck trailer manufactures have used less than this level of detail calculations, it's just likely to be embedded in a tool similar to the online tools you found. – Rick Sep 30 '15 at 19:04
• I'm not sure that this really adds anything useful to real-world design. I once designed a 6-lane road bridge without integrating anything at all. I think that it's still standing. Engineers don't integrate. – Paul Uszak Dec 18 '15 at 2:22
Here is some additional information and lengthy discussion on trailer design criteria. There is even a white paper in the thread on loading and safety factors that should be used:
There are a lot of other threads on that site providing good information regarding trailer design.
For what it's worth, I'd start my structure design with some type of rectangular steel section in mind for a trailer. They are regularly available and "easy" to work with(cut, drill, weld, mount other components, etc).
For the structure of a trailer, rectangular section tube is likely to be the most efficient compromise between strength, stiffness, and ease of design and manufacture. Round tube is a bit stronger weight for weight but much more difficult to assemble and join accurately, simply because rectangular tube has convenient flat surfaces.
As already mentioned, things like this aren't designed by calculus in the real world, and by far your best bet is to copy an existing design: failures in this sort of application tend to occur when you get unexpected load concentrations rather than from the behaviour of the design as an approximated beam, so unless you have access to FEA software, paper calculations are a bit pointless.
• I had hoped that one of the engineers on this site could have said, "It is best to use X for {something} pounds". In the end, I just guestimated: i.imgur.com/mkOJrhS.jpg – jp2code Jan 4 '16 at 22:28
• The problem is that the actual load on the trailer is a small part of the overall problem. What I will say is that for a 3000 kg load on an A-frame about 3 m long, 100 mm x 50 mm rectangular box section (3 mm wall thickness) is the right sort of ballpark to give you a comfortable factor of safety. – Chris Johns Jan 4 '16 at 22:38
The easy answer is don't design - cheat. Go and look for a trailer similar to what you're after. Photograph it and measure all the bits. (Don't act like you're trying to nick it). Similar sections will do, but I'd err on larger sizes. | 2019-05-23 20:50:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5194072723388672, "perplexity": 906.7347169283139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257396.96/warc/CC-MAIN-20190523204120-20190523230120-00220.warc.gz"} |
https://fr.maplesoft.com/support/help/maple/view.aspx?path=MapleTA%2FBuiltin%2Fnumfmt | numfmt - Maple Help
MapleTA[Builtin]
numfmt
format a number according to a template
Calling Sequence: numfmt(format, number)
Parameters
format - string
number - numeric
Description
• The numfmt command uses a format template to specify, among other things, how many digits to display after the decimal point. The given number is matched against the template and a formatted string is returned.
• This command has the same specification as the built-in Java command, java.text.DecimalFormat.
Examples
> MapleTA:-Builtin:-numfmt("#.00", 20.9)
"20.90" (1)
> MapleTA:-Builtin:-numfmt("#.#", 12.34)
"12.3" (2)
> MapleTA:-Builtin:-numfmt(".#", 12.34)
"12.3" (3)
> MapleTA:-Builtin:-numfmt("#.0000", 12.3456789)
"12.3457" (4)
> MapleTA:-Builtin:-numfmt("#.010", 12.3456)
"12.351" (5)
> MapleTA:-Builtin:-numfmt("#", 12.34)
"12" (6)
> MapleTA:-Builtin:-numfmt("#.#", 12.3456)
"12.3" (7)
> MapleTA:-Builtin:-numfmt("#.###", 12.3456)
"12.346" (8)
> MapleTA:-Builtin:-numfmt("000.#", 12.3456)
"012.3" (9)
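For readers without Maple, a few of these patterns have rough analogues in base R's formatC. This is a sketch only, not the DecimalFormat engine, so unusual patterns such as "#.010" will behave differently:

```r
formatC(20.9,       format = "f", digits = 2)   # "20.90"   ~ pattern "#.00"
formatC(12.3456789, format = "f", digits = 4)   # "12.3457" ~ pattern "#.0000"
formatC(12.3456,    format = "f", digits = 1,
        width = 5, flag = "0")                  # "012.3"   ~ pattern "000.#"
```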
Compatibility
• The MapleTA[Builtin][numfmt] command was introduced in Maple 18.
• For more information on Maple 18 changes, see Updates in Maple 18.
See Also | 2022-08-13 14:53:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7376757860183716, "perplexity": 3137.9694980732975}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571959.66/warc/CC-MAIN-20220813142020-20220813172020-00183.warc.gz"} |
http://sail.unist.ac.kr/members/yunseong/ | ### Brief Bio
Yunseong Hwang has been an MS student at Ulsan National Institute of Science and Technology (UNIST) since 2014. He received a B.S. degree in computer engineering from UNIST in Feb. 2014. He has experience with several machine learning algorithms and platforms such as R, scikit-learn, and Matlab. Based on this knowledge, he achieved the top 7% (43 out of 691 teams) in a machine learning competition to predict sales at Walmart. https://www.kaggle.com/yunseong
### Research Interests
His research interests are in learning and inference algorithms for Gaussian processes; he is now working on improving the Automatic Statistician, which automatically extracts human-readable reports from continuous time-series data.
#### 1. Overview of Gaussian Process and ABCD
A Gaussian process $\mathcal{GP}\left(\mu(x),k(x,x')\right)$ is a statistical distribution for which any finite set of function evaluations $[f(x_1),\dotsc,f(x_n)]$ has a joint Gaussian distribution $\mathcal{N}(m,K)$, where $m_i = \mu(x_i)$ and $K_{ij} = k(x_i,x_j)$. In most applications we don't have any prior knowledge about the mean of $f(x)$, so by symmetry we take it to be zero, which leaves only the kernel function $k(x,x')$ to specify. From an alternative viewpoint, a Gaussian process can be seen as a distribution over functions $f(x) \sim \mathcal{GP}$, since it specifies the distribution of function evaluations at any input $x$ in a possibly infinite input space $\mathcal{X}$, although its definition involves only finite sets of function evaluations. Mathematically,
$$\text{If } \mathbf{y} = [f(\mathbf{x}_1),\dotsc,f(\mathbf{x}_n)]^\mathrm{T},\quad \mathbf{X} = [\mathbf{x}_1,\dotsc,\mathbf{x}_n]^\mathrm{T}, \quad\text{and}\quad f \sim \mathcal{GP}\left(\mu(x),k(x,x')\right),$$
$$\text{then}\quad P(\mathbf{y}\mid\mathbf{X}) = \frac{1}{\sqrt{(2\pi)^n|\Sigma|}}\exp{\left(-\frac{1}{2}(\mathbf{y}-\mathbf{m})^\mathrm{T}\Sigma^{-1}(\mathbf{y}-\mathbf{m})\right)},$$
$$\text{where}\quad \mathbf{m}_i = \mu(\mathbf{x}_i) \ \text{ and } \ \Sigma_{ij} = k(\mathbf{x}_i,\mathbf{x}_j).$$
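To make the "distribution over functions" view concrete, here is a small R sketch that draws one sample path from a zero-mean GP prior with a squared exponential kernel; the grid, lengthscale and jitter values are arbitrary choices of mine:

```r
k_se <- function(xa, xb, ell = 1) exp(-outer(xa, xb, "-")^2 / (2 * ell^2))
x <- seq(0, 5, length.out = 200)
K <- k_se(x, x) + diag(1e-8, length(x))   # jitter for numerical stability
set.seed(3)
f <- t(chol(K)) %*% rnorm(length(x))      # one draw: f ~ N(0, K)
plot(x, f, type = "l")
```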
Automatic Bayesian Covariance Discovery (ABCD, by Lloyd et al.) is a system that discovers a covariance function which can properly model the covariance pattern along the function evaluations. ABCD is mostly based on two key facts about kernel compositions:
$$\text{If } f_1(x) \sim \mathcal{GP}(0,k_1) \text{ and independently } f_2(x) \sim \mathcal{GP}(0,k_2),$$
$$\text{then}\quad f_1(x) + f_2(x) \sim \mathcal{GP}(0,k_1 + k_2), \qquad f_1(x) \times f_2(x) \sim \mathcal{GP}(0,k_1 \times k_2),$$
$$\text{where}\quad (k_1 + k_2)(x,x') = k_1(x,x') + k_2(x,x') \ \text{ and } \ (k_1 \times k_2)(x,x') = k_1(x,x') \times k_2(x,x').$$
ABCD uses these compositions to find a suitable kernel function. Given a starting kernel expression, it iteratively and greedily builds a composite kernel by applying those operations to the best-so-far kernel with each base kernel as the other operand, and then again selects the best kernel among the expanded candidates. Since a composite kernel expression can be written in sum-of-products form, one benefit of the system is that the data/signal can be interpreted separately for each additive component. In addition, in ABCD those additive components can be described in natural language based on each base kernel's characteristics; for example, the smoothing effect of the squared exponential kernel contributes a smooth function whose smoothness matches the kernel's lengthscale parameter.
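The search loop itself is easy to sketch. Below is a minimal R skeleton of that greedy expansion; the score function is only a stand-in for the real marginal-likelihood/BIC computation, and none of this is the actual ABCD code:

```r
base_kernels <- c("SE", "Per", "Lin")

# Expand a kernel expression by adding or multiplying in each base kernel
expand <- function(expr) {
  unlist(lapply(base_kernels, function(b)
    c(paste0("(", expr, ") + ", b), paste0("(", expr, ") * ", b))))
}

# Placeholder score (lower is better); a real system would fit a GP here
score <- function(expr) nchar(expr) + runif(1)

set.seed(4)
best <- "SE"
for (depth in 1:3) {
  candidates <- expand(best)
  best <- candidates[which.min(vapply(candidates, score, numeric(1)))]
}
best   # the greedily grown composite kernel expression
```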
#### 2. Motivational Example
The original ABCD system is discussed with only single time series data, and the motivation of my work is based on a question: what if there were a similar system for multiple time series or multidimensional data? The following figure is a comparison between the original ABCD system (left) and the new system with multiple time series data (right). The first row shows the raw data and the following rows depict separate additive components for those signals.
The notable part of this figure is that by using multiple data, the new system can tell that there is a sudden drop of value after Sep. 11 throughout the data, while the original cannot selectively explain that part as an additive component (only explains with a smooth function).
#### 3. Datasets and Results
Here are some results about the fitness of the new system in comparison with the original ABCD system. Two kinds of datasets are used for the experiment, US stock data and US house price index data. Here BIC denotes Bayesian Information Criterion, N is number of data points and P is number of parameters.
$$\begin{array}{|l|r|r|r|r|r|}
\hline
 & & ABCD & ABCD & NEW & NEW \\
\hline
SET & N & P & BIC & P & BIC \\
\hline
Top 3 stocks & 387 & \textbf{13} & \textbf{686.05} & 22 & 750.65 \\
Top 6 stocks & 774 & \textbf{21} & \textbf{2141.76} & 49 & 2219.71 \\
Top 9 stocks & 1161 & \textbf{38} & 4167.40 & 73 & \textbf{3985.03} \\
\hline
\end{array}$$
The BIC measure of both systems on the stock data set. 'Top 3', 'Top 6' and 'Top 9' stocks are selected by their market capitalization ranks in 2011. The NEW system requires more parameters than ABCD, but the NEW system model trained with 9 stocks shows better performance than the individually optimized ABCD models in terms of the BIC measure.
$$\begin{array}{|l|r|r|r|r|r|}
\hline
 & & ABCD & ABCD & NEW & NEW \\
\hline
SET & N & P & BIC & P & BIC \\
\hline
Top 2 cities & 240 & \textbf{12} & 663.54 & 20 & \textbf{634.00} \\
Top 4 cities & 480 & \textbf{14} & \textbf{1260.05} & 38 & 1424.18 \\
Top 6 cities & 720 & \textbf{23} & \textbf{1972.58} & 61 & 2100.62 \\
\hline
\end{array}$$
The BIC measure of both systems in the housing market data set. ‘Top 2′, ‘Top 4′ and ‘Top 6′ US cities are selected in terms of their city population rank. The BIC measures of the NEW system models are similar or better than the measures of individually trained ABCD models.
### Contact
School of Electrical and Computer Engineering, UNIST
50 UNIST-gil, EB2 502, Ulsan, 689-798, Korea
Phone: 010-6518-1260
Email: yunseong@unist.ac.kr | 2020-02-17 18:08:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7189016938209534, "perplexity": 2126.0666252134965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00528.warc.gz"} |
https://cs.stackexchange.com/tags/network-analysis/hot?filter=year | # Tag Info
Another heuristic idea: Find a long shortest path, and pick the vertex halfway along it. Pick a vertex and run BFS from it. For some small $k$, take the $k$ furthest vertices from the original vertex that the BFS determines, and repeat the process on each of them, keeping the $k$ overall furthest vertices each time. Repeat a few times. If the graph is a ...
If you consider the format of the TCP-IP datagram. Source Address: The 32-bit IP address of the originator of the datagram. Note that even though intermediate devices such as routers may handle the datagram, they do not normally put their address into this field—it is always the device that originally sent the datagram. Destination Address: The 32-bit IP ...
After a bit of reading through literature I've come upon "closeness centrality" which is the reciprocal of what I'm calculating (mean distance, which they call "farness" in the article). But I still haven't found any algorithms for finding the "closeness center" (node with maximum closeness centrality) that is faster than $O(N^2)$. As a heuristic, I have ...
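For small graphs the exact computation is a few lines with igraph (a sketch; the random toy graph is my own choice and is assumed connected):

```r
library(igraph)
set.seed(5)
g <- sample_gnp(200, 0.04)    # toy random graph
d <- distances(g)             # all-pairs shortest paths: the O(N^2) part
farness <- rowSums(d)         # "farness" as defined above
which.min(farness)            # the closeness center (max closeness centrality)
```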
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-10-31 19:29:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4900771379470825, "perplexity": 1572.7407929498556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922411.94/warc/CC-MAIN-20201031181658-20201031211658-00531.warc.gz"} |
http://horizonone.net/index.html | Robert Miller
RESUME
Education
Bachelor of Science – Mechanical Engineering
Master of Science – Aerospace Engineering
Experiences
NASA Jet Propulsion Laboratory
Projects
REMORA-6U and 12U Spacecraft
Open Source Modular CubeSat
Recent projects
Solar Sail Spacecraft
DARS is a solar sail CubeSat that has been in development for the past three years. DARS aims to deploy a 40-square-meter sail and provide passive thrust technology to the small satellite community. During the development phase, various deployment and sail-packing methodologies were explored. A series of deployment tests and thermal, power, and trajectory analyses were conducted to determine mission feasibility. DARS is still in development and is aiming for a launch date of late 2020.
DARPA REMORA
The REMORA-6U and REMORA-12U satellites are capable of large-object rendezvous, attachment, tracking, and collision-avoidance mitigation. The mission's feasibility was assessed during the summer of 2017, when a preliminary design, mission concept, and analyses were completed, ultimately determining that REMORA is capable of the initial mission.
CubeSat Thermal Analysis
A CubeSat analysis was done in conjunction with the ongoing development of the ARC2 (Alaska Research CubeSat 2) spacecraft to provide additional insight into the thermal environment and behavior of the spacecraft. The analysis accounted for various material properties, a dynamic solar flux value, planetary albedo, diffuse radiation, and their related view factors relative to the spacecraft. | 2018-05-22 21:21:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4794086515903473, "perplexity": 6948.612786999436}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864968.11/warc/CC-MAIN-20180522205620-20180522225620-00523.warc.gz"} |
http://rpg.stackexchange.com/questions/2564/where-does-the-dd-concept-of-a-dwarf-come-from/2572 | # Where does the D&D concept of a dwarf come from?
Where does the modern D&D of a dwarf come from? I've noticed it's fairly far from the old Germanic concept of a dwarf. Where did Gary Gygax and company get the idea for the current long bearded, honorable, armored, axe wielding creature we know (and love) today?
Tolkien's Hobbit, Lord of the Rings, and the Silmarillion lead directly to Perren and Gygax's minis-battles fantasy supplement, Dwarves in Chainmail (see Chainmail 3rd Ed, p. 28, and the later designer's notes article). Also, Gygax and Arneson made much use of this in the games which would later become D&D.
Tolkien claimed inspiration from the Norse and Anglo-Saxon dwarves, but admitted to changing them substantially. Terry Brooks (Shanarra Series), Poul Anderson, and several other authors specifically draw from Tolkien's presentation of the race of Dwarves.
A number of games specifically move away from Tolkien; Some make them short Norse, some make them otherwise different (like the Mostal of Glorantha, or the Shtuntee of Orkworld), and later D&D dwarves are drifted from Tolkien's model as well.
Most games follow the D&D pseudo-Tolkienian model. Most modern authors do so as well; Tolkien has been noted as stating he intended to create a saga for the English Speaking World; by most accounts, he's succeeded.
His vision of Elves, Dwarves, Hobbits/Halflings, Trolls, Orcs, and Goblins have become the English cultural norm; only Elfquest has given much challenge to this superiority, tho' the medieval English and French fairy elf views still have some traction, especially since they have much place in another still widely held cultural myth: King Arthur.
To give something similar to Aramis' excellent answer but with a different emphasis: the word "dwarf" is etymologically related to the German word "Zwerg": small, magical people who live on mountains or underground, like their privacy, and may do good or harmful things to humans in their rare contact.
Gygax read widely, not just Howard & Leiber, but also fairy tales, &c. so his concept of dwarf is a bringing together of many strands that came from this source. It would be great if someone could document all of the characteristics and how they arose and were inherited by various incarnations of the dwarf archetype.
Zwerg are thematically more like the D&D Gnomes than the D&D Dwarves. – aramis Apr 13 '14 at 9:37
Of course the Hobbit (1937) has some pretty dwarfy dwarves in it. Not a perfect D&D analog but it would be hard to say that other dwarf fantasy wasn't influenced by Tolkien.
All modern fantasy is influenced by Tolkien. Well, ok, just 99.9% of it. :) – BBlake Sep 17 '10 at 2:58
I would say all. The ones that aren't similar are deliberately so, often trying to be different. That's still an influence. – Covar Nov 15 '10 at 4:12
Poul Anderson's Three hearts and Three Lions (1961) has a very modern sort of dwarf in it, and it was definitely an influence on early D&D.
As Aramis pointed out, the basis for the English-speaking world's common perception of dwarves is Tolkien's mythology, which is in turn based on Old Norse mythology.
A fun fact which, might also serve to demonstrate this point: Tolkien borrowed many of his characters' names, especially the dwarf names, from old Norse writing.
Prominent here, from the Poetic Edda/Völuspá II, the seeress' prophecy, two verses:
Then went reigns all
to their smoking seats,
the high-holy gods
held council:
Who should the Dwarfs,
the kings' men create,
from oceans blood
and the blue calves.

[…]

Measure is the Dwarfs
in Dvalin's flock
the men of lions,
and Lofars count.
There they went
from temples rocks,
to Aurvanga shoots,
and earth dwellings.
There are many more verses, listing dozens of dwarf names -- many of which should be quite familiar to fans of Tolkiens writings.
Norse Mythology, though there they are sometimes referred to as Black Elves or Svartalfar.
Could you perhaps go into more detail? – Oblivious Sage Mar 1 '15 at 23:03 | 2016-07-23 21:22:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3501184284687042, "perplexity": 7941.322833994593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823670.44/warc/CC-MAIN-20160723071023-00103-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.codebus.net/d-hgfC.html | Description: Four attractive site-navigation tab styles implemented in pure CSS. There is a rounded-corner menu as well as a TAB menu, each with its own character. The techniques used are quite ordinary, primarily DIV + CSS; have a look at the demo screenshots and you will see. The four menus use GIF images to dress up the menu effects, since most menu styling nowadays depends on images. For reference only.
File list:
Filename / Size / Updated: codesc.net\tabbednavigation\images\1\center.jpg 10706 2008-03-25 | 2021-01-21 15:33:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22537517547607422, "perplexity": 6473.433413696813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00608.warc.gz"} |
https://quant.stackexchange.com/questions?sort=unanswered&page=5 | All Questions
3,763 questions with no upvoted or accepted answers
1k views
How to show that this weak scheme is a cubature scheme?
Weak schemes, such as Ninomiya-Victoir or Ninomiya-Ninomiya, are typically used for discretization of stochastic volatility models such as the Heston Model. Can anyone familiar with Cubature on ...
4k views
Bridgewater's Daily Observations
Bridgewater Associates send out Daily Observations to their clients, but I haven't found many traces of these publications online. The series started some 40 years ago by Ray Dalio, and there're just ...
781 views
Alternative to Block Bootstrap for Multivariate Time Series
I currently use the following process for bootstrapping a multivariate time series in R: Determine block sizes - run the function b.star in the np package which produces a block size for each series ...
342 views
Market Maker portfolio management
I am interested in articles/strategies related to portfolio and inventory management for market makers and to management of order cancellation, updates of order, etc. Most of the strategies from ...
192 views
Determining Hurst exponent of a Brownian motion
I am trying to determine the Hurst exponent of a simple Brownian motion, however, I seem to get a result that differs from 0.5. I am following the instructions given on the Wikipedia-page, and here is ...
297 views
Transition densities in the Heston model
Knowing the Characteristic function $\Phi_{T,t} = \mathbb{E} [ e^{i u S_T} | S_t, V_t]$ (or equivalently, the Laplace transform) of an affine process, it's possible to know the distribution of the ...
377 views
What is the most convenient data structure for backtesting a model of futures options prices?
I have an empirical model for the dynamics of futures prices in a particular market that I have implemented using a long series of the front five contracts. (I account for the roll in my model.) I ... | 2020-05-28 22:25:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7392839193344116, "perplexity": 1785.5409079318836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00085.warc.gz"} |
https://arbital.greaterwrong.com/p/arithmetical_hierarchy?l=1mg | Arithmetical hierarchy
The arithmetical hierarchy classifies statements according to the number of unbounded $$\forall x$$ and $$\exists y$$ quantifiers, treating adjacent quantifiers of the same type as a single quantifier.
The formula $$\phi(x, y) \leftrightarrow [(x + y) = (y + x)],$$ treating $$x$$ and $$y$$ as constants, contains no quantifiers and would occupy the lowest level of the hierarchy, $$\Delta_0 = \Pi_0 = \Sigma_0.$$ (Assuming that the operators $$+$$ and $$=$$ are themselves considered to be in $$\Delta_0$$, or from another perspective, that for any particular $$c$$ and $$d$$ we can verify whether $$c + d = d + c$$ in bounded time.)
Adjoining any number of $$\forall x_1: \forall x_2: ...$$ quantifiers to a statement that would be in $$\Sigma_n$$ if the $$x_i$$ were considered as constants, creates a statement in $$\Pi_{n+1}.$$ Thus, the statement $$\forall x: (x + 3) = (3 + x)$$ is in $$\Pi_1.$$
Similarly, adjoining $$\exists x_1: \exists x_2: ...$$ to a statement in $$\Pi_n$$ creates a statement in $$\Sigma_{n+1}.$$ Thus, the statement $$\exists y: \forall x: (x + y) = (y + x)$$ is in $$\Sigma_2$$, while the statement $$\exists y: \exists x: (x + y) = (y + x)$$ is in $$\Sigma_1.$$
Statements in both $$\Pi_n$$ and $$\Sigma_n$$ (e.g. because they have provably equivalent formulations belonging to both classes) are said to lie in $$\Delta_n.$$
Quantifiers that can be bounded by $$\Delta_0$$ functions of variables already introduced are ignored by this classification schema: the sentence $$\forall x: \exists y < x: (x + y) = (y + x)$$ is said to lie in $$\Pi_1$$, not $$\Pi_2$$. We can justify this by observing that for any particular $$c,$$ the statement $$\forall x < c: \phi(x)$$ can be expanded into the non-quantified statement $$\phi(0) \wedge \phi(1) ... \wedge \phi(c)$$ and similarly $$\exists x < c: \phi(x)$$ expands to $$\phi(0) \vee \phi(1) \vee ...$$
This in turn justifies collapsing adjacent quantifiers of the same type inside the classification schema. Since, e.g., we can uniquely encode every pair (x, y) in a single number $$z = 2^x \cdot 3^y$$, to say “there exists a pair (x, y)” or “for every pair (x, y)” it suffices to quantify over z encoding (x, y) with x and y less than z.
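For small values this encoding is easy to play with; here is a toy illustration in R (the helper names are mine):

```r
encode <- function(x, y) 2^x * 3^y
decode <- function(z) {
  x <- 0; while (z %% 2 == 0) { z <- z / 2; x <- x + 1 }
  y <- 0; while (z %% 3 == 0) { z <- z / 3; y <- y + 1 }
  c(x = x, y = y)
}
decode(encode(5, 7))   # recovers x = 5, y = 7
```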
We say that $$\Delta_{n+1}$$ includes the entire sets $$\Pi_n$$ and $$\Sigma_n$$, since from a $$\Pi_{n}$$ statement we can produce a $$\Pi_{n+1}$$ statement just by adding an inner $$\exists$$ quantifier and then ignoring it, and we can obtain a $$\Sigma_{n+1}$$ statement from a $$\Pi_{n}$$ statement by adding an outer $$\exists$$ quantifier and ignoring it, etcetera.
This means that the arithmetic hierarchy talks about power sufficient to resolve statements. To say $$\phi \in \Pi_n$$ asserts that if you can resolve all $$\Pi_n$$ formulas then you can resolve $$\phi$$, which might potentially also be doable with less power than $$\Pi_n$$, but can definitely not require more power than $$\Pi_n.$$
Consequences for epistemic properties
All and only statements in $$\Sigma_1$$ are verifiable by observation. If $$\phi \in \Delta_0$$ then the sentence $$\exists x: \phi(x)$$ can be positively known by searching for and finding a single example. Conversely, if a statement involves an unbounded universal quantifier, we can never be sure of it through simple observation because we can’t observe the truth for every possible number.
All and only statements in $$\Pi_1$$ are falsifiable by observation. If $$\phi$$ can be tested in bounded time, then we can falsify the whole statement $$\forall x: \phi(x)$$ by presenting some single x of which $$\phi$$ is false. Conversely, if a statement involves an unbounded existential quantifier, we can never falsify it directly through a bounded number of observations because there could always be some higher, as-yet untested number that makes the sentence true.
This doesn’t mean we can’t get probabilistic confirmation and disconfirmation of sentences outside $$\Sigma_1$$ and $$\Pi_1.$$ E.g. for a $$\Pi_2$$ statement, “For every x there is a y”, each time we find an example of a y for another x, we might become a little more confident, and if for some x we fail to find a y after long searching, we might become a little less confident in the entire statement.
Parents: Mathematics
Mathematics is the study of numbers and other ideal objects that can be described by axioms. | 2021-12-02 00:09:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6836138367652893, "perplexity": 330.40020009825764}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00599.warc.gz"} |
http://www.dolarfiyatibugun.com/rig-veda-jcruho/fy4jr3.php?page=bayesian-linear-regression-in-r-4ddcb5 | Posted on November 17, 2013 by Christos Argyropoulos in R bloggers (Statistical Reflections of a Medical Doctor).
Bayesian methods are sure to get some publicity after Val Johnson's PNAS paper regarding the use of Bayesian approaches to recalibrate p-value cutoffs from 0.05 to 0.005. So how can one embark on the Bayesian journey by taking small steps towards the giant leap? Though there are excellent resources out there to deal with the philosophy/theory, and the necessary tools to implement Bayesian analyses exist in R (JAGS, OpenBUGS, WinBUGS, STAN), newcomers will face some hurdles in this journey: philosophical (the need to adapt to an "alternative" inferential lifestyle), practical (gather all the data that came before one's definitive study, and process them mathematically in order to define the priors), and technical (learn the tools required to carry out Bayesian analyses and summarize results). My own (admittedly biased) perspective is that many people will be reluctant to simultaneously change too many things in their scientific modus operandi, so let's see how it is possible to cater to the needs of the lazy, inert, or horribly busy researcher. A least-resistance journey to Bayesianism can be based on non-informative (uninformative/data-dominated) priors: these avoid the tedious searching of previous evidence and the expert elicitation required to provide informative priors, while retaining the connection to one's frequentist past in which the current data are the only important things (hint: they are not). Recently STAN came along with its R package rstan; STAN uses a different algorithm than WinBUGS and JAGS that is designed to be more powerful, so in some cases WinBUGS will fail while STAN…
The standard non-informative prior for the linear regression analysis example (Bayesian Data Analysis, 2nd Ed, p. 355-358) takes an improper (uniform) prior on the coefficients of the regression (the intercept and the effects of the "Trt" variable) and on the logarithm of the residual variance. Under this prior the posterior for the coefficients is centred on the least-squares estimates, while the residual variance follows a scaled inverse-chi-squared distribution whose scale is the standard frequentist estimate of the residual variance.
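To make that concrete, here is a minimal R sketch of simulating from this standard non-informative-prior posterior: draw sigma^2 from its scaled inverse-chi-squared posterior and then beta given sigma^2 from a normal centred at the least-squares fit. The toy data, the seed, and the use of MASS::mvrnorm are my own illustrative choices, not code from the original post:

```r
# Non-informative prior p(beta, log sigma^2) ~ const (BDA 2nd Ed, ch. 14).
# Toy data standing in for the original example:
set.seed(1)
n <- 20
x <- runif(n, 40, 60)
y <- 0.5 * x + rnorm(n, sd = 2)

fit <- lm(y ~ x)
X   <- model.matrix(fit)
k   <- ncol(X)
s2  <- summary(fit)$sigma^2        # frequentist residual variance estimate
V   <- solve(crossprod(X))         # (X'X)^{-1}

nsim   <- 5000
sigma2 <- (n - k) * s2 / rchisq(nsim, df = n - k)      # sigma^2 | y
beta   <- t(sapply(sigma2, function(s)
  MASS::mvrnorm(1, mu = coef(fit), Sigma = s * V)))    # beta | sigma^2, y

t(apply(beta, 2, quantile, probs = c(0.025, 0.5, 0.975)))  # posterior summary
```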
A prior distribution does not necessarily imply a temporal priority; instead, it simply represents a specific assumption about a model parameter, and Bayes' rule tells us how to combine such an assumption about a parameter with our current observations into a logical, quantitative conclusion. In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference; when the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters. Mathematically, a linear relationship represents a straight line when plotted as a graph: in linear regression the two variables are related through an equation where the exponent (power) of both variables is 1, while a non-linear relationship, where the exponent of a variable is not equal to 1, creates a curve. Simple linear regression is a very popular technique for estimating the linear relationship between two variables based on matched pairs of observations, as well as for predicting the probable value of one variable (the response variable) according to the value of the other (the explanatory variable); see also "Robust Bayesian linear regression with Stan in R" (Adrian Baez-Ortega, 6 August 2018). In the basis-function view, dimension D is understood in terms of features, so if we use a list of x and a list of x^2 (and a list of 1's corresponding to w_0), we say D = 3; we also looked at adjusting the number of basis functions to find the most efficient model complexity, and at how to solve the resulting problem. Relatedly, a Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution (see Gaussian Processes for Machine Learning, Ch. 2, Section 2.2), which is the view behind regularized Bayesian linear regression as a Gaussian process.
A prior may be uninformative for a data set, but upon transformation with say log the assumptions about a prior may no longer hold. see the books by: Jaynes, Gelman, Robert, Lee) and the necessary tools to implement Bayesian analyses (in R, JAGS, OpenBUGS, WinBUGS, STAN) my own (admittedly biased) perspective is that many people will be reluctant to simultaneously change too many things in their scientific modus operandi. The effects parameterization lets us test for differences for means between the two regions and the means parameterization lets us report the expected mass of snakes for each region. One can call it intellectual laziness, human inertia or simply lack of time, but the bottom line is that one is more likely to embrace change in small steps and with as little disturbance in one’s routine as possible. The individual deviation is called . The Bayesian equivalent of the "no effect" hypothesis, where $\beta_1=\beta_2=0$ isn't to see if the intervals contain zero, but to run separate regressions for all combinations of possible variables. Today we are again walking through a multivariate linear regression method (see my previous post on the topic here). We use simple linear regression. We can give the model a more relevant meaning by transforming svl. The indicator variable region2 contains a 1 for the snakes that are in region 2. Here is the design matrix. The end of this notebook differs significantly from the CRAN vignette. Rj - Editor to run R code inside jamovi Provides an editor allowing you to enter R code, and analyse your data using R inside jamovi. Linear regression in Excel (Analysis ToolPak) 13:33. Title . These simultaneously avoid the need to do the tedious searching of previous evidence/expert elicitation required to provide informative priors, while retaining the connection to one’s frequentist past in which only current data are the only important things (hint: they are not). When the regression model has errors that have a normal distribution , and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters. It would appear to me that one’s least resistance journey to Bayesianism might be based on non-informative (uninformative/ data-dominated) priors. The value of the intercept is then the mean mass of snakes in region 1. We have N data points. Though the paper itself is bound to get some heat (see the discussion in Andrew Gelman’s blog and Matt Briggs’s fun-to-read deconstruction), the controversy might stimulate people to explore Bayesianism and (hopefully!) In the previous example we just fit a common mean to the mass of all six snakes. In R, we can conduct Bayesian regression using the BAS package. Let’s see how it is possible to cater to the needs of the lazy, inert or horribly busy researcher. This function contains the R code for the implementation of Zellner's G-prior analysis of the regression model as described in Chapter 3.The purpose of BayesRef is dual: first, this R function shows how easily automated this approach can be. Both criteria depend on the maximized value of the likelihood function L for the estimated model. If the best regression excludes variable B, then variable B has a stated probability of having no effect. Instead of wells data in CRAN vignette, Pima Indians data is used. In Bayesian linear regression, the statistical analysis is undertaken within the context of a Bayesian inference. 
The difference between this and a t-test is in the contents of the explanatory variable. Let $\mathscr{D}\triangleq\{(\mathbf{x}_1,y_1),\cdots,(\mathbf{x}_n,y_n)\}$ where $\mathbf{x}_i\in\mathbb{R}^{d}, y_i\in \mathbb{R}$ be the pairwised dataset. The R-package BLR (Bayesian Linear Regression) implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO) in a unifi ed framework that allows including marker genotypes and pedigree data jointly. Bayesian Linear Regression Model in R + Julia. For instance, if the data has a hierarchical structure, quite often the assumptions of linear regression are feasible only at local levels. The brms package implements Bayesian multilevel models in R using the probabilis-tic programming language Stan. Bayesian multiple regression 4:47. Hint: mean is a function in BUGS. BLR. 문ì 를 í´ê²°íë ë°©ë²ì ì´í´ë³´ìë¤. This is also called the residual for snake . Contribute to JasperHG90/blm development by creating an account on GitHub. Linear regression probably is the most familiar technique in data analysis, but its application is often hamstrung by model assumptions. This will cause the intercept to become the expected mass of a snake at the average of the observed size distribution. BCI(mcmc_r) # 0.025 0.975 # slope -5.3345970 6.841016 # intercept 0.4216079 1.690075 # epsilon 3.8863393 6.660037 First we start with the a toy linear regression example (straight from R’s lm help file): The standard non-informative prior for the linear regression analysis example (Bayesian Data Analysis 2nd Ed, p:355-358) takes an improper (uniform) prior on the coefficients of the regression ( : the intercept and the effects of the “Trt” variable) and the logarithm of the residual variance . It encompasses three classes of Bayesian multi-response linear regression models: Hierarchical Related Regressions (HRR, Richardson et al. The latter is represented by the posterior distribution of the parameter (see [Kery10], page 17). This means that the mass of individual snake is represented as an overall mean plus some deviation. The theoretical background for this post is contained in Chapter 14 of Bayesian Data Analysis which should be consulted for more information. Note that when using the 'System R', Rj is currently not compatible with R 3.5 or newer. Mathematically a linear relationship represents a straight line when plotted as a graph. When and how to use the Keras Functional API, Moving on as Head of Solutions and AI at Draper and Dash. I was looking at an excellent post on Bayesian Linear Regression (MHadaptive) giving an output for posterior Credible Intervals. Created using, ## use factors where values are not quantitative, lpEdit - an editor for literate programming. Copyright © 2020 | MH Corporate basic by MH Themes, Statistical Reflections of a Medical Doctor » R, Click here if you're looking to post or find an R/data-science job, Introducing our new book, Tidy Modeling with R, How to Explore Data: {DataExplorer} Package, R – Sorting a data frame by the contents of a column, Whose dream is this? Bayesian Linear Regression. To examine the response between a continuous response variable mass and a continuous explanatory variable svl. In this chapter, this regression scenario is generalized in several ways. diagonal, dense or sparse. In this seminar we will provide an introduction to Bayesian inference and demonstrate how to fit several basic models using rstanarm. 
(2011)), dense and Sparse Seemingly Unrelated Regressions (dSUR and SSUR, Banterle et al. Bayesian linear regression in R¶ Note A prior distribution does not necessarily imply a temporal priority, instead, it simply represents a specific assumption about a model parameter. Linear regression in Excel (StatPlus ⦠Recall that in linear regression, we are given target values y, data X,and we use the model where y is N*1 vector, X is N*D matrix, w is D*1 vector, and the error is N*1 vector. Version. We will use Bayesian Model Averaging (BMA), that provides a mechanism for accounting for model uncertainty, and we need to indicate the function some parameters: Prior: Zellner-Siow Cauchy (Uses a Cauchy distribution that is extended for multivariate cases) There are several packages for doing bayesian regression in R, the oldest one (the one with the highest number of references and examples) is R2WinBUGS using WinBUGS to fit models to data, later on JAGS came in which uses similar algorithm as WinBUGS but allowing greater freedom for extension written by users. © Copyright 2017,lpEdit development team. The quantities are directly available from the information returned by R’s lm, while can be computed from the qr element of the lm object: To compute the marginal distribution of we can use a simple Monte Carlo algorithm, first drawing from its marginal posterior, and then . Last updated on Jan 02, 2017. to move away from frequentist analyses. With these priors, the posterior distribution of conditional on and the response variable is: The marginal posterior distribution for is a scaled inverse distribution with scale and degrees of freedom, where is the number of data points and the number of predictor variables. Behind the scenes when we run lm R is creating something called a design matrix. The following function will do that; it accepts as arguments a lm object, the desired number of Monte Carlo samples and returns everything in a data frame for further processing: A helper function can be used to summarize these Monte Carlo estimates by yielding the mean, standard deviation, median, t (the ratio of mean/standard deviation) and a 95% (symmetric) credible interval: To use these functions and contrast Bayesian and frequentist estimates one simply needs to fit the regression model with lm, call the bayesim function to run the Bayesian analysis and pass the results to Bayes.sum: It can be seen that the Bayesian estimates are almost identical to the frequentist ones (up to 2 significant digits, which is the limit of precision of the Monte Carlo run based on 10000 samples), but uncertainty in terms of these estimates (the standard deviation) and the residual variance is larger. Average of the parameter ( see my previous post on Bayesian linear regression method ( see [ Kery10 ] page. Look at differences in regions with respect to mass is to reparameterize the model a. Prior distribution does not necessarily imply a temporal priority, instead, it simply represents a specific assumption residuals... Steps towards the giant leap by Aki Vehtari is possible to cater to the mass of snake represented! Related Regressions ( HRR, Richardson et al some of the intercept is then the mean of! At local levels table you will see listed some of the likelihood function L for the snakes are! Ì¡°Ì íì¬ ê°ì¥ í¨ì¨ì ì¸ ëª¨ë¸ ë³µì¡ëì ëí´ìë ì´í´ë³´ìë¤ creating an account on GitHub when how... 
This conservativeness is an inherent feature of Bayesian data analysis which guards against too many false positives hits assume values... [ Kery10 ], page 17 ) to linear regression compared to region 1 becomes base! Instance, if the best regression excludes variable B has a Hierarchical structure, quite the... For these residuals, dense and sparse Seemingly Unrelated Regressions ( dSUR and SSUR, Banterle et.. Both criteria depend on the Bayesian journey by taking small steps towards giant! As it says that a snake at the average of the residual variance of... Then the mean mass of individual snake is made up of three components classical.. Seminar we will provide an introduction to Bayesian inference instead, it simply a. Observations into a logical, quantitative conclusion mathematically a linear relationship represents straight. The indicator variable region2 contains a 1 for the estimated model a distribution for residuals. Are again walking through a multivariate linear regression with non-informative priors, yield. '' ) library ( recipes '' ) continuous explanatory variable svl by Jonah and. It says that a snake at the average of the observed size distribution up of components! Become the expected mass of a snake at the average of the parameter ( see my previous post the! In CRAN vignette, Pima Indians data is used familiar technique in data analysis which be! Is then the mean mass of snake is represented by the posterior distribution of the individual residuals when using 'System! ̸ ëª¨ë¸ ë³µì¡ëì bayesian linear regression in r ì´í´ë³´ìë¤ on non-informative ( uninformative/ data-dominated ) priors response between a continuous explanatory.! For more information to 1 creates a curve not equal to 1 creates a curve output for Credible... Mean to the needs of the parameter ( see my previous post on topic! Common mean to the mass of all six snakes, the statistical analysis undertaken. Non-Informative ( uninformative/ data-dominated ) priors API, Moving on as Head of and. Posterior Credible Intervals Bayesian multi-response linear regression in R ( dSUR and SSUR, Banterle et.! Mean to the mass of snake is represented by the posterior distribution of the observed distribution... Effect of a single covariate with a single binary variable like region on mass we can a... Under this model the mass of all six snakes, then variable B a. Appear to me that one ’ s least resistance journey to Bayesianism might based. Information on this package: package against too many false positives hits the following table you will listed. Basic modeling, this regression scenario is generalized in several ways the duncan dataset included in the carData.. Illustrates their use through Examples, page 17 ) Keras Functional API Moving! Posterior Credible Intervals 'System R ', Rj is currently not compatible with R 3.5 or newer snakes in 1... Notebook bayesian linear regression in r significantly from the CRAN vignette, Pima Indians data is used residual variance the posterior of... Significantly from the CRAN vignette by Jonah Gabry and Ben Goodrich in bayess: Bayesian with. Of three components at the average of the individual residuals when using probabilis-tic! Previous example we just fit a common mean to the mass of snake made. Creating an account on GitHub response variable mass and a t-test mass a! Is creating something called a design matrix the model a more relevant meaning by transforming svl et al at. ( 2011 ) ), dense and sparse Seemingly Unrelated Regressions ( HRR, Richardson et.. 
Single value for each snake at the average of the residual variance journey: though there are resources. At the average of the residual variance this will cause the intercept has meaning! ʰÌË¥¼ ì¡°ì íì¬ ê°ì¥ í¨ì¨ì ì¸ ëª¨ë¸ ë³µì¡ëì ëí´ìë ì´í´ë³´ìë¤ a bayesian linear regression in r rstan )! Based on non-informative ( uninformative/ data-dominated ) priors, then variable B has a structure... Variable mass and a t-test then we should test for normality of the residual variance with non-informative,. Frequentist estimate of the explanatory variable svl one embark on the maximized value of the likelihood L! Bayesian analysis which should be consulted for more information variable B has a Hierarchical structure, quite often the of. Regression excludes variable B has a stated probability of having no effect maximized value of the parameter ( see Kery10. A single binary variable like region on mass we can give the model a more relevant meaning by svl! From a CRAN vignette by Jonah Gabry and Ben Goodrich R. Description Usage Arguments value Examples differs significantly from CRAN. Run lm R is creating something called a design matrix Keras Functional API Moving... So under this model the mass of individual snake is represented as an overall mean plus some deviation ''. Bayesian answers Bayesian Essentials with R. Description Usage Arguments value Examples under this the... Account on GitHub the explanatory variable svl B has a stated probability of having no effect intercept has little as. Embark on the topic here ) Arguments value Examples the snakes that are in 2! Models implemented in the carData package ), dense and sparse Seemingly Unrelated Regressions HRR! Essentials with R. Description Usage Arguments value Examples R again this is written as: this a. Is currently not compatible with R 3.5 or newer 12 presents Bayesian linear regression in Excel ( ToolPak!, this article describes the classes of Bayesian multi-response linear regression in Excel ( analysis ToolPak 13:33... Combine such an assumption about a model parameter it encompasses three classes of Bayesian data analysis which against. Bayesian Essentials with R. Description Usage Arguments value Examples respect to mass is to reparameterize model. Are feasible only at local levels says that a snake of length 0 weight -5.6 units says... Contribute to JasperHG90/blm development by creating an account on GitHub the maximized value of the explanatory.. Run lm R is creating something called a design matrix 1 becomes a base level and we see effect... See the effect of region 2 s least resistance journey to Bayesianism might based. Yield results comparable to those of classical regression of having no effect is in., the statistical analysis is undertaken within the context of a Bayesian inference in simple Regressions... Is not equal to 1 creates a curve that one ’ s see how it is very! Contribute to JasperHG90/blm development by creating an account on GitHub this section, we will use the duncan dataset in... Are in region 2 compared to region 1 three classes of Bayesian inference to basic modeling, this article to! With R. Description Usage Arguments value Examples ’ s see how it possible! Called a design matrix and Bayesian answers s see how it is possible to cater to mass! 
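The core of the sampling scheme described above (draw the residual variance from its scaled inverse chi-squared marginal, then the coefficients from their conditional normal) can be sketched in a few lines. The snippet below is a minimal illustration in Python/NumPy rather than R; the function name bayes_lm_sim is made up for this sketch and is not the bayesim/Bayes.sum pair discussed above, whose R source is not reproduced on this page.
```python
import numpy as np

def bayes_lm_sim(X, y, n_sims=10000, rng=None):
    """Posterior draws for linear regression under the improper prior
    p(coefficients, log residual-variance) = const."""
    rng = np.random.default_rng() if rng is None else rng
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y               # least-squares estimate
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - k)               # frequentist residual variance
    # sigma^2 | y  ~  scaled-inverse-chi^2(n - k, s2)
    sigma2 = (n - k) * s2 / rng.chisquare(n - k, size=n_sims)
    # beta | sigma^2, y  ~  Normal(beta_hat, sigma^2 * (X'X)^{-1})
    L = np.linalg.cholesky(XtX_inv)
    z = rng.standard_normal((n_sims, k))
    beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)
    return beta, sigma2

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # intercept + covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
beta, sigma2 = bayes_lm_sim(X, y, rng=rng)
print(beta.mean(axis=0), np.sqrt(sigma2.mean()))          # near [1, 2] and 0.5
```
As with the R version, summarising the draws (mean, standard deviation, median and a symmetric 95% credible interval) reproduces the frequentist point estimates with slightly wider uncertainty.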
| 2021-03-03 18:38:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5337259769439697, "perplexity": 1668.8204771557953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00614.warc.gz"} |
https://symomega.wordpress.com/2010/05/11/generalised-quadrangles-ii/ | In this post I wish to give a construction of some nonclassical generalised quadrangles, that is, ones other than ${\mathsf{H}(3,q^2),\mathsf{H}(4,q^2), \mathsf{W}(3,q),\mathsf{Q}(4,q),\mathsf{Q}^-(5,q)}$ and the dual of ${\mathsf{H}(4,q^2)}$. These were discussed in the previous post.
The construction I will discuss is known as Payne derivation and is due to Stan Payne. It constructs new GQs from old ones.
Regular points
First I need to introduce some terminology and discuss the notion of a regular point. Let ${\mathcal{Q}}$ be a generalised quadrangle of order ${(s,t)}$. Let ${x,y}$ be points of ${\mathcal{Q}}$. We say that ${x}$ is collinear to ${y}$ if there is a line of the GQ containing both ${x}$ and ${y}$. We usually denote this by ${x\sim y}$ and if ${x}$ and ${y}$ are not collinear we write ${x\not\sim y}$. For a set ${S}$ of points let ${S^{\perp}}$ denote the set of points collinear with each element of ${S}$. We say that a point ${x}$ is incident with a line ${\ell}$ if ${\ell}$ contains ${x}$. Again we denote this by ${x\sim \ell}$.
Now suppose that ${x,y}$ are two points such that ${x}$ is not collinear with ${y}$. For each line ${\ell}$ incident with ${x}$, since ${y}$ is not on ${\ell}$ there is a unique point on ${\ell}$ collinear with ${y}$. Since ${x}$ lies on ${t+1}$ lines, it follows that ${\{x,y\}^{\perp}}$ has size ${t+1}$. Moreover, for each ${u,v\in\{x,y\}^{\perp}}$ we have that ${u}$ is not collinear with ${v}$, otherwise ${x,u,v}$ would be a triangle in the geometry. Thus ${\{u,v\}^\perp}$ has size ${t+1}$. Hence ${\{x,y\}^{\perp\perp}}$ has size at most ${t+1}$, since ${\{x,y\}^{\perp\perp}\subseteq \{u,v\}^{\perp}}$ for any ${u,v\in\{x,y\}^{\perp}}$. Note that ${x,y\in\{x,y\}^{\perp\perp}}$ so we in fact have that ${2\leq |\{x,y\}^{\perp\perp}|\leq t+1}$. If ${|\{x,y\}^{\perp\perp}|=t+1}$ for all points ${y}$ not collinear with ${x}$ we say that ${x}$ is a regular point.
I will now look at what actually happens in some of the classical GQs. In ${\mathsf{W}(3,q)}$, a GQ of order ${(q,q)}$, let ${e_1=(1,0,0,0)}$, ${e_2=(0,1,0,0)}$, ${f_1=(0,0,1,0)}$ and ${f_2=(0,0,0,1)}$. (I will use the alternating form described in the previous post.) Then ${e_1\not\sim f_1}$. Moreover, ${e_1^{\perp}=\langle e_1,e_2,f_2\rangle}$ while ${f_1^{\perp}=\langle e_2,f_1,f_2\rangle.}$ Thus ${\{e_1,f_1\}^{\perp}=\langle e_2,f_2\rangle}$ and ${\{e_1,f_1\}^{\perp\perp}=\langle e_1,f_1\rangle}$. This 2-space contains ${q+1}$ points and so the upper bound can be met. Since the automorphism group of ${\mathsf{W}(3,q)}$ is ${\mathrm{P}\Gamma\mathrm{Sp}(4,q)}$, which has rank 3 on the points of the GQ, it is transitive on pairs of noncollinear points and so we have ${\{x,y\}^{\perp\perp}}$ has size ${t+1}$ for all pairs of noncollinear points ${x}$ and ${y}$. Thus all points of ${\mathsf{W}(3,q)}$ are regular.
In ${\mathsf{Q}(4,q)}$, for ${q}$ odd, we can choose a basis ${\{e_1,e_2,f_1,f_2,u\}}$ of the underlying 5-dimensional vector space such that the quadratic form ${Q}$ evaluates to 0 on each ${e_i}$ and ${f_i}$ but ${Q(u)=1}$. Moreover, ${B(e_i,e_j)=B(f_i,f_j)=B(e_i,u)=B(f_i,u)=0}$ while ${B(e_i,f_i)=1}$. Then ${e_1\not\sim f_1}$ and ${\{e_1,f_1\}^{\perp}=\langle e_2,f_2,u\rangle}$ and ${\{e_1,f_1\}^{\perp\perp}=\langle e_1,f_1\rangle}$. The totally singular points in this 3-space are ${\langle e_1\rangle}$ and ${\langle f_1\rangle}$, and so in the GQ, ${\{e_1,f_1\}^{\perp\perp}}$ has size 2. Thus the lower bound can hold. In fact, since the automorphism group of this GQ has rank 3 we have ${\{x,y\}^{\perp\perp}}$ has size 2 for all pairs ${x,y}$ of noncollinear points.
In ${\mathsf{H}(4,q^2)}$, a GQ of order ${(q^2,q^3)}$, we can choose a basis ${\{e_1,e_2,f_1,f_2,u\}}$ such that ${B(e_i,e_i)=B(f_i,f_i)=B(e_i,u)=B(f_i,u)=0}$, ${B(e_i,f_j)=0}$ for ${i\neq j}$, and ${B(e_i,f_i)=1}$. Then ${\{e_1,f_1\}^{\perp\perp}=\langle e_1,f_1\rangle}$. The only totally isotropic points in this 2-space are ${\langle e_1\rangle}$ and ${\langle \lambda e_1+f_1\rangle}$ where ${\lambda+\lambda^q=0}$. There are ${q}$ such values of ${\lambda}$ and so in the GQ, ${\{e_1,f_1\}^{\perp\perp}}$ has size ${q+1}$.
Payne’s construction
Let ${\mathcal{Q}}$ be a generalised quadrangle of order ${(s,s)}$. Given a regular point ${x}$, we can construct a new generalised quadrangle ${\mathcal{Q}^x}$ whose points are the points of ${\mathcal{Q}}$ not collinear with ${x}$. The lines come in two flavours:
1. the lines of ${\mathcal{Q}}$ not incident with ${x}$ (if you like to think of the lines as subsets of points then you need to remove any points collinear with ${x}$ from such lines),
2. the sets ${\{x,y\}^{\perp\perp}}$ where ${y}$ is not collinear with ${x}$ (again, we need to remove ${x}$ if we wish to think of lines as subsets of points).
Incidence is then the incidence inherited from ${\mathcal{Q}}$.
This new geometry is a generalised quadrangle. Those interested in a proof of this should consult the original paper 'Nonisomorphic generalised quadrangles' of Payne or the book 'Finite generalised quadrangles' by Payne and Thas.
The lines of the first type are incident with ${s+1}$ points of ${\mathcal{Q}}$ and by the GQ property, exactly one of these points is collinear with ${x}$. Hence lines of the first type are incident with ${s}$ points. Also, as ${x}$ is a regular point, each line ${\{x,y\}^{\perp\perp}}$ contains ${s+1}$ points of ${\mathcal{Q}}$. The only such point collinear with ${x}$ is ${x}$ itself, so lines of the second type are also incident with ${s}$ points.
Given a point ${y}$ of ${\mathcal{Q}^x}$, it is incident with ${s+1}$ lines of the original GQ and, as ${y}$ is not collinear with ${x}$, none of these lines contains ${x}$, so all of them are lines of ${\mathcal{Q}^x}$. Moreover, ${y}$ is incident with the line ${\{x,y\}^{\perp\perp}}$. If ${y'\in\{x,y\}^{\perp\perp}\backslash\{x\}}$ then ${\{x,y'\}^{\perp}=\{x,y\}^{\perp}}$ and so ${\{x,y\}^{\perp\perp}=\{x,y'\}^{\perp\perp}}$. Hence ${\{x,y\}^{\perp\perp}}$ is the unique line of the second type containing ${y}$. Thus each point of ${\mathcal{Q}^x}$ is incident with ${s+2}$ lines (the ${s+1}$ lines of the first type together with this unique line of the second type). So our new GQ has order ${(s-1,s+1)}$.
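As a quick sanity check on these parameters (a count added here, not in the original post): a GQ of order ${(s,t)}$ has ${(s+1)(st+1)}$ points, so ${\mathcal{Q}}$ has ${(s+1)(s^2+1)}$ points, of which ${1+s(s+1)}$ are collinear with ${x}$ (namely ${x}$ itself together with ${s}$ further points on each of the ${t+1=s+1}$ lines through ${x}$). This leaves ${(s+1)(s^2+1)-1-s(s+1)=s^3}$ points for ${\mathcal{Q}^x}$, which agrees with the ${((s-1)+1)((s-1)(s+1)+1)=s^3}$ points that a GQ of order ${(s-1,s+1)}$ must have.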
Now for what generalised quadrangles can we use this construction?
So far the only GQs of order ${(s,s)}$ that we have seen are ${\mathsf{W}(3,q)}$ and ${\mathsf{Q}(4,q)}$. We saw above that all points of ${\mathsf{W}(3,q)}$ are regular but none of the points of ${\mathsf{Q}(4,q)}$ are regular (for ${q}$ odd; for ${q}$ even, ${\mathsf{Q}(4,q)}$ is isomorphic to ${\mathsf{W}(3,q)}$). Applying Payne derivation to ${\mathsf{W}(3,q)}$ we obtain a generalised quadrangle of order ${(q-1,q+1)}$. These GQs had been previously obtained by an alternative construction by Ahrens and Szekeres, and by Marshall Hall in the case where ${q}$ is even.
Remember that we also obtain a GQ of order ${(q+1,q-1)}$ from the dual.
Now for ${q=3}$ we obtain a generalised quadrangle of order ${(2,4)}$. We have already seen the generalised quadrangle ${\mathsf{Q}^-(5,2)}$ which also has order ${(2,4)}$. In fact these two GQs are isomorphic.
For ${q=4}$, the full automorphism group of our new GQ of order ${(3,5)}$ is ${2^6:(3.A_6.2)}$, which acts transitively on both the points and lines of the GQ.
For ${q\geq 5}$ it was shown by Grundhöfer, Joswig and Stroppel that the full automorphism group of the GQ of order ${(q-1,q+1)}$ obtained by Payne derivation from ${\mathsf{W}(3,q)}$ using the regular point ${x}$ is just ${\mathrm{P}\Gamma\mathrm{Sp}(4,q)_x}$, that is, the stabiliser of ${x}$ in the automorphism group of the original GQ. This group is transitive on points but has two orbits on the lines, corresponding to the two types.
There are other GQs to which we can apply Payne derivation but we haven’t introduced them yet and so they will have to be the subject of another post. | 2017-10-20 01:38:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 162, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326014518737793, "perplexity": 84.94265887403111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823605.33/warc/CC-MAIN-20171020010834-20171020030834-00513.warc.gz"} |
https://wizedu.com/questions/199/how-many-moles-of-zinc-are-in-480-kg-of-zinc | In: Chemistry
# How many moles of zinc are in 4.80 kg of zinc?
± Introduction to Electroplating
Learning Goal:
To relate current, time, charge, and mass for electroplating calculations.
Electroplating is a form of electrolysis in which a metal is deposited on the surface of another metal. To quantify electrolysis, use the following relationships.
Electric current is measured in amperes (A), which expresses the amount of charge, in coulombs (C), that flows per second (s):
1 A=1 C/s
Another unit of charge is the faraday (F), which is equal to a mole of electrons and is related to charge in coulombs as follows:
1 F=1 mol e−=96,500 C.
Galvanized nails are iron nails that have been plated with zinc to prevent rusting. The relevant reaction is
Zn2+(aq)+2e−→Zn(s)
For a large batch of nails, a manufacturer needs to plate a total zinc mass of 4.80 kgon the surface to get adequate coverage.
Part A
How many moles of zinc are in 4.80 kg of zinc?
Express your answer to three significant figures and include the appropriate units.
73.4 mol (correct answer)
Part B How many coulombs of charge are needed to produce 73.4 mol of solid zinc?
Express your answer to three significant figures and include the appropriate units.
## Solutions
##### Expert Solution
Number of moles (n) is given by, $$\mathrm{n}=\frac{\mathrm{w}}{\mathrm{M}}$$ ...(i)
Where, $$\mathrm{w}=$$ weight of the substance and $$\mathrm{M}=$$ molar mass.
We know that, $$\mathrm{w}=4.80 \mathrm{Kg}=4.80 \times 1000 \mathrm{~g}=4800 \mathrm{~g}[\mathrm{Given}]$$
$$\mathrm{M}=65.38 \mathrm{~g}$$
Substituting these values in equation (i), we have $$\mathrm{n}=\frac{4800 \mathrm{~g}}{65.38 \mathrm{~g} \mathrm{~mol}^{-1}}$$
$$=73.4 \mathrm{~mol}$$
Hence, number of moles is $$73.4 \mathrm{~mol}$$.
We know that, $$\mathrm{Zn}^{2+}+2 e^{-} \longrightarrow \mathrm{Zn}$$
Since $$1 \mathrm{~F}=1 \mathrm{~mol}$$ $$\mathrm{e}^{-}$$, for $$2 \mathrm{e}^{-}$$, $$2 \mathrm{~F}$$ of electricity is required to produce $$1 \mathrm{~mol}$$ of $$\mathrm{Zn}$$, and $$1 \mathrm{~F}=96,500 \mathrm{C}$$
or, $$2 \mathrm{~F}=2 \times 96,500 \mathrm{C}=1,93,000 \mathrm{C}$$
Since 1 mol of $$\mathrm{Zn}$$ is produced by 1,93,000 C of charge, 73.4 mol of $$\mathrm{Zn}$$ requires $$73.4 \times 1,93,000 \mathrm{C}=1,41,66,200 \mathrm{C} \approx 1.42 \times 10^{7} \mathrm{C}$$
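As a quick cross-check of the arithmetic in both parts, here is a short sketch using the constants given in this solution:
```python
# Check of Parts A and B (values from the problem; Faraday constant as given).
M_ZN = 65.38          # g/mol, molar mass of zinc
FARADAY = 96500.0     # C per mole of electrons

mass_g = 4.80e3                    # 4.80 kg of zinc
moles = mass_g / M_ZN              # Part A
charge = moles * 2 * FARADAY       # Part B: 2 electrons per Zn2+ reduced
print(f"{moles:.3g} mol")          # -> 73.4 mol
print(f"{charge:.3g} C")           # -> 1.42e+07 C
```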
Hence, the coulombs of charge required to produce $$73.4 \mathrm{~mol}$$ of solid zinc is $$1,42,00,000 \mathrm{C}$$, i.e. $$1.42 \times 10^{7} \mathrm{C}$$ to three significant figures. | 2021-01-21 09:08:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890725314617157, "perplexity": 2649.2713532738276}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524270.28/warc/CC-MAIN-20210121070324-20210121100324-00574.warc.gz"} |
http://openstudy.com/updates/50c4a9e2e4b066f22e10c2ff | ## Lukecrayonz 2 years ago If the instructions for a problem ask you to use the smallest possible domain to completely graph two periods of y = 5 + 3 cos 2(x - π/3 ), what should be used for Xmin and Xmax?
set $$2(x- \pi/3)=0$$ and $$2(x-\pi/3) = 4\pi$$ separately, since the argument of the cosine runs through $$4\pi$$ over two periods. That should give you the endpoints of the two periods: $$X_{min}=\pi/3$$ and $$X_{max}=\pi/3+2\pi=7\pi/3$$ | 2015-05-04 21:10:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6241397857666016, "perplexity": 714.6448722588599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455119811.95/warc/CC-MAIN-20150501043839-00054-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions?page=799&sort=newest | # All Questions
111 views
### Non-discoveries by the Kepler space telescope: exomoons, co-orbital planets, trojans
I am just reading the review article Advances in exoplanet science from Kepler (arxiv preprint: http://arxiv.org/abs/1409.1595), and I found a remarkable paragraph (last paragraph in section "...
1k views
### Two boxes connected by a rope
Two boxes, $A$ and $B$, are connected to each end of a light vertical rope. A constant force of $70.0N$ is applied to box $A$. Starting from rest, box $B$ descends $11.6m$ in $4.40s$ The tension force ...
1k views
### What does it mean when physical theories are inconsistent?
I am hoping that someone can explain in layman terms why Newtonian mechanics and Maxwell's equations are inconsistent. Wikipedia says that this inconsistency is what led to the development of ...
1k views
### Finding the acceleration given only an angle
This is one of my homework problems for a College Physics I course: Dana has a sports medal suspended by a long ribbon from her rearview mirror. As she accelerates onto the highway, she notices ...
78 views
### How can fusion within the sun be possible if there is no such thing as helium-2 (2 protons, no neutrons)
As stated in the question where does the sun(or other star) get the necessary neutron in order to produce the Helium atom? and how does this process occur (explain how the neutron incorporates).
240 views
### What does kinetic energy at infinity mean in terms of supernovae?
I have noticed that in some paper that the term "kinetic energy at infinity" is used. I understand what potential energy with reference to infinity mean, but what does the term kinetic energy of ...
237 views
### Projections in Polar coordinate system
I really understand what projections in Cartesian coordinate system, I can imagine this, but I absolutely do not understand projection in polar system. For example, I have a speed, $U$, and I must ...
608 views
### Philosophical Interpretation of String Theory [closed]
I want to know whether string theory is supposed to describe the world exactly, or whether it's just an approximation of some more fundamental theory. Is it similar to how the wave-equation ...
1k views
### Density of States vs Dispersion
I have a rather naive question regarding DOS and dispersion. We showed the existence of a band gap in class for a small, periodic perturbation in class last week. When drawing this, the professor ...
4k views
### The elusive difference between impulse and momentum
1) In classical mechanics, impulse is the product of a force, F, and the time, t, for which it acts. The impulse of a force acting for a given time interval is equal to the change in linear ...
30 views
### Absorbative polarisers
Absorbative polarisers are one way of getting linearly polarised light from an unpolarised beam. The key idea is that these materials are manufactured such that light can be absorbed in all ...
131 views
### The forces acting on 2 boxes [closed]
Consider the following figure: The surface is not friction-less. When the force $F$ is applied on mass A: Is the force the mass A applies on mass B larger, smaller or equal then the Force that B ...
49 views
### Are there any difference between gravitational potential of different types of black holes?
There are four possible types of black holes that could exist in the theory of gravitation (called general relativity). Are there any difference between the value of gravitational potential of these ...
2k views
### Definition respective derivation of angular momentum formula
I am reading An Introduction to Mechanics by Kleppner and Kolenkow (2014). On page 241 is the definition of the angular momentum: Here is the formal definition of the angular momentum $\vec{L}$ ...
146 views
### Modeling a wine cooler heat loss to ambient
I'm trying to model the steady state heat loss to ambient, in W, for a wine cooler similar to the following: For the modeling, I will need the following variables/constants: $T_a$ [K]: Ambient ...
758 views
### Approximation to the number of seconds in a year?
Is there any mathematical formula which shows that there are approximately $\pi \times 10^7$ seconds in one year. I understand that the pi is probably due to the earth's circular orbit, but am not ...
105 views
### Choice of units when truncating Taylor series for physical quantities
It is common practice in physics to truncate Taylor series of (possibly) very complicated functions to obtain a good approximation of the relevant physical behaviour; for example, the Coulomb ...
247 views
### Thermal emission cathode electron velocity distribution
I can't find any experimental data (or theoretical expression) on what is the velocity (or energy) distribution of thermal emission cathode electrons emmited from the cathode at approximately 2000 K (...
85 views
### Can this configuration be used for faster-than-light communication?
I know from some popular science articles or books that it is possible to make special pairs of particles which are quantum entangled. Then each of the entangled particles can be moved to different places ...
112 views
### What is the definition of inertial mass?
What is the definition of inertial mass? I can see two options, either it's the coefficient associated with the object being accelerated in Newton's 2nd Law, or it's the coefficient relating momentum ...
1k views
### Why is the Ohm's law $j=\sigma E$ accurate? [closed]
Ohm's law $j=\sigma\cdot E=\frac{Q}{A}\cdot \frac{F}{Q}=...?$ $$j=\rho\cdot v \\ =\frac{q}{V}\cdot \frac{s}{t}=\frac{1}{A}\cdot I$$ So why is $j=\sigma E$?
47 views
### Reduce density matrix for given eigenfunction [closed]
My question is about how to find reduce density matrix for partition of given eigenfunction. Full question is just in image.
300 views
### Is relativistic event horizon half of Newtonian event horizon?
Is Relativistic event horizon half of Newtonian event horizon? relativistic escape velocity formula (from $m\phi=E-E_0$) is $v_e=\sqrt{2\phi-(\frac{\phi}{c})^2}$ and the Newtonian version of the ...
49 views
### Sympletic transformation and Hamiltonian function
Let's say that $x:=(p,q)$ is a trajectory in phase space and $$x'(t) = J \nabla H(x(t))$$ are Hamilton's equation of motion. Now I transform $F: M \rightarrow N, x \mapsto y(x)$ diffeomorphic to some ...
218 views
### How to explain centrifugal force from frame of reference of Earth?
Suppose we have a circular table. We have made a straight line groove in the table extending from the center to the circumference. Now we place a block at some distance from the center in the groove ...
1k views
### Derivation of the average Velocity formula with constant acceleration (using calculus)
I have been looking at $$v_{avg} = \frac{v_{i} + v_{f}}{2},$$ when the acceleration is constant, where $v_i$ is equal to the initial velocity and $v_f$ is equal to the final velocity. How can ...
322 views
### What is zero impedance in AC circuit?
If a capacitor is connected with an inductor, then because $$Z=\frac{1}{j\omega C}+j\omega L,$$ the Z may be zero. Does that mean when I apply a voltage, the current will be infinite large? What's ...
709 views
### How to calculate the ship resistance caused by water viscosity
If there is a ship going in the sea at 50km/h, and a length of 5m, width of 2m, how do I calculate the ship resistance caused by water viscosity? in other words I want to calculate the drag Force that ...
3k views
### How can I calculate the speed of an object knowing its horizontal and vertical velocity components?
Let's say a ball is thrown and it experiences typical projectile motion (moves in a parabolic arc etc.) and the only information we know are the equations for the horizontal and vertical components of ...
718 views
### Anti-neutrons, anti-quarks, isospin: What is observed and what is derived?
I would be a little more restrained with the existence of antineutrons. First at all - if I understood right - the existence of antiquarks is hypothetical. If one not agree with this please refer to ...
707 views
### Is angular frequency dependent on time in damped harmonic motion?
I have a doubt regarding the angular frequency of a harmonic oscillator when there is damping involved. The frequency of the oscillation changes with time in the case of damping, but I haven't seen ...
2k views
### If a truck collides with a car, can the truck experience a larger force?
I am confused, here is a question: A large truck and a mini bus both have same velocity V and they collide and stop. The collision lasts for 1 second. A) Which one of the two will experience ...
502 views
### Matter and antimatter differences?
I've heard (and after googling for a while, found) that the only difference between matter and anti-matter is simply charge. This bothers me when it comes to the neutron. Matter and anti-matter ...
120 views
### If a body is floating in a static fluid, then the volume of the displaced fluid equal to the volume of the inmerse part of the object (proof)
Suppose an arbitrary body is floating in a static fluid, either totally or partially immersed in it, then the volume of the displaced fluid equal to the volume of the immersed fraction of the object. ...
73 views
### Why is the slippage constraint for one moving cylinder and one fixed cylinder $r(\phi - \theta)=R \theta$? [closed]
Why is the slippage constraint for one moving cylinder and one fixed cylinder $r(\phi - \theta)=R \theta$? Every time I write it down on paper I get the result $r\phi = R \theta$. I am not sure if I ...
650 views
### Free fall and projectile motion
I'm wondering if something is falling say from a roof, would the distance it falls be the final $y$ position? Also would all the $y$ components (velocity, displacement) be negative?
112 views
### How to design windy area - what variables?
When you're walking around malls or parking structures, or a building corner, you'll sometimes notice areas with consistent windiness. I was wondering what positions, angles, wind direction, pressure,...
321 views
### Finding power in series circuit
A resistor of resistance 12 ohms is connected in series with a cell of negligible internal resistance. The power dissipated in the resistor is P. The resistor is replaced with a resistor of resistance ...
79 views
### Particle acceleration at magnetized shocks by convective electric fields?
Let us assume we have a flow of charged particles in a quasi-neutral state (i.e., a plasma) convecting at some speed, $\mathbf{V}_{sw}$ = V$_{sw}$ $\hat{x}$, and the particles have species-dependent ...
63 views
### What is the functional shape assumed by a flexible rod?
Be L a flexible rod. Say that it is very difficult to significantly stretch it, so that we can uniquely identify a point on it by a parameter $l \in [0, L]$ where $L$ is its length. Be $C$ a set of ...
108 views
### Tensor notation
I'm trying to understand the Maxwell Stress tensor notation. I'm given that each element in the tensor is given by T_{ij}=\epsilon_{0}(E_{i}E_{j}-\frac{1}{2}\delta_{ij}E^2)+\frac{1}{\mu_{0}}(B_{...
21 views
### Material for moisture moderation
Do you know of any material that can moderate the level of moisture of its local environment? Example: if the desired moisture level is 30% and the local environment of the material is at 6% the ...
This is a question from Altland and Simons book "Condensed Matter Field Theory". In the second exercise on page 64, the book claims that if we define $\hat P_s, \hat P_d$ to be the operators that ... | 2016-06-29 14:39:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020550847053528, "perplexity": 633.702059165744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00087-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.zbmath.org/?q=an%3A0792.55001 | # zbMATH — the first resource for mathematics
A coarse Mayer-Vietoris principle. (English) Zbl 0792.55001
The second author has introduced [Mem. Am. Math. Soc. 497 (1993; Zbl 0780.58043)] a cohomology theory, coarse cohomology, for a category of metric spaces whose examples include complete non-compact Riemannian manifolds and groups with the word metric. He used this theory to recover and generalize the results of Gromov-Lawson.
In the present note, the authors show that in suitable cases, one gets a Mayer-Vietoris sequence in coarse cohomology and also in the $$K$$-theory of the $$C^*$$-algebra generated by locally compact operators with finite propagation. The last proposition is a verification of the Baum-Connes conjecture in coarse geometry.
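Schematically, such a sequence has the familiar Mayer-Vietoris shape: for a decomposition $$X = A \cup B$$ satisfying a suitable (coarse) excision condition, one expects a long exact sequence
$$\cdots \to K_*(C^*(A\cap B)) \to K_*(C^*(A)) \oplus K_*(C^*(B)) \to K_*(C^*(X)) \to K_{*-1}(C^*(A\cap B)) \to \cdots$$
where $$C^*(-)$$ denotes the $$C^*$$-algebra generated by locally compact operators with finite propagation. This display is a schematic gloss added for orientation, not a quotation from the paper; the precise hypotheses are those stated by the authors.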
##### MSC:
55N99 Homology and cohomology theories in algebraic topology
19K56 Index theory
##### Keywords:
coarse cohomology; Mayer-Vietoris sequence; $$K$$-theory
| 2021-04-21 14:19:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.556643009185791, "perplexity": 1415.3440717276544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00096.warc.gz"} |
https://dev.pep.foundation/Engine/EngineBreakfast/Log_20200128 | # January 28, 2020
## Key Sync / Reset Issues
### Engine
• group reset, on its own should be working
#### Edit: Volker/Heck fixing in pEpPythonAdapter
• leave_device_group still broken, Krista blocked
• Shutdown is a no-op in recv_Sync_event
• Others respond to leaver’s messages, so obviously can’t tell if that has worked correctly or not
#### Other
• Again, debugging through the pEpPythonAdapter is problematic
### Dev/Service Side
• hard to tell what is going on - huss has tried tests, which half-work, but for the wrong reasons.
• confusion about whether or not to call enable_identity_for_sync or not
11:18 < darthmama> Hi guys! enable_identity_for_sync should be called ANYTIME you want an identity enabled for sync
11:34 < darthmama> THIS MEANS FROM THE START
11:34 < darthmama> so do NOT pass the flag to myself
11:34 < darthmama> you need to call it specifically on the identity, even the first time you run sync
11:34 < darthmama> otherwise, by default, sync should do *NOTHING*
11:43 < darthmama> andreas: huss: thomas: ^ !!!!!
#### Patrick question:
17:22 <patrick> ok.
17:23 <patrick> but only if pEp generates a key. If I receive a private key (e.g. because I send one to myself), that key won't be synced then?
17:24 <patrick> well, for untrusted server all devices should actally get the keys. only on trusted server, that would be an issue. I think that is a corner case. let's forget about it for the moment.
17:24 <darthmama> tricky. I think it would be eventually? But that's openpgp compatibility, so I would not guarantee fdik thought about that case.
17:24 <darthmama> ah, ok, trusted server as well. That I don't know, sorry.
17:25 <darthmama> I don't think set_own_key generates a sync event
17:25 <darthmama> that's definitely a volker question. I can ping him in the morning at engine breakfast about it if you like.
17:25 <patrick> it wouldn't actually be set_own_key. just when I get any random key (e.g. because I have a few old PGP keys that I need to import).
17:26 <patrick> ok, it would be intersting to know what is expected.
17:26 <darthmama> ah, no, we don't, for sure, but I can run it by him
17:26 <darthmama> we don't sync non-own keys anyway though
17:26 <darthmama> AFAIK
17:27 <patrick> ah, ok.
## pEpPythonAdapter - Sync implementation problems
• inject_sync_event - doesn’t disable anything if shutdown event seen
Sync/Leave Device Group testing in pEpPython adapter is not possible due to pEpPythonAdapter having incomplete/wrong sync implementation.
In Python Adapter sync can be handled in three ways:
• single threaded, poll for sync msgs in python
Currently these three ways are not completely implemented/tested and all of this code needs consolidation.
Primary focus at the moment is the single threaded way.
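Since the single-threaded variant is the current focus, here is a rough sketch of the intended control flow. All names in it (poll_sync_event, handle_sync_event, the "Shutdown" sentinel) are hypothetical placeholders, not the real pEpPythonAdapter API; the only point being made is that a Shutdown event should actually disable sync rather than be a no-op, as noted for recv_Sync_event above.
```python
import time

# Hypothetical sketch of the "single threaded, poll for sync msgs in python"
# variant; placeholder names, NOT the real pEpPythonAdapter API.
def run_sync_single_threaded(poll_sync_event, handle_sync_event, interval=0.1):
    while True:
        event = poll_sync_event()      # non-blocking poll for a queued sync msg
        if event is None:
            time.sleep(interval)       # nothing queued; back off briefly
            continue
        handle_sync_event(event)       # hand the message to the engine
        if event == "Shutdown":        # e.g. after leave_device_group()
            break                      # actually stop, instead of ignoring it
```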
TODO heck: | 2022-12-03 09:02:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38783159852027893, "perplexity": 10518.557081130068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00033.warc.gz"} |
https://terrytao.wordpress.com/2008/01/30/254a-lecture-8-the-mean-ergodic-theorem/?like=1&source=post_flair&_wpnonce=0dd3db7641 | We now begin our study of measure-preserving systems $(X, {\mathcal X}, \mu, T)$, i.e. a probability space $(X, {\mathcal X}, \mu)$ together with a probability space isomorphism $T: (X, {\mathcal X}, \mu) \to (X, {\mathcal X}, \mu)$ (thus $T: X \to X$ is invertible, with T and $T^{-1}$ both being measurable, and $\mu(T^n E) = \mu(E)$ for all $E \in {\mathcal X}$ and all n). For various technical reasons it is convenient to restrict to the case when the $\sigma$-algebra ${\mathcal X}$ is separable, i.e. countably generated. One reason for this is as follows:
Exercise 1. Let $(X, {\mathcal X}, \mu)$ be a probability space with ${\mathcal X}$ separable. Then the Banach spaces $L^p(X, {\mathcal X}, \mu)$ are separable (i.e. have a countable dense subset) for every $1 \leq p < \infty$; in particular, the Hilbert space $L^2(X, {\mathcal X}, \mu)$ is separable. Show that the claim can fail for $p = \infty$. (We allow the $L^p$ spaces to be either real or complex valued, unless otherwise specified.) $\diamond$
Remark 1. In practice, the requirement that ${\mathcal X}$ be separable is not particularly onerous. For instance, if one is studying the recurrence properties of a function $f: X \to {\Bbb R}$ on a non-separable measure-preserving system $(X, {\mathcal X}, \mu, T)$, one can restrict ${\mathcal X}$ to the separable sub-$\sigma$-algebra ${\mathcal X}'$ generated by the level sets $\{ x \in X: T^n f(x) > q \}$ for integer n and rational q, thus passing to a separable measure-preserving system $(X, {\mathcal X}', \mu, T)$ on which f is still measurable. Thus we see that in many cases of interest, we can immediately reduce to the separable case. (In particular, for many of the theorems in this course, the hypothesis of separability can be dropped, though we won’t bother to specify for which ones this is the case.) $\diamond$
We are interested in the recurrence properties of sets $E \in {\mathcal X}$ or functions $f \in L^p(X, {\mathcal X}, \mu)$. The simplest such recurrence theorem is
Theorem 1. (Poincaré recurrence theorem) Let $(X,{\mathcal X},\mu,T)$ be a measure-preserving system, and let $E \in {\mathcal X}$ be a set of positive measure. Then $\limsup_{n \to +\infty} \mu( E \cap T^n E ) \geq \mu(E)^2$. In particular, $E \cap T^n E$ has positive measure (and is thus non-empty) for infinitely many n.
(Compare with Theorem 1 of Lecture 3.)
Proof. For any integer $N > 1$, observe that $\int_X \sum_{n=1}^N 1_{T^n E}\ d\mu = N \mu(E)$, and thus by Cauchy-Schwarz
$\int_X (\sum_{n=1}^N 1_{T^n E})^2\ d\mu \geq N^2 \mu(E)^2.$ (1)
The left-hand side of (1) can be rearranged as
$\sum_{n=1}^N \sum_{m=1}^N \mu( T^n E \cap T^m E ).$ (2)
On the other hand, $\mu( T^n E \cap T^m E) = \mu( E \cap T^{m-n} E )$. From this one easily obtains the asymptotic (note that for any $\varepsilon > 0$, all but at most $O_\varepsilon(N)$ of the $N^2$ pairs $(n,m)$ have $|m-n|$ large enough that $\mu(E \cap T^{m-n} E) \leq \limsup_{k \to \infty} \mu(E \cap T^k E) + \varepsilon$, while the exceptional pairs contribute at most $O_\varepsilon(N)$ to the sum)
$(2)\leq (\limsup_{n \to \infty} \mu( E \cap T^n E ) + o(1)) N^2,$ (3)
where o(1) denotes an expression which goes to zero as N goes to infinity. Combining (1), (2), (3) and taking limits as $N \to +\infty$ we obtain
$\limsup_{n \to \infty} \mu( E \cap T^n E ) \geq \mu(E)^2$ (4)
as desired. $\Box$
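As a quick numerical illustration of Theorem 1 (an addition for this writeup, not part of the original lecture), one can take the rotation $x \mapsto x+a \hbox{ mod } M$ on ${\Bbb Z}_M$ with uniform measure, with the choices of $M$, $a$, and $E$ below arbitrary, and use the largest late-time overlap as a crude stand-in for the limsup:
```python
import numpy as np

# Empirical check of limsup_n mu(E ∩ T^n E) >= mu(E)^2 for the rotation
# T: x -> x + a (mod M) on Z_M with uniform probability measure.
def check_poincare(M=1000, a=137, frac=0.3, n_max=5000):
    E = np.zeros(M, dtype=bool)
    E[: int(M * frac)] = True                 # E = an "interval" of measure ~frac
    mu_E = E.mean()
    # mu(E ∩ T^n E): T^n E is E shifted by n*a positions
    overlaps = [np.mean(E & np.roll(E, n * a)) for n in range(1, n_max + 1)]
    tail_sup = max(overlaps[n_max // 2:])     # crude stand-in for the limsup
    print(f"mu(E)^2 = {mu_E ** 2:.4f}   sup of late overlaps = {tail_sup:.4f}")
    return tail_sup >= mu_E ** 2

print(check_poincare())                       # True: the bound (4) is respected
```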
Remark 2. In classical physics, the evolution of a physical system in a compact phase space is given by a (continuous-time) measure-preserving system (this is Hamilton’s equations of motion combined with Liouville’s theorem). The Poincaré recurrence theorem then has the following unintuitive consequence: every collection E of states of positive measure, no matter how small, must eventually return to overlap itself given sufficient time. For instance, if one were to burn a piece of paper in a closed system, then there exist arbitrarily small perturbations of the initial conditions such that, if one waits long enough, the piece of paper will eventually reassemble (modulo arbitrarily small error)! This seems to contradict the second law of thermodynamics, but the reason for the discrepancy is because the time required for the recurrence theorem to take effect is inversely proportional to the measure of the set E, which in physical situations is exponentially small in the number of degrees of freedom (which is already typically quite large, e.g. of the order of the Avogadro constant). This gives more than enough opportunity for Maxwell’s demon to come into play to reverse the increase of entropy. (This can be viewed as a manifestation of the curse of dimensionality.) The more sophisticated recurrence theorems we will see later have much poorer quantitative bounds still, so much so that they basically have no direct significance for any physical dynamical system with many relevant degrees of freedom. $\diamond$
Exercise 2. Prove the following generalisation of the Poincaré recurrence theorem: if $(X, {\mathcal X}, \mu, T)$ is a measure-preserving system and $f \in L^1(X, {\mathcal X},\mu)$ is non-negative, then $\limsup_{n \to +\infty} \int_X f T^n f\ d\mu \geq (\int_X f\ d\mu)^2$. $\diamond$
Exercise 3. Give examples to show that the quantity $\mu(E)^2$ in the conclusion of Theorem 1 cannot be replaced by any smaller quantity in general, regardless of the actual value of $\mu(E)$. (Hint: use a Bernoulli system example.) $\diamond$
Exercise 4. Using the pigeonhole principle instead of the Cauchy-Schwarz inequality (and in particular, the statement that if $\mu(E_1) + \ldots + \mu(E_n) > 1$, then the sets $E_1,\ldots,E_n$ cannot all be disjoint), prove the weaker statement that for any set E of positive measure in a measure-preserving system, the set $E \cap T^n E$ is non-empty for infinitely many n. (This exercise illustrates the general point that the Cauchy-Schwarz inequality can be viewed as a quantitative strengthening of the pigeonhole principle.) $\diamond$
For this lecture and the next we shall study several variants of the Poincaré recurrence theorem. We begin by looking at the mean ergodic theorem, which studies the limiting behaviour of the ergodic averages $\frac{1}{N} \sum_{n=1}^N T^n f$ in various $L^p$ spaces, and in particular in $L^2$.
— Hilbert space formulation —
We begin with the Hilbert space formulation of the mean ergodic theorem, due to von Neumann.
Theorem 2. (Von Neumann ergodic theorem) Let $U: H \to H$ be a unitary operator on a separable Hilbert space H. Then for every $v \in H$ we have
$\lim_{N \to +\infty} \frac{1}{N} \sum_{n=0}^{N-1} U^n v = \pi(v)$, (5)
where $\pi: H \to H^U$ is the orthogonal projection from H to the closed subspace $H^U := \{ v \in H: Uv = v \}$ consisting of the U-invariant vectors.
Proof. We give the slick (but not particularly illuminating) proof of Riesz. It is clear that (5) holds if v is already invariant (i.e. $v \in H^U$). Next, let W denote the (possibly non-closed) space $W := \{ Uw - w: w \in H \}$. If Uw-w lies in W and v lies in $H^U$, then by unitarity
$\langle Uw-w, v \rangle = \langle w, U^{-1} v \rangle - \langle w, v \rangle = \langle w, v \rangle - \langle w, v \rangle = 0$ (6)
and thus W is orthogonal to $H^U$. In particular $\pi(Uw-w) = 0$. From the telescoping identity
$\frac{1}{N} \sum_{n=0}^{N-1} U^n (Uw - w) = \frac{1}{N} (U^{N} w - w )$ (7)
we conclude that (5) also holds if $v \in W$; by linearity we conclude that (5) holds for all v in $H^U + \overline{W}$. A standard limiting argument (using the fact that the linear transformations $v \mapsto \pi(v)$ and $v \mapsto \frac{1}{N} \sum_{n=0}^{N-1} U^n v$ are bounded on H, uniformly in n) then shows that (5) holds for v in the closure $\overline{H^U + W}$.
To conclude, it suffices to show that the closed space $\overline{H^U + W}$ is all of H. Suppose for contradiction that this is not the case. Then there exists a non-zero vector w which is orthogonal to all of $\overline{H^U + W}$. In particular, w is orthogonal to Uw – w. Applying the easily verified identity $\| Uw-w\|^2 = -2 \hbox{Re} \langle Uw-w, w\rangle$ (related to the parallelogram law) we conclude that Uw=w, thus w lies in $H^U$. This implies that w is orthogonal to itself and is thus zero, a contradiction. $\Box$
On a measure-preserving system $(X, {\mathcal X}, \mu, T)$, the shift map $f \mapsto Tf$ is a unitary transformation on the separable Hilbert space $L^2(X, {\mathcal X}, \mu)$. We conclude
Corollary 1. (mean ergodic theorem) Let $(X, {\mathcal X}, \mu, T)$ be a measure-preserving system, and let $f \in L^2(X,{\mathcal X},\mu)$. Then $\frac{1}{N} \sum_{n=1}^N T^n f$ converges in $L^2(X,{\mathcal X},\mu)$ norm to $\pi(f)$, where $\pi: L^2(X,{\mathcal X},\mu) \to L^2(X,{\mathcal X},\mu)^T$ is the orthogonal projection to the space $L^2(X,{\mathcal X},\mu)^T := \{ f \in L^2(X,{\mathcal X},\mu): Tf =f \}$ consisting of the shift-invariant functions in $L^2(X, {\mathcal X},\mu)$.
Example 4. (Finite case) Suppose that $(X, {\mathcal X}, \mu, T)$ is a finite measure-preserving system, with ${\mathcal X}$ discrete and $\mu$ the uniform probability measure. Then T is a permutation on X and thus decomposes as the direct sum of disjoint cycles (possibly including trivial cycles of length 1). Then the shift-invariant functions are precisely those functions which are constant on each of these cycles, and the map $f \mapsto \pi(f)$ replaces a function $f: X \to {\Bbb C}$ with its average value on each of these cycles. It is then an instructive exercise to verify the mean ergodic theorem by hand in this case. $\diamond$
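That verification can also be scripted. The following Python snippet (our own illustration of Example 4; the permutation and function are random, and all names are ours) checks that the ergodic averages approach the cycle averages at rate $O(1/N)$:

import numpy as np

rng = np.random.default_rng(0)
X = 8
T = rng.permutation(X)    # the permutation: i -> T[i]
f = rng.normal(size=X)

# ergodic averages (1/N) sum_{n=0}^{N-1} f(T^n x) for every starting point x
N = 12000
avg = np.zeros(X)
orbit = np.arange(X)
for _ in range(N):
    avg += f[orbit]
    orbit = T[orbit]
avg /= N

# pi(f): replace f by its average value on each cycle of T
pi_f = np.empty(X)
seen = np.zeros(X, dtype=bool)
for start in range(X):
    if not seen[start]:
        cycle, i = [], start
        while not seen[i]:
            seen[i] = True
            cycle.append(i)
            i = T[i]
        pi_f[cycle] = f[cycle].mean()

print(np.max(np.abs(avg - pi_f)))   # small, shrinking like O(1/N)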
Exercise 5. With the notation and assumptions of Corollary 1, show that the limit $\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \int_X T^n f \overline{f}\ d\mu$ exists, is real, and is greater than or equal to $|\int_X f\ d\mu|^2$. (Hint: the constant function $1$ lies in $L^2(X, {\mathcal X}, \mu)^T$.) Note that this is stronger than the conclusion of Exercise 2. $\diamond$
Let us now give some other proofs of the von Neumann ergodic theorem. We first give a proof (close to von Neumann’s original proof) using the spectral theorem for unitary operators. This theorem asserts (among other things) that a unitary operator $U: H \to H$ can be expressed in the form $U = \int_{S^1} \lambda\ d\mu(\lambda)$, where $S^1 := \{ z \in {\Bbb C}: |z|=1\}$ is the unit circle and $\mu$ is a projection-valued Borel measure on the circle. More generally, we have
$U^n = \int_{S^1} \lambda^n\ d\mu(\lambda)$ (8)
and so for any vector v in H and any positive integer N
$\frac{1}{N} \sum_{n=0}^{N-1} U^n v = \int_{S^1} \frac{1}{N} \sum_{n=0}^{N-1} \lambda^n\ d\mu(\lambda) v$. (9)
We separate off the $\lambda=1$ portion of this integral. For $\lambda \neq 1$, we have the geometric series formula
$\frac{1}{N} \sum_{n=0}^{N-1} \lambda^n = \frac{1}{N} \frac{\lambda^{N}-1}{\lambda-1}$ (10)
(compare with (7)), thus we can rewrite (9) as
$\mu(\{1\}) v + \int_{S^1 \backslash \{1\}} \frac{1}{N} \frac{\lambda^{N}-1}{\lambda-1}\ d\mu(\lambda) v$. (11)
Now observe (using (10)) that $\frac{1}{N} \frac{\lambda^{N}-1}{\lambda-1}$ is bounded in magnitude by 1 and converges to zero as $N \to \infty$ for any fixed $\lambda \neq 1$. Applying the dominated convergence theorem (which requires a little bit of justification in this vector-valued case), we see that the second term in (11) goes to zero as $N \to \infty$. So we see that (9) converges to $\mu(\{1\}) v$. But $\mu(\{1\})$ is just the orthogonal projection to the eigenspace of U with eigenvalue 1, i.e. the space $H^U$, thus recovering the von Neumann ergodic theorem. (It is instructive to use spectral theory to interpret Riesz’s proof of this theorem and see how it relates to the argument just given.)
Remark 3. The above argument in fact shows that the rate of convergence in the von Neumann ergodic theorem is controlled by the spectral gap of U – i.e. how well-separated the trivial component $\{1\}$ of the spectrum is from the rest of the spectrum. This is one of the reasons why results on spectral gaps of various operators are highly prized. $\diamond$
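Both the convergence and the spectral-gap heuristic of Remark 3 can be seen concretely for a diagonal unitary matrix, where the projection-valued measure sits on the diagonal entries. A small Python sketch (ours; the eigenvalues and the vector are arbitrary choices):

import numpy as np

thetas = np.array([0.0, 0.0, 0.3, 1.1, 2.5])   # two eigenvalues equal to 1
lam = np.exp(1j * thetas)                      # spectrum of a diagonal unitary U
v = np.array([1, 2, 3, 4, 5], dtype=complex)

proj = np.where(np.isclose(lam, 1.0), v, 0)    # projection onto the lambda=1 eigenspace

for N in [10, 100, 1000, 10000]:
    avg = np.array([(lam_k ** np.arange(N)).mean() for lam_k in lam]) * v
    print(N, np.linalg.norm(avg - proj))       # error decays like 1/N

The error at each stage is dominated by the eigenvalue closest to (but distinct from) 1, which is the spectral-gap phenomenon described above.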
We now give another proof of Theorem 2, based on the energy decrement method; this proof is significantly lengthier, but is particularly well suited for conversion to finitary quantitative settings. For any positive integer N, define the averaging operators $A_N := \frac{1}{N} \sum_{n=0}^{N-1} U^n$; by the triangle inequality we see that $\| A_N v \| \leq \|v\|$ for all v. Now we observe
Lemma 1. (Lack of uniformity implies energy decrement) Suppose $\| A_N v \| \geq \varepsilon$. Then $\| v - A_N^* A_N v \|^2 \leq \|v\|^2 - \varepsilon^2$.
Proof. This follows from the identity
$\| v - A_N^* A_N v \|^2 = \|v\|^2 - 2 \|A_N v \|^2 + \| A_N^* A_N v \|^2$ (12)
and the fact that $A_N^*$ has operator norm at most 1. $\Box$
We now iterate this to obtain
Proposition 1. (Koopman-von Neumann type theorem) Let v be a unit vector, let $\varepsilon > 0$, and let $1 < N_1 < N_2 < \ldots < N_J$ be a sequence of integers with $J > 1/\varepsilon^2 + 2$. Then there exists $1 \leq j < J$ and a decomposition $v = s + r$ where $\| Us - s\| = O( J \frac{1}{N_{j+1}} )$ and $\| A_N r \| \leq \varepsilon$ for all $N \geq N_j$.
(The letters s, r stand for “structured” and “random” (or “residual”) respectively. For more on decompositions into structured and random components, see my FOCS lecture notes.)
Proof. We perform the following algorithm:
1. Initialise j := J-1, s := 0, and r := v.
2. If $\| A_N r \| \leq \varepsilon$ for all $N \geq N_j$ then STOP. If instead $\|A_N r \| > \varepsilon$ for some $N \geq N_j$, observe from Lemma 1 that $\| r - A_N^* A_N r \|^2 \leq \|r\|^2 - \varepsilon^2$.
3. Replace r with $r - A_N^* A_N r$ and simultaneously replace s with $s + A_N^* A_N r$ (both using the current value of r, so that the sum s + r remains equal to v), and replace j with j-1. Then return to Step 2.
Observe that this procedure must terminate in at most $1/\varepsilon^2$ steps (since the energy $\|r\|^2$ starts at 1, drops by at least $\varepsilon^2$ at each stage, and cannot go below zero). In particular, j stays positive. Observe also that r always has norm at most 1, and thus $\| (U - I) A_N^* A_N r \| = O( 1/N )$ at any given stage of the algorithm. From this and the triangle inequality one easily verifies the required claims. $\Box$
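The algorithm above is concrete enough to run. The following Python transcription is our own sketch: the unitary (a cyclic shift), the values of $\varepsilon$ and of the $N_j$, and all variable names are ours, and for simplicity we test only the listed $N_j$ rather than every $N \geq N_j$:

import numpy as np

d = 40
U = np.roll(np.eye(d), 1, axis=0)            # unitary cyclic shift on R^d
rng = np.random.default_rng(1)
v = np.ones(d) + 0.5 * rng.normal(size=d)    # large shift-invariant component
v /= np.linalg.norm(v)

def A(N):
    """Averaging operator A_N = (1/N) sum_{n=0}^{N-1} U^n as a matrix."""
    P, M = np.eye(d), np.zeros((d, d))
    for _ in range(N):
        M += P
        P = U @ P
    return M / N

eps = 0.5
Ns = [4, 8, 16, 32, 64, 128, 256, 512]       # J = 8 > 1/eps^2 + 2
j, s, r = len(Ns) - 1, np.zeros(d), v.copy()
while True:
    bad = [N for N in Ns[j:] if np.linalg.norm(A(N) @ r) > eps]
    if not bad:
        break                                # STOP: all remaining averages of r are small
    B = A(bad[0]).T @ A(bad[0])              # A_N^* A_N (adjoint = transpose in the real case)
    s, r, j = s + B @ r, r - B @ r, j - 1    # energy of r drops by at least eps^2
print(np.linalg.norm(U @ s - s))             # s is nearly shift-invariant
print(max(np.linalg.norm(A(N) @ r) for N in Ns[j:]))   # at most eps

Note the simultaneous update of s and r via tuple assignment, matching Step 3 of the algorithm.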
Corollary 2 (partial von Neumann ergodic theorem). For any vector v, the averages $A_N v$ form a Cauchy sequence in H.
Proof. Without loss of generality we can take v to be a unit vector. Suppose for contradiction that $A_N v$ was not Cauchy. Then one could find $\varepsilon > 0$ and $1 < N_1 < M_1 < N_2 < M_2 < \ldots$ such that $\|A_{N_j} v - A_{M_j} v \| \geq 5 \varepsilon$ (say) for all j. By sparsifying the sequence if necessary we can assume that $N_{j+1}$ is large compared to $N_j$, $M_j$ and $\varepsilon$. Now we apply Proposition 1 to find $j = O_\varepsilon(1)$ and a decomposition v = s + r such that $\|Us-s\| = O_\varepsilon( 1 / N_{j+1} )$ and $\| A_{N_j} r \|, \|A_{M_j} r \| \leq \varepsilon$. If $N_{j+1}$ is large enough depending on $N_j, M_j, \varepsilon$, we thus have $\|A_{N_j} s - s\|, \|A_{M_j} s - s \| \leq \varepsilon$, and thus by the triangle inequality, $\|A_{N_j} v - A_{M_j} v \| \leq 4 \varepsilon$, a contradiction. $\Box$
Remark 4. This result looks weaker than Theorem 2, but the argument is much more robust; for instance, one can modify it to establish convergence of multiple averages such as $\frac{1}{N} \sum_{n=1}^N T_1^n f_1 T_2^n f_2 T_3^n f_3$ in $L^p$ norms for commuting shifts $T_1, T_2, T_3$, which does not seem possible using the other arguments given here; see this paper of mine for details. Further quantitative analysis of the mean ergodic theorem can be found in this paper of Avigad, Gerhardy, and Towsner. $\diamond$
Corollary 2 can be used to recover Theorem 2 in its full strength, by combining it with a weak form of Theorem 2:
Proposition 2 (Weak von Neumann ergodic theorem) The conclusion (5) of Theorem 2 holds in the weak topology.
Proof. The averages $A_N v$ lie in a bounded subset of the separable Hilbert space H, and are thus precompact in the weak topology by the sequential Banach-Alaoglu theorem. Thus, if (5) fails, then there exists a subsequence $A_{N_j} v$ which converges in the weak topology to some limit w other than $\pi(v)$. By telescoping series we see that $\| U A_{N_j} v - A_{N_j} v \| \leq 2 \|v\|/N_j$, and so on taking limits we see that $\|Uw - w\|=0$, i.e. $w \in H^U$. On the other hand, if y is any vector in $H^U$, then $A_{N_j}^* y = y$, and thus on taking inner products with v we obtain $\langle y, A_{N_j} v \rangle = \langle y, v \rangle$. Taking limits we obtain $\langle y, w \rangle = \langle y, v \rangle$, i.e. v-w is orthogonal to $H^U$. These facts imply that $w = \pi(v)$, giving the desired contradiction. $\Box$
— Conditional expectation —
We now turn away from the abstract Hilbert approach to the ergodic theorem (which is excellent for proving the mean ergodic theorem, but not flexible enough to handle more general ergodic theorems) and turn to a more measure-theoretic dynamics approach, based on manipulating the four components $X, {\mathcal X}, \mu, T$ of the underlying system separately, rather than working with the single object $L^2( X, {\mathcal X}, \mu)$ (with the unitary shift T). In particular it is useful to replace the $\sigma$-algebra ${\mathcal X}$ by a sub-$\sigma$-algebra ${\mathcal X}' \subset {\mathcal X}$, thus reducing the number of measurable functions. This creates an isometric embedding of Hilbert spaces
$L^2( X, {\mathcal X}', \mu) \subset L^2( X, {\mathcal X}, \mu)$ (13)
and so the former space is a closed subspace of the latter. In particular, we have an orthogonal projection ${\Bbb E}( \cdot|{\mathcal X}'): L^2( X, {\mathcal X}, \mu) \to L^2( X, {\mathcal X}', \mu)$, which can be viewed as the adjoint of the inclusion (13). In other words, for any $f \in L^2(X,{\mathcal X},\mu)$, ${\Bbb E}(f|{\mathcal X}')$ is the unique element of $L^2(X, {\mathcal X}',\mu)$ such that
$\int_X {\Bbb E}(f|{\mathcal X}') \overline{g}\ d\mu = \int_X f \overline{g}\ d\mu$ (14)
for all $g \in L^2(X, {\mathcal X}', \mu)$. (A reminder: when dealing with $L^p$ spaces, we identify any two functions which agree $\mu$-almost everywhere. Thus, technically speaking, elements of $L^p$ spaces are not actually functions, but rather equivalence classes of functions.)
Example 5. (Finite case) Let X be a finite set, thus ${\mathcal X}$ can be viewed as a partition of X, and ${\mathcal X}' \subset {\mathcal X}$ is a coarser partition of X. To avoid degeneracies, assume that every point in X has positive measure with respect to $\mu$. Then an element f of $L^2(X, {\mathcal X}, \mu)$ is just a function $f: X \to {\Bbb C}$ which is constant on each atom of ${\mathcal X}$. Similarly for $L^2(X, {\mathcal X}', \mu)$. The conditional expectation ${\Bbb E}(f|{\mathcal X}')$ is then the function whose value on each atom A of ${\mathcal X}'$ is equal to the average value $\frac{1}{\mu(A)} \int_A f\ d\mu$ on that atom. (What needs to be changed here if some points have zero measure?) $\diamond$
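In code, the conditional expectation of Example 5 is just a weighted group-by over the atoms. A short Python sketch (our own illustration; the measure, the function and the partition are arbitrary):

import numpy as np

mu = np.array([0.1, 0.2, 0.3, 0.15, 0.25])   # positive measure on 5 points
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
atoms = [[0, 1], [2, 3, 4]]                  # the coarser partition X'

Ef = np.empty_like(f)
for A in atoms:
    Ef[A] = (mu[A] * f[A]).sum() / mu[A].sum()   # weighted average of f on the atom

# defining property (14): integrating against any X'-measurable g agrees
g = np.array([7.0, 7.0, -2.0, -2.0, -2.0])       # constant on each atom
print((Ef * g * mu).sum(), (f * g * mu).sum())   # identical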
We leave the following standard properties of conditional expectation as an exercise.
Exercise 6. Let $(X, {\mathcal X}, \mu)$ be a probability space, and let ${\mathcal X}'$ be a sub-$\sigma$-algebra. Let $f \in L^2(X, {\mathcal X}, \mu)$.
1. The operator $f \mapsto {\Bbb E}( f|{\mathcal X'})$ is a bounded self-adjoint projection on $L^2(X,{\mathcal X},\mu)$. It maps real functions to real functions, it preserves constant functions (and more generally preserves ${\mathcal X'}$-valued functions), and commutes with complex conjugation.
2. If f is non-negative, then ${\Bbb E}( f|{\mathcal X'})$ is non-negative (up to sets of measure zero, of course). More generally, we have a comparison principle: if f, g are real-valued and $f \leq g$ pointwise a. e., then ${\Bbb E}( f|{\mathcal X'}) \leq {\Bbb E}( g|{\mathcal X'})$ a.e. Similarly, we have the triangle inequality $|{\Bbb E}( f|{\mathcal X'})| \leq {\Bbb E}( |f||{\mathcal X'})$ a.e..
3. (Module property) If $g \in L^\infty(X, {\mathcal X}', \mu)$, then ${\Bbb E}( f g|{\mathcal X'}) = {\Bbb E}( f|{\mathcal X'}) g$ a.e..
4. (Contraction) If $f \in L^2(X, {\mathcal X},\mu) \cap L^p(X, {\mathcal X},\mu)$ for some $1 \leq p \leq \infty$, then $\|{\Bbb E}(f|{\mathcal X'})\|_{L^p} \leq \|f\|_{L^p}$. (Hint: do the p=1 and $p=\infty$ cases first.) This implies in particular that conditional expectation has a unique continuous extension to $L^p(X, {\mathcal X},\mu)$ for $1 \leq p \leq \infty$ (the $p=\infty$ case is exceptional, but note that $L^\infty$ is contained in $L^2$ since $\mu$ is finite). $\diamond$
For applications to ergodic theory, we will only be interested in taking conditional expectations with respect to a shift-invariant sub-$\sigma$-algebra ${\mathcal X'}$, thus $T$ and $T^{-1}$ preserve ${\mathcal X'}$. In that case T preserves $L^2(X,{\mathcal X}',\mu)$, and thus T commutes with conditional expectation, or in other words that
${\Bbb E}( T^n f | {\mathcal X}' ) = T^n {\Bbb E}( f | {\mathcal X}' )$ (15)
a.e. for all $f \in L^2(X,{\mathcal X}, \mu)$ and all n.
Now we connect conditional expectation to the mean ergodic theorem. Let ${\mathcal X}^T := \{ E \in {\mathcal X}: TE = E \hbox{ a.e.} \}$ be the set of essentially shift-invariant sets. One easily verifies that this is a shift-invariant sub-$\sigma$-algebra of ${\mathcal X}$.
Exercise 7. Show that if E lies in ${\mathcal X}^T$, then there exists a set $F \in {\mathcal X}$ which is genuinely invariant (TF=F) and which differs from E only by a set of measure zero. Thus it does not matter whether we deal with shift-invariance or essential shift-invariance here. (More generally, it will not make any significant difference if we modify any of the sets in our $\sigma$-algebras by null sets.) $\diamond$
The relevance of this algebra to the mean ergodic theorem arises from the following identity:
Exercise 8. Show that $L^2( X, {\mathcal X}, \mu)^T = L^2( X, {\mathcal X}^T, \mu)$. $\diamond$
As a corollary of this and Corollary 1, we have
Corollary 3. (Mean ergodic theorem, again) Let $(X, {\mathcal X}, \mu, T)$ be a measure-preserving system. Then for any $f \in L^2(X, {\mathcal X}, \mu)$, the averages $\frac{1}{N} \sum_{n=0}^{N-1} T^n f$ converge in $L^2$ norm to ${\Bbb E}(f|{\mathcal X}^T)$.
Exercise 9. Show that Corollary 3 continues to hold if $L^2$ is replaced throughout by $L^p$ for any $1 \leq p < \infty$. (Hint: for the case $p<2$, use that $L^2$ is dense in $L^p$. For the case $p>2$, use that $L^\infty$ is dense in $L^p$.) What happens when $p = \infty$? $\diamond$
Let us now give another proof of Corollary 3 (leading to a fourth proof of the mean ergodic theorem). The key here will be the decomposition $f = f_{U^\perp} + f_U$, where $f_{U^\perp} := {\Bbb E}(f|{\mathcal X}^T)$ is the “structured” part of f (at least as far as the mean ergodic theorem is concerned) and $f_U := f - f_{U^\perp}$ is the “random” part. (The subscripts $U^\perp, U$ stand for “anti-uniform” and “uniform” respectively; this notation is not standard.) As $f_{U^\perp}$ is shift-invariant, we clearly have
$\frac{1}{N} \sum_{n=0}^{N-1} T^n f_{U^\perp} = f_{U^\perp}$ (16)
so it suffices to show that
$\| \frac{1}{N} \sum_{n=0}^{N-1} T^n f_U \|_{L^2}^2 \to 0$ (17)
as $N \to \infty$. But we can expand out the left-hand side (using the unitarity of T) as
$\langle F_N, f_U \rangle := \int_X F_N \overline{f_U}\ d\mu$ (18)
where $F_N$ is the dual function of $f_U$, defined as
$F_N := \frac{1}{N^2} \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} T^{n-m} f_U$. (19)
Now, from the triangle inequality we know that the sequence of dual functions $F_N$ is uniformly bounded in $L^2$ norm, and so by Cauchy-Schwarz we know that the inner products $\langle F_N, f_U \rangle$ are bounded. If they converge to zero, we are done; otherwise, by the Bolzano-Weierstrass theorem, we have $\langle F_{N_j}, f_U \rangle \to c$ for some subsequence $N_j$ and some non-zero c.
(One could also use ultrafilters instead of subsequences here if desired, it makes little difference to the argument.) By the Banach-Alaoglu theorem (or more precisely, the sequential version of this in the separable case), there is a further subsequence $F_{N'_j}$ which converges weakly (or equivalently in this Hilbert space case, in the weak-* sense) to some limit $F_\infty \in L^2(X, {\mathcal X}, \mu)$. Since c is non-zero, $F_\infty$ must also be non-zero. On the other hand, from telescoping series one easily computes that $\| T F_N - F_N\|_{L^2}$ decays like O(1/N) as $N \to \infty$, so on taking limits we have $T F_\infty - F_\infty = 0$. In other words, $F_\infty$ lies in $L^2(X, {\mathcal X}^T, \mu)$.
On the other hand, by construction of $f_U$ we have ${\Bbb E}(f_U|{\mathcal X}^T) = 0$. From (15) and linearity we conclude that ${\Bbb E}(F_N|{\mathcal X}^T) = 0$ for all N, so on taking limits we have ${\Bbb E}(F_\infty|{\mathcal X}^T) = 0$. But since $F_\infty$ is already in $L^2(X, {\mathcal X}^T, \mu)$, we conclude $F_\infty=0$, a contradiction.
Remark 5. This argument is lengthier than some of the other proofs of the mean ergodic theorem, but it turns out to be fairly robust; it demonstrates (using the compactness properties of certain “dual functions”) that a function $f_U$ with sufficiently strong “mixing” properties (in this case, we require that ${\Bbb E}(f_U | {\mathcal X}^T) = 0$) will cancel itself out when taking suitable ergodic averages, thus reducing the study of averages of f to the study of averages of $f_{U^\perp} = {\Bbb E}(f|{\mathcal X}^T)$. In the modern jargon, this means that ${\mathcal X}^T$ is (the $\sigma$-algebra induced by) a characteristic factor of the ergodic average $f \mapsto \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N T^n f$. We will see further examples of characteristic factors for other averages later in this course. $\diamond$
Exercise 10. Let $(\Gamma,\cdot)$ be a countably infinite discrete group. A Følner sequence is a sequence of increasing finite non-empty sets $F_n$ in $\Gamma$ with $\bigcup_n F_n = \Gamma$ with the property that for any given finite set $S \subset \Gamma$, we have $|(F_n \cdot S) \Delta F_n|/|F_n| \to 0$ as $n \to \infty$, where $F_n \cdot S := \{ fs: f \in F_n, s \in S\}$ is the product set of $F_n$ and S, $|F_n|$ denotes the cardinality of $F_n$, and $\Delta$ denotes symmetric difference. (For instance, in the case $\Gamma = {\Bbb Z}$, the sequence $F_n := \{-n,\ldots,n\}$ is a Følner sequence.) If $\Gamma$ acts (on the left) in a measure-preserving manner on a probability space $(X, {\mathcal X}, \mu)$, and $f \in L^2(X, {\mathcal X}, \mu)$, show that $\frac{1}{|F_n|} \sum_{\gamma \in F_n} f \circ \gamma^{-1}$ converges in $L^2$ to ${\Bbb E}(f|{\mathcal X}^\Gamma)$, where ${\mathcal X}^\Gamma$ is the collection of all measurable sets which are $\Gamma$-invariant modulo null sets, and $f \circ \gamma^{-1}$ is the function $x \mapsto f(\gamma^{-1} x)$. $\diamond$
[Update, Jan 30: exercise corrected, another exercise added.]
[Update, Feb 1: Some corrections.]
[Update, Feb 4: Ergodic averages changed to sum over 0 to N-1 rather than over 1 to N.]
[Update, Feb 11: Discussion of the ergodic theorem in weak topologies (Proposition 2) added.]
[Update, Feb 21: Exercise 10 corrected.]
[Update, Mar 31: Exercise 5 corrected.]
[Update, Jun 14: Some corrections.] | 2015-08-01 07:47:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 393, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9765549302101135, "perplexity": 152.82355517013633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988598.68/warc/CC-MAIN-20150728002308-00204-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://ai.stackexchange.com/questions/14322/numpy-ndarray-object-is-not-callable-error | # 'numpy.ndarray' object is not callable error [closed]
I want to find the minimum value with scipy.optimize, but it shows a 'numpy.ndarray' object is not callable error.

This is my code in Python:
from scipy.optimize import fmin
import numpy as np
import pylab
wavelenght = np.array([0.2,0.3,0.4,0.5,0.6,0.7,1.0,2.0,4.0,10.0 ])
ns = np.array([1.07,1.51,0.17,0.13,0.12,0.14,0.21,0.65,2.30,19.3])
nc = np.array([1.01,1.39,1.18,1.13,0.40,0.21,0.33,0.85,2.41,11.6])
ng = np.array([1.43,1.80,1.66,0.85,0.22,0.16,0.26,0.85,2.60,12.4])
ks = np.array([1.24,0.96,1.95,2.92,3.73,4.52,6.76,12.2,24.3,54.0])
kc = np.array([1.50,1.67,2.21,2.56,2.95,4.16,6.60,10.6,21.5,49.1])
kg = np.array([1.22,1.92,1.96,1.90,2.97,3.95,6.82,12.6,24.6,55.0])
Rs=((((ns-1)**2)+ks**2)/(((ns+1)**2)+ks**2))*100
Rc=((((nc-1)**2)+kc**2)/(((nc+1)**2)+kc**2))*100
Rg=((((ng-1)**2)+kg**2)/(((ng+1)**2)+kg**2))*100
x0 = -5 # start from x = -5
xmin0 = fmin(Rs,x0)
pylab.plot(wavelenght,Rs)
pylab.plot(wavelenght,Rc)
pylab.plot(wavelenght,Rg)
The definition of fmin according to scipy is:
fmin(func, x0, args=(), **kwargs)
The reason fmin doesn't accept Rs is that Rs isn't a callable, but an array. A callable is simply an object that implements __call__; such objects include functions, methods, anonymous functions (lambdas), classes, and instances of classes that define __call__.
def callable_function(*vargs):
    pass

class Callable:
    def __init__(self, *vargs):
        pass

    def __call__(self, *vargs):
        pass

    @staticmethod
    def staticcallable(*vargs):
        pass

callable_lambda = lambda *vargs: None
All of the above are callables. However, not all callables can be used with fmin, since fmin expects the callable to return a scalar (an int or a float).
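Applied to the question: the usual fix is to make the tabulated Rs data callable, for instance by interpolation, and pass that to fmin. A minimal sketch (our suggestion, not part of the original answer; the cubic interpolation, the extrapolation and the starting point are assumptions):

import numpy as np
from scipy.optimize import fmin
from scipy.interpolate import interp1d

wavelenght = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1.0, 2.0, 4.0, 10.0])
ns = np.array([1.07, 1.51, 0.17, 0.13, 0.12, 0.14, 0.21, 0.65, 2.30, 19.3])
ks = np.array([1.24, 0.96, 1.95, 2.92, 3.73, 4.52, 6.76, 12.2, 24.3, 54.0])
Rs = ((ns - 1)**2 + ks**2) / ((ns + 1)**2 + ks**2) * 100  # an array, not a callable

# Wrap the tabulated points in a callable: fmin needs a function, not data.
Rs_func = interp1d(wavelenght, Rs, kind='cubic', fill_value='extrapolate')

def objective(x):
    return Rs_func(x).item()   # return a plain scalar to the optimiser

xmin = fmin(objective, 0.5)    # start inside the measured range
print(xmin)                    # minimum near the 0.3 data point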
https://cracku.in/blog/number-series-questions-for-ssc-gd-pdf/ | Number Series Questions For SSC GD PDF
Question 1: In the following question, select the missing number from the given series:
6, 19, 54, 167, 494, ?
a) 1491
b) 1553
c) 1361
d) 1642
Question 2: In the following question, select the missing number from the given series:
67, 70, 74, 77, 81, 84 ?
a) 87
b) 88
c) 86
d) 89
Question 3: In the following question, a series is given with one or more number (s) missing. Choose the correct alternative from the given options.
22, 22, 23, 20, 16, 17, 17, ?, ?, 8
a) 18, 9
b) 12, 13
c) 10, 9
d) 13, 10
Question 4: In the following question, a series is given with one or more number (s) missing. Choose the correct alternative from the given options.
?, 5, 30, 186, 1309, 10480
a) 0.25
b) 0.75
c) 1.00
d) 0
Question 5: In the following question, a series is given with one or more number (s) missing. Choose the correct alternative from the given options.
7, 51, 8, 65, 9, ?
a) 79
b) 80
c) 81
d) 82
Question 6: In the following question, a series is given with one or more number (s) missing. Choose the correct alternative from the given options.
0.2, 0.16, 0.072, 0.0256, ?
a) 0.0016
b) 0.004
c) 0.00512
d) 0.008
Question 7: Which of the following replaces ‘?’ in the following series?
10,21,45,84,500,?
a) 1050
b) 1045
c) 985
d) 1025
Instructions
In the following question, select the missing number from the given series.
Question 8: 84, 42, 44, 22, 24, 12, ?
a) 20
b) 14
c) 24
d) 28
Question 9: 1, 4, 13, 40, 121, ?
a) 284
b) 286
c) 364
d) 396
Instructions
In the following question, select the missing number from the given series.
Question 10: 2, 7, 22, 67, ?
a) 197
b) 198
c) 200
d) 202
Question 11: 1, 8, 29, 92, 281, ?
a) 567
b) 628
c) 776
d) 848
Instructions
In the following question, select the missing number from the given series.
Question 12: 1, 7, 3, 9, 6, 12, 10, 16, 15, ?
a) 18
b) 15
c) 20
d) 21
Question 13: 7, 10, 14, 19, 25, ?
a) 32
b) 36
c) 38
d) 40
Question 14: In the following question, select the missing number from the given series.
?, 5, 15, 45, 113
a) 1
b) 2
c) 3
d) 4
Question 15: In the following question, select the missing number from the given series.
1, 1, 3, 4, 5, 9, 7, 16, 9, 25, 11, ?
a) 17
b) 36
c) 49
d) 37
Instructions
In the following question, select the missing number from the given series.
Question 16: 2.2, 14.8, 40, 90.4, ?
a) 191.2
b) 194.4
c) 196.2
d) 208.4
Question 17: 84, 42, 28, 21, ?
a) 10.5
b) 16.8
c) 18.4
d) 19.6
Question 18: In the following series find 20th number
9, 5, 1, -3 -7, -11,……..
a) -64
b) -75
c) -70
d) -67
Instructions
A series is given, with one from missing, Choose the correct alternative from the given ones that will complete the series.
Question 19: 16, 61, 25, 52, 36, 63, 49, ?
a) 36
b) 94
c) 72
d) 46
Question 20: Find the wrong number in the given series ?
15, 28, 30, 39, 48
a) 28
b) 15
c) 30
d) 39
Solution to Question 7: the difference between consecutive numbers is of the form $[x][x^{2}]$, i.e. the digits of $x$ followed by the digits of $x^{2}$ (so, for example, $[4][4^{2}] = 416$):
$10+[1][1^{2}]=10+11=21$
$21+[2][2^{2}]=21+24=45$
$45+[3][3^{2}]=45+39=84$
$84+[4][4^{2}]=84+416=500$
Next number:
$500+[5][5^{2}]=500+525=1025$
Solution to Question 18: the given series is an arithmetic progression with first term $a=9$ and common difference $d=-4$
$n^{th}$ term in an A.P. = $A_n=a+(n-1)d$
=> $A_{20}=9+(20-1)\times(-4)$
= $9+(19)(-4)$
= $9-76=-67$
=> Ans – (D) | 2022-11-29 23:49:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6643316745758057, "perplexity": 387.65444644414816}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00807.warc.gz"} |
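Both worked answers are easy to verify mechanically; a small Python check (our own addition):

# Question 7: each step adds the concatenation of x and x**2.
term = 10
for x in range(1, 6):
    term += int(f"{x}{x * x}")   # [x][x^2], e.g. x=4 -> 416
print(term)                      # 1025, option (d)

# Question 18: arithmetic progression with a = 9, d = -4; 20th term.
a, d, n = 9, -4, 20
print(a + (n - 1) * d)           # -67, option (d)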
https://tex.stackexchange.com/questions/454770/how-to-protect-a-fragile-command-in-a-moving-argument-the-case-of-macro-contain | # How to protect a fragile command in a moving argument? The case of macro containing \ifthenelse in \caption
This script works as expected:
\documentclass{article}
\usepackage{ifthen}
\usepackage{graphicx}
\newcommand{\foo}[1]{
  \newcommand{\capt}{
    \ifthenelse{\equal{#1}{ORIG}}
      {original}
      {optimised}
  }
}
\begin{document}
\foo{ORIG}
\end{document}
However, when the macro \capt is inserted into a \caption in a figure environment
\documentclass{article}
\usepackage{ifthen}
\usepackage{graphicx}
\newcommand{\foo}[1]{
  \newcommand{\capt}{
    \ifthenelse{\equal{#1}{ORIG}}
      {original}
      {optimised}
  }
  \begin{figure}
    \includegraphics[width=0.5\textwidth]{example-image-a}
    \caption{\capt}
  \end{figure}
}
\begin{document}
\foo{ORIG}
\end{document}
the error is
! Undefined control sequence.
<argument> \equal
What is the problem?
This is probably a symptom of a more general issue, so help is also appreciated in rephrasing the title and this question to reflect its more general nature.
• Does it work with \protect\capt? – user36296 Oct 11 '18 at 14:07
• @samcarter, yes \protect\capt helps. – Viesturs Oct 11 '18 at 14:09
• this is typical "fragile command in moving argument" error (which is what \protect is designed to protect you from) – David Carlisle Oct 11 '18 at 14:14
I'm not sure why you do the intermediate step of defining \capt; however, the problem is that \ifthenelse is a fragile command, so it should be protected when in a caption or other moving argument.
There are better ways that don't require acrobatics.
\documentclass{article}
\usepackage{graphicx}
\usepackage{etoolbox}
\newcommand{\foo}[1]{%
  \newcommand{\capt}{%
    \ifstrequal{#1}{ORIG}
      {original}
      {optimised}%
  }%
  \begin{figure}
    \includegraphics[width=0.5\textwidth]{example-image-a}
    \caption{\capt}
  \end{figure}
}
\begin{document}
\foo{ORIG}
\end{document}
Don't forget to comment out the line endings with %, which would otherwise generate spurious spaces in the output.
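Alternatively, as the comments above note, you can keep the ifthen-based definition and protect the fragile call at the point of use; a minimal sketch:

% keep the original ifthen-based \capt, but stop it from being
% expanded prematurely inside the moving argument:
\caption{\protect\capt}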
https://www.physicsforums.com/threads/mass-spectrometer-question.97202/ | # Mass Spectrometer Question!
#### 123Sub-Zero
1. Some data obtained from the mass spectrum of a sample of carbon are given below.

| Ion | 12C+ | 13C+ |
| --- | --- | --- |
| Absolute mass of one ion /g | 1.993 x 10^-23 | 2.158 x 10^-23 |
| Relative abundance /% | 98.9 | 1.1 |
Use the data to calculate the mass of one neutron, the RAM of 13C and the RAM of carbon in the sample.
You may neglect the mass of an electron.
Work out:
1.Mass of one neutron
2.RAM of 13C
3.RAM of carbon in the sample
I know how to work out the RAM from a mass spectrometer graph but cannot work this out. I presume you multiply the mass of one ion by the relative abundance (for the RAM questions) but then don't know the next step.
$$A_{r}\text{ of samples:}\\ {}^{13}\text{C}:~~\frac{13 \times 1.1}{100} = 1.43 \times 10^{-1} = 0.143\\ \text{Carbon}:~~\frac{(12 \times 98.9)~+~(13 \times 1.1)}{(98.9 + 1.1)} = 12.011 \approx 12$$
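For what it's worth, here is the standard route in a short Python sketch (our own working, not from the thread; it assumes the mass difference between the two ions is exactly one neutron, and takes RAM relative to one twelfth of the 12C mass):

m12 = 1.993e-23  # g, mass of one 12C+ ion
m13 = 2.158e-23  # g, mass of one 13C+ ion

m_neutron = m13 - m12          # 13C has exactly one more neutron than 12C
ram_13C = 12 * m13 / m12       # mass relative to one twelfth of m(12C)
ram_sample = 0.989 * 12 + 0.011 * ram_13C   # abundance-weighted average

print(m_neutron)   # ~1.65e-24 g
print(ram_13C)     # ~12.99
print(ram_sample)  # ~12.011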
https://www.enotes.com/homework-help/calculus-ii-461964 | # Calculus II
The parametric curve here is `x=t-3/t`, `y=-t-3/t`. First we need to calculate the intersection points with the line `y=4`; these we can determine from the two equations
`y=-t-3/t`
`y=4`
`-t-3/t=4`
`(-t^2-3)/t=4`
`t^2+4t+3=0`
`t_1,2=(-4pmsqrt(16-4cdot1cdot3))/2`
`t_1=-3`
`t_2=-1`
Also, since `x(t_1)=-2` and `x(t_2)=2`, it's easy to see that the area under `y=4` (simple to calculate because the region is actually a square) is `4cdot(2-(-2))=16`. If we subtract the area under the other curve we will get the area between the curves.
And since area under parametrically defined curve `x=f(t),` `y=g(t)` is given by formula
`A=int_(t_1)^(t_2)g(t)f'(t)dt`
we have
`A=16-int_-3^-1(-t-3/t)(1+3/t^2)dt=`
`16-int_-3^-1(-t-3/t-3/t-9/t^3)dt=`
`16+(t^2/2+6ln|t|-9/(2t^2))|_-3^-1=`

`16+1/2+6cdot0-9/2-9/2-6ln3+1/2=`

`8-6ln3~~1.408`
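A quick numerical cross-check of this result (our own addition, using the same parametrisation):

import numpy as np
from scipy.integrate import quad

g = lambda t: -t - 3/t        # y(t)
fp = lambda t: 1 + 3/t**2     # x'(t)

under_curve, _ = quad(lambda t: g(t) * fp(t), -3, -1)
print(16 - under_curve, 8 - 6 * np.log(3))   # both ~1.408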
https://projecteuclid.org/euclid.aoap/1015345344 | The Annals of Applied Probability
Point processes in fast Jackson networks
James B. Martin
Abstract
We consider a Jackson-type network, each of whose nodes contains N identical channels with a single server. Upon arriving at a node, a task selects m of the channels at random and joins the shortest of the m queues observed. We fix a collection of channels in the network, and analyze how the queue-length processes at these channels vary as $N \to \infty$. If the initial conditions converge suitably, the distribution of these processes converges in local variation distance to a limit under which each channel evolves independently. We discuss the limiting processes which arise, and in particular we investigate the point processes of arrivals and departures at a channel when the networks are in equilibrium, for various values of the system parameters.
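To get a feel for the arrival rule described here (each task samples m channels and joins the shortest of the m queues), the following toy discrete-time simulation, entirely our own illustration with arbitrary parameters, shows how much m = 2 shortens queues compared to m = 1:

import random

def mean_queue_length(N=500, m=2, load=0.9, steps=400, seed=0):
    rng = random.Random(seed)
    q = [0] * N
    arrivals = int(load * N)
    for _ in range(steps):
        for _ in range(arrivals):
            picks = rng.sample(range(N), m)        # sample m channels at random
            q[min(picks, key=q.__getitem__)] += 1  # join the shortest of the m queues
        for i in range(N):                         # each server serves one task per step
            if q[i]:
                q[i] -= 1
    return sum(q) / N

print(mean_queue_length(m=1), mean_queue_length(m=2))  # m=2 is dramatically shorter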
Article information
Source
Ann. Appl. Probab., Volume 11, Number 3 (2001), 650-663.
Dates
First available in Project Euclid: 5 March 2002
Permanent link to this document
https://projecteuclid.org/euclid.aoap/1015345344
Digital Object Identifier
doi:10.1214/aoap/1015345344
Mathematical Reviews number (MathSciNet)
MR1865019
Zentralblatt MATH identifier
1021.90007
Subjects
Primary: 90B15: Network models, stochastic 60G55: Point processes
Citation
Martin, James B. Point processes in fast Jackson networks. Ann. Appl. Probab. 11 (2001), no. 3, 650--663. doi:10.1214/aoap/1015345344. https://projecteuclid.org/euclid.aoap/1015345344
https://msp.org/agt/2005/5-1/p14.xhtml | #### Volume 5, issue 1 (2005)
The periodic Floer homology of a Dehn twist
### Michael Hutchings and Michael G Sullivan
Algebraic & Geometric Topology 5 (2005) 301–354
arXiv: math.SG/0410059
##### Abstract
The periodic Floer homology of a surface symplectomorphism, defined by the first author and M. Thaddeus, is the homology of a chain complex which is generated by certain unions of periodic orbits, and whose differential counts certain embedded pseudoholomorphic curves in $ℝ$ cross the mapping torus. It is conjectured to recover the Seiberg-Witten Floer homology of the mapping torus for most spin-c structures, and is related to a variant of contact homology. In this paper we compute the periodic Floer homology of some Dehn twists.
##### Keywords
periodic Floer homology, Dehn twist, surface symplectomorphism
##### Mathematical Subject Classification 2000
Primary: 57R58
Secondary: 53D40, 57R50 | 2020-06-03 03:23:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6901012659072876, "perplexity": 2293.6121927016347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00086.warc.gz"} |
https://wiki.kidzsearch.com/wiki/James_Black | # James Black
James Whyte Black
Born: 14 June 1924, Uddingston, Lanarkshire, Scotland
Died: 22 March 2010 (aged 85),[1] London
Citizenship: United Kingdom
Nationality: British
Fields: Pharmacology
Institutions: University of Glasgow; ICI Pharmaceuticals; University College London; King's College London; University of St Andrews; University of Malaya
Alma mater: University College, Dundee
Known for: work towards the use of propranolol and cimetidine
Awards: Artois-Baillet Latour Health Prize (1979); Wolf Prize in Medicine (1982); Nobel Prize for Medicine (1988); Royal Medal (2004)
Sir James Whyte Black, OM, FRS, FRSE, FRCP (14 June 1924 – 22 March 2010[2]) was a Scottish doctor and pharmacologist. Black started in the physiology department at the University of Glasgow. While there, he became interested in how adrenaline affected the human heart. He went to work for ICI Pharmaceuticals in 1958. While working for ICI, he created propranolol. Propranolol is a beta blocker, used to treat heart disease. Black also created cimetidine. It is a drug used to treat stomach ulcers. He was awarded the Nobel Prize in Physiology or Medicine in 1988 for creating these drugs.[3]
## Early life and education
Black was born in Uddingston, Lanarkshire. He was the fourth of five sons of a Baptist family.[4] His father was a mining engineer.[4] He grew up in Fife. Black went to Beath High School, Cowdenbeath. At the age of 15, he won a scholarship to the University of St Andrews. At St Andrews, he studied medicine.[4] He graduated in 1946.[5]
After graduating he joined the Physiology department at University College as an Assistant Lecturer. He later took a lecturer position at the University of Malaya.[5][6] Black had decided against a career as a medical practitioner as he objected to what he perceived as the insensitive treatment of patients at the time.[6]
## Career
After graduation, Black took a teaching job in Singapore for three years. He moved to London in 1950.[7] Later in 1950, he returned to Scotland. He joined the University of Glasgow (Veterinarian School). There he became interested in how adrenaline affects the human heart. He was mainly interested in how it affected people with angina.[8] He found the effects of adrenaline did not help. He joined ICI Pharmaceuticals in 1958. He worked with the company until 1964. During this time, he created propranolol, which became the world's best-selling drug.[8] While at ICI, Black developed a new way of doing research. Before this, drug molecules would be created and then tested to find in what ways the molecules could be used as medicine. Black chose to pick a medical use and then try to create the molecules for that medicine.[6] The discovery of propranolol was said to be the greatest discovery in the treatment of heart disease since the discovery of digitalis.[8]
At the same time, Black was trying to find a treatment for stomach ulcers. ICI did not wish to do this so Black stopped working for them in 1964. He joined Smith, Kline and French. He worked for them for nine years until 1973.[9] While there, Black developed his second major drug, cimetidine. It was first sold under the brand name Tagamet in 1975. Tagamet soon outsold propranolol to become the world's largest-selling prescription drug.[8]
Black became the department head of pharmacology at University College London in 1973. He created a new undergraduate course in medicinal chemistry.[4] He had many problems trying to get money for research, so he quit. He went to work for Wellcome Research Laboratories in 1978.[6] He worked there until 1984.[6] Black then became Professor of Analytical Pharmacology at the Rayne Institute of King's College London medical school. He stayed there until 1992.[6] He created the James Black Foundation in 1988 with money from Johnson and Johnson. He worked with 25 scientists in drugs research. This research included gastrin inhibitors which may stop some stomach cancers.[6]
Black helped increase basic scientific and clinical knowledge in cardiology. His creation of propranolol is thought to be one of the most important contributions to clinical medicine and pharmacology of the 20th century.[10][11] Propranolol has helped millions of people.[6]
## Honours and awards
Black was made a Knight Bachelor on 10 February 1981 for services to medical research. He received the honour from Queen Elizabeth II at Buckingham Palace.[8][12] On 26 May 2000 he was appointed to the Order of Merit by the Queen.[13][14]
He was elected a Fellow of the Royal Society in 1976. That same year he was awarded the Lasker award.[15] In 1979, he was awarded the Artois-Baillet Latour Health Prize. In 1982 Black was awarded the Wolf Prize in Medicine.[6] He was awarded the 1988 Nobel Prize in Medicine with Gertrude B. Elion and George H. Hitchings for their work on drug development.[16] In 1994 he received the Ellison-Cliffe Medal from the Royal Society of Medicine.
## References
1. Scottish Nobel prize winner Sir James Black dies at age 85. The Daily Record. 23 March 2010. Retrieved 25 March 2010.
2. "Nobel Prize winning scientist dies" stv.tv 22 March 2010 Link accessed 22 March 2010
3. Tore Frängsmyr (1989). "Sir James W. Black: The Nobel Prize in Physiology or Medicine". Les Prix Nobel. Nobel Foundation. Retrieved 2007-08-25.
4. Black, Sir James W. "Autobiography". The Nobel Foundation. Retrieved 23 March 2010.
5. "Death of Sir James Black". Archives, Records and Artefacts at the University of Dundee. Retrieved 13 June 2011.
6. Sir James Black, OM. The Telegraph. 23 March 2010. Retrieved 25 March 2010.
7. Heart disease treatment pioneer James Black dies. Associated Press. 22 March 2010. Retrieved 25 March 2010.
8. "Led the way in heart drug find". The Age (Melbourne: Fairfax Digital). 25 March 2010. Retrieved 25 March 2010.
9. Stapleton, Melanie P. (1997). "Sir James Black and Propranolol". Texas Heart Institute Journal 24 (4): 336–342. . .
10. ""anTAGonist" and "ciMETidine"". American Chemical Society. 2005. Retrieved December 25, 2005.
11. The London Gazette, 3 March 1981.
12. The London Gazette, 26 May 2000.
13. Diffin, Elizabeth (24 March 2010). "What is the Order of Merit?". BBC News Magazine (BBC). Retrieved 25 March 2010.
14. "1976 winners: Albert Lasker Award for Clinical Medical Research". Lasker Medical Research Network. 1976.
15. "1988 Nobel Prize for Medicine". Karolinska Institute. 1988. | 2020-11-26 13:28:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3885323405265808, "perplexity": 8556.34416968749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00391.warc.gz"} |
https://dkfzsearch.kobv.de/advancedSearch.do?f1=author&v1=Lee%2C+T.-Y.&index=internal&plv=2 | • 1
Electronic Resource
Springer
ISSN: 1588-2780
Source: Springer Online Journal Archives 1860-2000
Topics: Chemistry and Pharmacology , Energy, Environment Protection, Nuclear Power Engineering
Notes: Abstract The dynamic adsorption of Kr and Xe in activated charcoal was measured. The temperature dependence of breakthrough curves for the individual isotopes 85mKr, 87Kr, 88Kr and 135Xe has been determined from the γ-spectra at temperatures from 78 K to 291 K. The effective hold-up and dynamic adsorption coefficient have been deduced. We find that adsorption is very sensitive to temperature and also depends on the size rather than on the mass of the adsorbed atom. From the total growth radioactivity, the time-dependent breakthrough curves at the temperatures of 113, 195 and 220 K have been constructed. The curves were analyzed and compared with the model calculations. Fick's law describing the mass transfer of gas into porous solid was employed to obtain the adsorption coefficient from fitting the experimental data. The results show fairly good agreement between model predictions and the experiments.
Type of Medium: Electronic Resource
• 2
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: D-convex set functions ; D-extreme points ; vector-valued Lagrangian functions ; primal maps ; dual maps
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract Using the concept of vector-valued Lagrangian functions, we characterize a special class of solutions,D-solutions, of a multiobjective programming problem with set functions in which the domination structure is described by a closed convex coneD. Properties of two perturbation functions, primal map and dual map, are also studied. Results lead to a general duality theorem.
Type of Medium: Electronic Resource
• 3
Electronic Resource
Springer
ISSN: 1573-4862
Keywords: Crack detection ; modal testing ; nondestructive evaluation ; fracture mechanics ; structure ; finite element method
Source: Springer Online Journal Archives 1860-2000
Topics: Electrical Engineering, Measurement and Control Technology , Mathematics
Notes: Abstract An energy method for identifying the size of a crack at given location in structures using one measured eigencouple of the cracked structures is presented. The method utilizes the maximum strain energies of the structures both with and without a crack and the additional strain energy induced by the crack to construct the energy balance equation from which the size of the crack is evaluated through an iteration procedure. A pair of measured vibration frequency and mode shape of the cracked structure is used in a free vibration analysis to derive for its maximum strain energy. The maximum inertia force of the cracked structure is then applied to the uncracked structure with known stiffness and the resulting strain energy of the uncracked structure is obtained in a finite element analysis. Fracture mechanics is used to derive for the additional strain energy induced by the crack. Experimental investigations of several cracked free-free beams are performed to validate the proposed method. Examples of the identification of crack sizes for a number of damaged beam structures are given to further illustrate the feasibility and effectiveness of the present method. Overall the results are encouraging showing that the present method has the prospect of becoming an alternative approach for crack size detection.
Type of Medium: Electronic Resource
• 4
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Multiobjective programming problems ; D-convex set functions ; D-extreme points ; proper D-solutions
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract A special class of solutions for multiobjective programming problems with set functions is considered. A subset of nondominated solutions, called the proper D-solution set, with respect to a given domination structure is characterized under two situations, with and without inequality constraints.
Type of Medium: Electronic Resource
• 5
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Multiobjective programming ; set functions ; Lagrange multiplier theorems ; zero-like functions ; proper D-solutions
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract In this paper, Lagrange multiplier theorems are developed for the cases of single-objective and multiobjective programming problems with set functions. Properly efficient solutions are also characterized by subdifferentials and zero-like functions.
Type of Medium: Electronic Resource
• 6
Electronic Resource
Springer
Probability theory and related fields 84 (1990), S. 505-520
ISSN: 1432-2064
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Summary For a critical binary branching Bessel process starting from x ≫ 1 and stopped at x = 1, we prove some conditional limit laws of the number of particles arriving at x = 1 before a scaled large time. Five regions of the dimensional index of a Bessel process, −∞ < d < 2, d = 2, 2 < d < 4, d = 4 and 4 < d < ∞, are shown to have somewhat different behaviors. Our probabilistic results are proved by analyzing differential equations satisfied by generating functions. A salient theme is a comparison principle technique deliberately used to estimate solutions of $$u_t - \left( D^2 + \frac{d - 1}{x}D \right)u + u^p = 0$$ in $R_+ \times R_+$, where p is greater than 1. The case p = 2 corresponds to the process considered.
Type of Medium: Electronic Resource
• 7
Electronic Resource
Springer
Probability theory and related fields 105 (1996), S. 227-254
ISSN: 1432-2064
Keywords: 60F 10 ; 35J55
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Summary The diffusion-transmutation processes are considered as the diffusivities are of order ε, ε→0, and the transmutation intensities are of order ε^{-1}. We prove a large deviation principle for the position joint with the type occupation times as ε→0 and study the exit problem for this process. We consider the Levinson case where a trajectory of the average drift field exits from a domain in finite time in a regular way and the large deviation case where the average drift field on the boundary points inward at the domain. The exit place and the type distribution at the exit time are determined as ε→0; this gives the limit of the Dirichlet problems for the corresponding PDE systems with a parameter ε→0.
Type of Medium: Electronic Resource
• 8
Electronic Resource
Springer
Probability theory and related fields 105 (1996), S. 227-254
ISSN: 1432-2064
Keywords: Mathematics Subject classification (1991): 60F10 ; 35J55
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Summary. The diffusion-transmutation processes are considered as the diffusivities are of order ɛ, ɛ→0, and the transmutation intensities are of order ɛ^{-1}. We prove a large deviation principle for the position joint with the type occupation times as ɛ→0 and study the exit problem for this process. We consider the Levinson case where a trajectory of the average drift field exits from a domain in finite time in a regular way and the large deviation case where the average drift field on the boundary points inward at the domain. The exit place and the type distribution at the exit time are determined as ɛ→0; this gives the limit of the Dirichlet problems for the corresponding PDE systems with a parameter ɛ→0.
Type of Medium: Electronic Resource
• 9
Electronic Resource
Springer
Probability theory and related fields 106 (1996), S. 39-70
ISSN: 1432-2064
Keywords: Mathematics Subject classification (1991): 60J60 ; 35K55
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Summary. We study systems of reaction – diffusion equations of KPP-type with the coefficients and nonlinear terms slowly varying in the space variables. The long time behavior of the solution to such systems can be characterized by the motion of wave fronts. We describe the wave front motion, using the Feynman–Kac formula and the large deviation principle for the corresponding diffusion – transmutation process. We give a geometrical description of the motion in the examples and show some effects which appear in case of systems but not in the single RDE’s.
Type of Medium: Electronic Resource
• 10
Electronic Resource
Hoboken, NJ : Wiley-Blackwell
AIChE Journal 40 (1994), S. 1976-1982
ISSN: 0001-1541
Keywords: Chemistry ; Chemical Engineering
Source: Wiley InterScience Backfile Collection 1832-2000
Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology
Notes: A 200-s PSA cycle involving both pressure equalization and product backfill steps has been experimentally studied on a four-bed system, where LINDE 5 A zeolites were used as the adsorbent to separate oxygen from air. This cycle is operated under a pressure ratio of 4.3. During the experiment, the pressure history and flow rates, as well as the concentration of the product stream have been continuously monitored. This is the first time detailed experimental data on a four-bed system are presented. Under favorable conditions, this system produces better than 90% oxygen at a recovery of 17%. For the low-pressure ratio, such a recovery could not have been achieved without the pressure equalization step and the reduced purge operation. Recovery and throughput, however, are not as high as one would expect from a linear local equilibrium model. The self-broadening effect of the purge wave has been identified as the major cause of underperformance. | 2020-02-23 01:36:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.67787766456604, "perplexity": 2147.5978337734527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145742.20/warc/CC-MAIN-20200223001555-20200223031555-00170.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-11-rational-expressions-and-functions-11-1-simplifying-rational-expressions-practice-and-problem-solving-exercises-page-668/35 | ## Algebra 1: Common Core (15th Edition)
$\frac{3z+12}{z^3}$
$\frac{z(3z+12)}{z∗z^3}$ Factor out a z from the numerator and the denominator $\frac{3z+12}{z^3}$ | 2019-11-20 17:15:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824929237365723, "perplexity": 1907.071155931912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00066.warc.gz"} |
https://www.physicsforums.com/threads/where-to-put-the-transpose.226528/ | # Where to put the transpose?
1. Apr 4, 2008
### daviddoria
if you want to find the derivative (gradient) of f(x)^2 when f is a vector, you would get
2*f(x)*del(f(x))
I never know where to put the transpose!! Sometimes it's clear because another term in the equation will be a scalar, so you know an inner product is needed, but if you don't have a hint like that, how do you know if you should put the transpose on the f(x) or the del(f(x))? I suppose it depends on whether f is a column or row vector, but a lot of times this is not given in the statement of the problem.
Any thoughts on this? Does anyone have a good online tutorial on vector/matrix differentiation?
Thanks!
Dave
2. Apr 4, 2008
### Peeter
In terms of geometric algebra (since f is a vector, grad and the vector don't commute) one has:
$$\nabla f(x)^2 = (\nabla f(x)) f(x) + f(x) \nabla(f(x)) = 2 f(x) \cdot \nabla f(x)$$
(this works for a scalar f too, since the dot product will just be scalar multiplication and you get $$2 f(x) \nabla f(x)$$).
Since you are talking about transposes, I'm assuming you've taken coordinates for your f and grad in some basis, in which case you can do it either way:
$$2 f(x)^\text{T}\nabla f(x)$$
or:
$$2 ({\nabla{f(x)}})^\text{T} f(x)$$
but in the second case you have to restrict the gradient to operating just on the first f(x).
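A quick numerical sanity check of the column-vector convention discussed above (an illustrative sketch added for clarity, not part of the original thread; the example f below is arbitrary): for g(x) = f(x)·f(x), the gradient is 2 J(x)ᵀ f(x), where J is the Jacobian of f.

```python
import numpy as np

def f(x):
    # Arbitrary example vector field R^3 -> R^3
    return np.array([x[0]**2 + x[1], np.sin(x[1]) * x[2], x[2]**3])

def jacobian(x, h=1e-6):
    # Forward-difference Jacobian: J[i, j] = d f_i / d x_j
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xh = x.copy()
        xh[j] += h
        J[:, j] = (f(xh) - fx) / h
    return J

x = np.array([0.3, -1.2, 0.7])
analytic = 2.0 * jacobian(x).T @ f(x)     # the "transpose on the Jacobian" form

g = lambda y: f(y) @ f(y)                  # g = |f|^2
h = 1e-6
numeric = np.array([(g(x + h * e) - g(x)) / h for e in np.eye(3)])
print(np.allclose(analytic, numeric, atol=1e-3))  # True
```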
3. Apr 4, 2008
### Peeter
Hestenes's New Foundations for Classical Mechanics.
Doran/Lasenby's Geometric Algebra for Physicists.
4. Apr 4, 2008
### Peeter
pps. If you choose to use coordinate vectors and transposition, I think that basis also has to be orthonormal. Better to express using the dot product directly.
5. Apr 5, 2008
### Peeter
It occurred to me that what I initially wrote is wrong (it has to be, since it gave a scalar result when it should be a vector).
Using ticks to mark what the grad is operating on when separated one can write:
$$\nabla \lvert f \rvert ^2 = \nabla f^2 = \nabla f f = \acute{\nabla}f\acute{f} + (\nabla f)f$$
(again using the geometric product to multiply the two vectors).
Expanding this I get:
$$\nabla \lvert f \rvert ^2 = f (\nabla \cdot f) + (f \cdot \nabla) f - f \cdot (\nabla \wedge f) - (f \wedge \nabla) \cdot f$$
Which in the three dimensional case can be written in terms of the "normal" vector products.
$$\nabla \lvert f \rvert ^2 = f (\nabla \cdot f) + (f \cdot \nabla) f + f \times (\nabla \times f) + (f \times \nabla) \times f$$
Here we have a divergence, a curl, a directional derivative, and a "normal" directional derivative term (name?).
In my original post I allowed grad to commute with the vector, which means they are collinear (not generally true). Note that with the correction I don't really see how one would express this naturally with matrices at all, so I've no idea now how to answer your question of where to put the transpose except for the collinear case. | 2017-02-23 09:54:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284682035446167, "perplexity": 560.799615379908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00059-ip-10-171-10-108.ec2.internal.warc.gz"}
http://ringtheory.herokuapp.com/rings/ring/12/ | # Ring detail
## Name: $M_n(F)$: the matrix ring over an infinite field
Description: The ring of $n \times n$ matrices with entries from a field $F$, $n$ a natural number greater than $1$, and $F$ an infinite field.
Notes:
Keywords matrix ring
Reference(s):
• (Citation needed)
This ring has the following properties:
The ring lacks the following properties:
We don't know if the ring has or lacks the following properties: | 2017-12-17 14:03:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8167521953582764, "perplexity": 501.82439284232964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00706.warc.gz"} |
https://mklarqvist.github.io/tomahawk/tutorial/ | # Getting started with tomahawk¶
This is an introductory tutorial for using Tomahawk. It will cover:
• Importing data into Tomahawk
• Subsetting and filtering output data
• Aggregating datasets
## Usage instructions¶
The Tomahawk CLI comprises several distinct subroutines (listed below). Executing tomahawk gives a list of commands with brief descriptions, and tomahawk <command> gives detailed usage information for that command.
All primary Tomahawk commands operate on the binary Tomahawk twk and Tomahawk output two file formats. Interconversion between twk and vcf/bcf is supported through the commands import (vcf/bcf -> twk) and view (twk -> vcf). Linkage disequilibrium data is written out in the binary two format or the human-readable ld format.
| Command | Description |
|---|---|
| aggregate | data rasterization framework for TWO files |
| calc | calculate linkage disequilibrium |
| scalc | calculate linkage disequilibrium for a single site |
| concat | concatenate TWO files from the same set of samples |
| import | import VCF/VCF.gz/BCF to TWK |
| sort | sort TWO file |
| view | TWO->LD/TWO view, TWO subset and filter |
| haplotype | extract per-sample haplotype strings in FASTA/binary format |
| relationship | compute marker-based pair-wise sample relationship matrices |
| decay | compute LD-decay over distance |
| prune | perform graph-based LD-pruning of variant sites |
## Importing into Tomahawk¶
By design, Tomahawk operates only on diploid, bi-allelic SNVs and as such filters out indels and complex variants. Tomahawk does not support mixed phasing of genotypes in the same variant (e.g. 0|0, 0/1). If mixed phasing is found for a record, all genotypes for that site are converted to unphased genotypes. This is a conscious design choice, as it internally invokes the correct algorithm for mixed-phase cases.
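To make the phasing rule concrete, here is a small illustrative sketch (not part of Tomahawk, which operates on the binary BCF encoding rather than genotype strings) of how a site could be classified:

```python
def classify_site(genotypes):
    """Classify a site's genotype strings following the rule described above:
    if phased ('|') and unphased ('/') genotypes are mixed at one site,
    the whole site is demoted to unphased."""
    has_phased = any('|' in gt for gt in genotypes)
    has_unphased = any('/' in gt for gt in genotypes)
    if has_phased and has_unphased:
        return 'mixed -> converted to unphased'
    return 'phased' if has_phased else 'unphased'

print(classify_site(['0|0', '0|1', '1|1']))  # phased
print(classify_site(['0|0', '0/1']))         # mixed -> converted to unphased
```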
Importing standard files to Tomahawk involves the import command. The following command imports a bcf file and outputs file.twk while filtering out variants with >20% missingness and sites that deviate from Hardy-Weinberg equilibrium with a probability < 0.001.
```
tomahawk import -i file.bcf -o file -m 0.2 -H 1e-3
```
```
$ md5sum 1kgp3_chr6.bcf
8c05554ebe1a51e99be2471c96fad4d9  1kgp3_chr6.bcf

$ bcftools view 1kgp3_chr6.bcf -HG | wc -l
5024119
```
```
$ time tomahawk import -i 1kgp3_chr6.bcf -o 1kgp3_chr6
```

Auto-completion of file extensions

You do not have to append the .twk suffix to the output name as Tomahawk will automatically add this if missing. In the example above "1kgp3_chr6" will be converted to "1kgp3_chr6.twk" automatically. This is true for most Tomahawk commands when using the CLI.

```
Program: tomahawk-264d039a-dirty (Tools for computing, querying and storing LD data)
Libraries: tomahawk-0.7.0; ZSTD-1.3.8; htslib 1.9
Contact: Marcus D. R. Klarqvist
Documentation: https://github.com/mklarqvist/tomahawk
License: MIT
----------
[2019-01-21 11:58:15,692][LOG] Calling import...
[2019-01-21 11:58:15,692][LOG][READER] Opening 1kgp3_chr6.bcf...
[2019-01-21 11:58:15,695][LOG][VCF] Constructing lookup table for 86 contigs...
[2019-01-21 11:58:15,695][LOG][VCF] Samples: 2,504...
[2019-01-21 11:58:15,695][LOG][WRITER] Opening 1kgp3_chr6.twk...
[2019-01-21 11:58:39,307][LOG] Duplicate site dropped: 6:18233985
[2019-01-21 11:59:26,749][LOG] Duplicate site dropped: 6:55137646
[2019-01-21 11:59:39,315][LOG] Duplicate site dropped: 6:67839893
[2019-01-21 11:59:47,176][LOG] Duplicate site dropped: 6:74373442
[2019-01-21 11:59:51,440][LOG] Duplicate site dropped: 6:77843171
[2019-01-21 12:00:41,784][LOG] Duplicate site dropped: 6:121316830
[2019-01-21 12:01:12,830][LOG] Duplicate site dropped: 6:148573620
[2019-01-21 12:01:16,557][LOG] Duplicate site dropped: 6:151397786
[2019-01-21 12:01:42,644][LOG] Wrote: 4,784,608 variants to 9,570 blocks...
[2019-01-21 12:01:42,644][LOG] Finished: 03m26,952s
[2019-01-21 12:01:42,644][LOG] Filtered out 239,511 sites (4.76722%):
[2019-01-21 12:01:42,645][LOG] Invariant: 15,485 (0.308213%)
[2019-01-21 12:01:42,645][LOG] Missing threshold: 0 (0%)
[2019-01-21 12:01:42,645][LOG] Insufficient samples: 0 (0%)
[2019-01-21 12:01:42,645][LOG] Mixed ploidy: 0 (0%)
[2019-01-21 12:01:42,645][LOG] No genotypes: 0 (0%)
[2019-01-21 12:01:42,645][LOG] No FORMAT: 0 (0%)
[2019-01-21 12:01:42,645][LOG] Not biallelic: 26,277 (0.523017%)
[2019-01-21 12:01:42,645][LOG] Not SNP: 194,566 (3.87264%)
```

## Computing linkage-disequilibrium¶

### All pairwise comparisons¶

In this first example, we will compute all-vs-all LD associations using the data we imported in the previous section. To limit compute, we restrict our attention to a 1/45 slice of the data by passing the -c and -C job parameters. We will be using 8 threads (-t), but you may need to modify this to match the hardware available on your host machine. This job involves comparing a pair of variants >141 billion times and as such takes around 30 min to finish on most machines.

```
$ tomahawk calc -pi 1kgp3_chr6.twk -o 1kgp3_chr6_1_45 -C 1 -c 45 -t 8
```
Valid job balancing partitions
When computing genome-wide LD, the balancing requires that the number of sub-problems (-c) be of the form $$\binom{c}{2} + c = \frac{c^2 + c}{2}, c > 0$$ This count is equivalent to the upper triangle of a square (c-by-c) matrix plus its diagonal. Read more about load partitioning in Tomahawk.
Here are the first 100 valid partition sizes:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171, 190, 210, 231, 253, 276, 300, 325, 351, 378, 406, 435, 465, 496, 528, 561, 595, 630, 666, 703, 741, 780, 820, 861, 903, 946, 990, 1035, 1081, 1128, 1176, 1225, 1275, 1326, 1378, 1431, 1485, 1540, 1596, 1653, 1711, 1770, 1830, 1891, 1953, 2016, 2080, 2145, 2211, 2278, 2346, 2415, 2485, 2556, 2628, 2701, 2775, 2850, 2926, 3003, 3081, 3160, 3240, 3321, 3403, 3486, 3570, 3655, 3741, 3828, 3916, 4005, 4095, 4186, 4278, 4371, 4465, 4560, 4656, 4753, 4851, 4950, 5050
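These are simply the triangular numbers c(c+1)/2; a short illustrative sketch (not part of the Tomahawk CLI) for generating or testing valid -c values:

```python
def valid_partition_sizes(n):
    # First n valid -c values: c * (c + 1) / 2 for c = 1, 2, 3, ...
    return [c * (c + 1) // 2 for c in range(1, n + 1)]

def is_valid_partition(p):
    # p = c(c+1)/2 has an integer solution c iff 8p + 1 is a perfect square
    root = int((8 * p + 1) ** 0.5)
    return root * root == 8 * p + 1

print(valid_partition_sizes(10))  # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
print(is_valid_partition(990))    # True (c = 44, as used later in this tutorial)
```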
By default, Tomahawk will attempt to compute the genome-wide linkage disequilibrium for all the provided variants and samples. This generally requires a huge number of pairwise variant comparisons. Depending on your interests, you may not want to undertake this large compute and may need to parameterize for your particular use case.
As a consequence of the job balancing approach in Tomahawk, the diagonal jobs will always involve fewer variant comparisons ($$\binom{n}{2}$$) compared to off-diagonal jobs ($$n \times m$$), where $$n$$ and $$m$$ are the numbers of records in either block. Keep this in mind when choosing a partition size based on the run-times of a given job. For example, examine the total run-time of the second job (-C 2), the first off-diagonal job, as a proxy for the expected average run-time per job.
Output ordering
By design, Tomahawk computes pairwise associations using an out-of-order execution paradigm and further permutes the input ordering by using cache-blocking. Additionally, each thread/slave transiently stores a number of computed associations in a private buffer that is flushed upon reaching some frequency threshold. Because of these technicalities, Tomahawk will neither produce ordered output nor will the permutation of the output order be identical between runs on identical data. This has no practical consequence for most downstream applications in Tomahawk, with the exception of query speeds. To address this limitation we have introduced a powerful sorting paradigm, discussed below.
```
Program: tomahawk-264d039a-dirty (Tools for computing, querying and storing LD data)
Libraries: tomahawk-0.7.0; ZSTD-1.3.8; htslib 1.9
Contact: Marcus D. R. Klarqvist
Documentation: https://github.com/mklarqvist/tomahawk
License: MIT
----------
[2019-01-21 12:13:50,210][LOG] Calling calc...
[2019-01-21 12:13:50,211][LOG][READER] Opening 1kgp3_chr6.twk...
[2019-01-21 12:13:50,212][LOG] Samples: 2,504...
[2019-01-21 12:13:50,212][LOG][BALANCING] Using ranges [0-1063,0-1063] in square mode...
[2019-01-21 12:13:50,212][LOG] Allocating 1,063 blocks...
[2019-01-21 12:13:50,213][LOG] Running in standard mode. Pre-computing data...
[2019-01-21 12:13:50,213][LOG][SIMD] Vectorized instructions available: SSE4...
[2019-01-21 12:13:50,213][LOG] Constructing list, vector, RLE...
[2019-01-21 12:13:50,213][LOG][THREAD] Unpacking using 7 threads: ....... Done! 01,291s
[2019-01-21 12:13:51,504][LOG] 531,500 variants from 1,063 blocks...
[2019-01-21 12:13:51,504][LOG][PARAMS] square=TRUE,window=FALSE,low_memory=FALSE,bitmaps=FALSE,single=FALSE,force_phased=TRUE,force_unphased=FALSE,compression_level=1,block_size=500,output_block_size=10000,l_surrounding=500000,minP=1.000000,minR2=0.100000,maxR2=100.000000,minDprime=0.000000,maxDprime=100.000000,n_chunks=45,c_chunk=0,n_threads=8,ldd_type=3,cycle_threshold=0
[2019-01-21 12:13:51,504][LOG] Performing: 141,245,859,250 variant comparisons...
[2019-01-21 12:13:51,504][LOG][WRITER] Opening 1kgp3_chr6_1_45.two...
[2019-01-21 12:13:51,505][LOG][THREAD] Spawning 8 threads: ........
[2019-01-21 12:13:51,533][PROGRESS] Time elapsed Variants Genotypes Output Progress Est. Time left
[2019-01-21 12:14:21,533][PROGRESS] 30,000s 3,685,746,500 9,229,109,236,000 988,858 2.60945% 18m39s
[2019-01-21 12:14:51,533][PROGRESS] 01m00,000s 6,425,868,750 16,090,375,350,000 1,689,422 4.54942% 20m58s
[truncated]
[2019-01-21 12:39:51,546][PROGRESS] 26m00,013s 140,131,382,750 350,888,982,406,000 48,635,114 99.211% 12s
[2019-01-21 12:40:04,317][PROGRESS] Finished in 26m12,784s. Variants: 141,245,859,250, genotypes: 353,679,631,562,000, output: 49,870,388
[2019-01-21 12:40:04,317][PROGRESS] 89,806,242 variants/s and 224,874,830,855 genotypes/s
[2019-01-21 12:40:04,321][LOG][PROGRESS] All done...26m12,815s!
```
This run generated 935817820 bytes (935.8 MB) of output data in binary format using the default compression level (1):
```
$ ls -l 1kgp3_chr6_1_45.two
-rw-rw-r-- 1 mk819 mk819 935817820 Jan 21 12:40 1kgp3_chr6_1_45.two
```

If this data were stored in plain, human-readable text format it would use over 4.5 GB:

```
$ tomahawk view -i 1kgp3_chr6_1_45.two | wc -c
4544702179
```
### Sliding window¶
If you are working with a species with well-known LD structure (such as humans), you can reduce the computational cost by limiting the search space to a fixed-size sliding window (-w). In window mode you are free to choose any arbitrary sub-problem (-c) size.
Window properties
By default, Tomahawk computes the linkage disequilibrium associations for all pairs of variants within -w bases of the target SNV in either direction. In order to maximize computational throughput, we made the design decision to maintain blocks of data that overlap the target interval. This has the consequence that adjacent data that may not directly overlap the target region will still be computed. The technical reasoning, in simplified terms, is that the online repackaging of internal data blocks is more expensive than computing a relatively small number of non-desired (off-target) associations.
```
time tomahawk calc -pi 1kgp3_chr6.twk -o 1kgp3_chr6_4mb -w 4000000
```
```
Program: tomahawk-264d039a-dirty (Tools for computing, querying and storing LD data)
Libraries: tomahawk-0.7.0; ZSTD-1.3.8; htslib 1.9
Contact: Marcus D. R. Klarqvist
Documentation: https://github.com/mklarqvist/tomahawk
License: MIT
----------
[2019-01-21 14:10:33,616][LOG] Calling calc...
[2019-01-21 14:10:33,616][LOG][READER] Opening 1kgp3_chr6.twk...
[2019-01-21 14:10:33,619][LOG] Samples: 2,504...
[2019-01-21 14:10:33,619][LOG][BALANCING] Using ranges [0-9570,0-9570] in window mode...
[2019-01-21 14:10:33,619][LOG] Allocating 9,570 blocks...
[2019-01-21 14:10:33,620][LOG] Running in standard mode. Pre-computing data...
[2019-01-21 14:10:33,620][LOG][SIMD] Vectorized instructions available: SSE4...
[2019-01-21 14:10:33,620][LOG] Constructing list, vector, RLE...
[2019-01-21 14:10:33,620][LOG][THREAD] Unpacking using 7 threads: ....... Done! 15,774s
[2019-01-21 14:10:49,394][LOG] 4,784,608 variants from 9,570 blocks...
[2019-01-21 14:10:49,394][LOG][PARAMS] square=TRUE,window=TRUE,low_memory=FALSE,bitmaps=FALSE,single=FALSE,force_phased=TRUE,force_unphased=FALSE,compression_level=1,block_size=500,output_block_size=10000,window_size=4000000,l_surrounding=500000,minP=1.000000,minR2=0.100000,maxR2=100.000000,minDprime=0.000000,maxDprime=100.000000,n_chunks=1,c_chunk=0,n_threads=8,ldd_type=3,cycle_threshold=0
[2019-01-21 14:10:49,394][LOG] Performing: 11,446,234,464,528 variant comparisons...
[2019-01-21 14:10:49,394][LOG][WRITER] Opening 1kgp3_chr6_4mb.two...
[2019-01-21 14:10:49,402][LOG][THREAD] Spawning 8 threads: ........
[2019-01-21 14:10:49,424][PROGRESS] Time elapsed Variants Genotypes Output Progress Est. Time left
[2019-01-21 14:11:19,424][PROGRESS] 30,000s 2,311,240,500 5,787,346,212,000 853,368 0 0
[2019-01-21 14:11:49,424][PROGRESS] 01m00,000s 4,562,731,500 11,425,079,676,000 1,717,956 0 0
[truncated]
[2019-01-21 16:12:49,492][PROGRESS] 02h02m00,068s 527,889,726,500 1,321,835,875,156,000 470,354,846 0 0
[2019-01-21 16:13:17,236][PROGRESS] Finished in 02h02m27,812s. Variants: 529,807,522,528, genotypes: 1,326,638,036,410,112, output: 473,514,018
[2019-01-21 16:13:17,237][PROGRESS] 72,104,114 variants/s and 180,548,701,880 genotypes/s
[2019-01-21 16:13:17,257][LOG][PROGRESS] All done...02h02m27,854s!
```
### Single interval vs its local neighbourhood¶
In many cases we are not interested in computing large-scale linkage disequilibrium associations but have a limited genomic window of interest. In that case, you can use the specialized subroutine of calc called scalc, which is parameterized with a target region and a neighbourhood size in bases.
In this example, we will compute the regional associations of variants mapping to the interval chr6:10e6-10.1e6 and a 100kb neighbourhood.
```
tomahawk scalc -i 1kgp3_chr6.twk -o test -I 6:10e6-10.1e6 -w 100000
```
Neighbourhood interval
The neighbourhood is computed as the regions [start of interval - neighbourhood, start of interval) and (end of interval, end of interval + neighbourhood]. If the start of the interval minus the neighbourhood falls outside the chromosome (position < 0), the interval is truncated to 0 at the left end.
Scalability
This subroutine was designed to be used with relatively short intervals to maximize computability. It is however possible to provide any valid interval as the target region to compute linkage disequilibrium for. Providing large intervals will result in poor performance and potentially undesired output.
## Concatenating multiple archives¶
One of the immediate downsides of partitioning compute into multiple non-overlapping sub-problems is that we generate a large number of independent files that must be merged prior to downstream analysis. Fortunately, this is a trivial operation in Tomahawk and involves the concat command.
First, let's compute some data using the first 3 of the 990 partitions of the dataset used above.
```
for i in {1..3}; do time tomahawk calc -pi 1kgp3_chr6.twk -c 990 -C $i -o part$i\_3.two; done
```
Next, since we only have three files, we can concatenate (merge) them into a single archive by passing each file name to the command:
```
$ time tomahawk concat -i part1_3.two -i part2_3.two -i part3_3.two -o part3_concat
[2019-01-22 10:55:47,799][LOG] All files are compatible. Beginning merging...
[2019-01-22 10:55:47,800][LOG] Appending part1_3.two... 397.953344 Mb/98.987848 Mb
[2019-01-22 10:55:47,889][LOG] Appending part2_3.two... 359.538884 Mb/54.763322 Mb
[2019-01-22 10:55:47,938][LOG] Appending part3_3.two... 346.075500 Mb/52.072741 Mb
[2019-01-22 10:55:47,988][LOG] Finished. Added 3 files...
[2019-01-22 10:55:47,988][LOG] Total size: Uncompressed = 1.103568 Gb and compressed = 205.823911 Mb

real    0m0.226s
user    0m0.036s
sys     0m0.190s
```

Passing every single input file on the command line is not feasible when we have many hundreds to thousands of files. To address this, it is possible to first store the target file paths in a text file and then pass that to Tomahawk. Using the data from above, we save these file paths to the file part_file_list.txt:

```
for i in {1..3}; do echo part$i\_3.two >> part_file_list.txt; done
```
Then we simply pass this list of file paths to Tomahawk using the -I argument:
```
$ time tomahawk concat -I part_file_list.txt -o part3_concat
adding file=part1_3.two
adding file=part2_3.two
adding file=part3_3.two
[2019-01-22 10:57:43,227][LOG] All files are compatible. Beginning merging...
[2019-01-22 10:57:43,262][LOG] Appending part1_3.two... 397.953344 Mb/98.987848 Mb
[2019-01-22 10:57:43,350][LOG] Appending part2_3.two... 359.538884 Mb/54.763322 Mb
[2019-01-22 10:57:43,400][LOG] Appending part3_3.two... 346.075500 Mb/52.072741 Mb
[2019-01-22 10:57:43,447][LOG] Finished. Added 3 files...
[2019-01-22 10:57:43,447][LOG] Total size: Uncompressed = 1.103568 Gb and compressed = 205.823911 Mb

real    0m0.477s
user    0m0.040s
sys     0m0.241s
```

Possible duplications

The concat subroutine will dedupe input file paths but will neither check nor guarantee that the actual input data is not duplicated. Therefore, it is possible to get duplicates in your data if you are not careful. These duplicate entries could corrupt any downstream insights!

## Sorting output files¶

All subroutines in Tomahawk will work on unsorted output files. However, sorted files are much faster to query and will result in considerable time savings. This is especially true for very large files. Sorting files in Tomahawk is trivial and involves calling the sort command with the required input (-i) and output (-o) arguments.

Disk usage

Tomahawk can easily sort huge .two files using external merge sorting followed by a single memory-assisted k-way merge. The external merge step requires additional disk space approximately equal to the size of the input dataset; the merge operation then writes a generally smaller output file. As a rule of thumb, the sort operation requires roughly an additional two times the input file size on disk. After the sorting procedure completes, the intermediate files are removed.

Temporary files

By default, Tomahawk will generate temporary files in the same directory as the input file. If this file path is undesired then you have to modify the appropriate parameters.

In this example we will sort the almost 500 million (473,514,826) records computed from the sliding window example above. This archive corresponds to >50 Gb of binary data.

```
$ time tomahawk sort -i 1kgp3_chr6_4mb.two -o 1kgp3_chr6_4mb_sorted
```
```
Program: tomahawk-264d039a-dirty (Tools for computing, querying and storing LD data)
Libraries: tomahawk-0.7.0; ZSTD-1.3.8; htslib 1.9
Contact: Marcus D. R. Klarqvist
Documentation: https://github.com/mklarqvist/tomahawk
License: MIT
----------
[2019-01-22 11:20:43,978][LOG] Calling sort...
[2019-01-22 11:20:43,993][LOG] Blocks: 47,358
[2019-01-22 11:20:43,993][LOG] Uncompressed size: 50.192950 Gb
[2019-01-22 11:20:43,993][LOG] Sorting 473,514,826 records...
[2019-01-22 11:20:43,993][LOG][THREAD] Data/thread: 6.274119 Gb
[2019-01-22 11:20:43,993][PROGRESS] Time elapsed Variants Progress Est. Time left
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-0: range=0->5919/47358 and name 1kgp3_chr6_4mb_sorted_fileSucRkC.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-1: range=5919->11838/47358 and name 1kgp3_chr6_4mb_sorted_filewPI10T.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-2: range=11838->17757/47358 and name 1kgp3_chr6_4mb_sorted_fileo1zcHb.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-3: range=17757->23676/47358 and name 1kgp3_chr6_4mb_sorted_fileBvRnnt.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-4: range=23676->29595/47358 and name 1kgp3_chr6_4mb_sorted_filewbBz3K.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-5: range=29595->35514/47358 and name 1kgp3_chr6_4mb_sorted_fileCrNLJ2.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-6: range=35514->41433/47358 and name 1kgp3_chr6_4mb_sorted_filerCfYpk.two
[2019-01-22 11:20:43,994][LOG][THREAD] Slave-7: range=41433->47358/47358 and name 1kgp3_chr6_4mb_sorted_filexx9a6B.two
[2019-01-22 11:21:13,994][PROGRESS] 30,000s 70,594,275 14.9086% 2m51s
[2019-01-22 11:21:43,994][PROGRESS] 01m00,000s 146,732,835 30.988% 2m13s
[2019-01-22 11:22:13,994][PROGRESS] 01m30,000s 245,535,675 51.8539% 1m23s
[2019-01-22 11:22:43,995][PROGRESS] 02m00,001s 323,064,235 68.2268% 55s
[2019-01-22 11:23:13,995][PROGRESS] 02m30,001s 421,592,790 89.0348% 18s
[2019-01-22 11:23:24,951][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_filewbBz3K.two with 23676-29595. Sorted n=59190000 variants with size=1.176156 Gb
[2019-01-22 11:23:25,114][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_fileBvRnnt.two with 17757-23676. Sorted n=59190000 variants with size=1.286938 Gb
[2019-01-22 11:23:25,385][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_filerCfYpk.two with 35514-41433. Sorted n=59190000 variants with size=1.219126 Gb
[2019-01-22 11:23:25,710][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_fileCrNLJ2.two with 29595-35514. Sorted n=59190000 variants with size=1.161081 Gb
[2019-01-22 11:23:26,384][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_filexx9a6B.two with 41433-47358. Sorted n=59184826 variants with size=1.215231 Gb
[2019-01-22 11:23:26,425][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_fileSucRkC.two with 0-5919. Sorted n=59190000 variants with size=1.242923 Gb
[2019-01-22 11:23:29,966][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_fileo1zcHb.two with 11838-17757. Sorted n=59190000 variants with size=1.686758 Gb
[2019-01-22 11:23:31,300][LOG][THREAD] Finished: 1kgp3_chr6_4mb_sorted_filewPI10T.two with 5919-11838. Sorted n=59190000 variants with size=2.013554 Gb
[2019-01-22 11:23:31,338][PROGRESS] 02m47,344s 473,514,826 (2,829,587 variants/s)
[2019-01-22 11:23:31,338][PROGRESS] Finished!
[2019-01-22 11:23:31,341][LOG] Spawning 112 queues with 2.380952 Mb each...
[2019-01-22 11:23:35,090][LOG][WRITER] Opening "1kgp3_chr6_4mb_sorted.two"...
[2019-01-22 11:23:35,091][PROGRESS] Time elapsed Variants Progress Est. Time left
[2019-01-22 11:24:05,091][PROGRESS] 30,000s 39,757,023 8.39615% 5m27s
[2019-01-22 11:24:35,091][PROGRESS] 01m00,000s 76,990,000 16.2593% 5m9s
[2019-01-22 11:25:05,091][PROGRESS] 01m30,000s 107,840,000 22.7744% 5m5s
[2019-01-22 11:25:35,091][PROGRESS] 02m00,000s 133,420,000 28.1765% 5m5s
[2019-01-22 11:26:05,092][PROGRESS] 02m30,000s 170,215,235 35.9472% 4m27s
[2019-01-22 11:26:35,092][PROGRESS] 03m00,001s 208,070,000 43.9416% 3m49s
[2019-01-22 11:27:05,092][PROGRESS] 03m30,001s 247,010,000 52.1652% 3m12s
[2019-01-22 11:27:35,092][PROGRESS] 04m00,001s 287,690,000 60.7563% 2m35s
[2019-01-22 11:28:05,092][PROGRESS] 04m30,001s 325,480,000 68.737% 2m2s
[2019-01-22 11:28:35,093][PROGRESS] 05m00,001s 364,299,966 76.9353% 1m29s
[2019-01-22 11:29:05,093][PROGRESS] 05m30,002s 401,367,427 84.7634% 59s
[2019-01-22 11:29:35,093][PROGRESS] 06m00,002s 439,290,000 92.7722% 28s
[2019-01-22 11:30:02,180][PROGRESS] 06m27,089s 473,514,826 (1,223,269 variants/s)
[2019-01-22 11:30:02,180][PROGRESS] Finished!
[2019-01-22 11:30:02,194][LOG] Finished merging! Time: 06m27,103s
[2019-01-22 11:30:02,194][LOG] Deleting temp files...
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_fileSucRkC.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_filewPI10T.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_fileo1zcHb.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_fileBvRnnt.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_filewbBz3K.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_fileCrNLJ2.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_filerCfYpk.two
[2019-01-22 11:30:02,194][LOG] Deleted 1kgp3_chr6_4mb_sorted_filexx9a6B.two
[2019-01-22 11:30:02,678][LOG] Finished!

real    9m18.793s
user    23m31.390s
sys     0m53.197s
```
Faster queries
Success: You can now use the newly created sorted archive exactly like unsorted files, but with faster query times.
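For intuition, the strategy from the Disk usage note above (sort chunks independently, then perform a single k-way merge) can be sketched in a few lines of Python; this is a toy illustration in which in-memory lists stand in for the temporary .two files:

```python
import heapq

def external_sort(records, chunk_size):
    # Pass 1: sort fixed-size chunks independently (each chunk stands in
    # for one sorted temporary file written by a sort slave).
    runs = [sorted(records[i:i + chunk_size])
            for i in range(0, len(records), chunk_size)]
    # Pass 2: a single k-way merge of all sorted runs.
    return list(heapq.merge(*runs))

data = [9, 2, 7, 4, 4, 1, 8, 0, 5, 3]
print(external_sort(data, chunk_size=3))  # [0, 1, 2, 3, 4, 4, 5, 7, 8, 9]
```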
## Format descriptions¶
Memory layout (technical)
A frequent design choice when designing a file format is how to lay out records in memory: as lists of records (array-of-structs) versus column-orientated layouts (struct-of-arrays). A pivoted (struct-of-arrays) layout of twk1_two_t structs in Tomahawk would result in considerable savings in disk space usage at the expense of querying speed. We decided to build the two format around the classical array-of-structs memory layout, as we want to maximize the computability of the data. This is especially true in our application, as we will never support individual columnar slicing and subset operations.
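To make the distinction concrete, here is a small numpy sketch of the two layouts (the field names and types are illustrative assumptions, not the actual twk1_two_t definition):

```python
import numpy as np

# Hypothetical record layout, loosely mirroring the ld columns listed below.
two_record = np.dtype([
    ('flag', np.uint16), ('chrom_a', np.uint32), ('pos_a', np.uint32),
    ('chrom_b', np.uint32), ('pos_b', np.uint32), ('r2', np.float64),
])

aos = np.zeros(4, dtype=two_record)  # array-of-structs: whole records contiguous
soa = {name: np.zeros(4, dtype=two_record.fields[name][0])
       for name in two_record.names}  # struct-of-arrays: one column per field

aos[0] = (3, 6, 5000012, 6, 5073477, 0.216917)  # fast whole-record access
print(soa['r2'])                                 # fast single-column scans
```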
### LD format¶
Tomahawk can output binary two data in the human-readable ld format by invoking the view command. The general schema for ld is below:
| Column | Description |
|---|---|
| FLAG | Bit-packed boolean flags (see below) |
| CHROM_A | Chromosome for marker A |
| POS_A | Position for marker A |
| CHROM_B | Chromosome for marker B |
| POS_B | Position for marker B |
| REF_REF | Inner product of (0,0) haplotypes |
| REF_ALT | Inner product of (0,1) haplotypes |
| ALT_REF | Inner product of (1,0) haplotypes |
| ALT_ALT | Inner product of (1,1) haplotypes |
| D | Coefficient of linkage disequilibrium |
| DPrime | Normalized coefficient of linkage disequilibrium (scaled to [-1,1]) |
| R | Pearson correlation coefficient |
| R2 | Squared Pearson correlation coefficient |
| P | Fisher's exact test P-value of the 2x2 haplotype contingency table |
| ChiSqModel | Chi-squared critical value of the 3x3 unphased table of the selected cubic root (α, β, or δ) |
| ChiSqTable | Chi-squared critical value of table (useful comparator when P = 0) |
The 2x2 contingency table, or matrix, for the Fisher's exact test (P) for haplotypes looks like this:
| | REF-A | ALT-A |
|---|---|---|
| REF-B | A | B |
| ALT-B | C | D |
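Given the four haplotype counts, the D, DPrime and R columns follow from the standard definitions; the sketch below (an illustration, not Tomahawk's implementation; it assumes both markers are polymorphic) reproduces the first example row shown later in this section:

```python
def ld_stats(ref_ref, ref_alt, alt_ref, alt_alt):
    # Standard textbook formulas for D, D' and r from 2x2 haplotype counts.
    n = ref_ref + ref_alt + alt_ref + alt_alt
    p_a = (ref_ref + ref_alt) / n      # frequency of REF at marker A
    p_b = (ref_ref + alt_ref) / n      # frequency of REF at marker B
    D = ref_ref / n - p_a * p_b
    if D >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    r = D / (p_a * (1 - p_a) * p_b * (1 - p_b)) ** 0.5
    return D, D / d_max, r

# Counts 4999, 6, 0, 3 from the row "2059 6 89572 6 214654 ..." shown below:
print(ld_stats(4999, 6, 0, 3))  # ~ (0.000597965, 1.0, 0.577004)
```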
The 3x3 contingency table, or matrix, for the Chi-squared test for the unphased model looks like this:

| | 0/0 | 0/1 | 1/1 |
|---|---|---|---|
| 0/0 | A | B | C |
| 0/1 | D | E | F |
| 1/1 | G | H | J |
The FLAG values in two records are bit-packed booleans in a single integer field and describe a variety of states a pair of markers can be in.

| Bit position | Numeric value | One-hot | Description |
|---|---|---|---|
| 1 | 1 | 0000000000000001 | Used phased math. |
| 2 | 2 | 0000000000000010 | Acceptor and donor variants are on the same contig. |
| 3 | 4 | 0000000000000100 | Acceptor and donor variants are far apart on the same contig. |
| 4 | 8 | 0000000000001000 | The output contingency matrix has at least one empty cell (referred to as complete). |
| 5 | 16 | 0000000000010000 | Output correlation coefficient is perfect (1.0). |
| 6 | 32 | 0000000000100000 | Output solution is one of >1 possible solutions. This only occurs for unphased pairs. |
| 7 | 64 | 0000000001000000 | Output data was generated in 'fast mode'. |
| 8 | 128 | 0000000010000000 | Output data is estimated from a subsampling of the total pool of genotypes. |
| 9 | 256 | 0000000100000000 | Donor vector has missing value(s). |
| 10 | 512 | 0000001000000000 | Acceptor vector has missing value(s). |
| 11 | 1024 | 0000010000000000 | Donor vector has low allele count (<5). |
| 12 | 2048 | 0000100000000000 | Acceptor vector has low allele count (<5). |
| 13 | 4096 | 0001000000000000 | Acceptor vector has a HWE-P value < 1e-4. |
| 14 | 8192 | 0010000000000000 | Donor vector has a HWE-P value < 1e-4. |
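A small illustrative decoder for this bit-field (not part of Tomahawk; the names paraphrase the table above). The value 2059 is the FLAG of the first example row shown later in this section:

```python
FLAG_BITS = {
    1: "phased math", 2: "same contig", 4: "far apart on same contig",
    8: "incomplete contingency table", 16: "perfect correlation",
    32: "one of several solutions", 64: "fast mode", 128: "subsampled estimate",
    256: "missing values (donor)", 512: "missing values (acceptor)",
    1024: "low allele count (donor)", 2048: "low allele count (acceptor)",
    4096: "acceptor HWE-P < 1e-4", 8192: "donor HWE-P < 1e-4",
}

def decode_flag(flag):
    # Return the human-readable names of all bits set in the FLAG integer.
    return [name for bit, name in FLAG_BITS.items() if flag & bit]

print(decode_flag(2059))
# ['phased math', 'same contig', 'incomplete contingency table',
#  'low allele count (acceptor)']
```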
## Viewing and manipulating output data¶
It is possible to filter two output data by:

1. either start or end contig, e.g. chr1
2. position in that contig, e.g. chr1:10e6-20e6
3. a particular contig mapping, e.g. chr1,chr2
4. an interval mapping in both contigs, e.g. chr1:10e3-10e6,chr2:0-10e6
```
tomahawk view -i 1kgp3_chr6_1_45.two -I 6:10e3-10e6,6:0-10e6 -H | head -n 6
```
Viewing all data
```
tomahawk view -i 1kgp3_chr6_1_45.two -H | head -n 6
```
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D DPrime R R2 P ChiSqModel ChiSqTable
2059 6 89572 6 214654 4999 6 0 3 0.000597965 1 0.577004 0.332934 4.01511e-09 1667.33 0
2059 6 89573 6 214654 4999 6 0 3 0.000597965 1 0.577004 0.332934 4.01511e-09 1667.33 0
2059 6 122855 6 214654 4988 17 0 3 0.000596649 1 0.38664 0.149491 5.44908e-08 748.648 0
3 6 143500 6 212570 4421 387 82 118 0.0195352 0.54402 0.331324 0.109775 1.53587e-69 549.755 0
3 6 143500 6 213499 4419 387 84 118 0.0194949 0.537523 0.329068 0.108286 7.57426e-69 542.296 0
Slicing a range
```
tomahawk view -i 1kgp3_chr6_1_45.two -H -I 6:5e6-6e6 | head -n 6
```
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D DPrime R R2 P ChiSqModel ChiSqTable
2063 6 5022893 6 73938 5000 7 0 1 0.000199362 1 0.353306 0.124825 0.00159744 625.125 0
2063 6 5020564 6 89339 5000 7 0 1 0.000199362 1 0.353306 0.124825 0.00159744 625.125 0
7 6 5018459 6 156100 1150 1283 388 2187 0.0804323 0.509361 0.348864 0.121706 1.17649e-138 609.504 0
7 6 5018691 6 156100 861 811 677 2659 0.0693918 0.339199 0.31898 0.101748 1.10017e-109 509.554 0
7 6 5018910 6 156100 861 811 677 2659 0.0693918 0.339199 0.31898 0.101748 1.10017e-109 509.554 0
Slicing matches in B string
```
tomahawk view -i 1kgp3_chr6_1_45.two -H -I 6:5e6-6e6,6:5e6-6e6 | head -n 6
```
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D DPrime R R2 P ChiSqModel ChiSqTable
3 6 5000012 6 5073477 4988 9 5 6 0.0011915 0.544089 0.465743 0.216917 1.05037e-13 1086.32 0
1027 6 5000160 6 5072168 5000 2 4 2 0.000398404 0.4994 0.407677 0.166201 7.1708e-06 832.333 0
1027 6 5000160 6 5078340 5000 2 4 2 0.000398404 0.4994 0.407677 0.166201 7.1708e-06 832.333 0
3 6 5000482 6 5079621 4993 6 2 7 0.0013931 0.777199 0.64641 0.417846 3.9492e-18 2092.57 0
3 6 5000482 6 5082227 4994 6 1 7 0.00139362 0.874675 0.685808 0.470333 8.78523e-19 2355.43 0
As alluded to in the sorting section, sorted files generally have much faster query times. We can demonstrate this difference using the sorted and unsorted data from the sliding window example above.
Unsorted:

```
$ time tomahawk view -i 1kgp3_chr6_4mb.two -H -I 6:1e6-5e6,6:1e6-5e6 > /dev/null

real    3m1.029s
user    2m41.929s
sys     0m5.682s
```

Sorted:

```
$ time tomahawk view -i 1kgp3_chr6_4mb_sorted.two -H -I 6:1e6-5e6,6:1e6-5e6 > /dev/null

real    0m38.167s
user    0m37.579s
sys     0m0.192s
```
Viewing ld data from the binary two file format while filtering lines on Fisher's exact test P-value (-P 1e-4), minimum haplotype count (-a 5), and requiring both markers to be on the same contig (-f 2, FLAG bit 2):
```
tomahawk view -i file.two -P 1e-4 -a 5 -f 2
```
Example output
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D Dprime R R2 P ChiSqModel ChiSqTable
15 20 1314874 20 2000219 5002 5 0 1 0.00019944127 1 0.4080444 0.16650023 0.0011980831 0 833.83313
15 20 1315271 20 1992301 5005 2 0 1 0.00019956089 1 0.57723492 0.33320019 0.00059904153 0 1668.6665
15 20 1315527 20 1991024 5004 0 3 1 0.00019952102 1 0.49985018 0.2498502 0.00079872204 0 1251.2498
15 20 1315763 20 1982489 5006 0 1 1 0.00019960076 1 0.70703614 0.49990013 0.00039936102 0 2503.4999
15 20 1315807 20 1982446 5004 3 0 1 0.00019952102 1 0.49985018 0.2498502 0.00079872204 0 1251.2498
Example unphased output. Notice that the estimated haplotype counts are now floating-point values and that bit-flag 1 is not set.
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D Dprime R R2 P ChiSqModel ChiSqTable
46 20 1882564 20 1306588 5003.9996 2.0003999 1.0003999 0.99960008 0.00019936141 0.49950022 0.40779945 0.1663004 0.0011978438 0.0011996795 832.83241
46 20 1895185 20 1306588 5004.9998 1.0001999 1.0001999 0.99980011 0.0001994811 0.49970025 0.49970022 0.24970032 0.00079864228 0.00069960005 1250.4992
46 20 1306588 20 1901581 5003.9996 2.0003999 1.0003999 0.99960008 0.00019936141 0.49950022 0.40779945 0.1663004 0.0011978438 0.0011996795 832.83241
46 20 1901581 20 1306588 5003.9996 2.0003999 1.0003999 0.99960008 0.00019936141 0.49950022 0.40779945 0.1663004 0.0011978438 0.0011996795 832.83241
46 20 1306649 20 1885268 5006 1 1.2656777e-08 0.99999999 0.00019960076 1 0.70703614 0.49990013 0.00039936102 0.00039969285 2503.4999
Example output for forced phased math in fast mode. Note that only the REF_REF count is available, the fast math bit-flag is set, and all P and Chi-squared CV values are 0.
FLAG CHROM_A POS_A CHROM_B POS_B REF_REF REF_ALT ALT_REF ALT_ALT D Dprime R R2 P ChiSqModel ChiSqTable
67 20 1345922 20 1363176 4928 0 0 0 0.011788686 0.98334378 0.86251211 0.74392718 0 0 0
67 20 1345922 20 1367160 4933 0 0 0 0.011800847 0.98336071 0.89164203 0.79502547 0 0 0
67 20 1345958 20 1348347 4944 0 0 0 0.0092644105 0.97890127 0.85316211 0.72788554 0 0 0
67 20 1345958 20 1354524 4938 0 0 0 0.0092493389 0.86871892 0.80354655 0.64568704 0 0 0
75 20 1345958 20 1356626 4945 0 0 0 0.0033518653 0.99999994 0.51706308 0.26735422 0 0 0
## Aggregating and visualizing datasets¶
Tomahawk typically outputs hundreds of millions to billions of linkage disequilibrium (LD) associations generated from millions of input SNVs. Visualizing such large datasets is technically very challenging. Read more about aggregation in Tomahawk.
| Aggregation | Description |
|---|---|
| R | Pearson correlation coefficient |
| R2 | Squared Pearson correlation coefficient |
| D | Coefficient of linkage disequilibrium |
| Dprime | Scaled coefficient of linkage disequilibrium |
| Dp | Alias for dprime |
| P | Fisher's exact test P-value of the 2x2 haplotype contingency table |
| Hets | Number of (0,1) or (1,0) associations |
| Alts | Number of (1,1) associations |
| Het | Alias for hets |
| Alt | Alias for alts |
| Reduction | Description |
|---|---|
| Mean | Mean of the aggregated values in a bin |
| Max | Largest value in an aggregate bin |
| Min | Smallest value in an aggregate bin |
| Count | Total number of records in a bin |
| N | Alias for count |
| Total | Sum of the aggregated values in a bin |
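Conceptually, aggregation is a 2D binning of the (POS_A, POS_B) plane in which each bin's values are reduced by the chosen function; a minimal numpy sketch of the idea (an illustration operating on toy arrays, not on .two files):

```python
import numpy as np

def aggregate(pos_a, pos_b, values, x_bins, y_bins, reducer=len, min_count=1):
    # Bin (pos_a, pos_b) pairs into an x_bins-by-y_bins grid and reduce each
    # bin's values; bins with fewer than min_count records report 0.
    grid = [[[] for _ in range(y_bins)] for _ in range(x_bins)]
    ix = (pos_a / (pos_a.max() + 1) * x_bins).astype(int)
    iy = (pos_b / (pos_b.max() + 1) * y_bins).astype(int)
    for i, j, v in zip(ix, iy, values):
        grid[i][j].append(v)
    return np.array([[reducer(cell) if len(cell) >= min_count else 0
                      for cell in row] for row in grid])

rng = np.random.default_rng(0)
pa = rng.integers(0, 10_000, 500)
pb = rng.integers(0, 10_000, 500)
r2 = rng.random(500)
print(aggregate(pa, pb, r2, x_bins=8, y_bins=8, reducer=len, min_count=5))
```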
In this example, we will aggregate the genome-wide data generated from the sliding-window example above, aggregating on R2 values and reducing to a count in a (4000,4000)-bin space. If a bin has fewer than 50 observations (-c), all its values are dropped and 0 is reported for that bin.
```
twk aggregate -i 1kgp3_chr6_4mb_sorted.two -x 4000 -y 4000 -f r2 -r count -c 50 -t 4 -O b -o 1kgp3_chr6_4mb_aggregate.twa
```
```
Program: tomahawk-7f8eef9b-dirty (Tools for computing, querying and storing LD data)
Libraries: tomahawk-0.7.0; ZSTD-1.3.8; htslib 1.9
Contact: Marcus D. R. Klarqvist
Documentation: https://github.com/mklarqvist/tomahawk
License: MIT
----------
[2019-01-25 14:55:43,299][LOG] Calling aggregate...
[2019-01-25 14:55:43,299][LOG] Performing 2-pass over data...
[2019-01-25 14:55:43,299][LOG] ===== First pass (peeking at landscape) =====
[2019-01-25 14:55:43,299][LOG] Blocks: 47,352
[2019-01-25 14:55:43,299][LOG] Uncompressed size: 50.192950 Gb
[2019-01-25 14:55:43,299][LOG][THREAD] Data/thread: 12.548238 Gb
[2019-01-25 14:55:43,299][PROGRESS] Time elapsed Variants Progress Est. Time left
[2019-01-25 14:56:13,299][PROGRESS] 30,000s 344,930,000 72.8446% 11s
[2019-01-25 14:56:27,979][PROGRESS] 44,679s 473,514,826 (10,597,935 variants/s)
[2019-01-25 14:56:27,979][PROGRESS] Finished!
[2019-01-25 14:56:27,979][LOG] ===== Second pass (building matrix) =====
[2019-01-25 14:56:27,979][LOG] Aggregating 473,514,826 records...
[2019-01-25 14:56:27,979][LOG][THREAD] Allocating: 2.560000 Gb for matrices...
[2019-01-25 14:56:27,979][PROGRESS] Time elapsed Variants Progress Est. Time left
[2019-01-25 14:56:57,979][PROGRESS] 30,000s 348,560,000 73.6112% 10s
[2019-01-25 14:57:12,499][PROGRESS] 44,520s 473,514,826 (10,635,926 variants/s)
[2019-01-25 14:57:12,499][PROGRESS] Finished!
[2019-01-25 14:57:13,713][LOG] Aggregated 473,514,826 records in 16,000,000 bins.
[2019-01-25 14:57:13,713][LOG] Finished.
```
You can now use the output binary format .twa in your downstream analysis. Alternatively, it is possible to output a human-readable (x,y)-matrix by setting the -O parameter to u. This tab-delimited matrix can now be loaded in any programming language and used as input for graphical visualizations or for analysis. It is easiest to use these files directly in rtomahawk, the R-bindings for tomahawk. | 2019-09-21 22:03:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.551266074180603, "perplexity": 11707.267500674028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574665.79/warc/CC-MAIN-20190921211246-20190921233246-00100.warc.gz"} |
https://brilliant.org/problems/sonometer-experiment/ | # Sonometer experiment.
Classical Mechanics Level pending
A sonometer wire resonates with a given tuning fork, forming standing waves with three antinodes between the two bridges, when a mass $$M$$ is suspended from the wire. When this mass is completely immersed in a liquid of density $$\rho_l$$, the wire resonates with the same tuning fork, forming five antinodes for the same position of the bridges. The ratio of the density of the liquid to the density of the mass $$M$$, $$\frac{\rho_l}{\rho_m}$$, is:
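A sketch of the standard approach (an added illustration, assuming the usual relation $$f = \frac{n}{2L}\sqrt{T/\mu}$$ for a wire resonating with $$n$$ antinodes under tension $$T$$): the tension drops from $$Mg$$ to $$Mg(1-\rho_l/\rho_m)$$ on immersion (weight minus buoyancy), while the frequency is fixed by the fork, so

```latex
\frac{3}{2L}\sqrt{\frac{Mg}{\mu}}
  = \frac{5}{2L}\sqrt{\frac{Mg\,(1-\rho_l/\rho_m)}{\mu}}
\;\Rightarrow\; 1-\frac{\rho_l}{\rho_m} = \frac{9}{25}
\;\Rightarrow\; \frac{\rho_l}{\rho_m} = \frac{16}{25}.
```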
× | 2017-09-22 08:27:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4532598853111267, "perplexity": 1156.6140845126697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688926.38/warc/CC-MAIN-20170922074554-20170922094554-00360.warc.gz"} |
https://abakbot.com/algebra-en/elli-en | Value including complex
Full elliptic integral of 1 sort Full elliptic integral 2 sorts
Elliptic integrals first appeared in the problem of determining the perimeter of an arbitrary ellipse.
In the general case, an integral
$\int R(x,y) dx$
is called elliptic, where $$R$$ is a rational function of $$x$$ and $$y$$, and $$y^2$$ is a polynomial of the third or fourth degree in $$x$$.
Transformations are known that allow expressing any elliptic integral in terms of the integral of a rational function x and the following three canonical integrals.
Elliptic integral of the first kind
$F(\varphi,k)=\int_{0}^{\varphi} \frac{dt}{\sqrt{1-k^2\sin^2 t}}$
Elliptic integral of the second kind
$E(\varphi,k)=\int_{0}^{\varphi} \sqrt{1-k^2\sin^2 t}\,dt$
Elliptic integral of the third kind
$\Pi(n, \varphi, k) = \int \limits_{0}^{\varphi}\!\frac{d\varphi}{(1+n \sin^2 \varphi) \sqrt{1-k^2\sin^2\varphi}}$
Here
$$\varphi$$ - amplitude
$$k$$ - module
$$n$$ is the parameter of the elliptic integral (of the third kind)
Integrals for which the amplitude $$\varphi = \frac{\pi}{2}$$ are called complete. For integrals of the first and second kind, the following notation is used, respectively:
$K(k)=F(\frac{\pi}{2},k)$
$E(k)=E(\frac{\pi}{2},k)$
An additional module is used, which is equal by definition
$k_1=\sqrt{1-k^2}$
In tables of elliptic integrals, it is customary to express the amplitude in degrees. In addition, the quantities F, E, K, E are often considered as functions of the modular angle - the angle replacing the modulus and expressed in degrees:
$\alpha=\frac{180}{\pi}\arcsin(k)$
In this way
$k=\sin(\alpha)$
$k_1=\cos(\alpha)$
When calculating $$K(k)$$, one of the most effective methods is the iterative arithmetic-geometric mean (AGM) method. Starting from the pair $$a_0 = 1$$, $$b_0 = k_1 = \cos(\alpha)$$, one forms successive arithmetic and geometric means, which yield two converging sequences:
$a_1=\frac{a_0+b_0}{2}, b_1=\sqrt{a_0b_0}$
$a_2=\frac{a_1+b_1}{2}, b_2=\sqrt{a_1b_1}$
$a_n=\frac{a_{n-1}+b_{n-1}}{2}, b_n=\sqrt{a_{n-1}b_{n-1}}$
The process ends at the n for which a and b agree to the desired precision. The sought value K is then determined by the formula
$K(k)=\frac{\pi}{2a_n}$
There is also a simple asymptotic formula as k tends to unity:
$K(k)=\ln\left(\frac{4}{k_1}\right)$
The calculation of the complete elliptic integral of the second kind is carried out according to the same scheme as for the integral of the first kind, using the differences
$$c_n = \frac{a_{n-1}-b_{n-1}}{2}, \; n = 1,2,3,\ldots$$
obtained at each iteration. Then
$E(k)=(1-\frac{1}{2}\sum_{n=0}^N2^nc_n^2)K(k)$
Where $$c_0 = k$$
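Putting the recipe above together, a minimal Python sketch of the AGM computation of K(k) and E(k) (modulus convention; for comparison, scipy's ellipk/ellipe take the parameter m = k² instead):

```python
from math import pi, sqrt

def agm_KE(k, tol=1e-15):
    # AGM iteration as described above: a0 = 1, b0 = k1 = sqrt(1 - k^2), c0 = k
    a, b, c = 1.0, sqrt(1.0 - k * k), k
    s = 0.5 * c * c          # accumulates (1/2) * sum over n of 2^n * c_n^2
    n = 0
    while abs(a - b) > tol:
        a, b, c = (a + b) / 2.0, sqrt(a * b), (a - b) / 2.0
        n += 1
        s += 0.5 * (2 ** n) * c * c
    K = pi / (2.0 * a)       # K(k) = pi / (2 a_n)
    E = (1.0 - s) * K        # E(k) = (1 - (1/2) sum 2^n c_n^2) K(k)
    return K, E

print(agm_KE(0.5))  # ~ (1.6857503548126, 1.4674622093395), matching the page
```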
The bot calculates the values of the complete elliptic integrals of the first and second kind for any value of k.
With this bot, we can easily determine the perimeter of an ellipse, as well as the length of an arc of any second-order curve.
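For example, the perimeter of an ellipse with semi-axes $$a \ge b$$ and eccentricity $$e$$ is $$P = 4aE(e)$$; a minimal sketch using scipy (assuming scipy is available; note that scipy's ellipe takes the parameter $$m = e^2$$, not the modulus):

```python
from scipy.special import ellipe

def ellipse_perimeter(a, b):
    e2 = 1.0 - (b / a) ** 2        # squared eccentricity, the parameter m
    return 4.0 * a * ellipe(e2)

print(ellipse_perimeter(5.0, 3.0))  # ~ 25.527
```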
Some examples
If $$x = i$$
Complete elliptic integral of the first kind $K(\frac{\pi}{2},i) = 1.3110287771461$ Complete elliptic integral of the 2nd kind $E(\frac{\pi}{2},i) = 1.910098894514$
I would like to note that if you check against the data provided by the site www.wolframalpha.com, you will find different values. This is because on that site the argument is first squared; that is, the values there are shown for the value \(i^2 = -1\).
Complete elliptic integral of the first kind $K(\frac{\pi}{2},0.5) = 1.6857503548126$ Complete elliptic integral of the 2nd kind $E(\frac{\pi}{2},0.5) = 1.4674622093395$
and one more
Complete elliptic integral of the first kind $K(\frac{\pi}{2},-8) = 0.19712334640198-0.43443093218712i$ Complete elliptic integral of the 2nd kind $E(\frac{\pi}{2},-8) = 0.098367068970897+7.7518095000745i$
If you find an error in the calculations, please kindly report it. Thank you!
Good luck!
Copyright © 2022 AbakBot-online calculators. All Right Reserved. Author by Dmitry Varlamov | 2022-05-17 07:28:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 22, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9050172567367554, "perplexity": 562.8394041441958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00001.warc.gz"} |
http://mathhelpforum.com/advanced-applied-math/132454-forces-calculus-problem-print.html | # Forces and calculus problem
• March 7th 2010, 05:17 AM
NeilT
Forces and calculus problem
Here is a problem I've been working on. I think I've got the first two parts but I'm stuck on the last section. I'd appreciate anyone checking my solution so far and, even better, helping me with the third section.
Q: A particle of mass m kg moves in a horizontal straight line from the origin O with initial velocity U i $ms^{-1}$, where i is the unit vector in the direction of motion. A resistive force $-mkv^3$i acts on the particle, where $k$ is a constant and $v$i is the velocity of the particle at time $t$ seconds measured from the start of the motion.
(i) Show that the velocity of the particle satisfies the differential equation
$\frac{dv}{dx} = -kv^2$,
where $x$ is the distance of the particle from $O$.
Hence show that $v = \frac{U}{1 + kUx}$.
(ii) Using (i) or otherwise, show that
$kUx^2 + 2x = 2Ut$.
(iii) Find an expression, in terms of k and U, for the time taken for the speed of the particle to reduce to half its initial value.
Solution:
(i) $F = -mkv^3$
$ma = -mkv^3$
$v\frac{dv}{dx} = -kv^3$
$\frac{dv}{dx} = -kv^2$ as required
then
$\int \frac{1}{kv^2} dv = \int - dx$
$- \frac{1}{kv} = -x + c$ but at $x=0 , v=U$
so $c = -\frac{1}{kU}$
and $-\frac{1}{kv} = -x - \frac{1}{kU}$
$-1 = -kvx - \frac{kv}{kU}$
$kvx - 1 = -\frac{v}{U}$
$kvxU + v - U = 0$
$v(kUx + 1) = U$
$v = \frac{U}{1+kUx}$as required
(ii)
$\frac{dx}{dt} = \frac{U}{1+kUx}$
$\int (1 + kUx)dx = \int U dt$
$x + \frac{1}{2} kUx^2 = Ut + c$ but at $x=0, t=0$
so $c=0$
and $\frac{1}{2} kUx^2 + x = Ut$
so $kUx^2 + 2x = 2Ut$ as required.
From $v= \frac{U}{1+ kUx}$, v will be U/2 when $\frac{U}{2}= \frac{U}{1+ kUx}$. Solve that for x.
Put that value of x into $kUx^2+ 2x= 2Ut$ and solve for t.
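A quick SymPy sanity check of (iii), not from the original thread, separating variables in $\frac{dv}{dt} = -kv^3$ and integrating the speed from $U$ down to $\frac{U}{2}$:

```python
import sympy as sp

t, k, U = sp.symbols('t k U', positive=True)
w = sp.symbols('w', positive=True)  # dummy integration variable for the speed

# From m dv/dt = -m k v^3: integral of v^-3 dv from U to U/2 equals -k t
t_half = sp.solve(sp.Eq(sp.integrate(w**-3, (w, U, U/2)), -k*t), t)
print(t_half)  # [3/(2*U**2*k)]
```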
Thankyou Halls of Ivy, I got $x=\frac{1}{kU}$ and hence $t=\frac{3}{2kU^2}$ | 2013-12-20 08:17:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9223753213882446, "perplexity": 296.8999955335228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345769121/warc/CC-MAIN-20131218054929-00057-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/how-to-shoot-microwaves.706749/ | # How to shoot microwaves?
1. Aug 22, 2013
How would I shoot microwaves at a piece of asteroid? Scientists shot something at one (which I forgot) and they could get data from it. Is that possible for a price of $2,000? I would love to see if I could find out the composition of asteroids. And what could I use? I know that NASA probably uses Morse code, right? So how could I make my receiver read Morse code? Or should I say, where can I learn Morse code?

Last edited: Aug 22, 2013

2. Aug 22, 2013

### davenn

hi there
welcome to PF

ummm do you have any electronics experience at all ... as in have you actually built any significant circuits?

$2,000? No, probably not ... maybe 3 or 4 times that.
1) You would need a high power transmitter, maybe ~100 W. There's several $1000 for a start.

2) A large parabolic dish, 40 ft or so in diameter, with all the precision driven motors for it to be able to track a tiny fast moving lump of rock: maybe another $20,000.

3) A high gain, very low noise receiver module, nitrogen cooled to achieve that low noise figure: maybe another ~$2,000.

4) A block of land big enough to mount the dish: a few more $1000 if out in a rural area.

5) Another few $1000 for the concrete mounting.

6) Hardware engineering design specifications so that the structure meets/conforms to local government building plan permits: maybe another few $1000.

7) Finally, acquiring transmitter licencing for high power microwave band transmission from your local organisation (in the USA, the FCC).
just to give you a moment of realism: at 10 GHz microwave frequency, us amateur radio operators can just very weakly hear our signals bounced off the moon with a 20 W transmitter into a 12 ft diameter dish
An asteroid is a tiny lump of rock in comparison, a reflected signal from even a high power transmitter would be extremely weak.
cheers
Dave
Last edited: Aug 22, 2013
3. Aug 22, 2013
### davenn
a little more realism
a couple of years ago a bunch of ham radio operators used the 1000ft diameter Arecibo dish in Puerto Rico in the Caribbean to bounce radio signals off the planet Venus
don't think you are going to build a 1000ft dish at home any time soon
Dave
4. Aug 22, 2013
### SteamKing
Staff Emeritus
Morse code? That's so TwenCen. Nobody uses Morse Code anymore.
5. Aug 23, 2013
### davenn
hahaha you would be surprised how many still use it ... still quite a fad amongst some radio operators. I have a few radio op friends that refuse to use voice or digital .... their view ... communicate with me on CW or not at all
Dave
6. Aug 23, 2013
### SteamKing
Staff Emeritus
Well, Morse Code is one of those things whose better days are behind it.
Morse has been superseded for maritime distress calls by the adoption of the GMDSS system, and the ITU and other communications agencies no longer require that Morse proficiency be demonstrated before granting an operator's license. The FCC has dropped all Morse requirements from its licensing tests. Even Western Union doesn't do telegrams anymore.
While I grant there may still be a lot of people who understand and can send Morse, I fear their numbers will naturally dwindle. Writing on clay tablets was once all the rage; now, not so much.
7. Aug 23, 2013
### davenn
yeah thats true
I did try learning CW many years back for my advanced radio op's licence couldnt get past 6 wpm needed 12 wpm I finally gave up and kept to my technicians licence .... I really didnt have any interest in HF anyway all my main work and experimenting has been mainly 400MHz and up to 24GHz
and as you say, they finally abolished the CW requirement in many countries including where I am.
I gave HF a hammering for ~ 12 months then lost interest and and went back to my "first love" ... the microwave bands much much more challenging!!
cheers
Dave
8. Aug 23, 2013
### davenn
this one is just for you steamking
cheers my friend
Dave
131 | 2017-09-25 17:15:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2676417827606201, "perplexity": 5523.498217608238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818692236.58/warc/CC-MAIN-20170925164022-20170925184022-00091.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-7-applications-of-integration-7-3-exercises-page-462/9 | ## Calculus 10th Edition
$\frac{8\pi}{3}$
Set up the integration using the shell method about the y-axis:
$2\pi \int_0^2 x(4-(4x-x^2))dx$
$2\pi \int_0^2 (4x-4x^2 +x^3)dx$
Integrate:
$2\pi (2x^2 - \frac{4}{3}x^3 + \frac{1}{4}x^4)]_0^2$
Take the definite integral:
$2\pi(8-\frac{32}{3} +4)-0$
$=\frac{8\pi}{3}$ | 2018-12-18 14:47:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993845820426941, "perplexity": 963.7877291373514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829429.94/warc/CC-MAIN-20181218143757-20181218165757-00552.warc.gz"}
https://emeraldreverie.org/2019/12/19/visualising-the-community-contributions-to-ansible-modules/ | Let's start with defining the question - today, I'm tasked with showing the patterns within the modules relating to where there are high levels of staff commits (i.e. people employed by Red Hat within Ansible itself), and everyone else (which we'll label 'community'). Note that 'community' will still include some Red Hat folks, as we're a big company and not everyone works on Ansible for the $day-job (there's a caveat to this, see Appendix 1).

The resulting graphic, as you'll see, largely speaks for itself, so here it is! Sometimes, a picture really is worth a thousand words…

This is a tree-map - each small circle is a single file in /lib/ansible/modules, and then the larger circles around them are the containing directory, and so on up the tree. The size of the file-circles only is determined from the total number of commits to that file.

I really like this result - apart from the non-colourblind-safe palette of red/blue (I was finding it hard to make a clearer one). It's a lovely graphic that tells a high-level story at a glance (lots of entirely community-created modules), and some more niche tales as you dive in (e.g. look closely at vmware…).

So, done? Nah, I don't write short posts - so, for a change, I'm going to dive into the R code I used to create it, so you can mess about with it yourself :)

## Part 0 - Libraries

We need some packages, so I load them here. My greglib package has my staff list in it, so you'll need to create one of those for yourself if you're following along :P

library(tidyverse) # data manipulation
library(greglib) # staff list
library(ggraph) # graphing tools
library(igraph)
library(data.tree) # tree mapping tools
library(git2r) # git cloning
# remotes::install_github('lorenzwalthert/gitsum')
library(gitsum) # nice log parsing tool

set.seed(1234)

## Part 1 - The Data

We'll need some data, of course. We could use a variety of metrics to determine "contributions" or "content", but I'm going to keep it simple. We'll git clone Ansible, and look at the raw Git log itself. We're going to restrict this to the last 2 years of data - if less, we don't get more than 1 commit to many files; if more, it's not representative of the current community.

# Clone if we need to
if (!dir.exists('/tmp/ansible')) {
  git2r::clone('https://github.com/ansible/ansible','/tmp/ansible')
  # Get detailed data on each commit - takes a few min :)
  gitsum::init_gitsum('/tmp/ansible/', over_write = T)
}
logs <- gitsum::parse_log_detailed('/tmp/ansible/')

# Filter to commits within date & touching /lib/ansible/modules
commits <- logs %>%
  filter(date > Sys.Date() - lubridate::years(2)) %>% # 2 year date filter
  unnest(nested) %>% # unpack commits to one-row-per-file-changed
  filter(str_starts(changed_file,'lib/ansible/modules/')) %>% # detect the commits to modules
  filter(!str_detect(changed_file,'=>')) %>% # drop rows which are just renames
  mutate(changed_file = str_remove(changed_file,'lib/ansible/')) %>% # tidy up the filename, everything is in lib/ansible
  mutate(staff = author_name %in% greglib::staff$gitlog.name) # note staff vs community commits
# Preview it
commits %>%
  arrange(short_hash) %>%
  select(short_hash,date,staff,changed_file) %>% # just picking a few columns to display
  knitr::kable()
|short_hash |date                |staff |changed_file                                 |
|:----------|:-------------------|:-----|:--------------------------------------------|
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_aaa_server_host.py |
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_bgp.py             |
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_bgp_neighbor.py    |
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_bgp_neighbor_af.py |
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_facts.py           |
|9f86       |2017-12-19 12:22:00 |TRUE  |modules/network/nxos/nxos_gir.py             |
OK, so we have a nice data frame, one row per changed file with supporting data. Lovely.
## Part 2 - Tree Maps
The best way to work with this data is to use a tree map, since it is actually a tree - a directory tree to be precise. Here, we’ll turn our data frame into a tree, and add the data we know about to the leaf (file) nodes.
tree <- commits %>%
group_by(changed_file) %>%
summarise(n = n(), # total commits to this file
staff = sum(staff), # commits from staff
perc_staff = staff/n # percentage of staff commits
) %>%
rename(pathString = changed_file) %>% # renaming makes later code cleaner
FromDataFrameTable() # create the tree
print(tree,'staff', limit = 7)
## levelName staff
## 1 modules NA
## 2 ¦--cloud NA
## 3 ¦ ¦--alicloud NA
## 4 ¦ ¦ ¦--__init__.py 0
## 5 ¦ ¦ ¦--_ali_instance_facts.py 0
## 6 ¦ ¦ ¦--ali_instance_facts.py 2
## 7 ¦ ¦ °--... 2 nodes w/ 0 sub NA
## 8 ¦ °--... 41 nodes w/ 1680 sub NA
## 9 °--... 20 nodes w/ 4561 sub NA
OK, we have leaf data, but notice that the parents (directories) are NA - we need to aggregate the per-file data. We can traverse the tree from the bottom upwards to get at this.
tree$Do(function(x) {
  if (!isLeaf(x)) { # no need to act on leaf nodes, it's defined there already
    x$n <- sum(Get(x$children, "n")) # sum of commits in the dir
    x$staff <- sum(Get(x$children, "staff")) # sum of staff commits
    x$perc_staff <- x$staff / x$n # percentage for this dir
  }
}, traversal = "post-order") # post-order means leaf-first
print(tree,'staff', limit = 7)
## levelName staff
## 1 modules 4693
## 2 ¦--cloud 1515
## 3 ¦ ¦--alicloud 5
## 4 ¦ ¦ ¦--__init__.py 0
## 5 ¦ ¦ ¦--_ali_instance_facts.py 0
## 6 ¦ ¦ ¦--ali_instance_facts.py 2
## 7 ¦ ¦ °--... 2 nodes w/ 0 sub NA
## 8 ¦ °--... 41 nodes w/ 1680 sub NA
## 9 °--... 20 nodes w/ 4561 sub NA
Boom, we have all the data in the right format. But we will want an extra column for the display side of things - I want the directory labels to be sized based on their tree depth (i.e. amazon should be a smaller font than cloud)
tree$Do(function(x) {
  x$label_size <- case_when(
    x$level == 2 ~ 6, # size 6 for top-dirs
    x$level == 3 ~ 4, # size 4 for module dirs
    TRUE ~ NA_real_ # don't label "modules" or filenames (level 1 & 4)
  )
}, traversal = 'post-order')
OK, we have a tree. Let’s plot it!
## Part 3 - Edges & Nodes
To plot this, we’ll need to consume it as a network - that is, a list of points(vertices) and the connections between them (edges). For a tree map, that’s just the list of directories & files, and the name of the parent (e.g. there is an edge from modules to modules/cloud, and so on).
Happily, data.tree has functions for that! The edges are trivial.
edges <- ToDataFrameNetwork(tree,'pathString')
The vertices are a little trickier, as we want to preserve all the attributes like staff_perc and so on. Also, the data.tree method for vertices would only give the leaf vertices, so we have to be a little clever - we use the list of to names in edges and merge data to that. It’s gnarly but it works:
# get data from the tree in a data.frame format
myverts <- tibble(name = tree$Get('pathString'),
                  n = tree$Get('n'),
                  perc_staff = tree$Get('perc_staff'),
                  # Style
                  label_text = tree$Get('name'),
                  label_size = tree$Get('label_size'))

vertices <- edges %>%
  select(-from) %>% # use the to column for names
  add_row(to = 'modules', pathString = 'modules') %>% # nothing can point *to* "modules", so add it back
  distinct(pathString, .keep_all = T) %>% # deduplicate the list
  left_join(., myverts, by=c("pathString" = "name")) %>% # join it to the tree data
  rename(name = to) %>% # tidy up and handle NAs introduced for "modules"
  mutate(perc_staff = if_else(name == 'modules',0.5,perc_staff),
         label_text = if_else(name == 'modules',NA_character_,label_text))

head(vertices) %>% knitr::kable()

|name               |pathString         |    n| perc_staff|label_text | label_size|
|:------------------|:------------------|----:|----------:|:----------|----------:|
|modules/cloud      |modules/cloud      | 8182|  0.1851626|cloud      |          6|
|modules/clustering |modules/clustering |  114|  0.3859649|clustering |          6|
|modules/commands   |modules/commands   |   66|  0.3030303|commands   |          6|
|modules/crypto     |modules/crypto     |  352|  0.0625000|crypto     |          6|
|modules/database   |modules/database   |  520|  0.1384615|database   |          6|
|modules/files      |modules/files      |  278|  0.3525180|files      |          6|

OK, we have a list of vertices and edges, so we can finally plot it!

## Part 4 - The Plot

First, the easy bit - we'll create a graph object, and cache a few values for ease of use in the main call.

mygraph <- graph_from_data_frame( edges, vertices=vertices )
total_perc = tree$Get('perc_staff')[1] # row 1 == 'modules', or *all the data*
staff_filter = 50 # These filters are arbitrary, and prevent crowding
nonstaff_filter = 200 # the plot with labels. Adjust to suit yourself!
Right, let’s do this. One immense ggplot2 invocation coming up! This is huge, so I’ll break it down piece by piece :)
plot <- ggraph(mygraph, layout = 'circlepack', weight=n)
## Non-leaf weights ignored
ggraph is our main function, it’s a ggplot2 extension and is amazing. See the GGraph website for inspiration!
The layout obviously sets us up for circle-packing, and the weight tells the algorithm to size the circles by number of commits, n.
plot <- plot +
geom_node_circle(aes(fill = perc_staff, colour = as.factor(depth)), size = 0.1)
Add the circles (the nodes), and use aesthetics (aes) that fill the circle by staff_perc, colour the boundary line by depth, and make that line really thin.
plot <- plot +
  scale_fill_gradient2(low = 'blue', high = 'red', # function name reconstructed; a diverging blue-to-red fill as described below
                       mid = 'white', midpoint = 0.5,
                       name='Staff Commits',
                       labels = scales::percent)
This sets up the colour scale from blue to red with white at 0.5 - that is, white is where there is no strong bias towards staff or community.
plot <- plot +
scale_color_manual(guide=F,values=c("0" = "white",
"1" = "black",
"2" = "black",
"3" = "black",
"4" = "black") )
Recall the colour is mapped to depth - this just says “map depth 1 to white, and the rest to black” which means we don’t see a circle for all modules. I think that looks nicer, but you can change the first entry to “black” to compare.
plot <- plot +
geom_node_label(aes(label = label_text,
size = I(label_size),
filter = (perc_staff < 0.5) & (n > nonstaff_filter),
fill = perc_staff),
repel = T)
This plots the “community” labels (perc < 0.5), using the filter of staff_perc < 0.5, and the commit filter defined above.
plot <- plot +
geom_node_label(aes(label = label_text,
size = I(label_size),
filter = (perc_staff >= 0.5) & (n > staff_filter),
fill = perc_staff),
repel = T)
And this does the “staff” labels. They’re separate label calls because they use different filters - if you wanted a single n_filter object for both types of label, you could simplify this bit!
plot <- plot +
theme_void() +
theme(plot.margin = unit(rep(0.5,4), "cm")) +
labs(title = paste0('Staff vs non-staff commits, per module file. Total staff: ', scales::percent(total_perc)),
caption = 'Source: git-log, last 2 years, commits expanded by files touched')
This final bit is just styling - setting up the look & feel, the title, etc.
OK, let's plot this baby! Be warned, if you followed along, this step takes a good few minutes to process. Get a beverage :)
# save to file so we *definitely* get a big image
png('/tmp/bubbleplot.png',width=1200,height=800)
print(plot)
dev.off()
## png
## 2
## Conclusion
And there you have it, one step-by-step Ansible community bubble-plot (or circle-pack) based on commits and who wrote them. Hope you enjoyed it as much as I enjoyed making it! If you did, stop by and let me know, I’d love to chat about it :)
## Appendix 1 - The Identity Caveat
A caveat arises, then. To do this, you have to identify the staff members to categorize, and identity is a hard problem. To get around this, I had my excellent colleague Gundalow give me the list of folks he considered “staff”. No, I’m not going to share it with you, you can ask him :P
This does mean that if a name in the git logs doesn’t match my list, it’ll go in the community group. That’s fine in the vast majority of cases, but if someone updates their .gitconfig file to have a new name, I’ll lose track of them. It’s not a big impact, I think, but something to be aware of. | 2020-01-27 16:02:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27239248156547546, "perplexity": 7193.328046093302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251700988.64/warc/CC-MAIN-20200127143516-20200127173516-00399.warc.gz"} |
https://www.gamedev.net/blogs/entry/2255605-lua-yaml-goblinson-crusoe/ |
# Lua, YAML, Goblinson Crusoe
I love Lua, that shouldn't be a surprise to anybody who reads this blog. I am possessed of a chaotic, ad-hoc personality, so the ad-hoc nature of Lua coding suits me just fine. I love the fluidity and flow of working with a dynamically typed language, and the ability that it gives me to rapidly prototype and iterate on various data structures, code flow schemes and game designs without having to constantly recompile. Lua is awesome for quickly prototyping a game to see how it is going to work.
However, even with LuaJIT, profiling Goblinson Crusoe shows a distressing amount of time being spent in the various book-keeping tasks of Lua: garbage collection, memory allocation, etc... It's not unmanageable at the moment, given the relatively low number of objects in play at any given time, but I can foresee a time when it might be a problem. A bigger game, perhaps, with more stuff going on.
The main areas I really benefit from Lua are message passing (the backbone of object-to-object communication in GC), data structuring (iterating on the structure of a component), and data description and streaming (save/load, entity construction, etc...).
1) Message Passing - The backbone of communication in GC. Messages are constructed as ad-hoc Lua tables populated with relevant data, then sent through the central object handler via a sendMessageTo(receiver_id, message) call. It's like handing a note to someone; the sender doesn't really need to know what the receiver does with the message, or even if the receiver handles it at all. However, the data of the message can't really be nailed down with any kind of concrete type. For example, an ApplyRawDamage message would contain the amount of damage to apply to a unit, as well as the type (fire, electricity, etc...). Whereas an UpdateLogic message doesn't need any data at all; just the fact of sending it to an object causes it to do a Think() cycle followed by an action cycle. (A minimal sketch of this pattern appears below, after point 3.)
2) Data Structuring - Especially during development of the combat system of GC, I went through many different iterations of many of the components in GC. I continue to iterate on a lot of things, though most of the main structures are nailed down now. Being able to skip the compilation step was a great time saver.
3) Data Description - An object in GC can be loaded simply by loading a save Lua file that consists of a table of tables. The top-level table describes the overall structure of the object, or which components make it up. The sub-tables describe each component. So a tree might be loaded from a table similar to this:
return {
  {component="StaticSprite", sprite="Tree15", scale=1},
  {component="HarvestableResource", type="Aspen Planks", quantity=Random(2,4)}
}
It is easy enough to just call dofile() with the data filename to obtain a parsed and compiled Lua table with the data, then create an empty object and iterate the component list, constructing components as specified in the file and adding them to the object.
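To make point (1) above concrete, here is a minimal sketch of that messaging pattern, in Python rather than Lua; apart from the sendMessageTo call shape and the ApplyRawDamage example, the names here are invented for illustration:

```python
objects = {}  # stand-in for the central object handler

def send_message_to(receiver_id, message):
    # The sender hands off a note; the receiver may ignore unknown messages
    receiver = objects.get(receiver_id)
    if receiver is not None:
        receiver.handle(message)

class Tree:
    def __init__(self, hp=5):
        self.hp = hp

    def handle(self, message):
        if message.get("type") == "ApplyRawDamage":
            self.hp -= message["amount"]  # damage type ignored in this sketch

objects[42] = Tree()
send_message_to(42, {"type": "ApplyRawDamage", "amount": 2, "damage": "fire"})
print(objects[42].hp)  # 3
```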
Recently, I've been exploring ways of duplicating this system in C++, hedging my bets against the day when I might find Lua to be inadequate. (I'm still not sure that day will come, to be honest; Lua, especially with LuaJIT, is pretty awesome.)
One of the biggest hangups for me has been the message passing. In my earlier days, I would often implement this type of system using the dreaded void *. Which, of course, really isn't the best idea. void * is the Wild West of variant data types, a Wild West where striking gold or getting shot by an Indian are both of equal probability. My efforts along these lines as a novice were thankfully short-lived, once I realized the sheer potential for bugs this provided. In the ten or so years since, I have never used a void *. I've been cured of it, you could say.
I've implemented roll-your-own variant classes using templates, unions, and all sorts of other schemes. I've even used a string-encoding scheme, where all data was encoded into a stringstream, something I saw once here on the forums. Perhaps the safest and cleanest was using a map of boost::any keyed on a string.
Unfortunately, rolling your own variant/any class also requires rolling your own file parser and streaming code, something which I have little patience for. I went through a phase during my early development where I was interested in parsing and writing languages and all that, but it wore off within a matter of months. I just don't really have any enthusiasm for writing any more parsing code, especially when I'm doing it to replace Lua but am not convinced I really need to, and certainly can't do it as efficiently as Lua.
My latest experiment, however, has been with YAML. Specifically, the yaml-cpp library. The cool thing about using YAML (or even JSON, though I don't like JSON syntax; seeing a readable data file full of " " quoted strings just hurts my eyes) is that the file read/parse/write code is already written. The yaml-cpp Node class provides an easy interface for constructing ad-hoc tables as well, allowing me to directly set or read key/value pairs. In all my efforts, using yaml-cpp has come the closest to feeling like Lua, as far as the data description and message passing. yaml-cpp drops in and replaces all of my configuration, data loading and message passing with relative ease.
It has gone well enough, in fact, that I have considered making the leap back to pure C++ even though I haven't hit the theoretical object limit in GC-Lua yet. I do see that limit coming (maybe not in this game, but surely in the next). I could write a tool to convert the Lua data tables for objects over to a YAML format (for many of them, such a fix would be almost as simple as converting = to : and stripping the return keyword from the top of the file) easily enough. And since the prototypes for most of the key components are nailed down now, then I don't benefit quite so much from Lua's lack of re-compilation requirements. I could port most of my components straight across and be done.
Preliminary profiling indicates a noticeable improvement over GC-Lua, but it's still a bit too early to tell. But just skipping the garbage collection cycle has made quite a bit of difference. To be fair, though, I could probably do a large pass of minor refactoring to reduce or eliminate a lot of waste in GC-Lua, where temporaries are created and collected unnecessarily, especially in inner loops and whatnot. Still, as I ease back into GC after one of my many hiatuses (hiatii?) it is something for me to think about.
## 1 Comment
I kind of been doing this with C++ and XML files lately, putting a bunch of stuff to control objects in a xml file and loading them after the interface is done in C++, neat isn't it.... Think that is what you are kind of doing....
Register a new account | 2018-01-17 15:05:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3129885494709015, "perplexity": 1800.6705673499728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886946.21/warc/CC-MAIN-20180117142113-20180117162113-00250.warc.gz"} |
https://stats.stackexchange.com/questions/263715/binary-classification-on-time-series-data | # Binary classification on time series data
I have time-series data of air pressure inside a room. The readings are the output of a physics experiment. The predictor variable is a binary flag, coded as follows:
If (ending-reading = 0) then 1 else 0
I have attached the snapshot of the data below. My objective is to predict the likelihood of the ending-reading being 0 for a future time period.
I understand that I can use time-series forecasting like ARIMA or ARIMAX to project the end-reading and then simply refresh the Predictor flag. But I am looking for other alternatives, either supervised or unsupervised methods.
I thought survival-analysis might work but I am not sure if it is applicable in this case since the end-reading can be 0 on multiple days. The experiment doesn't stop if the end-reading on a particular day is 0.
Would logistic regression work on time-series data?
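For concreteness, here is a minimal sketch of what "logistic regression on lagged features" could look like; all of the data below is fabricated purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
readings = rng.normal(loc=1000, scale=5, size=300)  # stand-in pressure readings
flag = (rng.random(300) < 0.2).astype(int)          # stand-in "ending-reading = 0" flag

lags = 7  # predict each day's flag from the previous 7 readings
X = np.array([readings[t - lags:t] for t in range(lags, len(readings))])
y = flag[lags:]

model = LogisticRegression(max_iter=1000).fit(X[:-50], y[:-50])
print(model.predict_proba(X[-50:])[:3, 1])  # P(flag = 1) for held-out days
```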
Any help would be much appreciated. | 2021-05-11 11:20:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41985031962394714, "perplexity": 869.1942113151708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991982.8/warc/CC-MAIN-20210511092245-20210511122245-00514.warc.gz"} |
https://math.stackexchange.com/questions/1038826/the-burgers-vortex-in-2-dimension-solving-differential-equation/1038844 | The Burgers vortex in 2 dimensions - solving a differential equation
After simplifying the vortex equation, I get to this equation:
$$-\alpha y \partial_y \omega = \alpha \omega + \nu \partial_{yy} \omega$$
where $\alpha$ and $\nu$ are constants and $\omega = \omega(y)$. I want to solve the equation for $\omega(y)$. How can I do it? I appreciate in advance any kind help.
Regards,
Ehsan
$$-\alpha y \dfrac{d\omega}{dy} = \alpha\omega + \nu \dfrac{d^2\omega}{dy^2}$$
this is the same as
$$\nu \dfrac{d^2\omega}{dy^2} + \alpha \dfrac{d}{dy}(y\omega) = 0$$ thus
$$\nu \omega ' + \alpha y \omega + C = 0$$
now if $C=0$ then the solution is separable, but if not then we have to solve (dividing through by $\nu$ and absorbing it into $C$) $$\frac{d\omega}{dy} + \frac{\alpha}{\nu} y \omega= -C$$ using the integrating factor $\mathrm{e}^{\frac{\alpha}{\nu}\frac{y^2}{2}}$: $$\omega \mathrm{e}^{\frac{\alpha}{\nu}\frac{y^2}{2}} = -\int_0^y C \mathrm{e}^{\frac{\alpha}{\nu}\frac{y'^2}{2}}dy'$$
or
$$\omega(y) = -\left(\int_0^y C \mathrm{e}^{\frac{\alpha}{\nu}\frac{y'^2}{2}}dy'\right)\mathrm{e}^{-\frac{\alpha}{\nu}\frac{y^2}{2}}$$
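(A quick SymPy verification, not part of the original answer, that the resulting general form solves the homogeneous equation:)

```python
import sympy as sp

y, c1, c2 = sp.symbols('y c1 c2')
alpha, nu = sp.symbols('alpha nu', positive=True)

# General homogeneous solution assembled from the integrating factor above
w = (c1 + c2*sp.erfi(sp.sqrt(alpha/(2*nu))*y)) * sp.exp(-alpha*y**2/(2*nu))

# Residual of nu*w'' + alpha*y*w' + alpha*w (the original ODE rearranged)
residual = nu*sp.diff(w, y, 2) + alpha*y*sp.diff(w, y) + alpha*w
print(sp.simplify(residual))  # 0
```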
• Very well done indeed, plus one!!! – Robert Lewis Nov 27 '14 at 5:25
• @robertlewis Thanks! Too kind. – Chinny84 Nov 27 '14 at 7:18
• Well, I was wondering if there wasn't some clever way of solving this, and you found it! – Robert Lewis Nov 27 '14 at 7:32
Maple finds solutions $$\omega(y) = \left(c_1 + c_2\; \text{erfi}\left(\sqrt{\alpha/(2\nu)}\, y\right)\right) e^{-\alpha y^2/(2 \nu)}$$ | 2019-06-24 23:36:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687441349029541, "perplexity": 499.2134392936046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999779.19/warc/CC-MAIN-20190624231501-20190625013501-00460.warc.gz"}
https://cs.stackexchange.com/questions/112764/set-cover-with-multiple-covers | Set Cover with multiple covers
In the Set Cover problem we need to cover each element at least once. I'm considering the case where I want each element to be covered at least $$k$$ times with constant $$k$$.
I consider the classic LP for the problem and randomized rounding. Is it indeed the case that the modification of the LP from $$\geq 1$$ to $$\geq k$$ in the covering constraint and with the same rounding (up to the number of repetitions) works well for this variant as well?
It looks like it, but I'm not sure if I'm missing anything.
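For concreteness, the LP I have in mind is the standard multicover relaxation (my formulation, since the question leaves it implicit):
$$\min \sum_{S} c_S x_S \quad \text{s.t.} \quad \sum_{S \ni e} x_S \ge k \quad \forall e, \qquad 0 \le x_S \le 1.$$
If each set may be chosen at most once, the upper bound $$x_S \le 1$$ can actually be binding here, unlike in plain Set Cover.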
• Why not try to prove it? This is how we verify conjectures in mathematics. – Yuval Filmus Aug 15 '19 at 7:11
• @YuvalFilmus, It's the same proof, I just wanted a verification that I wasn't missing anything – Belgi Aug 15 '19 at 7:26
• If your proof works, then the statement is correct. There's no need for independent verification. – Yuval Filmus Aug 15 '19 at 9:25 | 2020-09-29 11:35:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055858612060547, "perplexity": 385.4860866534964}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00158.warc.gz"} |
https://mully.net/en/area_of_circle_en/ | # Area of Circle
## How to calculate the area of a circle
If you cut the circle into ever more sectors and stick them together, the arrangement gradually becomes more rectangular. In this rectangle, the width is half of the circumference, and the height is the radius.
Therefore, the area of a circle can be calculated as shown below; with radius r this gives A = (1/2)(2πr) × r = πr².
Area of circle = 1/2 of circumference x radius | 2023-03-24 13:38:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8583356738090515, "perplexity": 538.610632862959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00052.warc.gz"} |
https://blog.wtf.sg/2016/04/15/deciding-when-to-feedforward-or-wtf-gates/ | ## Deciding When To Feedforward (or WTF gates)
Another paper of mine, titled “Towards Implicit Complexity Control using Variable-Depth DNNs for ASR Systems” got accepted to the International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2016 in Shanghai, which happened not too long ago.
The idea behind this one was the intuition that in a classification task, some instances should be simpler than others to classify. Similarly, the problem of deciding when to stop in an RNN setting is also an important one. If we take the bAbI task for example, and go an extra step and assume the number of logical steps to arrive at the answer is not provided for you, then you need to know when the network is ‘ready’ to give an answer.
Being in a lab that mostly does work related to DNN acoustic modelling for speech recognition, I figured I could pitch it as a way for reducing computation time during runtime. I figured, sil frames must be pretty straightforward to classify, and a huge proportion of the frames in speech recognition are silence.
If we consider each layer in the network as a representation of the original input that is being ‘untangled’ to perform the final task of discrimination at the final layer, then it might be possible to perform the classification using any of the representations. Further, some of the lower representations might even be useful/better for certain classes, if we could only find a way to dynamically decide when to use them, and when to continue the feedforward process.
I was thinking about how some kind of gating system could work for this that would decide at every layer: ‘Do I output, or do I feedforward?’. Eventually Professor Sim and I arrived at a kind of cascading mechanism that satisfied 2 important properties:
• You had to be able to evaluate the value of the gate as a probability at the current layer without seeing the subsequent ones, or you’d defeat the purpose
• The gates should be a distribution that sums to 1
This gave me a way to frame it as a kind of marginalisation over a conditional probability (probability of phoneme given stop signal ($s_l$) and frame):
$$P(y|x) = \sum_{l=1}^L P(y|s_l,x)P(s_l|x),$$
where $L$ is the number of layers and $P(s_l|x)$ is given by,
$$P(s_l|x) = g_l \prod_{l'=1}^{l-1} (1-g_{l'})$$
In order to make this a distribution, $g_L$ is always 1. Graphically,
(the variable names are different here, this is a diagram I drew early on in the paper writing process)
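As a minimal sketch (NumPy, not the actual model code) of how the cascade of gate values $g_1,\ldots,g_L$ turns into this distribution:

```python
import numpy as np

def halting_distribution(g):
    """P(s_l|x) = g_l * prod_{l' < l} (1 - g_{l'}), with g_L forced to 1."""
    g = np.asarray(g, dtype=float).copy()
    g[-1] = 1.0                           # g_L = 1 makes the sum exactly 1
    survive = np.cumprod(1.0 - g[:-1])    # probability of not stopping before layer l
    return g * np.concatenate(([1.0], survive))

p = halting_distribution([0.3, 0.5, 0.2, 0.9])
print(p, p.sum())  # [0.3 0.35 0.07 0.28] 1.0
```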
We called this model VDNN, instead of the very cool WTF gates I’d have used if I only thought of it sooner (though I’m pretty sure it’d never fly).
Indeed, there were differences when you look at the average layers used and the class of the frame that is being classified:
But silence was not where I expected it to be!
Incidentally, Alex Graves at DeepMind had a very similar idea (that he actually got to work much better, and on RNNs too) which he named Adaptive Computation Time (ACT). The method we used is nothing but a footnote of failure in his paper:
Page 5:
However experiments show that networks trained to minimise expected rather than total halting time learn to ‘cheat’…
Oh well. | 2018-09-25 07:56:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7169150114059448, "perplexity": 721.5216562970296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161214.92/warc/CC-MAIN-20180925063826-20180925084226-00106.warc.gz"} |
https://electronics.stackexchange.com/questions/523034/wifi-over-wires | # WIFI over wires [closed]
So, guys, I have made a wifi gun but it seems not to be working, as there are a lot of trees between my friend's wifi and my home (1 km distance). So is there a method to convert wifi to wire and transmit it over here? I was thinking about connecting a wire to my wifi antenna and dragging that to his house.
Any suggestions or advice?
• Yes. It's called "Ethernet". – Puffafish Sep 24 at 12:27
• Hahaha next thing you'll be wanting phones that work over wires too. – Brian Drummond Sep 24 at 12:30
• There's a reason why that YouTube video has gotten so many negative votes. – rdtsc Sep 24 at 12:43
• What in the name of little green boogers makes people think that videos are a good way to distribute technical information? To build a copy, you need detailed plans (drawings,) written descriptions of how to make and assemble the parts, and a list of troubleshooting steps for when something goes wrong (and something always goes wrong.) Videos offer none of those things. – JRE Sep 24 at 12:50
• Sorry, but this isn't a "gimme the circuitz" site. You've asked on an electronics engineering site for questions and answers on technical design so you can expect to have to have some knowledge on the subject to phrase a decent technical question. – Transistor Sep 24 at 13:09
This has got off to a bad start, but the answers are really almost there: WiFi and Ethernet are closely related through both the technology and the IEEE standards body that defines them.
The absolute simplest wifi-wire-to-wifi would be unscrewing both antennas on the APs and putting a length of coaxial cable between them. For short links this would probably need an attenuator, but a 1km piece of cable should attenuate quite effectively itself (-30dB maybe?). I'm not sure if the latency would prevent this from working effectively but I don't immediately see why it would.
Someone claims to have actually done this. In their case they already had the cable installed and needed to build adapters.
The "correct" solution is ethernet-over-coax or ethernet-over-fiber, but both of those require specialised conversion equipment.
You might be able to make "twinax" (twin coaxial, used for satellite TV dishes) work instead of ethernet cable with appropriate baluns, but this seems like a lot of hassle and I wouldn't try it unless the link was already there.
What will almost certainly not work is a long piece of unshielded cable, like bell wire, speaker wire, or household mains cable. You can do Ethernet over junk for short distances but not long ones.
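To put rough numbers on the attenuation point, using the LMR-400-class figure quoted in the comments below (~22 dB per 100 m near 2.5 GHz; purely illustrative arithmetic):

```python
# dB loss scales linearly with cable length
loss_db_per_100m = 22.0   # LMR-400-class coax near 2.5 GHz (figure from the comments)
length_m = 1000.0
total_loss_db = loss_db_per_100m * length_m / 100.0
print(f"{total_loss_db:.0f} dB over {length_m:.0f} m")  # 220 dB: hopeless for WiFi
```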
• from my experience, only -30dB of attenuation over 1km would be an exceptionally good cable... I doubt it's more. Cat 5e allows just some ~25dB insertion loss at 100MHz. And this is roughly what Cat5e twisted pair offers - for 100m. (of course Cat 6, 7, 8 are a bit better, but not by orders of magnitude) – schnedan Sep 24 at 13:34
• I was thinking of maybe stepping down the frequency of the wave, transmitting it over cables, and doing the other stuff at the other end..... I don't have enough experience to tell whether it will work. @Transistor that is why I posted here, take this technical shit, LOL – ElementX Sep 24 at 13:40
• Yes, it'd have to be really low-loss. LMR-1700 (lowest loss standard product that Times Microwave sells, a very large 1.7 inch diameter coaxial cable) does about 5.7 dB/100 m @ 2.5 GHz, or about 57 dB at 1 km. Something more sane like LMR-400 (Times version of RG-8) is 22 dB / 100 m. There's a reason that DOCSIS cable internet stays mostly below 1 GHz, the attenuation is much more friendly there. – Peter Sep 24 at 13:41
• Guys, if I am going to buy coax, it would take me more time than downloading Forza Horizon 4 via GSM – ElementX Sep 24 at 13:43
• @Peter, and the prices! Virtual Air heliax coax for this kind of length and frequency will cost $500-$1000. Better to go with RG58 and a DSL modem. This is a great "youtube" idea, if the OP can add some fire & explosions, a few sprints through the forest, a social experiment, and some drone aerial shots. Extra points for a Call-Of-Duty theme. – P2000 Sep 24 at 14:50 | 2020-10-20 06:41:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24237333238124847, "perplexity": 2411.25394884385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00003.warc.gz"} |
https://mathoverflow.net/questions/284684/finite-non-commutative-ring-with-few-invertible-unit-elements | # Finite non-commutative ring with few invertible (unit) elements
For a ring $R$ with unity, let $U(R)$ denote the group of units of $R$. Now there are lots of finite commutative rings, of arbitrarily high order, with exactly one unit; indeed $U(R)=1$ for a finite commutative ring $R$ iff $a^2=a, \forall a \in R$. Incidentally, I couldn't find any finite non-commutative ring with exactly one unit; matrix rings don't seem to work.
So my question is: does there exist a finite non-commutative ring with unity having exactly one invertible (unit) element?
Small remark: note that such a ring must have characteristic $2$ (since $-1$ is a unit, so $-1=1$).
• It should be noted that such rings cannot have non-zero nilpotents, because if $u\in R$ is such a nilpotent, then $1+u$ has $1-u+u^2-\ldots\in R$ as its inverse. The question is then whether non-commutative reduced rings in char 2 exist. (IDK) – kneidell Oct 29 '17 at 7:46
• @kneidell : true indeed ... – user111524 Oct 29 '17 at 7:51
• @kneidell : I have posted an answer – user111524 Oct 29 '17 at 8:26
• Late comment: the question was answered in arxiv.org/pdf/1302.3192.pdf, 2013. – Luc Guyot Feb 27 '18 at 21:12
• @LucGuyot : I had a look at the paper; it doesn't prove anything more general, and the techniques are same as that of mine. In addition I also noticed in my answer that a finite non-commutative ring with zero Jacobson radical has at least 6 invertible elements. – user111524 Feb 28 '18 at 15:33
This answer presents an alternate proof of users' negative answer by proving directly that a finite ring whose only unit is its identity must be a Boolean ring, hence commutative. The proof given below is based on a result by Melvin Henriksen. It doesn't rely on the Artin-Wedderburn Theorem and turns out to be fully elementary.
Following Melvin Henriksen, we call $R$ a UI-ring if $R$ has an identity element $1$ and $ab = ba = 1$ for $a,b \in R$ implies $a = b = 1$.
We have
Claim. A finite ring $R$ with identity is a UI-ring if and only if $R$ consists only of idempotent elements, i.e., $R$ is a Boolean ring. In particular, a finite UI-ring is commutative.
Proof. Assume that $R$ is a UI-ring. Then $R$ is reduced and $2x = 0$ for every $x \in R$. As $R$ is a finite dimensional vector space over $\mathbb{Z}/2\mathbb{Z}$, every element of $R$ is algebraic over $\mathbb{Z}/2\mathbb{Z}$ by the Cayley-Hamilton theorem. Thus $R$ is a Boolean ring by [2, Corollary 2.10], which shows that $R$ is commutative. Assume now that $R$ is a Boolean ring. As any element $x \neq 1$ satisfies $x(1 - x) = 0$, the identity $1$ is the only unit of $R$.
The commutative case mentioned in OP's question was solved by P. M. Cohn [1, Theorem 3], whether $R$ is finite or infinite:
Cohn's Theorem Let $R$ be an algebra over a field $F$ without nontrivial units, i.e., the units of $R$ are those of $F$. Then $R$ is a subdirect product of extension fields of $F$, and every element $x$ of $R$ which is not in $F$ is transcendental over $F$, unless $F = GF(2)$ and $x$ is idempotent. If, moreover, $R$ has finite dimension over $F$, then either $R = F$ or $R$ is a Boolean algebra.
Addendum. I discovered this preprint of Rodney Coleman (2013) in which OP's question was both asked and answered.
[1] P. M. Cohn, "Rings of zero-divisors", 1984.
[2] M. Henriksen, "Rings with a unique regular element", 1989.
• I like your argument. Note that the commutative case, for which you quote Cohn's theorem, is very easy to prove as follows: Let $R$ be a finite commutative ring with exactly one unit. Then $J(R)=0$. Let $m_1,...,m_n$ be the distinct maximal ideals of $R$; then by CRT, $R \cong R/J(R) \cong \prod_{i=1}^n R/m_i$; so $R$ is a finite direct product of fields, and since $R$ has exactly one unit, so do all those fields, hence $R \cong \mathbb Z_2^n$, where $n=|Spec(R)|$ – user111524 Oct 30 '17 at 14:12
I think I have it; there can be no such non-commutative ring.
Let $x \in J(R)$; then $1-x$ is a unit of $R$, so $x=0$, i.e. $J(R)=0$. Thus $R$ is an artinian semisimple ring, so by Artin-Wedderburn, $R \cong \prod_{i=1}^m M(n_i , D_i)$, where the $D_i$'s are division rings. But $R$ is finite, hence so are the $D_i$'s, hence by Wedderburn's little theorem, the $D_i$'s are fields, so $R \cong \prod_{i=1}^m M(n_i , k_i)$, where the $k_i$'s are fields. Now since $R$ is non-commutative, at least one $n_i$ is more than $1$, say w.l.o.g. $n_1 \ge 2$, but then $M(n_1 , k_1)$ has at least $q^{n_1}-1 \ge q^2-1 >1$ many units (where $q=|k_1|$), so $R$ has more than one unit.
In fact, since $M(n_1,k_1)$ has $\prod_{j=0}^{n_1-1}(q^{n_1} - q^j)$ many units, where $q=|k_1|$, and for $n_1 \ge 2$, $\prod_{j=0}^{n_1-1}(q^{n_1} - q^j)\ge (2^2-1)(2^2-2)=6$, we get that:
Any finite non-commutative ring with unity and with zero Jacobson radical has at least $6$ units .
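(Illustrative check, not part of the original answer: brute-forcing $|GL_2(\mathbb F_2)| = (2^2-1)(2^2-2) = 6$, the minimum above.)

```python
import itertools

# Count invertible 2x2 matrices over F_2, i.e. the units of M(2, F_2)
units = sum(1 for a, b, c, d in itertools.product(range(2), repeat=4)
            if (a * d - b * c) % 2 == 1)
print(units)  # 6
```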
• We note that this answers the question "Is every ring with unity with a unique regular element commutative?" of Melvin Henriksen, as described in one of the answers here mathoverflow.net/questions/100265/… and which Henriksen comments on in the introduction of his paper math.hmc.edu/~henriksen/publications/…, in the affirmative for finite rings with unity, since for finite rings an element is regular iff it is a unit. – user111524 Oct 29 '17 at 14:48
http://cmj.math.cas.cz/online1st/CMJ.2017.0458-16.html | Czechoslovak Mathematical Journal, online first, 10 pp.
# Representations of the general linear group over symmetry classes of polynomials
## Yousef Zamani, Mahin Ranjbari
#### Received August 28, 2016. First published May 4, 2017.
Yousef Zamani (corresponding author), Mahin Ranjbari, Department of Mathematics, Faculty of Sciences, Sahand University of Technology, P.O. Box 51335/1996, Tabriz, East Azerbaijan, Iran, e-mail: zamani@sut.ac.ir, m_ranjbari@sut.ac.ir
Abstract: Let $V$ be the complex vector space of homogeneous linear polynomials in the variables $x_1, \ldots, x_m$. Suppose $G$ is a subgroup of $S_m$, and $\chi$ is an irreducible character of $G$. Let $H_d(G,\chi)$ be the symmetry class of polynomials of degree $d$ with respect to $G$ and $\chi$. For any linear operator $T$ acting on $V$, there is a (unique) induced operator $K_{\chi} (T)\in{\rm End}(H_d(G,\chi))$ acting on symmetrized decomposable polynomials by $K_{\chi}(T)(f_1\ast f_2\ast\ldots\ast f_d)=Tf_1\ast Tf_2\ast\ldots\ast Tf_d.$ In this paper, we show that the representation $T\mapsto K_{\chi} (T)$ of the general linear group $GL(V)$ is equivalent to the direct sum of $\chi(1)$ copies of a representation (not necessarily irreducible) $T\mapsto B_{\chi}^G(T)$.
Keywords: symmetry class of polynomials; general linear group; representation; irreducible character; induced operator
Classification (MSC 2010): 20C15, 15A69, 05E05
DOI: 10.21136/CMJ.2017.0458-16
Full text available as PDF.
References:
[1] E. Babaei, Y. Zamani: Symmetry classes of polynomials associated with the dihedral group. Bull. Iran. Math. Soc. 40 (2014), 863-874. MR 3255403 | Zbl 1338.05271
[2] E. Babaei, Y. Zamani: Symmetry classes of polynomials associated with the direct product of permutation groups. Int. J. Group Theory 3 (2014), 63-69. MR 3213989 | Zbl 1330.05159
[3] E. Babaei, Y. Zamani, M. Shahryari: Symmetry classes of polynomials. Commun. Algebra 44 (2016), 1514-1530. DOI 10.1080/00927872.2015.1027357 | MR 3473866 | Zbl 1338.05272
[4] I. M. Isaacs: Character Theory of Finite Groups. Pure and Applied Mathematics 69, Academic Press, New York (1976). MR 0460423 | Zbl 0337.20005
[5] R. Merris: Multilinear Algebra. Algebra, Logic and Applications 8, Gordon and Breach Science Publishers, Amsterdam (1997). MR 1475219 | Zbl 0892.15020
[6] M. Ranjbari, Y. Zamani: Induced operators on symmetry classes of polynomials. Int. J. Group Theory 6 (2017), 21-35.
[7] K. Rodtes: Symmetry classes of polynomials associated to the semidihedral group and o-bases. J. Algebra Appl. 13 (2014), Article ID 1450059, 7 pages. DOI 10.1142/S0219498814500595 | MR 3225126 | Zbl 1297.05243
[8] M. Shahryari: Relative symmetric polynomials. Linear Algebra Appl. 433 (2010), 1410-1421. DOI 10.1016/j.laa.2010.05.020 | MR 2680267 | Zbl 1194.05162
[9] Y. Zamani, E. Babaei: Symmetry classes of polynomials associated with the dicyclic group. Asian-Eur. J. Math. 6 (2013), Article ID 1350033, 10 pages. DOI 10.1142/S1793557113500332 | MR 3130082 | Zbl 1277.05168
[10] Y. Zamani, E. Babaei: The dimensions of cyclic symmetry classes of polynomials. J. Algebra Appl. 13 (2014), Article ID 1350085, 10 pages. DOI 10.1142/S0219498813500850 | MR 3119646 | Zbl 1290.05156
[11] Y. Zamani, M. Ranjbari: Induced operators on the space of homogeneous polynomials. Asian-Eur. J. Math. 9 (2016), Article ID 1650038, 15 pages. DOI 10.1142/S1793557116500388 | MR 3486726 | Zbl 06580479 | 2017-11-24 20:09:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6300200819969177, "perplexity": 705.6286157981737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808935.79/warc/CC-MAIN-20171124195442-20171124215442-00546.warc.gz"} |
https://www.zbmath.org/?q=an%3A0969.14039 | # zbMATH — the first resource for mathematics
On quantum cohomology rings of partial flag varieties. (English) Zbl 0969.14039
In this article a unified description of the structure of the small quantum cohomology rings for all projective homogeneous spaces $$SL_n(\mathbb C)/P$$ (with $$P$$ a parabolic subgroup) is given. First the results on the classical cohomology rings are recalled. Then the algebraic structure of the quantum cohomology ring is studied. Important results are the general quantum versions of the Giambelli and Pieri formulas of classical cohomology (classical Schubert calculus). They are obtained via geometric computations of certain Gromov-Witten invariants, which are realized as intersection numbers on hyperquot schemes.
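For orientation (an illustrative aside, not part of the review): the simplest instance of such a structure result is the small quantum cohomology of projective space $$\mathbb P^{n-1}=SL_n(\mathbb C)/P$$ for a maximal parabolic $$P$$, namely
$$qH^*(\mathbb P^{n-1})\cong\mathbb Z[h,q]/(h^n-q),$$
where $$h$$ is the hyperplane class and $$q$$ the quantum parameter; setting $$q=0$$ recovers the classical ring $$\mathbb Z[h]/(h^n)$$.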
##### MSC:
14N35 Gromov-Witten invariants, quantum cohomology, Gopakumar-Vafa invariants, Donaldson-Thomas invariants (algebro-geometric aspects)
14M15 Grassmannians, Schubert varieties, flag manifolds
14N15 Classical problems, Schubert calculus
14M17 Homogeneous spaces and generalizations
53D45 Gromov-Witten invariants, quantum cohomology, Frobenius manifolds
Full Text:
##### References:
[1] Alexander Astashkevich and Vladimir Sadov, Quantum cohomology of partial flag manifolds $$F_ n_ 1\cdots n_ k$$ , Comm. Math. Phys. 170 (1995), no. 3, 503-528. · Zbl 0865.14027 [2] K. Behrend, Gromov-Witten invariants in algebraic geometry , Invent. Math. 127 (1997), no. 3, 601-617. · Zbl 0909.14007 [3] K. Behrend and Yu. Manin, Stacks of stable maps and Gromov-Witten invariants , Duke Math. J. 85 (1996), no. 1, 1-60. · Zbl 0872.14019 [4] I. N. Bernstein, I. M. Gelfand, and S. I. Gelfand, Schubert cells and cohomology of the spaces $$G/P$$ , Russian Math. Surveys 28 (1973), 1-26. · Zbl 0289.57024 [5] Aaron Bertram, Quantum Schubert calculus , Adv. Math. 128 (1997), no. 2, 289-305. · Zbl 0945.14031 [6] Armand Borel, Sur la cohomologie des espaces fibrés principaux et des espaces homogènes de groupes de Lie compacts , Ann. of Math. (2) 57 (1953), 115-207. JSTOR: · Zbl 0052.40001 [7] Ionuţ Ciocan-Fontanine, Quantum cohomology of flag varieties , Internat. Math. Res. Notices (1995), no. 6, 263-277. · Zbl 0847.14011 [8] I. Ciocan-Fontanine, The quantum cohomology ring of flag varieties , to appear in Trans. Amer. Math. Soc. JSTOR: · Zbl 0920.14027 [9] I. Ciocan-Fontanine and W. Fulton, Quantum double Schubert polynomials , Schubert Varieties and Degeneracy Loci, Lecture Notes in Math., vol. 1689, Springer-Verlag, New York, 1988, pp. 134-137. [10] Michel Demazure, Désingularisation des variétés de Schubert généralisées , Ann. Sci. École Norm. Sup. (4) 7 (1974), 53-88. · Zbl 0312.14009 [11] C. Ehresmann, Sur la topologie de certains espaces homogènes , Ann. of Math. (2) 35 (1934), 396-443. JSTOR: · Zbl 0009.32903 [12] Sergey Fomin, Sergei Gelfand, and Alexander Postnikov, Quantum Schubert polynomials , J. Amer. Math. Soc. 10 (1997), no. 3, 565-596. JSTOR: · Zbl 0912.14018 [13] William Fulton, Flags, Schubert polynomials, degeneracy loci, and determinantal formulas , Duke Math. J. 65 (1992), no. 3, 381-420. · Zbl 0788.14044 [14] William Fulton, Intersection theory , Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], vol. 2, Springer-Verlag, Berlin, 1998, 2d ed. · Zbl 0885.14002 [15] William Fulton, Universal Schubert polynomials , Duke Math. J. 96 (1999), no. 3, 575-594. · Zbl 0981.14022 [16] W. Fulton and R. Pandharipande, Notes on stable maps and quantum cohomology , Algebraic geometry-Santa Cruz 1995, Proc. Sympos. Pure Math., vol. 62, Amer. Math. Soc., Providence, RI, 1997, pp. 45-96. · Zbl 0898.14018 [17] Alexander Givental and Bumsig Kim, Quantum cohomology of flag manifolds and Toda lattices , Comm. Math. Phys. 168 (1995), no. 3, 609-641. · Zbl 0828.55004 [18] Bumsig Kim, Quantum cohomology of partial flag manifolds and a residue formula for their intersection pairings , Internat. Math. Res. Notices (1995), no. 1, 1-15 (electronic). · Zbl 0849.14019 [19] Bumsig Kim, On equivariant quantum cohomology , Internat. Math. Res. Notices (1996), no. 17, 841-851. · Zbl 0881.55007 [20] B. Kim, Gromov-Witten invariants for flag manifolds , thesis, University of California, Berkeley, 1996. [21] A. N. Kirillov and T. Maeno, Quantum double Schubert polynomials, quantum Schubert polynomials and Vafa-Intriligator formula , to appear in Discrete Math. · Zbl 0958.05137 [22] Steven L. Kleiman, The transversality of a general translate , Compositio Math. 28 (1974), 287-297. 
· Zbl 0288.14014 [23] Maxim Kontsevich, Enumeration of rational curves via torus actions , The moduli space of curves (Texel Island, 1994), Progr. Math., vol. 129, Birkhäuser Boston, Boston, MA, 1995, pp. 335-368. · Zbl 0885.14028 [24] M. Kontsevich and Yu. Manin, Gromov-Witten classes, quantum cohomology, and enumerative geometry , Comm. Math. Phys. 164 (1994), no. 3, 525-562. · Zbl 0853.14020 [25] Alain Lascoux and Marcel-Paul Schützenberger, Polynômes de Schubert , C. R. Acad. Sci. Paris Sér. I Math. 294 (1982), no. 13, 447-450. · Zbl 0495.14031 [26] Gérard Laumon, Un analogue global du cône nilpotent , Duke Math. J. 57 (1988), no. 2, 647-671. · Zbl 0688.14023 [27] Jun Li and Gang Tian, The quantum cohomology of homogeneous varieties , J. Algebraic Geom. 6 (1997), no. 2, 269-305. · Zbl 0909.14012 [28] Jun Li and Gang Tian, Virtual moduli cycles and Gromov-Witten invariants of algebraic varieties , J. Amer. Math. Soc. 11 (1998), no. 1, 119-174. JSTOR: · Zbl 0912.14004 [29] I. G. Macdonald, Notes on Schubert Polynomials , LaCIM, Départment de Mathématiques et d’Informatique, Université du Québec à Montréal, 1991. [30] Dusa McDuff and Dietmar Salamon, $$J$$-holomorphic curves and quantum cohomology , University Lecture Series, vol. 6, American Mathematical Society, Providence, RI, 1994. · Zbl 0809.53002 [31] D. Petersen, lecture , 1996, University of Washington, May. [32] A. Postnikov, On a quantum version of Pieri’s formula , to appear in Progress in Geometry, Birkhäuser, Boston. · Zbl 0944.14019 [33] Yongbin Ruan and Gang Tian, A mathematical theory of quantum cohomology , J. Differential Geom. 42 (1995), no. 2, 259-367. · Zbl 0860.58005 [34] Bernd Siebert and Gang Tian, On quantum cohomology rings of Fano manifolds and a formula of Vafa and Intriligator , Asian J. Math. 1 (1997), no. 4, 679-695. · Zbl 0974.14040 [35] Frank Sottile, Pieri’s formula for flag manifolds and Schubert polynomials , Ann. Inst. Fourier (Grenoble) 46 (1996), no. 1, 89-110. · Zbl 0837.14041 [36] Gang Tian, Quantum cohomology and its associativity , Current developments in mathematics, 1995 (Cambridge, MA), Internat. Press, Cambridge, MA, 1994, pp. 361-401. · Zbl 0877.58060 [37] Cumrun Vafa, Topological mirrors and quantum rings , Essays on mirror manifolds ed. S. T. Yau, Internat. Press, Hong Kong, 1992, pp. 96-119. · Zbl 0827.58073 [38] S. Veigneau, Calcul symbolique et calcul distribué en combinatoire algébrique , thesis, Université Marne-la-Vallée, 1996. [39] Edward Witten, Two-dimensional gravity and intersection theory on moduli space , Surveys in differential geometry (Cambridge, MA, 1990), Lehigh Univ., Bethlehem, PA, 1991, pp. 243-310. · Zbl 0757.53049 [40] Edward Witten, The Verlinde algebra and the cohomology of the Grassmannian , Geometry, topology, & physics, Conf. Proc. Lecture Notes Geom. Topology, IV, Internat. Press, Cambridge, MA, 1995, pp. 357-422. · Zbl 0863.53054
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-08-01 17:05:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7639258503913879, "perplexity": 1018.1734387614766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00606.warc.gz"} |
https://chemistry.stackexchange.com/questions/19069/which-ligands-would-be-exchanged-in-this-reaction | # Which ligands would be exchanged in this reaction?
I have this reaction and need to state what the products would be:
$\ce{(CF3)2PH + [W(CO)5(THF)] -> }$
THF stands for Tetrahydrofuran.
Would this just be a simple ligand exchange, and if so which ones would be exchanged? How would you know?
• Have a look at this tutorial which shows you how math and chemical formulae can be nicely formatted on this site. Furthermore, what are your own thoughts on the matter? What do you think will happen and why? – Philipp Nov 2 '14 at 16:46
• I think that the two CF3 groups will replace either two of the CO or one CO and the THF. But I don't know which one or how to work it out. – user5181 Nov 2 '14 at 17:00
• Why do you think the CF3 groups will replace a ligand? – Philipp Nov 2 '14 at 17:53
Complexes like $\ce{M(CO)5(solv)}$ are typically generated by photolysis ($\lambda$ = 350 nm) of $\ce{M(CO)6}$ in the presence of a non-inert solvent. This proceeds through a highly reactive "naked" $\ce{M(CO)5}$ intermediate.
$$\ce{W(CO)6 + solv ->[h\nu] W(CO)5(solv) + CO}$$
1. When offered another ligand $\mathrm{L}$, the most weakly bound ligand in the $\ce{W(CO)5(solv)}$ complex is replaced. This is THF.
2. Taking the lone pair on the phosphorus atom into account, it is conceivable that $\ce{(CF3)2PH}$ will do the job:
$$\ce{W(CO)5(thf) + (CF3)2PH -> W(CO)5((CF3)2PH) + THF}$$ | 2021-05-09 19:05:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5340583324432373, "perplexity": 1246.8258979692155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00550.warc.gz"} |
https://zbmath.org/?q=an:0854.35015 | Analysis on local Dirichlet spaces. II: Upper Gaussian estimates for the fundamental solutions of parabolic equations. (English) Zbl 0854.35015
[Part I, cf. J. Reine Angew. Math. 456, 173-196 (1994; Zbl 0806.53041).]
The author studies the behaviour of solutions of the parabolic equation $$L_t u= \partial_t u$$ on $$\mathbb{R}\times X$$, where $$X$$ is a locally compact Hausdorff space and for each $$t\in \mathbb{R}$$, $$L_t$$ is the operator associated to a regular Dirichlet form on $$L^2(X, m)$$. The main results are an integral upper Gaussian estimate for the fundamental solution and a pointwise estimate of similar type. The conditions under which these results are proved are very general: among the operators $$L_t$$ included are the Laplace-Beltrami operator on a Riemannian manifold, weighted uniformly elliptic operators, and subelliptic operators.
[For part III, see the review below].
Reviewer: J.Urbas (Bonn)
MSC:
35B45 A priori estimates in context of PDEs
35A08 Fundamental solutions to PDEs
58J35 Heat and other parabolic equation methods for PDEs on manifolds
Keywords:
Dirichlet form; Gaussian estimate
Citations:
Zbl 0854.35016; Zbl 0806.53041 | 2022-05-20 10:36:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8574978709220886, "perplexity": 476.9984116549377}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00518.warc.gz"} |
http://quant.stackexchange.com/tags/binomial-tree/new | # Tag Info
I think the point of this approach is to model the firm value $V(t)$ using some appropriate probability distribution, then deduce the distribution of the CB price. Thus the CB price depends on the firm value, but not vice versa. | 2016-07-25 22:05:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543448448181152, "perplexity": 769.8793249333789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824395.52/warc/CC-MAIN-20160723071024-00014-ip-10-185-27-174.ec2.internal.warc.gz"}
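The answer above describes the firm-value (structural) approach in words; below is a minimal, generic sketch of the binomial-tree machinery it relies on — a plain Cox–Ross–Rubinstein tree with a stylised convertible-bond-like payoff. All numbers and the payoff function are illustrative assumptions, not a full CB model (no credit risk, call features or coupons).

```python
import math

# Minimal CRR binomial tree: model V(t) on a recombining tree and price a
# terminal claim on it by risk-neutral backward induction.
def crr_price(V0, r, sigma, T, n, payoff):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-move probability
    # terminal firm values: node j has j up-moves out of n steps
    vals = [payoff(V0 * u**j * d**(n - j)) for j in range(n + 1)]
    for step in range(n, 0, -1):           # roll back through the tree
        vals = [math.exp(-r * dt) * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(step)]
    return vals[0]

# stylised payoff: holder receives max(conversion value, face value)
print(crr_price(V0=100, r=0.03, sigma=0.2, T=1.0, n=200,
                payoff=lambda V: max(0.8 * V, 90)))
```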
http://cran.wustl.edu/web/packages/flowr/vignettes/flowr_install.html | # Installation
Requirements:
• R version > 3.1, preferably 3.2
## for the latest released version (from CRAN)
install.packages("flowr", repos = "http://cran.rstudio.com")
devtools::install_github("sahilseth/flowr", ref = "master")
After installation run setup(); this will copy flowr's helper script to ~/bin. Please make sure that this folder is in your $PATH variable.

library(flowr)
setup()

Running flowr from the terminal should now show the following:

Usage: flowr function [arguments]

status Detailed status of a flow(s).
rerun Rerun a previously failed flow
kill Kill the flow, upon providing working directory
fetch_pipes Checking what modules and pipelines are available; flowr fetch_pipes

Please use 'flowr -h function' to obtain further information about the usage of a specific function. If you are interested, visit funr's github page for more details.

From this step on, one has the option of typing commands in an R console OR a bash shell (command line). For brevity we will show examples using the shell.

# Test

Test a small pipeline on the cluster. This will run a three-step pipeline, testing several different relationships between jobs. Initially, we can test this locally, and later on a specific HPCC platform.

## This may take about a minute or so.
flowr run x=sleep_pipe platform=local execute=TRUE
## corresponding R command:
run(x='sleep_pipe', platform='local', execute=TRUE)

If this completes successfully, we can try this on a computing cluster, where this would submit a few interconnected jobs. Several platforms are supported out of the box (torque, moab, sge, slurm and lsf); you may use the platform variable to switch between platforms.

flowr run pipe=sleep_pipe platform=lsf execute=TRUE
## other options for platform: torque, moab, sge, slurm, lsf
## this shows the folder being used as a working directory for this flow.

Once the submission is complete, we can check the status using status(), supplying it the full path as recovered from the previous step.

flowr status x=~/flowr/runs/sleep_pipe-samp1-20150923-10-37-17-4WBiLgCm

## we expect to see a table like this when it completes successfully:
| | total| started| completed| exit_status|status |
|:--------------|-----:|-------:|---------:|-----------:|:---------|
|001.sleep | 3| 3| 3| 0|completed |
|002.create_tmp | 3| 3| 3| 0|completed |
|003.merge | 1| 1| 1| 0|completed |
|004.size | 1| 1| 1| 0|completed |

## Also we expect a few files to be created:
ls ~/flowr/runs/sleep_pipe-samp1-20150923-10-37-17-4WBiLgCm/tmp
samp1_merged samp1_tmp_1 samp1_tmp_2 samp1_tmp_3

## If both these checks are fine, we are all set!

There are a few places where things may go wrong; you may follow the advanced configuration guide for more details. Feel free to post questions on the github issues page.

# Advanced Configuration

## HPCC Support Overview

Support for several popular cluster platforms is built-in. There is a template for each platform, which should work out of the box. Further, one may copy and edit them (and save to ~/flowr/conf) in case some changes are required. Templates from this folder (~/flowr/conf) would override defaults. Here are links to the latest templates on github:

Not sure what platform you have? You may check the version by running ONE of the following commands:

msub --version
## Version: **moab** client 8.1.1
man bsub
## Submits a job to **LSF** by running the specified
qsub --help

Here are some helpful guides and details on the platforms: Comparison_of_cluster_software

## flowr configuration file

This needs expansion. flowr has a configuration file with parameters regarding default paths, verboseness etc. flowr loads this default configuration from the package installation.
In addition, to customize the parameters, simply create a tab-delimited file called ~/.flowr. An example of this file is available here. Additional files are loaded if available:

• (flowr installation)/flowr.conf
• (ngsflows installation)/ngsflows.conf
• ~/flowr/conf/flowr.conf
• ~/.flowr

# Troubleshooting & FAQs

## Errors in job submission

3. Use a custom flowdef

We can copy an example flow definition and customize it to suit our needs. This is a tab-delimited text file, so make sure that the format is correct after you make any changes.

cd ~/flowr/pipelines
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/pipelines/sleep_pipe.def
## check the format
flowr as.flowdef x=~/flowr/pipelines/sleep_pipe.def

Run the test with a custom flowdef:

flowr run x=sleep_pipe execute=TRUE def=~/flowr/pipelines/sleep_pipe.def
## platform=lsf [optional, picked up from flowdef]

4. Use a custom submission template

If you need to customize the HPCC submission template, copy the file for your platform and make your desired changes. For example, the MOAB-based cluster in our institution does not accept the queue argument, so we need to comment it out. Download the template for a specific HPCC platform into ~/flowr/conf:

cd ~/flowr/conf
## flowr automatically picks up a template from this folder.
## for MOAB (msub)
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/moab.sh
## for Torque (qsub)
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/torque.sh
## for IBM LSF (bsub)
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/lsf.sh
## for SGE (qsub)
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/sge.sh
## for SLURM (sbatch) [untested]
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/slurm.sh

Make the desired changes using your favourite editor and submit again.

1. Parsing job ids

Flowr parses job IDs to keep a log of all submitted jobs, and also to pass them along as a dependency to subsequent jobs. This is taken care of by the parse_jobids() function. Each job scheduler shows the job id when you submit a job, but it may show it in a slightly different fashion. To accommodate this, one can use regular expressions as described in the relevant section of the flowr config. For example, LSF may show a string such as:

Job <335508> is submitted to queue <transfer>.

## test if it parses correctly
jobid="Job <335508> is submitted to queue <transfer>."
set_opts(flow_parse_lsf = ".*(\<[0-9]*\>).*")
parse_jobids(jobid, platform="lsf")
[1] "335508"

In this case 335508 was the job id and the regex worked well! Once we identify the correct regex for the platform you may update the configuration file with it.

cd ~/flowr/conf
wget https://raw.githubusercontent.com/sahilseth/flowr/master/inst/conf/flowr.conf
## flowr automatically reads from this location; if you prefer to put it elsewhere, use load_opts("flowr.conf")
## visit sahilseth.github.io/params for more details.

Update the regex pattern and submit again.

2. Check dependency string

After collecting job ids from previous jobs, flowr renders them as a dependency for subsequent jobs. This is handled by the render_dependency.PLATFORM functions. Confirm that the dependency parameter is specified correctly in the submission scripts:

wd=~/flowr/runs/sleep_pipe-samp1-20150923-11-20-39-dfvhp5CK
## path to the most recent submission
cat $wd/002.create_tmp/create_tmp_cmd_1.sh
#### Flowr Configuration file
There are several verbose levels available (0, 1, 2, 3, …)
One can change the verbose levels in this file (~/flowr/conf/flowr.conf) and check verbosity section in the help pages for more details.
## Flowdef resource columns
The resource requirement columns of the flow definition are passed along to the final (cluster) submission script. For example, values in the cpu_reserved column would be populated as {{{CPU}}} in the submission template.
The following table provides a mapping between the flow definition columns and variables in the submission templates:
| flowdef variable | submission template variable |
|---|---|
| nodes | NODES |
| cpu_reserved | CPU |
| memory_reserved | MEMORY |
| email | EMAIL |
| walltime | WALLTIME |
| extra_opts | EXTRA_OPTS |
| * | JOBNAME |
| * | STDOUT |
| * | CWD |
| * | DEPENDENCY |
| * | TRIGGER |
| ** | CMD |

\* These are generated on the fly. ** This is gathered from the flow mat.
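As a toy illustration of this mapping (an assumption-laden sketch, not flowr's implementation; the LSF-style template lines are made up), placeholders of the form {{{VAR}}} can be filled from a flowdef row like so:

```python
import re

# Fill {{{VAR}}} placeholders in a submission template from a flowdef row.
template = "#BSUB -n {{{CPU}}}\n#BSUB -M {{{MEMORY}}}\n#BSUB -W {{{WALLTIME}}}"
row = {"CPU": "4", "MEMORY": "8192", "WALLTIME": "12:00"}
print(re.sub(r"\{\{\{(\w+)\}\}\}", lambda m: row[m.group(1)], template))
```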
## Adding a new platform
Adding a new platform involves a few steps, briefly we need to consider the following steps where changes would be necessary.
1. job submission: One needs to add a new template for the new platform. Several examples are available as described in the previous section.
2. parsing job ids: flowr keeps a log of all submitted jobs, and also passes them along as a dependency to subsequent jobs. This is taken care of by the parse_jobids() function. Each job scheduler shows the job id when you submit a job, but each shows it in a slightly different pattern. To accommodate this one can use regular expressions as described in the relevant section of the flowr config.
3. render dependency: After collecting job ids from previous jobs, flowr renders them as a dependency for subsequent jobs. This is handled by render_dependency.PLATFORM functions.
4. recognize new platform: Flowr needs to be made aware of the new platform, for this we need to add a new class using the platform name. This is essentially a wrapper around the job class
Essentially this requires us to add a new line like: setClass("torque", contains = "job").
5. killing jobs: Just like submission, flowr needs to know what command to use to kill jobs. This is defined in the detect_kill_cmd function.
There are several job scheduling systems available and we try to support the major players. Adding support is quite easy if we have access to them. Your favourite not in the list? Re-open this issue, with details on the platform: adding platforms
## outfiles end with .out, and are placed in a folder like 00X.<jobname>/
## here is one example:
cat $wd/002.create_tmp/create_tmp_cmd_1.out
## final script:
cat $wd/002.create_tmp/create_tmp_cmd_1.sh
## Installation Error (Github)
devtools:::install_github("sahilseth/flowr")
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Solution:
This is basically an issue with httr (link). Try this:
install.packages("RCurl")
devtools:::install_github("sahilseth/flowr")
If not, then try this: install.packages("httr");
library(httr);
set_config( config( ssl.verifypeer = 0L ) )
devtools:::install_github("sahilseth/flowr") | 2022-01-26 22:34:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1830165982246399, "perplexity": 4670.978016741931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00262.warc.gz"} |
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.tsa.multi_inputmod_update.html | # naginterfaces.library.tsa.multi_inputmod_update¶
naginterfaces.library.tsa.multi_inputmod_update(sttf, mr, mt, para, xxyn, kzef)[source]
multi_inputmod_update accepts a series of new observations of an output time series and any associated input time series, for which a multi-input model is already fully specified, and updates the ‘state set’ information for use in constructing further forecasts.
The previous specification of the multi-input model will normally have been obtained by using multi_inputmod_estim() to estimate the relevant transfer function and ARIMA parameters. The supplied state set will originally have been produced by multi_inputmod_estim() (or possibly multi_inputmod_forecast()), but may since have been updated by multi_inputmod_update.
For full information please refer to the NAG Library document for g13bg
https://www.nag.com/numeric/nl/nagdoc_27.1/flhtml/g13/g13bgf.html
Parameters
sttf : float, array-like, shape
The values in the state set before updating as returned by multi_inputmod_estim() or multi_inputmod_forecast(), or a previous call to multi_inputmod_update.
mr : int, array-like, shape
The orders vector of the ARIMA model for the output noise component.
$p$, $q$, $P$ and $Q$ refer respectively to the number of autoregressive ($\phi$), moving average ($\theta$), seasonal autoregressive ($\Phi$) and seasonal moving average ($\Theta$) parameters.
$d$, $D$ and $s$ refer respectively to the order of non-seasonal differencing, the order of seasonal differencing, and the seasonal period.
mt : int, array-like, shape
The transfer function model orders $b$, $q$ and $p$ of each of the input series. The data for input series $i$ are held in column $i$. Row 1 holds the value $b$, row 2 holds the value $q$ and row 3 holds the value $p$. For a simple input, $b=q=p=0$.
Row 4 holds the value $r$, where $r=1$ for a simple input and $r=2$ or $r=3$ for a transfer function input.
When $r=1$, any nonzero contents of rows 1, 2 and 3 of column $i$ are ignored.
The choice of $r=2$ or $r=3$ is an option for use in model estimation and does not affect the operation of multi_inputmod_update.
para : float, array-like, shape
Estimates of the multi-input model parameters as returned by multi_inputmod_estim(). These are in order: firstly the ARIMA model parameters, namely $p$ values of $\phi$ parameters, $q$ values of $\theta$ parameters, $P$ values of $\Phi$ parameters and $Q$ values of $\Theta$ parameters. These are followed by the transfer function model parameter values $\omega_0,\omega_1,\ldots,\omega_q$, $\delta_1,\ldots,\delta_p$ for the first of any input series, and similarly for each subsequent input series. The final component of para is the value of the constant $c$.
xxyn : float, array-like, shape
The new observation sets being used to update the state set. Column $i$ contains the values of input series $i$; the final column contains the values of the output series. Consecutive rows correspond to increasing time sequence.
kzef : int
Must not be set to $0$ if the values of the input component series and the values of the output noise component are to overwrite the contents of xxyn on exit, and must be set to $0$ if xxyn is to remain unchanged on exit.
Returns
sttf : float, ndarray, shape
The state set values after updating.
xxyn : float, ndarray, shape
If $\mathrm{kzef}=0$, xxyn remains unchanged.
If $\mathrm{kzef}\ne 0$, the columns of xxyn hold the corresponding values of the input component series and the output noise component, in that order.
res : float, ndarray, shape
The values of the residual series corresponding to the new observations of the output series.
Raises
NagValueError
(errno 1)
On entry, the value of nsttf is invalid.
Constraint: nsttf, mr and mt must be consistent.
(errno 2)
On entry, the value of npara is invalid.
Constraint: npara, mr and mt must be consistent.
(errno 3)
On entry, the orders vector mr is invalid.
(errno )
On entry, and .
Constraint: , or .
Notes
The multi-input model is specified in Notes for multi_inputmod_estim. The form of these equations required to update the state set is as follows:
the transfer models, which generate input component values $z_t$ from one or more inputs $x_t$;
the relation $n_t = y_t - \sum_i z_{i,t}$, which generates the output noise component $n_t$ from the output $y_t$ and the input components; and
the ARIMA model for the output noise, which generates the residuals $a_t$.
The state set (as also given in Notes for multi_inputmod_estim) is the collection of terms of each of these series,
for time points up to the maximum lag associated with each of them in the above model equations. Here $t=n$ is the latest time point of the series from which the state set has been generated.
The function accepts further values of the output series $y_t$ and the input series $x_t$ for time points after $n$, and applies the above model equations over this time range to generate new values of the various model components, noise series and residuals. The state set is reconstructed, corresponding to the latest time point supplied, the earlier values being discarded.
The set of residuals corresponding to the new observations may be of use in checking that the new observations conform to the previously fitted model. The components of the new observations of the output series which are due to the various inputs, and the noise component, are also optionally returned.
The parameters of the model are not changed in this function.
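To make the bookkeeping concrete, here is a toy illustration of the updating step (assumptions: this is not the NAG implementation; it uses one simple input with $z_t=\omega_0 x_t$, an AR(1) noise model, and keeps only a one-term state):

```python
import numpy as np

omega0, c, phi = 1.2, 0.0, 0.5  # illustrative parameter values

def update_state(n_prev, x_new, y_new):
    """Apply the model equations over the new observations; return the
    residuals and the reconstructed 'state' (the last noise value),
    discarding earlier values as described above."""
    res = []
    for x, y in zip(x_new, y_new):
        z = omega0 * x                # input component from the transfer model
        n = y - z - c                 # output noise component
        res.append(n - phi * n_prev)  # AR(1) residual a_t
        n_prev = n
    return np.array(res), n_prev

res, state = update_state(0.0, [0.1, 0.2], [1.0, 1.1])
print(res, state)
```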
References
Box, G E P and Jenkins, G M, 1976, Time Series Analysis: Forecasting and Control, (Revised Edition), Holden–Day | 2021-09-18 04:20:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6890717148780823, "perplexity": 1506.4941473825018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056297.61/warc/CC-MAIN-20210918032926-20210918062926-00128.warc.gz"} |
https://www.coursehero.com/file/26306273/CHEM111F16-DQ-week7entropykeydocx/ | CHEM111F16_DQ_week7entropykey.docx
# CHEM111F16_DQ_week7entropykey.docx - CHEM111 W7 Entropy...
CHEM111 W7 Entropy, Fall 2015

1) Coal is converted into cleaner, more transportable fuels by burning it with oxygen to produce carbon monoxide. The carbon monoxide is then reacted with hydrogen using a catalyst to produce methane and water. Is the reaction between CO and H2 exothermic or endothermic, and what is the change in enthalpy for it? The enthalpies of formation of the reactants and products are given below.

| Molecule | ΔH°f (kJ/mol) |
|---|---|
| CO | -110 |
| H2 | 0 |
| CH4 | -75 |
| H2O | -242 |

CO + 3 H2 → CH4 + H2O

ΔH°rxn = (-75 + (-242)) - (-110 + 0) = -207 kJ/mol; exothermic reaction.

2) Methane gas is used as a fuel source for mechanical devices. Hydrogen gas reacts with graphite to produce this fuel source. Use the following thermochemical equations to determine the ΔH°rxn of the reaction that produces methane gas.

C(s, graphite) + O2(g) → CO2(g), ΔH° = -393.5 kJ/mol
CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(l), ΔH° = -890.3 kJ/mol
H2(g) + 1/2 O2(g) → H2O(l), ΔH° = -285.8 kJ/mol

Combining the first equation, the reverse of the second, and twice the third:

C(s, graphite) + O2(g) → CO2(g), ΔH° = -393.5 kJ/mol
CO2(g) + 2 H2O(l) → CH4(g) + 2 O2(g), -1 × ΔH° = +890.3 kJ/mol
2 H2(g) + O2(g) → 2 H2O(l), 2 × ΔH° = -571.6 kJ/mol
_________________________________________________
C(s, graphite) + 2 H2(g) → CH4(g), ΔH°rxn = -74.8 kJ/mol of reaction

3) Determine ΔH°rxn for C(diamond) → C(graphite) based on the enthalpy changes of the following reactions:

(1) C(diamond) + O2(g) → CO2(g), ΔH = -395.4 kJ/mol
(2) 2 CO2(g) → 2 CO(g) + O2(g), ΔH = +566.0 kJ/mol
(3) C(graphite) + O2(g) → CO2(g), ΔH = -393.5 kJ/mol
(4) 2 CO(g) → C(graphite) + CO2(g), ΔH = -172.5 kJ/mol
4) Use bond enthalpy values to calculate the enthalpy of reaction.
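A quick arithmetic check of the Hess's-law computations in problems 1 and 2 (illustrative only, not part of the answer key):

```python
# Enthalpies of formation from the table in problem 1, in kJ/mol.
dHf = {"CO": -110, "H2": 0, "CH4": -75, "H2O": -242}
q1 = (dHf["CH4"] + dHf["H2O"]) - (dHf["CO"] + 3 * dHf["H2"])
print(q1)                          # -207 -> exothermic

# Problem 2: eq.(1) + reversed eq.(2) + 2 x eq.(3).
q2 = -393.5 + 890.3 + 2 * (-285.8)
print(round(q2, 1))                # -74.8
```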
• Kenney | 2020-10-27 09:27:07 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019633293151855, "perplexity": 14469.925139802403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893845.76/warc/CC-MAIN-20201027082056-20201027112056-00043.warc.gz"} |
https://www.fixedpoint.nl/ | # Welcome to FixedPoint
A fixed point is a point that does not change upon application of a map, system of differential equations, etc. In particular, a fixed point of a function $f(x)$ is a point $x_0$ such that
$$f(x_0) = x_0$$ | 2020-04-05 10:00:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843235194683075, "perplexity": 102.76994350321249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00470.warc.gz"} |
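To make the definition concrete, here is a small illustrative sketch (not from the site itself) of fixed-point iteration, which finds such an $x_0$ for a suitable map by repeatedly applying it:

```python
import math

# Fixed-point iteration: for a suitable (contractive) f, the sequence
# x, f(x), f(f(x)), ... converges to the fixed point x0 with f(x0) = x0.
# Here f = cos, whose unique fixed point is approximately 0.739085.
x = 1.0
for _ in range(50):
    x = math.cos(x)
print(x)
```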
https://boombeach.fandom.com/wiki/Imitation_Game | # Imitation Game
"I've copied the most formidable mercenary bases of the Archipelago! You'll never defeat them!"
## Summary
• The Imitation Game is an event that occurs on Fridays in the event cycle. Like the other events, it lasts for 21 hours, appearing at 6 a.m. in your time zone. Note that Daylight Saving Time (DST) will also affect the Imitation Game event times.
• It is unlocked by defeating Lt. Hammerman's level 30 base, which in turn requires a level 9 Radar.
• In this event, Lt. Hammerman has copied some of the best bases in the Archipelago, and it's up to you to destroy these bases.
## Event
• The event consists of 7 stages of increasing difficulty, similar to Dr. T's bases. Each stage is a copy of an existing player base in the Archipelago.
• The bases that are copied have HQ levels that generally increase as the stages get harder. At stage 1, Hammerman usually copies a base of HQ level 10 or 11. At stage 2, Hammerman usually copies a base of HQ level 12 or 13, and the pattern continues.
• You are given 8 attacks in all to defeat these 7 stages.
• The stages consist of buildings that a Mercenary Base would have, including Prototype Defenses. It is possible that one of the buildings on the base, including defenses, is upgrading at the time you attack it.
• The username of the original base that was copied will be displayed when attacking.
• Statues of all types and sizes can appear on these stages, as they are copies of other players' bases.
• Unlike Dr. T, Ice Statues do not necessarily get stronger. They may not even appear at all on a certain stage. For instance, it is possible that stage 2 has 2 Ice Statues but stage 3 has none.
• The XP levels of each base are constant, regardless of the actual XP level of the base that was copied. For instance, the XP level of stage 1 is 8, even though bases at HQ levels 10 and 11 would have a minimum XP level of 21.
• Defeating a stage in the Imitation Game will reward you with Gold, Wood, Stone and Iron. Magma Power Stones, Intel and Prototype Modules can also be won from each stage.
• Similar to Dr. T's events, the rewards are based on the amount of Wood: the Gold reward in this event is 1.5 times the Wood reward plus your attack cost, the Stone reward is 5/6 of the Wood reward, and the Iron reward is 2/3 of the Wood reward (see the sketch after this list).
• The amount of Wood for the n-th stage is $2400n^2+12000n+70000$.
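As a quick sanity check of these formulas (an illustrative Python sketch, not game code), the loop below reproduces the reward table in the next section:

```python
# Rewards per stage n, from the formulas above (Gold excludes the attack cost).
for n in range(1, 8):
    wood = 2400 * n**2 + 12000 * n + 70000
    gold = 3 * wood // 2      # 1.5x Wood, plus your attack cost in-game
    stone = 5 * wood // 6     # 5/6 of Wood
    iron = 2 * wood // 3      # 2/3 of Wood
    print(n, gold, wood, stone, iron)
```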
## Statistics
Imitation Game Rewards

| Stage | XP Level | HQ Level* | Gold Reward^ | Wood Reward | Stone Reward | Iron Reward |
|---|---|---|---|---|---|---|
| 1 | 8 | 10-11 | 126,600 | 84,400 | 70,333 | 56,266 |
| 2 | 15 | 12-13 | 155,400 | 103,600 | 86,333 | 69,066 |
| 3 | 30 | 14-15 | 191,400 | 127,600 | 106,333 | 85,066 |
| 4 | 42 | 16-17 | 234,600 | 156,400 | 130,333 | 104,266 |
| 5 | 53 | 18-19 | 285,000 | 190,000 | 158,333 | 126,666 |
| 6 | 60 | 20-21 | 342,600 | 228,400 | 190,333 | 152,266 |
| 7 | 60 | 22 | 407,400 | 271,600 | 226,333 | 181,066 |
| Total loot | | | 1,743,000 | 1,162,000 | 968,331 | 774,662 |
^ Your Attack Cost is added to your Gold reward, so the Gold reward is also increased by Resource Reward Statues.
*There may be certain exceptions to this rule, such as Hammerman copying a HQ20 base on stage 5, but on most events the HQ level will be in this range. | 2018-11-21 13:07:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42254242300987244, "perplexity": 3476.548406382559}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748315.98/warc/CC-MAIN-20181121112832-20181121134832-00339.warc.gz"} |
https://eanswers.in/math/question5518255 | , 17.10.2019 23:00, sahini99
# Two numbers are in the ratio 5 : 8. If 12 is added to each, they are in the ratio 3 : 4. Find the sum of the two numbers.
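A standard working (not shown on the page itself): let the numbers be $5x$ and $8x$. Then
$$\frac{5x+12}{8x+12}=\frac{3}{4} \;\Rightarrow\; 20x+48=24x+36 \;\Rightarrow\; x=3,$$
so the numbers are $15$ and $24$, and their sum is $15+24=39$.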
### Other questions on the subject: Math
Math, 19.08.2019 03:00, nikhil3810rhmschool
After 12 years, Suman will be 3 times as old as she was 4 years ago. Find her present age.
Math, 19.08.2019 05:00, yachna18
If 3 to the power of x-1=9 and 4 to the power of y+2= 64, find the value of x/y | 2020-11-28 22:26:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9118759036064148, "perplexity": 1509.6708769413087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00650.warc.gz"} |
https://socratic.org/questions/is-it-possible-to-factor-y-3x-2-11x-6-if-so-what-are-the-factors | # Is it possible to factor y=3x^2-11x+6? If so, what are the factors?
Jun 12, 2018
$\left(x - 3\right) \left(3 x - 2\right)$
#### Explanation:
Using the a-c method to factor the quadratic:
The factors of the product $3 \times 6 = 18$ which sum to $-11$ are $-9$ and $-2$.
Split the middle term using these factors:
$3x^2 - 9x - 2x + 6 \leftarrow$ factor by grouping
$= 3x(x - 3) - 2(x - 3)$
Take out the common factor $(x - 3)$:
$= (x - 3)(3x - 2)$
$3 {x}^{2} - 11 x + 6 = \left(x - 3\right) \left(3 x - 2\right)$ | 2021-01-17 13:21:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870519638061523, "perplexity": 4352.412502230257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703512342.19/warc/CC-MAIN-20210117112618-20210117142618-00396.warc.gz"} |
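A quick independent check of the factorisation (illustrative, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(3*x**2 - 11*x + 6))  # (x - 3)*(3*x - 2)
```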
https://math.meta.stackexchange.com/tags/new-users/hot | # Tag Info
45
This is THE place. I call this place a "super computer". And it is. It is true that maybe it is a harsh place for newbies, but that is not 100% true. It is harsh only if you ask something that clearly is intended to solve your homework, and your homework is "easy". I think it helps keeping this place orbiting around the genuine, difficult ...
44
Here are some of my experiences talking about math.se in the real world. I knew a well-meaning but struggling sophomore math major who felt too afraid to post a question here. She used the site frequently as a resource by finding questions through Google, but viewed the community as standoffish, with opaque expectations that she was bound to mess up in some ...
37
Originally intended to be a comment, but I find that it is more of an answer than a comment. I would suggest modifying the website to force newcomers to read the "Asking a good question" (and possibly making it a bit more detailed than it is now) and a small quiz to verify that they have read the rules. This method will reduce the number of new ...
32
Stackexchange sites are all different. For me, math.SE is the one SE site that's so frequently hostile that I've essentially stopped asking questions or giving answers. I don't know if I would fall in the category of "more casual users" that you have in mind. I double-majored in math and physics as an undergrad, but my graduate degree is in physics,...
31
This is supposed to be an all-purpose math site, and obviously not everyone in the world who has a math question knows how to use latex. Although learning latex is not so very hard, it is not trivial either, and assuming that people must have this skill in order to get continued service seems like a clear violation of the intended scope of the site. Also,...
27
It is impossible to notify downvoters; the possibility to do so comes up from time to time as a feature request (that might have some merits but is also open to abuse). The situation was handled quite well. You took the time to help the users that asked a question in quite a bad way (I mean especially the first case) and were told so be others via the ...
22
I want to draw your attention to these two questions: https://math.stackexchange.com/questions/3000195/first-countable-space-is-sequentially-compact-iff-countably-compact https://math.stackexchange.com/questions/3000227/definite-integral-of-function-given-constraint It seems that 4 users with the names MSc Roberto, Robert PhD, Monsieur_Bobert and Bachelor ...
22
I am a person who was "good" at math in high school, but I would definitely consider myself an "enthusiast" rather than a "mathematician". (Somehow I feel that "mathematician" is a serious title reserved for those who do research, while I am merely a student.) For me, Math.SE is the only place where I can ask and ...
20
The motivation behind your idea is good; after all, the lack of MathJax editing can stir up confusion for both the asker and the helper. We all know MathJax's benefits. However, I'm not quite sure that adding a "tutorial" of some sort would really help out- it's easy to skip through the tutorial steps, especially when someone posts their homework question ...
19
Begin by reading the tour and help pages. Next, on this page (we being in meta), in the upper right hand corner search box, type "how to ask" and "how to answer", then read through those results as it benefits your mind. You will have much greater understanding after that. We use MathJax to typeset our math here; it is just $\LaTeX$ in the browser. We are, as an ...
16
A couple of things Votes on meta don't mean the same as on the main site. A downvote can mean that people disagree with your position, it doesn't necessarily mean that people don't value your post. IMO a downvote should always be accompanied by a comment. The comment should explain what is wrong with the post. If a comment already exists, I think it is ...
15
A custom flag explaining the situation seems like a good idea when other initiatives are continually not taken up by the poster. Perhaps a note from a moderator is more effective than comments by regular users. Commenting on the answers is important as well; highly upvoted comments can indicate community opinion and may provide some "peer pressure" to ...
14
Yes there is! Just right click on the equation you want to view, click "Show Math As" and then click "Tex Commands"
14
The comment history on the answer doesn't exactly look good for anyone involved in the discussion. I might suggest, when you feel a little calmer, that you flag it for moderator attention and ask them to delete all the comments -- they certainly add nothing to the discussion. I can understand your frustration that the "answer" written was ...
13
This may be a controversial answer, but I think the low investment of users in non homework cases maybe due to the following: The mathematical maturity of the user is not enough to understand the content of the comment and they are shy to ask clarification. The user is either preparing for an exam/ a course, so they can't spend more than some reasonable ...
12
They tend to just up vote the answer. Users cannot vote until they have 15 points (and are registered). There is a good chance that the vote you see came from someone other than the new user. But every user is able to accept answers to their question (unless they lost access to their account). If you think they are unaware of the feature, you can ...
12
It depends on the context. If the answer ends with "Cheapp j0rdan shoes! yg59drhey@yahoo.com" then you should certainly flag. But in this case, the reason appears to be benign: the answerer welcomes email in case of follow-up questions. Maybe s/he does not plan on visiting SE again (which would be sad, but it's their choice). I would let it be.
12
It appears to me that this is definitely behaviour in the spirit of the review system. Therefore you are definitely not doing something wrong. If this were the Low Quality queue or the Close Votes queue, I would have nothing to add. However, I would like to add that the First Posts review queue is a bit special, in that it provides us with an opportunity ...
12
In the linked posts from the main meta site, it says: The new indicator works by the age of a user's first visible post. This could be a question or answer, and the association bonus won't influence the behavior. While you might not be new to our engine, everyone is new when they first join a new community, so the indicator is shown. A literal reading of ...
12
I agree that the First Posts review queue calls upon Reviewers to do more than simply downvote and close. The instructions at the top of that page say: This is the first question asked by a new user. Help them learn to use the site by reviewing their post. An anonymous downvote or vote-to-close without feedback is particularly unlikely to "help them ...
11
This is live; I went with a variation on Normal Human's proposed guidance: To improve the chances of your question getting an answer, make sure that it: Uses MathJax formatting for math formulas Has an interesting, specific title that summarizes the question Describes what you know and what you don't understand (don't just copy a textbook problem!)Here are ...
11
I think amWhy's and Isa's comments above point out THE major problem with this for new users: a user needs 20 reputation to participate in chat, and most users who would benefit from talking to someone in a Welcome to MathSE chatroom don't have that much and are usually not asking the sorts of questions that will get them that crumb of reputation. Furthermore, ...
10
Ananth, I do wonder what you mean by "reputed" users. If you mean the experienced ones on Math.SE, of course they should be role models for new users. Yes, if no one answers a question asked by a new user, one should step in and seek clarification. It was really nice for you to be so welcoming to the new user when she posted that other question and told her ...
10
We already had users posting pornographic pictures, or stuff like that. I think that's a good enough reason. At the very least require someone to add something of possible value to the site before adding something very terrible.
10
Of the people I know—and some of them are mathematically inclined—Math SE is not super well known. Most of the people are not surprised it exists, but have not specifically used it. I don't know that I've ever heard specifically about it being unwelcoming. Of the people who have heard of it, some pages just come up in a search and they get what they want, ...
10
I would like to share my view on the two Stack Exchange (SE) sites that I — a high school student — use the most: Mathematics SE and Physics SE. I agree that both sites are vast repositories of quality scientific and mathematical knowledge. However, this view is based on how new users are treated. My personal experience with Mathematics SE was so good ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2021-05-15 06:02:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40164703130722046, "perplexity": 821.6267510601372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00007.warc.gz"} |
https://plainmath.net/precalculus/102335-how-to-find-a-unit-vector-norm | Jaelyn Mueller
2023-02-18
How to find a unit vector normal to the surface ${x}^{3}+{y}^{3}+3xyz=3$ at the point (1, 2, -1)?
Jayden Landry
$f\left(x,y,z\right)={x}^{3}+{y}^{3}+3xyz-3=0$
The gradient of $f\left(x,y,z\right)$ at the point $\left(x,y,z\right)$ is a vector normal to the surface at this point.
The gradient is obtained as follows:
$\nabla f\left(x,y,z\right)=\left({f}_{x},{f}_{y},{f}_{z}\right)=3\left({x}^{2}+yz,{y}^{2}+xz,xy\right)$ at point
$\left(1,2,-1\right)$ has the value
$3\left(-1,3,2\right)$ and the unit vector is
$\frac{\left\{-1,3,2\right\}}{\sqrt{1+{3}^{2}+{2}^{2}}}=\left\{-\frac{1}{\sqrt{14}},\frac{3}{\sqrt{14}},\sqrt{\frac{2}{7}}\right\}$
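A quick symbolic check of this computation (not part of the original answer; a minimal sketch assuming SymPy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**3 + y**3 + 3*x*y*z - 3

grad = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])  # gradient of f
g = grad.subs({x: 1, y: 2, z: -1})                    # normal vector at (1, 2, -1)
unit = g / g.norm()                                   # unit normal vector

print(g.T)                  # Matrix([[-3, 9, 6]]), i.e. 3*(-1, 3, 2)
print(sp.simplify(unit.T))  # entries -1/sqrt(14), 3/sqrt(14), 2/sqrt(14)
```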
Do you have a similar question? | 2023-04-01 16:24:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6390834450721741, "perplexity": 495.27070956626693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00499.warc.gz"} |
https://ncertmcq.com/ncert-solutions-for-class-10-maths-chapter-12-ex-12-3/ | NCERT Solutions for Class 10 Maths Chapter 12 Areas Related to Circles Ex 12.3 are part of NCERT Solutions for Class 10 Maths. Here we have given NCERT Solutions for Class 10 Maths Chapter 12 Areas Related to Circles Ex 12.3.
| Field | Value |
| --- | --- |
| Board | CBSE |
| Textbook | NCERT |
| Class | Class 10 |
| Subject | Maths |
| Chapter | Chapter 12 |
| Chapter Name | Areas Related to Circles |
| Exercise | Ex 12.3 |
| Number of Questions Solved | 16 |
| Category | NCERT Solutions |
## NCERT Solutions for Class 10 Maths Chapter 12 Areas Related to Circles Ex 12.3
Question 1.
Find the area of the shaded region in the given figure, if PQ = 24cm, PR = 7cm and O is the centre of the circle.
Solution:
Question 2.
Find the area of the shaded region in the given figure, if radii of the two concentric circles with centre O are 7 cm and 14 cm respectively and ∠AOC = 40°.
Solution:
∠AOC = 40° (given)
Radius of the sector AOC = 14 cm
Question 3.
Find the area of the shaded region in the given figure, if ABCD is a square of side 14 cm and APD and BPC are semicircles.
Solution:
ABCD is a square
Given: side of the square = 14 cm
∴ Area of the square = (side)² = (14)² = 196 cm²
Radius of the semicircle APD = $$\frac { 1 }{ 2 }$$(side of square) = $$\frac { 1 }{ 2 }$$ x 14 = 7 cm
Area of the semicircle APD = $$\frac { 1 }{ 2 }$$ πr² = $$\frac { 1 }{ 2 }$$ × $$\frac { 22 }{ 7 }$$ × 7 × 7 = 11 × 7 = 77cm²
Similarly, area of the semicircle BPC = 77 cm²
Total area of both the semicircles = 77 + 77 = 154 cm²
Area of the shaded region = Area of square – area of both semicircles
= 196 – 154 = 42 cm²
Question 4.
Find the area of the shaded region in the figure, where a circular arc of radius 6 cm has been drawn with vertex O of an equilateral triangle OAB of side 12 cm as centre.
Solution:
Area of the equilateral triangle OAB
Question 5.
From each corner of a square of side 4 cm a quadrant of a circle of radius 1 cm is cut and also a circle of diameter 2 cm is cut as shown in the figure. Find the area of the remaining portion of the square.
Solution:
Given: side of the square ABCD = 4 cm
Area of the square ABCD = 4 x 4 = 16 cm²
∴ Area of the remaining portion = Area of the square – Area to be cut from square
= 16 – $$\frac { 44 }{ 7 }$$
= $$\frac { 112-44 }{ 7 }$$ = $$\frac { 68 }{ 7 }$$cm²
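As a sanity check on the arithmetic (an addition of ours, not part of the NCERT text), exact fractions confirm the answer:

```python
from fractions import Fraction

pi = Fraction(22, 7)
square = Fraction(16)               # 4 cm x 4 cm square
quadrants = 4 * (pi * 1**2) / 4     # four quarter-circles of radius 1 = one full circle
circle = pi * 1**2                  # circle of diameter 2 cm, i.e. radius 1
print(square - quadrants - circle)  # 68/7
```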
Question 6.
In a circular table cover of the radius 32 cm, a design is formed leaving an equilateral triangle ABC in the middle as shown in the figure. Find the area of the design (shaded region).
Solution:
Radius of the circle(r) = 32cm
Area of the circle = πr²
= $$\frac { 22 }{ 7 }$$ × 32 × 32 = $$\frac { 22528 }{ 7 }$$cm²
An equilateral triangle is inscribed in the circle as shown.
Angle subtended by each side at the centre = 360°/3 = 120°.
Question 7.
In the figure, ABCD is a square of side 14 cm. With centres A, B, C and D, four circles are drawn such that each circle touch externally two of the remaining three circles. Find the area of the shaded region.
Solution:
Side of the square ABCD = 14 cm
Area of the square = (side)² = 14 × 14 = 196 cm²
Question 8.
The given figure depicts a racing track whose left and right ends are semicircular. The distance between the two inner parallel line segments is 60 m and they are each 106 m long. If the track is 10 m wide, find:
(i) the distance around the track along its inner edge.
(ii) the area of the track.
Solution:
Area of the track at both semicircular ends = 2 × 1100 = 2200 m²
Area of the 2 rectangular portions = 2 × l × b = 2 × 106 × 10 = 2120 m²
Total area of the track = area of the track at the semicircular ends + area of the rectangular portions
= 2200 + 2120 = 4320 m²
Question 9.
In the figure, AB and CD are two diameters of a circle (with centre O) perpendicular to each other and OD is the diameter of the smaller circle. If OA = 7 cm, find the area of the shaded region.
Solution:
Given: OA = 7 cm
Radius of the semicircle ABC = OA = 7 cm
Area of the semicircle ABC = $$\frac { 1 }{ 2 }$$πr² = $$\frac { 1 }{ 2 }$$ x $$\frac { 22 }{ 7 }$$ x 7 x 7 = 11 x 7 = 77 cm²
Diameter AB = 2(OA) = 2 x 7 = 14 and OA = OC = 7 cm (radius)
Question 10.
The area of an equilateral triangle ABC is 17320.5 cm². With each vertex of the triangle as centre, a circle is drawn with radius equal to half the length of the side of the triangle (see figure). Find the area of the shaded region.
(Use π = 3.14 and $$\sqrt{3}$$ = 1.73205).
Solution:
Given: area of an equilateral triangle ABC = 17320.5 cm²
Let the side of the triangle ABC be ‘a’
∴ Area of the ∆ABC = $$\frac { \sqrt { 3 } }{ 4 } { a }^{ 2 }$$
$$\frac { \sqrt { 3 } }{ 4 } { a }^{ 2 }$$ = 17320.5
⇒ a² = 40000, i.e. a = 200 cm, so the radius of each circle = a/2 = 100 cm and the area of one circle = 3.14 × 100 × 100 = 31400 cm².
Each sector subtends 60°, i.e. 1/6 of a circle, so the area of all 3 sectors = $$\frac { 3×31400 }{ 6 }$$ = 15700 cm²
∴ Area of the shaded portion = Area of the equilateral triangle
– Area of the three sectors formed at each vertex)
= 17320.5 – 15700 = 1620.5 cm²
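A quick numeric check of the values used above (our addition; plain Python, using the same π = 3.14 and √3 = 1.73205 as the problem statement):

```python
import math

area_triangle = 17320.5
a = math.sqrt(4 * area_triangle / 1.73205)  # from (sqrt(3)/4) * a**2 = 17320.5
r = a / 2                                   # radius of each circle
sectors = 3 * (3.14 * r * r) / 6            # three 60-degree sectors

print(round(a), round(r), round(sectors, 1))       # 200 100 15700.0
print(round(area_triangle - sectors, 1))           # 1620.5
```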
Question 11.
On a square handkerchief, nine circular designs each of the radius 7 cm are made (see figure). Find the area of the remaining portion of the handkerchief.
Solution:
Radius of the one circular design = 7 cm
Area of the one circular design = πr² = $$\frac { 22 }{ 7 }$$ x 7 x 7 = 154 cm²
Now, area of the 9 circular designs = 9 x 154 = 1386 cm²
Diameter of the circular design = 7 x 2 = 14 cm
Side of the square = 3(diameter of one circle) = 3 x 14 = 42 cm
Area of the square = 42 x 42 = 1764 cm²
Area of the remaining portion of handkerchief
= Area of the square – (Area of the 9 circular designs)
= 1764 – 1386
= 378 cm²
Question 12.
In the figure, OACB is a quadrant of a circle with centre O and radius 3.5 cm. If OD = 2 cm, find the area of the (i) quadrant OACB, (ii) shaded region.
Solution:
(ii) OD = 2cm and OB = 3.5 cm
Question 13.
In the figure, a square OABC is inscribed in a quadrant OPBQ. If OA = 20 cm, find the area of the shaded region. (Use π = 3.14)
Solution:
Given: side of the square OABC = OA = 20 cm
Area of the square = 20 x 20 = 400 cm²
(Diagonal of the square)² = (side of the square)² + (side of the square)² (by Pythagoras' theorem)
Diagonal of the square = $$\sqrt{2}$$ x (side of the square)
= $$\sqrt{2}$$ x (20) = 20$$\sqrt{2}$$cm
Radius of the quadrant of circle = Diagonal of square = 20$$\sqrt{2}$$
Question 14.
AB and CD are respectively arcs of two concentric circles of radii 21 cm and 7 cm and centre O (see figure). If ∠AOB=30°, find the area of the shaded region.
Solution:
Given: ∠AOB = 30°
Radius of the sector AOB = 21 cm
Question 15.
In the figure, ABC is a quadrant of a circle of radius 14 cm and a semicircle is drawn with BC as diameter. Find the area of the shaded region.
Solution: | 2021-12-02 00:11:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7151752710342407, "perplexity": 712.519102723485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00372.warc.gz"} |
https://ftp.aimsciences.org/article/doi/10.3934/cpaa.2012.11.229 | Article Contents
# Stability of nonconstant stationary solutions in a reaction-diffusion equation coupled to the system of ordinary differential equations
• In this paper we study pattern formation arising in a system of a single reaction-diffusion equation coupled with subsystem of ordinary differential equations, describing spatially-distributed growth of clonal populations of precancerous cells, whose proliferation is controlled by growth factors diffusing in the extracellular medium and binding to the cell surface. We extend the results on the existence of nonhomogenous stationary solutions obtained in [9] to a general Hill-type production function and full parameter set. Using spectral analysis and perturbation theory we derive conditions for the linearized stability of such spatial patterns.
Mathematics Subject Classification: Primary: 35K57, 35J57; Secondary: 92B99.
[1] V. I. Arnold, "Ordinary Differential Equations," MIT Press, Cambridge, 1978.
[2] K. I. Chueh, C. Conley and J. Smoller, Positively invariant regions for systems of nonlinear diffusion equations, Ind. Univ. Math. J., 26 (1977), 373-392. doi: 10.1512/iumj.1977.26.26029.
[3] A. Doelman, R. A. Gardner and T. J. Kaper, Stability analysis of singular patterns in the 1-D Gray-Scott model: A matched asymptotics approach, Phys. D, 122 (1998), 1-36. doi: 10.1016/S0167-2789(98)00180-8.
[4] A. Doelman, R. A. Gardner and T. J. Kaper, Large stable pulse solutions in reaction-diffusion equations, Indiana Univ. Math. J., 50 (2001), 443-507. doi: 10.1512/iumj.2001.50.1873.
[5] D. Henry, "Geometric Theory of Semilinear Parabolic Equations," Springer-Verlag, 1981.
[6] T. Kato, "Perturbation Theory for Linear Operators," Springer-Verlag, New York Inc., 1966.
[7] A. Marciniak-Czochra and M. Kimmel, Modelling of early lung cancer progression: Influence of growth factor production and cooperation between partially transformed cells, Math. Mod. Meth. Appl. Sci., 17 (2007), 1693-1719. doi: 10.1142/S0218202507002443.
[8] A. Marciniak-Czochra and M. Kimmel, Dynamics of growth and signalling along linear and surface structures in very early tumours, Comp. Math. Meth. Med., 7 (2006), 189-213. doi: 10.1080/10273660600969091.
[9] A. Marciniak-Czochra and M. Kimmel, Reaction-diffusion model of early carcinogenesis: The effects of influx of mutated cells, Math. Model. Nat. Phenom., 3 (2008), 90-114. doi: 10.1051/mmnp:2008043.
[10] J. D. Murray, "Mathematical Biology," Springer-Verlag, 2003.
[11] P. K. Maini, in "On Growth and Form. Spatio-Temporal Pattern Formation in Biology," John Wiley & Sons, 1999.
[12] F. Rothe, "Global Solutions of Reaction-Diffusion Systems," Springer-Verlag, Berlin, 1994.
[13] J. Smoller, "Shock-Waves and Reaction-Diffusion Equations," Springer-Verlag, New York Heidelberg Berlin, 1994.
[14] A. M. Turing, The chemical basis of morphogenesis, Phil. Trans. Roy. Soc. B, 237 (1952), 37-72. doi: 10.1098/rstb.1952.0012.
[15] J. Wei, On the interior spike layer solutions for some singular perturbation problems, Proc. Royal Soc. Edinb., 128A (1998), 849-874.
| 2023-03-31 00:26:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.4716675579547882, "perplexity": 3308.891282151801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00797.warc.gz"}
https://mattkretz.github.io/2019/01/18/optimizing-hypot-for-simd.html | # Optimizing std::hypot for simd arguments
Do you know about std::hypot(a, b) and std::hypot(a, b, c)? (The 3-argument overload exists since C++17, sometimes referred to as hypot3.) Why do the C and C++ standard libraries provide a function that is as simple as sqrt(a*a + b*b)? It doesn’t save enough characters to warrant its existence, right? Have you ever considered what happens if the input values are “very” small or large? Or if an input is an IEEE754 special value such as infinity or NaN? Have you ever considered how precise the calculation is, especially if an exact answer is obvious if one of the inputs is 0?
## Pythagoras — it’s complicated
The standard function is specified to avoid overflow and underflow in intermediate calculations. Consider the following example:
float a = 0x1p70f; // = 2⁷⁰ (~1.2e21)
float b = 0;
float r = std::sqrt(a * a + b * b); // = inf
a * a thus is 0x1p140f (~1.4e42), which overflows the 8-bit exponent of single precision (the exponent encodes subnormals, the range -126 to 127, and inf/NaN). However, mathematically it's obvious that r = a is the right answer, not r = inf. The naïve sqrt(a*a + b*b) thus doesn't work for inputs greater than 0x1.fffffep63f (~1.8e19) (if the other input is 0; otherwise the cut-off is even lower) as well as for inputs smaller than (let's ignore subnormals for now) 0x1p-63f (~1e-19).
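The overflow is easy to reproduce. Here is a minimal sketch in Python/NumPy (our addition; the article itself works in C++):

```python
import numpy as np

a = np.float32(2.0**70)
b = np.float32(0.0)

with np.errstate(over='ignore'):
    naive = np.sqrt(a * a + b * b)  # a*a overflows float32 -> inf
good = np.hypot(a, b)               # scales internally -> 2**70

print(naive, good)                  # inf 1.1805916e+21
```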
Regarding precision, sqrt(a*a + b*b) isn’t really bad since std::sqrt is required by IEEE754 to have .5 ULP precision and can thus potentially reduce the error of a*a + b*b (where each operation has .5 ULP precision). I’d expect a total error within 1 ULP (if anyone can give me a good pointer to a formalism of fp error analysis, please do). The precision issue becomes relevant when you want to support the full range of possible input values without over-/underflow.
## How to avoid over-/underflow
The idea to support the full range of input values is a simple transformation $$\sqrt{a^2 + b^2} = |b|\cdot\sqrt{(\frac{a}{b})^2+1}$$ where $$|b|\ge|a|$$. Consequently $$|\frac{a}{b}| \le 1$$ and thus $$(\frac{a}{b})^2$$ won't overflow. If $$(\frac{a}{b})^2$$ underflows, we don't have to care about its value, because the addition with 1 would have discarded those bits anyway.
### 1. Optimization idea
Instead of factoring out $$|b|$$, we can use a value that is of the same magnitude but might reduce the errors introduced through intermediate calculations. The most obvious candidate is to round $$|b|$$ down to the next power-of-2 value, which I’ll call $$b'$$. This is a trivial bitwise and operation, setting all mantissa bits to 0. The resulting floating point value will only scale the exponent bits when used in a multiplication or division. Our hypot implementation thus calculates $$b'\cdot\sqrt{(\frac{a}{b'})^2+(\frac{b}{b'})^2}$$. The final multiplication ($$b'\cdot\sqrt{\cdot}$$) and both fractions ($$\frac{a}{b'}$$ and $$\frac{b}{b'}$$) are without loss of precision. Thus, the resulting error is equivalent to the error of $$\sqrt{a^2+b^2}$$.
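To illustrate the idea, here is a sketch of the scaling approach (our addition). Python floats are binary64 rather than binary32, but the same power-of-two scaling can be written with frexp/ldexp:

```python
import math

def scaled_hypot(a, b):
    hi = max(abs(a), abs(b))
    lo = min(abs(a), abs(b))
    if hi == 0.0:
        return 0.0
    _, e = math.frexp(hi)          # hi = m * 2**e with 0.5 <= m < 1
    scale = math.ldexp(1.0, -e)    # exact power of two, roughly 1/hi
    h = hi * scale                 # exact: only the exponent changes
    l = lo * scale
    return math.sqrt(h * h + l * l) / scale

print(scaled_hypot(2.0**70, 0.0))  # 1.1805916207174113e+21 == 2.0**70
print(math.hypot(2.0**70, 0.0))    # same result
```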
### 2. Division is slow
Division and square root instructions run on the divider port (pipeline) of the CPU and will therefore dominate the execution time of the hypot implementation. We cannot avoid the square root, but we can avoid the division. The first idea is to turn the two divisions into one division and two multiplications: $$b'\cdot\sqrt{(\frac{a}{b'})^2+(\frac{b}{b'})^2} = b'\cdot\sqrt{(a\cdot\hat{b})^2+(b\cdot\hat{b})^2}$$, with $$\hat{b} = \frac{1}{b'}$$.
The $$\frac{1}{b'}$$ division is a trivial operation, though, because it just needs to flip the sign of the exponent. After all, $$b'$$ was constructed to be a power-of-2 value: $$b'=2^n \Rightarrow \hat{b} = 2^{-n}$$. If you recall how the exponent is stored in an IEEE754 floating point value, you might realize that flipping all exponent bits almost produces the correct result. It’s just off by one. And if the exponent is off by one, the resulting floating point value is off by a factor of 2.
$$\Rightarrow \hat{b} = \frac{1}{2}\left(b' \oplus \infty\right)$$, where $$\oplus$$ is bitwise XOR of the underlying bit patterns (or in code: b_hat = ((b & infinity) ^ infinity) * .5; the xor operator is not defined for float, but you get the idea). Note that using an addition instead of multiply by two is faster for two reasons (An optimizing compiler will turn 2*x into x+x by itself, so feel free to forget about this optimization.):
1. it doesn’t require loading a constant value (0x4000'0000 in this case);
2. addition instructions typically have a lower latency than multiplication instructions.
$$\frac{b}{b'} = b\cdot\hat{b}$$ is another trivial bit operation if we look closely. $$b'$$ was constructed to store the exponent of $$b$$. Thus $$\frac{b}{b'}$$ is $$b$$ with its exponent set to $$2^0 = 1$$. Using a bitwise-and and bitwise-or operation, we can easily overwrite the exponent of $$b$$ to avoid the multiplication: $$b\cdot\hat{b} =$$ (b & (min - denorm_min)) | 1. The CPU can therefore schedule $$(b\cdot\hat{b})^2$$ earlier and thus also execute the following FMA and square root earlier, reducing the total latency of the hypot function.
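Both bit tricks can be checked directly on binary32 bit patterns. A minimal sketch (our addition) using Python's struct module:

```python
import struct

def bits(f): return struct.unpack('<I', struct.pack('<f', f))[0]
def f32(u):  return struct.unpack('<f', struct.pack('<I', u))[0]

EXP = 0x7F800000                        # exponent mask == bit pattern of +inf
b = 6.5
b_prime = f32(bits(b) & EXP)            # |b| rounded down to a power of two: 4.0
b_hat = f32(bits(b_prime) ^ EXP) * 0.5  # flip the exponent bits, halve -> 1/b'
print(b_prime, b_hat, 1.0 / b_prime)    # 4.0 0.25 0.25

MANT = 0x007FFFFF                       # mantissa mask (bit pattern of min - denorm_min)
h1 = f32((bits(b) & MANT) | bits(1.0))  # overwrite the exponent of b with 2**0
print(h1, b / b_prime)                  # 1.625 1.625
```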
### Subnormal, NaN, Zero, and Infinity
There are corner cases. A proper implementation must care for the Annex F requirements (C Standard, but imported into C++). As is often the case with floating point, the .01% corner cases lead to significant effort in the implementation, and if done carelessly to unfortunate slowdown. I’ll let you figure it out from the implementation below.
## I was talking about std::experimental::simd<float>
You probably didn’t notice other than by the title, but everything I said I didn’t write for plain float but for std::experimental::simd<T, Abi> (last draft). Here’s my implementation, targeting libstdc++:
template <typename T, typename Abi>
simd<T, Abi> hypot(const simd<T, Abi>& x, const simd<T, Abi>& y)
{
if constexpr (simd<T, Abi>::size() == 1) {
return std::hypot(T(x[0]), T(y[0]));
} else if constexpr (is_fixed_size_abi_v<Abi>) {
return fixed_size_apply<simd<T, Abi>>(
[](auto a, auto b) { return hypot(a, b); },
x, y);
} else {
using namespace __proposed::float_bitwise_operators;
using Limits = std::numeric_limits<T>;
using V = simd<T, Abi>;
V absx = abs(x);
V absy = abs(y);
V hi = max(absx, absy);
V lo = min(absy, absx);
constexpr V inf(Limits::infinity());
// if hi is subnormal, avoid scaling by inf & final mul by 0 (which
// yields NaN) by using min()
V scale = 1 / Limits::min();
// invert exponent w/o error and w/o using the slow divider unit:
where(hi > Limits::min(), scale) = ((hi & inf) ^ inf) * T(.5);
// adjust final exponent for subnormal inputs
V hi_exp = Limits::min();
where(hi > Limits::min(), hi_exp) = hi & inf;
constexpr V mant_mask = Limits::min() - Limits::denorm_min();
V h1 = (hi & mant_mask) | V(1);
where(hi < Limits::min(), h1) -= V(1);
V l1 = lo * scale;
V r = hi_exp * sqrt(h1 * h1 + l1 * l1);
#ifdef __STDC_IEC_559__
// fixup for Annex F requirements
V fixup = hi;
where(isunordered(x, y), fixup) = Limits::quiet_NaN();
where(isinf(absx) || isinf(absy), fixup) = inf;
where(!(lo == 0 || isunordered(x, y) || isinf(absx) ||
isinf(absy)), fixup) = r;
r = fixup;
#endif
return r;
}
}
## Does it fly?
First of all, I made sure it is conforming and within 1 ULP of a precise implementation.
I tried to measure both latency and throughput. Measurement details:
• Intel Xeon W-2123 @ 3.60GHz
• GCC 9
• -O2 -march=native (i.e. using AVX512VL instructions for xmm and ymm vectors)
• Intel pstate set to no_turbo and scaling_governor set to performance
• Ubuntu 18.10
### Throughput
| T hypot(T, T) | TSC cycles/call | Speedup per value relative to T |
| --- | --- | --- |
| float | 17.1208 | |
| simd<float, scalar> | 17.218 | 0.994355 |
| simd<float, __sse> | 15.5264 | 4.41074 |
| simd<float, __avx> | 15.9294 | 8.59831 |
| simd<float, __avx512> | 18.5829 | 14.7411 |
| double | 30.0783 | |
| simd<double, scalar> | 30.2174 | 0.995398 |
| simd<double, __sse> | 15.5236 | 3.87517 |
| simd<double, __avx> | 16.9682 | 7.09053 |
| simd<double, __avx512> | 26.4416 | 9.1003 |
### Latency
| T hypot(T, T) | TSC cycles/call | Speedup per value relative to T |
| --- | --- | --- |
| float | 37.6397 | |
| simd<float, scalar> | 38.4039 | 0.980102 |
| simd<float, __sse> | 41.2072 | 3.6537 |
| simd<float, __avx> | 41.4811 | 7.25916 |
| simd<float, __avx512> | 53.6256 | 11.2304 |
| double | 51.806 | |
| simd<double, scalar> | 51.8265 | 0.999604 |
| simd<double, __sse> | 42.0938 | 2.46145 |
| simd<double, __avx> | 42.6324 | 4.86072 |
| simd<double, __avx512> | 57.4461 | 7.21455 |
I can also disable the Annex F requirements via -ffast-math. Let’s see how that changes the results:
### Throughput (-ffast-math)
| T hypot(T, T) | TSC cycles/call | Speedup per value relative to T |
| --- | --- | --- |
| float | 12.0874 | |
| simd<float, scalar> | 11.572 | 1.04453 |
| simd<float, __sse> | 11.0433 | 4.37815 |
| simd<float, __avx> | 11.6043 | 8.33306 |
| simd<float, __avx512> | 13.6978 | 14.1189 |
| double | 27.1595 | |
| simd<double, scalar> | 26.0718 | 1.04172 |
| simd<double, __sse> | 11.6566 | 4.65993 |
| simd<double, __avx> | 13.2002 | 8.23001 |
| simd<double, __avx512> | 25.5888 | 8.49107 |
### Latency (-ffast-math)
| T hypot(T, T) | TSC cycles/call | Speedup per value relative to T |
| --- | --- | --- |
| float | 36.8928 | |
| simd<float, scalar> | 37.1042 | 0.994301 |
| simd<float, __sse> | 41.357 | 3.56822 |
| simd<float, __avx> | 41.7432 | 7.07042 |
| simd<float, __avx512> | 52.2087 | 11.3062 |
| double | 51.3563 | |
| simd<double, scalar> | 50.9219 | 1.00853 |
| simd<double, __sse> | 42.4579 | 2.41916 |
| simd<double, __avx> | 42.326 | 4.85341 |
| simd<double, __avx512> | 56.3338 | 7.29314 |
Discuss on Reddit | 2022-09-26 16:24:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7577651143074036, "perplexity": 7388.232435873082}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00675.warc.gz"} |
https://madhavamathcompetition.com/ | # Pre RMO (PRMO) practice questions, specially selected: PRMO 2018
The following set of problems tinkers, kindles some basic mathematical concepts as well as algebraic manipulations, so I suggest you try them out:
Try to decide by yourself which set of points are defined by these relations:
(a) $|x| = |y|$
(b) $\frac{x}{|x|} = \frac{y}{|y|}$
(c) $|x| + x = |y| + y$
(d) $[x] = [y]$.
Note: The symbol $[x]$ denotes the whole part of the number x, that is, the largest whole number not exceeding x. For example, $[3.5] = 3$, $[5] = 5$, $[-2.5] = -3$.
(e) $x - [x] = y - [y]$
(f) $x - [x] > y - [y]$.
Good luck.
Nalin Pithwa.
# Pre RMO August 2018: some practice problems selected
Question 1:
Can the product of 31256 and 8427 be 263395312? Give reasons (of course, brute force long calculation will not be counted as an answer ! :-)).
Solution 1:
Use the rule “casting out the nines”: a number divided by 9 will leave the same remainder as the sum of its digits divided by nine.
In this particular case, the sums of the digits of the multiplicand, multiplier, and product are 17, 21, and 34 respectively; again, the sums of the digits of these three numbers are 8, 3, and 7. Now 8 times 3 is 24, which has 6 for the sum of its digits; thus we have two different remainders, 6 and 7, and the multiplication is incorrect.
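A two-line check (our addition) confirms the argument, since the digit sum of a number is congruent to the number mod 9:

```python
a, b, claimed = 31256, 8427, 263395312
print((a * b) % 9, claimed % 9)  # 6 7 -> remainders differ, so the claimed product is wrong
print(a * b)                     # 263394312, the true product
```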
Question 2:
Prove that 4.41 is a square number in any scale of notation whose radix is greater than 4.
Solution 2:
Let r be the radix; then, $4.41 = 4 + \frac{4}{r} + \frac{1}{r^{2}}=(2 + \frac{1}{r})^{2}$;
thus, the given number is the square of 2.1
Question 3:
In what scale is the decimal number 2.4375 represented by 2.13?
Solution 3:
Let r be the radix; then, $2 + \frac{1}{r} + \frac{3}{r^{2}} = 2.4375 = 2\frac{7}{16}$
hence, $7r^{2}-16r-48=0$
that is, $(7r+12)(r-4)=0$, whence $r = 4$.
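A one-line check (our addition): reading the digits 2.13 in base 4 indeed gives 2.4375:

```python
print(2 + 1/4 + 3/4**2)  # 2.4375
```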
More later,
Nalin Pithwa
# Solution: Intel Pentium P5 floating point unit error (1994): RMO problem !!!
Finally, the much awaited solution is here:
(I re-state the problem from a previous blog, almost a month old):
Two number theorists bored in a chemistry lab, played a game with a large flask containing 2 litres of a colourful chemical solution and an ultra-accurate pipette. The game was that they would take turns to recall a prime number p such that $(p+2)$ is also a prime number. Then, the first number theorist would pipette out $1/p$ litres of chemical and the second $\frac{1}{p+2}$ litres. How many times do they have to play this game to empty the flask completely?
Solution:
It is easy to play this game initially even for ordinary people: one could guess p to be 3 because 5 is a prime number, then 5 and 7, 11 and 13, 17 and 19, 29 and 31, and so on. These are called twin primes. Number theorists need to be there to recall large twin primes. The emptied amount of liquid in litres is given by the twin prime harmonic series $H_{P}^{TP}$:
$H_{P}^{TP} = (\frac{1}{3} + \frac{1}{5}) + (\frac{1}{5} + \frac{1}{7}) + (\frac{1}{11} + \frac{1}{13}) + (\frac{1}{17}+\frac{1}{19}) + \ldots$
This series is known to converge to 1.902160583104…which is known as Brun’s constant, named after Viggo Brun, who proved it in 1919. It is a curious result because it is not known if infinitely many twin primes exist, refer for example,
http://www.math.sjsu.edu/~goldston/twinprimes.pdf
even though it is known that infinitely many primes exist (a result proved by Euclid in 300 BC!) and the harmonic series diverges (a result proved by Euler in the eighteenth century). Had the series $H_{P}^{TP}$ diverged, then one could say that infinite twin primes exist. But, as the series converges (must converge with finitely many twin primes or may converge even with infinitely many twin primes), the question of infinitude of twin primes is still an open one. (there is a recent famous result of Prof. Yitang Zhang also regarding this). Anyway, the point is that the two number theorists would not be able to empty 2 litres even if they play the game for infinitely long period. So, they are not bored and can keep themselves busy in the chemistry lab forever.
Another curious fact about Brun’s constant is that its computation in a computer revealed a floating point division arithmetic error in Intel’s Pentium P5 Floating Point Unit in 1994. This bug was discovered by Thomas Nicely while evaluating the reciprocals of twin primes 824633702441 and 824633702443. Consequently, Intel incurred USD 475 million to fix this bug. For a while in 1995, number theory and Brun’s constant took the centre stage in popular media.
For curious minds, there also exist prime triplets, prime quadruples etc. If four number theorists play the game, they will not be able to empty even 1 litre because the harmonic series of prime quadruples is estimated to be around 0.8705883800.
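For the curious, here is a sketch (our addition, assuming SymPy is available) that accumulates the twin-prime harmonic series; the partial sum creeps toward Brun's constant very slowly:

```python
from sympy import isprime, primerange

total, pairs = 0.0, 0
for p in primerange(3, 10**6):
    if isprime(p + 2):              # (p, p+2) is a twin prime pair
        total += 1.0 / p + 1.0 / (p + 2)
        pairs += 1
print(pairs, total)                 # the partial sum is still well below 2 litres
```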
Reference:
Popular Problems and Puzzles in Mathematics, Asok Kumar Mallik, IISc Press, Foundation Books:
https://www.amazon.in/Popular-Problems-Puzzles-Mathematics-Mallik/dp/938299386X/ref=sr_1_1?s=books&ie=UTF8&qid=1530628680&sr=1-1&keywords=popular+problems+and+puzzles+in+mathematics
# AMS Menger Awards 2018
(shared from the AMS website for motivational purposes)
The AMS presented the Karl Menger Memorial Awards at the 2018 Intel International Science and Engineering Fair (Intel ISEF), May 13-18, 2018 in Pittsburgh, PA. The First Place Award of US $2,000 was given to Ryusei Sakai, Sota Kojima, and Yuta Yokohama, Shiga Prefectural Hikone Higashi High School, Japan, for "Extension of Soddy's Hexlet: Number of Spheres Generated by Nested Hexlets." [Photo: bottom row (left to right): Dr. Keith Conrad (committee chair), Rachana Madhukara, Yuta Yokohama, Sota Kojima, Ryusei Sakai; top row (left to right): Chavdar Lalov, Gianfranco Cortes-Arroyo, Gopal Goel, Savelii Novikov, Boris Baranov. Not pictured: Muhammad Abdulla. Photo by the Society for Science & the Public.] The Menger Awards Committee also presented the following awards: • Second Award of $1,000: Gopal Krishna Goel (Krishna Homeschool, OR), "Discrete Derivatives of Random Matrix Models and the Gaussian Free Field" and Rachana Madhukara, Canyon Crest Academy, CA, "Asymptotics of Character Sums"
• Third Award of $500: Chavdar Tsvetanov Lalov, Geo Milev High School of Mathematics, Bulgaria, "Generating Functions of the Free Generators of Some Submagmas of the Free Omega Magma and Planar Trees"; Gianfranco Cortes-Arroyo, West Port High School, FL, "Generalized Persistence Parameters for Analyzing Stratified Pseudomanifolds"; Muhammad Ugur Oglu Abdulla, West Shore Junior/Senior High School, FL, "A Fine Classification of Second Minimal Odd Orbits"; Boris Borisovich Baranov and Savelii Novikov, School 564, St. Petersburg, Russian Federation, "On Two Letter Identities in Lie Rings"
• Certificate of Honorable Mention: Dmitrii Mikhailovskii, School 564, St. Petersburg, Russian Federation, “New Explicit Solution to the N-Queens Problem and the Millennium Problem”; Chi-Lung Chiang and Kai Wang, The Affiliated Senior High School of National Taiwan Normal University, Chinese Taipei, “’Equal Powers Turn Out’ – Conics, Quadrics, and Beyond”; Kayson Taka Hansen, Twin Falls High School, ID, “From Lucas Sequences to Lucas Groups”; Gustavo Xavier Santiago-Reyes and Omar Alejandro Santiago-Reyes, Escuela Secundaria Especializada en Ciencias, Matematicas y Tecnología, Puerto Rico, “Mathematics of Gene Regulation: Control Theory for Ternary Monomial Dynamical Systems”; Karthik Yegnesh, Methacton High School, PA, “Braid Groups on Triangulated Surfaces and Singular Homology”
A booklet on Karl Menger was also given to each winner. This is the 28th year of the presentation of the Karl Menger Memorial Awards. The Society’s participation in the Intel ISEF is supported in part by income from the Karl Menger Fund, which was established by the family of the late Karl Menger. For more information about this program or to make contributions to this fund, contact the AMS Development Office.
Cheers to the winners,
Nalin Pithwa.
# Solution to a “nice analysis question for RMO practice”
The question from a previous blog is re-written here for your convenience.
Question:
How far from the edge of a table can a deck of playing cards be stably overhung if the cards are stacked on top of one another? And how many of them will be overhanging completely away from the edge of the table?
Solution:
The figure below shows how two and three cards can be stacked so that the mass of cards is equal on either side of the vertical line passing through the corner of the table's edge, in order to just balance them under gravity:
the set of the first two cards is arranged as follows (each row of dashes represents a card; the table edge lies under the left end of the bottom card):

              --------------   (top card)
--------------                 (bottom card)

the set of three cards is arranged as follows:

                            --------------   (top card)
              --------------
--------------                               (bottom card)
We can see that the length of the overhang is a harmonic series of even numbers multiplied by the length of one card, L.
Overhang distance is $(\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \ldots + \frac{1}{52 \times 2})L$ for 52 cards.
It may be noted that the series, if continued to infinity, leads to $H_{\infty}^{E}$.
That is, $H_{\infty}^{E}=\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots$
This series is known to diverge as proved below:
First consider $H_{\infty}=1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}+ \ldots$, which is greater than
$1+ \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}+ \ldots$, which is greater than $1+ \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \ldots$. Hence, $H_{\infty}$ diverges as we go on adding 1/2 indefinitely.
Now, let $H_{E}=\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots = \frac{1}{2}(1+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \ldots)=\frac{1}{2}H_{\infty}$
Since $H_{\infty}$ diverges, $H_{E}$ also diverges.
Hence, the “overhang series” also diverges.
This means that the cards can be stacked indefinitely and the overhang distance can reach infinity. However, this will happen very slowly as shown in the table below:
$\begin{array}{cc} n & H_{n}^{E}\\ 2 & 0.5 \\ 10 & 1.46 \\ 100 & 2.59 \\ 1000 & 3.74 \\ 10000 & 4.89 \\ 100000 & 6.05 \end{array}$
Computing the number of cards that completely overhang off the table needs information about the overhang distance for different numbers of cards. As shown below in the figure, four cards are required to have one card completely away from the edge of the table. This is because $\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} = 1.0417 > 1$.
(the set of four cards is arranged as follows:)

                                        ----------   (top card)
                            ----------
                ----------
    ----------                                       (bottom card)
For the second card to overhang completely, leaving out the first card (and hence the first term 1/2) that is already completely overhung, it is now necessary that
$(\frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots + \frac{1}{2n})>1$, or
$(\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} + \ldots + \frac{1}{2n} )>1+ \frac{1}{2}$
where n needs to be found out. By generating some more data, we can find the value of n to be 11.
For the third overhanging card, we need
$(\frac{1}{6} + \frac{1}{8} + \ldots + \frac{1}{2n})>1$ or
$(\frac{1}{2} + \frac{1}{4} + \frac{1}{6}+\frac{1}{8}+ \ldots + \frac{1}{2n})>1+\frac{1}{2} + \frac{1}{4}$
Thus, for m completely overhanging cards, we find n such that $H_{2n}^{E} > 1+ H_{2(m-1)}^{E}$
The table below shows these values wherein we see an approximate pattern of arithmetic progression by 7.
$\begin{array}{cccc} m & n & m & n \\ 1 & 4 & 11 & 78 \\ 2 & 11 & 12 & 85 \\ 3 & 19 & 13 & 92 \\ 4 & 26 & 14 & 100 \\ 5 & 33 & 15 & 107 \\ 6 & 41 & 16 & 115 \\ 7 & 48 & 17 & 122 \\ 8 & 55 & 18 & 129 \\ 9 & 63 & 19 & 137 \\ 10 & 70 & 20 & 144 \end{array}$
By examining the pattern in the table, we can get a simple rule to estimate the number m of completely overhanging cards, with an error of at most one, for n cards stacked.
$m = round(\frac{n}{7.4})=round(\frac{10n}{74})$.
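A short script (our addition) reproduces both the table and the rule of thumb:

```python
def H_E(n):  # 1/2 + 1/4 + ... + 1/(2n)
    return sum(1.0 / (2 * k) for k in range(1, n + 1))

def cards_needed(m):  # smallest n with H_E(n) > 1 + H_E(m - 1)
    target = 1 + H_E(m - 1)
    n = 1
    while H_E(n) <= target:
        n += 1
    return n

for m in (1, 2, 3, 4, 5, 20):
    n = cards_needed(m)
    print(m, n, round(n / 7.4))
# prints n = 4, 11, 19, 26, 33, 144, and round(n/7.4) recovers m within 1
```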
Reference:
Popular Problems and Puzzles in Mathematics by Asok Kumar Mallik, IISc Press, Foundation Books.
Hope you enjoyed the detailed analysis…
More later,
Nalin Pithwa | 2018-08-15 03:52:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 53, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7330207824707031, "perplexity": 1361.512821398234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00569.warc.gz"} |
http://mathhelpforum.com/algebra/17096-exponential-equation.html | # Math Help - Exponential equation
1. ## Exponential equation
I can simply do the problem by guess checking but because it is taking far too long, what is the simplest way to do these problems? Examples
27^4x = 9^x+1
Thanks~
2. Originally Posted by JonathanEyoon
I can simply do the problem by guess checking but because it is taking far too long, what is the simplest way to do these problems? Examples
27^4x = 9^x+1
Thanks~
get both bases in terms of the same base.
i assume you mean, $27^{4x} = 9^{x + 1}$ ?....use parentheses! you should have typed 27^(4x) = 9^(x + 1) if that is the case
note that $27 = 3^3$ and $9 = 3^2$
so we have:
$\left( 3^3 \right)^{4x} = \left(3^2 \right)^{x + 1}$
$\Rightarrow 3^{12x} = 3^{2x + 2}$
can you take it from here?
3. Originally Posted by Jhevon
get both bases in terms of the same base.
i assume you mean, $27^{4x} = 9^{x + 1}$ ?....use parentheses! you should have typed 27^(4x) = 9^(x + 1) if that is the case
note that $27 = 3^3$ and $9 = 3^2$
so we have:
$\left( 3^3 \right)^{4x} = \left(3^2 \right)^{x + 1}$
$\Rightarrow 3^{12x} = 3^{2x + 2}$
can you take it from here?
OHhhhhhhhhhh I totally forgot about base!! Thanks alot and I will use parentheses next time~ | 2014-10-23 13:53:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352620840072632, "perplexity": 1266.0391622006841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066654.17/warc/CC-MAIN-20141017150106-00246-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://brilliant.org/problems/if-you-want-to-die-this-problem-will-help-you/ | # Compliment me
Find the number of positive integers $n\leq 1991$ such that $6 | (n^{2} + 3n + 2)$.
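A brute-force check (not part of the problem page): since $n^{2}+3n+2 = (n+1)(n+2)$ is always even and is divisible by 3 unless $n \equiv 0 \pmod 3$, the count should be $1991 - \lfloor 1991/3 \rfloor$:

```python
count = sum(1 for n in range(1, 1992) if (n * n + 3 * n + 2) % 6 == 0)
print(count, 1991 - 1991 // 3)  # 1328 1328
```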
× | 2021-02-28 07:34:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4032731354236603, "perplexity": 2061.9960141367646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360293.33/warc/CC-MAIN-20210228054509-20210228084509-00551.warc.gz"} |
https://tex.stackexchange.com/questions/108667/newcommand-and-boolean-statements | # \newcommand and boolean statements
I'm getting better at creating \newcommand instances but I've just reached a part where my knowledge is just not enough for it and also reading this site or the manual doesn't help.
Basically I'd like to create a newcommand that sets a vertical rectangle split in 3 parts, so far so good, but where I can control whether each part gets filled with a boolean. The command would look like (as I've envisioned it but I'm not sure it's feasible):
\command{1, 0, 1}; Here the first and third part would be filled with a color (black or whatever), the second would stay white. Alternatively, it could be \command{1}{0}{1}; for the same result. This would use three arguments but I don't know how to separate arguments with the comma as shown above.
The result would be something like this:
Note that I'd like this to be dynamic, i.e. if I say 0,1,1, the filled parts would be the second and third, and so on. I didn't manage to get anywhere with this, so I don't have code to provide, but here is a starting example so you don't need to rewrite all the code (ignore the settings that seem to be unnecessary here; they're used in other parts of the document):
\documentclass[10pt]{article}
\usepackage[a4paper, margin=3mm]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{rotating}
\usepackage{amsmath}
\usepackage{pgfplots}
\usepackage{tikz}
\usetikzlibrary{fit, arrows,backgrounds,patterns,shapes,shapes.multipart,positioning,calc,decorations.markings}
\begin{document}
\begin{tikzpicture}
\centering
\end{tikzpicture}
\end{document}
• Can you add at least the code for getting three filled rectangles? – egreg Apr 14 '13 at 11:12
• @egreg Sorry, I've seen your comment now, what do you mean by that? Do you mean like \node[rectangle, fill=black] at (0,0) {};? – Alenanno Apr 14 '13 at 13:01
• @Alenanno: I answered your question with a simple TikZ-Code? At the moment we don't know your intention and so we can't provide a good tikz-part. – Marco Daniel Apr 14 '13 at 13:33
• @MarcoDaniel I've seen it thanks, I'm playing with it at the moment. I don't get your comment though. My intention is exactly what I've explained and I think you've answered fine! Now that we're here, why can't I apply minimum width/height or inner sep to the rectangles in your code? It complains when I try to typeset. – Alenanno Apr 14 '13 at 13:49
• @Alenanno: Options like inner sep or width etc. haven't been required. Maybe you can update your question and as egreg mentioned please show us your tikz-code. – Marco Daniel Apr 14 '13 at 14:03
Here a way using xparse and l3prop. The usage of l3prop allows more modifications.
Maybe the tikz-part can be improved ;-)
\documentclass[10pt]{article}
\usepackage{tikz}
\usepackage{xparse,expl3}
\ExplSyntaxOn
\prop_new:N \l_alenanno_color_prop
\prop_put:Nnn \l_alenanno_color_prop {0} {white}
\prop_put:Nnn \l_alenanno_color_prop {1} {black}
\NewDocumentCommand {\command} { > { \SplitArgument { 2 } { , } } m }
{
\alenanno_command_aux:nnn #1
}
\cs_new:Npn \alenanno_command_aux:nnn #1 #2 #3
{
\begin{tikzpicture}
\draw[fill=\prop_get:Nn \l_alenanno_color_prop { #1 }] (0,0) rectangle (1,1);
\draw[fill=\prop_get:Nn \l_alenanno_color_prop { #2 }] (0,1) rectangle (1,2);
\draw[fill=\prop_get:Nn \l_alenanno_color_prop { #3 }] (0,2) rectangle (1,3);
\end{tikzpicture}
}
\ExplSyntaxOff
\begin{document}
\command{1,0,1} \command{0,0,1} \command{0,1,1}
\end{document}
Here a solution using \node:
\documentclass[10pt]{article}
\usepackage{tikz}
\tikzset{mynodestyle/.style={minimum height=1cm,minimum width=1cm,outer sep=0pt,rectangle,draw=black}}
\usepackage{xparse,expl3}
\ExplSyntaxOn
\prop_new:N \l_alenanno_color_prop
\prop_put:Nnn \l_alenanno_color_prop {0} {white}
\prop_put:Nnn \l_alenanno_color_prop {1} {black}
\NewDocumentCommand {\command} { > { \SplitArgument { 2 } { , } } m }
{
\alenanno_command_aux:nnn #1
}
\cs_new:Npn \alenanno_command_aux:nnn #1 #2 #3
{
\begin{tikzpicture}
\node[mynodestyle, fill=\prop_get:Nn \l_alenanno_color_prop { #2 }] (P) {};
\node[mynodestyle, fill=\prop_get:Nn \l_alenanno_color_prop { #1 },anchor=south] at (P.north) {};
\node[mynodestyle, fill=\prop_get:Nn \l_alenanno_color_prop { #3 },anchor=north] at (P.south) {};
\end{tikzpicture}
}
\ExplSyntaxOff
\begin{document}
\command{1,0,1} \command{0,0,1} \command{0,1,1} \command{0,0,0}
\end{document}
It's important to know that inside \ExplSyntaxOn ... \ExplSyntaxOff all spaces are ignored. This is explained here: What do ExplSyntaxOn and ExplSyntaxOff do?. So you can't directly use TikZ options like minimum width. You can temporarily disable this behaviour as described here: Text within ExplSyntaxOn/Off, or do the setting outside.
As mentioned by egreg: If you want to use some TikZ options which require a space you can use the symbol ~. That means:
\draw[fill=\prop_get:Nn \l_alenanno_color_prop { #3 },rounded~corners] (0,2) rectangle (1,3);
• You can use ~ in the special environment when a space is required by the syntax rules of TikZ. – egreg Apr 14 '13 at 15:06
• @egreg: really? I thought this symbol would be passed as a token and so you get an unknown option. I will try it. – Marco Daniel Apr 14 '13 at 15:07
• @MarcoDaniel Thanks, the second solution you provided worked very nicely. :) I'll be accepting yours, thanks again! – Alenanno Apr 14 '13 at 15:16
I'd have needed more help in the MWE to make that with Tikz, but here's the testing part:-)
\documentclass[10pt]{article}
\def\command#1{\xcommand#1\relax}
\def\xcommand#1,#2,#3\relax{%
\begin{tabular}{|l|}
\hline #1\\ \hline #2\\ \hline #3\\ \hline
\end{tabular}}
\begin{document}
\command{1,0,1}
\bigskip
\command{0,1,0}
\end{document}
Why waste electrons and use a comma list? 010 conveys the same information. Here is a short solution. If you insist on commas, change @tfor to @for. Solution is extensible, use as many 0s or 1s you wish.
\documentclass{article}
\usepackage{xcolor}
\fboxsep0pt
\makeatletter
\def\roll#1{%
\def\boxblack{\rule{1cm}{1cm}}%
\def\boxwhite{{\color{white}\rule{1cm}{1cm}}}%
\fbox{\parbox{1cm}{%
\@tfor\next:=#1\do{%
\ifnum\next=1\boxblack\else\boxwhite\fi%
\par
}}}}
\makeatother
\begin{document}
\roll{01010}
\end{document}
Posted a solution to illustrate that it is always best to generalize the problem. It is also best IMHO to avoid heavyweight libraries when simpler solutions exist. | 2020-07-07 09:31:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7612296938896179, "perplexity": 2464.788099347635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00370.warc.gz"} |
https://www.lessonplanet.com/teachers/frames-of-reference-the-basics | # FRAMES OF REFERENCE: THE BASICS
##### This FRAMES OF REFERENCE: THE BASICS lesson plan also includes:
High schoolers examine the concept of frames of reference in physics: that two frames of reference, each moving with respect to the other with a constant velocity v, observe the same accelerations and therefore Newton's laws are the same in both. | 2020-07-07 07:33:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091232776641846, "perplexity": 570.8239040191155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891654.18/warc/CC-MAIN-20200707044954-20200707074954-00083.warc.gz"} |
https://plainmath.net/72123/i-have-the-following-samples-data | # I have the following 2 samples data 1 : 59.09 59.17 59.27 59.13 59.10 59.14 59.54 59.
I have the following 2 samples
data 1 : 59.09 59.17 59.27 59.13 59.10 59.14 59.54 59.90
data 2: 59.06 59.40 59.00 59.12 59.01 59.25 59.23 59.564
And I need to check whether there is differentiation regarding the mean and the variance between the 2 data samples at significance level a=0.05
I think that the first thing I need to do is to check whether the samples come from a normal distribution, in order to infer whether I should proceed using parametric or non-parametric tests...
However using lillietest in matlab returned that both samples do not follow the normal distribution...
Any ideas on how I should proceed with the tests for a difference? Should I perform a t-test? Or should I use something like Wilcoxon? (P.S. Please confirm that both data samples do not follow a normal distribution...)
Raiden Williamson
For a t-test, you need the samples to follow a normal distribution, so you're right to check this assumption first. I did a Shapiro-Wilk to test the normality, and it is rejected for the first sample, but it is not for the second sample. Thus you can't use a t-test.
The alternative is to use the Wilcoxon test, which is non-parametric.
Here is the code I have used with R :
data1 <- c(59.09, 59.17, 59.27, 59.13, 59.1, 59.14, 59.54, 59.9)
data2 <- c(59.06, 59.4, 59, 59.12, 59.01, 59.25, 59.23, 59.564)
shapiro.test(data1)
# P-value = 0.007987. Normality is rejected.
shapiro.test(data2)
# P-value = 0.3873. Normality is not rejected.
wilcox.test(data1, data2)
# P-value = 0.5054. No significative difference. | 2022-05-22 14:01:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6503083109855652, "perplexity": 698.2480942619753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00401.warc.gz"} |
https://stacks.math.columbia.edu/tag/00ZC | Lemma 7.48.1. Let $\mathcal{C}$ be a site with coverings $\text{Cov}(\mathcal{C})$. For every object $U$ of $\mathcal{C}$, let $J(U)$ denote the set of sieves $S$ on $U$ with the following property: there exists a covering $\{ f_ i : U_ i \to U\} _{i\in I} \in \text{Cov}(\mathcal{C})$ so that the sieve $S'$ generated by the $f_ i$ (see Definition 7.47.3) is contained in $S$.
1. This $J$ is a topology on $\mathcal{C}$.
2. A presheaf $\mathcal{F}$ is a sheaf for this topology (see Definition 7.47.10) if and only if it is a sheaf on the site (see Definition 7.7.1).
Proof. To prove the first assertion we just note that axioms (1), (2) and (3) of the definition of a site (Definition 7.6.2) directly imply the axioms (3), (2) and (1) of the definition of a topology (Definition 7.47.6). As an example we prove $J$ has property (2). Namely, let $U$ be an object of $\mathcal{C}$, let $S, S'$ be sieves on $U$ such that $S \in J(U)$, and such that for every $V \to U$ in $S(V)$ we have $S' \times_U V \in J(V)$. By definition of $J(U)$ we can find a covering $\{ f_i : U_i \to U\}$ of the site such that the image of each $h_{U_i} \to h_U$ is contained in $S$. Since each $S' \times_U U_i$ is in $J(U_i)$ we see that there are coverings $\{ U_{ij} \to U_i\}$ of the site such that the image of $h_{U_{ij}} \to h_{U_i}$ is contained in $S' \times_U U_i$. By definition of the base change this means that the image of $h_{U_{ij}} \to h_U$ is contained in the subpresheaf $S' \subset h_U$. By axiom (2) for sites we see that $\{ U_{ij} \to U\}$ is a covering of $U$ and we conclude that $S' \in J(U)$ by definition of $J$.
Let $\mathcal{F}$ be a presheaf. Suppose that $\mathcal{F}$ is a sheaf in the topology $J$. We will show that $\mathcal{F}$ is a sheaf on the site as well. Let $\{ f_i : U_i \to U\}_{i\in I}$ be a covering of the site. Let $s_i \in \mathcal{F}(U_i)$ be a family of sections such that $s_i|_{U_i \times_U U_j} = s_j|_{U_i \times_U U_j}$ for all $i, j$. We have to show that there exists a unique section $s \in \mathcal{F}(U)$ restricting back to the $s_i$ on the $U_i$. Let $S \subset h_U$ be the sieve generated by the $f_i$. Note that $S \in J(U)$ by definition. Instead of constructing $s$ directly, by the sheaf condition in the topology it suffices to construct an element
$\varphi \in \mathop{\mathrm{Mor}}\nolimits _{\textit{PSh}(\mathcal{C})}(S, \mathcal{F}).$
Take $\alpha \in S(T)$ for some object $T$ of $\mathcal{C}$. This means exactly that $\alpha : T \to U$ is a morphism which factors through $f_i$ for some $i \in I$ (and maybe for more than one $i$). Pick such an index $i$ and a factorization $\alpha = f_i \circ \alpha_i$. Define $\varphi(\alpha) = \alpha_i^* s_i$. If $i'$, $\alpha = f_{i'} \circ \alpha'_{i'}$ is a second choice, then $\alpha_i^* s_i = (\alpha'_{i'})^* s_{i'}$ exactly because of our condition $s_i|_{U_i \times_U U_j} = s_j|_{U_i \times_U U_j}$ for all $i, j$. Thus $\varphi(\alpha)$ is well defined. We leave it to the reader to verify that $\varphi$, and hence the $s$ it determines, is correct in the sense that $s$ restricts back to $s_i$.
Let $\mathcal{F}$ be a presheaf. Suppose that $\mathcal{F}$ is a sheaf on the site $(\mathcal{C}, \text{Cov}(\mathcal{C}))$. We will show that $\mathcal{F}$ is a sheaf for the topology $J$ as well. Let $U$ be an object of $\mathcal{C}$. Let $S$ be a covering sieve on $U$ with respect to the topology $J$. Let
$\varphi \in \mathop{\mathrm{Mor}}\nolimits _{\textit{PSh}(\mathcal{C})}(S, \mathcal{F}).$
We have to show there is a unique element in $\mathcal{F}(U) = \mathop{\mathrm{Mor}}\nolimits _{\textit{PSh}(\mathcal{C})}(h_ U, \mathcal{F})$ which restricts back to $\varphi$. By definition there exists a covering $\{ f_ i : U_ i \to U\} _{i\in I} \in \text{Cov}(\mathcal{C})$ such that $f_ i : U_ i \to U$ belongs to $S(U_ i)$. Hence we can set $s_ i = \varphi (f_ i) \in \mathcal{F}(U_ i)$. Then it is a pleasant exercise to see that $s_ i|_{U_ i \times _ U U_ j} = s_ j|_{U_ i \times _ U U_ j}$ for all $i, j$. Thus we obtain the desired section $s$ by the sheaf condition for $\mathcal{F}$ on the site $(\mathcal{C}, \text{Cov}(\mathcal{C}))$. Details left to the reader. $\square$
| 2021-09-22 14:58:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9823316335678101, "perplexity": 53.065893815615944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00048.warc.gz"}
https://www.nature.com/articles/s41378-022-00405-y |
# Assessment of the electrical penetration of cell membranes using four-frequency impedance cytometry
## Abstract
The electrical penetration of the cell membrane is vital for determining the cell interior via impedance cytometry. Herein, we propose a method for determining the conductivity of the cell membrane through the tilting levels of impedance pulses. When electrical penetration occurs, a high-frequency current freely passes through the cell membrane; thus, the intracellular distribution can directly act on the high-frequency impedance pulses. Numerical simulation shows that an uneven intracellular component distribution can affect the tilting levels of impedance pulses, and the tilting levels start increasing when the cell membrane is electrically penetrated. Experimental evidence shows that higher detection frequencies (>7 MHz) lead to a wider distribution of the tilting levels of impedance pulses when measuring cell populations with four-frequency impedance cytometry. This finding allows us to determine that a detection frequency of 7 MHz is able to pass through the membrane of Euglena gracilis (E. gracilis) cells. Additionally, we provide a possible application of four-frequency impedance cytometry in the biomass monitoring of single E. gracilis cells. High-frequency impedance (≥7 MHz) can be applied to monitor these biomass changes, and low-frequency impedance (<7 MHz) can be applied to track the corresponding biovolume changes. Overall, this work demonstrates an easy determination method for the electrical penetration of the cell membrane, and the proposed platform is applicable for the multiparameter assessment of the cell state during cultivation.
## Introduction
The biomass assessment of single cells plays a vital role in many areas, including the analysis of the cell state1 and cell growth mechanism2, as well as environmental and energy issues3,4. To date, several techniques, including live-cell imaging5, Raman flow cytometry6, and chemical probes7, have been successfully applied for the high-throughput assessment of intracellular biomass in single cells. However, most of these optical-based approaches are time-consuming and labor intensive, and the tight requirements for maintaining and calibrating beam-focusing points limit their robustness and portability. In this work, we propose a more effective and convenient method for characterizing biomass through the magnitudes of high-frequency impedance signals.
As an alternative, impedance cytometry has been demonstrated to be applicable for single-cell characterization in a label-free and cost-effective manner8. To date, impedance cytometry has been successfully employed to analyze the morphology9, stiffness10, and states11 of single cells. The magnitude and morphology of impedance pulses have been shown to be dependent on the volume12 and shape13 of single cells, respectively. In addition, research has found that high-frequency impedance detection is applicable for characterizing membrane properties14,15. For example, the conductivity of the cell membrane increases with increasing detection frequency above 1 MHz14, and the membrane conductivity is related to the cell viability. Sui et al.16 and Zhong et al.17 have shown that a detection frequency of 5–8 MHz is sufficient to allow current to pass through the membranes of the living cells of mammals. This conclusion is drawn from the differences in membrane conductivity between inactivated and living cells. Without benchmarking against inactivated cells, there are few reports of applicable methods for directly determining whether the membrane is conductive.
When measuring intracellular biomass with impedance cytometry, it is necessary to determine the detection frequency that can penetrate the cell membrane. Our solution is to quantify the tilt level of impedance pulses as a tilt index13 and then assess the conductivity of the cell membrane through the tilt index of the cell population at different detection frequencies. Based on our previous work, the intracellular component distribution was found to affect the tilting level of high-frequency impedance pulses (6 MHz)18,19 because a high-frequency current can propagate inside single cells between nonconductive intracellular components. In contrast, a low-frequency current (500 kHz) cannot penetrate the cell membrane, and it mainly propagates around the cell18,19. This feature facilitates a novel method for determining the detection frequency of the cell interior and exterior based on the tilt index of impedance pulses. Cell interiors are more heterogeneous than their morphologies in a population. When the detection frequency is high enough to penetrate the cell membrane, the tilt index of the impedance pulses for a cell population will be more varied. To our knowledge, this is the first time that the tilting level of impedance pulses has been used to determine the detection frequency of the cell interior.
In this work, four-frequency impedance cytometry was employed to analyze the conductivity of single Euglena gracilis (E. gracilis) cells, as shown in Fig. 1a. First, impedance detection at different frequencies of single E. gracilis cells was used to determine at which detection frequency the current could pass through the cell membrane. When a high-frequency electrical field penetrates the cell membrane, the uneven intracellular distribution tilts the impedance pulses to the left or right (see Fig. 1b), which is a phenomenon that has been verified in simulations and experiments. Additionally, the proposed four-frequency impedance cytometry technique (i.e., 500 kHz, 4 MHz, 7 MHz, and 10 MHz) was applied to monitor the biomass of single E. gracilis cells from four-day cultures under various conditions based on the ability of high-frequency impedance to detect intracellular biomass. The electrical scanning of E. gracilis cells internally and externally clearly showed cellular responses to different cultivation media with organic sources or inorganic ions, in which cells exhibited significant differences in multiplication, volume, and opacity. The volume of single cells was monitored via low-frequency impedance magnitudes, and their biomass changes were tracked by high-frequency impedance magnitudes. The impedance detection system was built on a field-programmable gate array (FPGA) board (see Fig. 1c) with a homemade transimpedance amplifier (see Fig. 1d)20. We envision that the tilt index can facilitate an alternative method for determining the frequency of electrical penetration for cell membranes. In addition, the proposed impedance-based platform can be adopted to evaluate cellular states and biomass, which is critical in practical applications involving continuous cell cultures21,22.
## Results
### Tilt index
To determine the penetration frequency of single cells, we performed 2D numerical simulation via the AC/DC Module of COMSOL 5.6 Multiphysics software (COMSOL Inc., Burlington, MA, USA). Herein, E. gracilis cells were simulated as single-shell ellipses (radius: 30 μm long-axis, 10 μm short-axis), and the cell membrane (10 nm) was modeled using contact impedance approximation23. Intracellular components were simplified as 2D circles (membrane thickness of 20 nm and diameter of 1 μm) and were closely placed in the left interior of the cell18,19, as shown in Fig. 2a. Other parameters of cells18 used in the simulation are listed in Table S1 in the Supplementary information.
In impedance detection, the conductivity of the cell membrane is frequency dependent, and as shown in Fig. 2a, the current density inside the cell gradually increases with increasing detection frequency. The seamless passage of high-frequency current through the cell membrane shows the applicability of high-frequency impedance detection for characterizing intracellular components. In the simulation, the increasing current density inside the cell indicates the strengthening capability of the high-frequency current to pass through the cell membrane as the detection frequency increases.
Figure 2b shows the impedance pulses induced by the same cell model at four detection frequencies. As the detection frequency increases, the resistance of the cell model against current propagation decreases, resulting in smaller magnitudes of the impedance pulses. In addition, the impedance pulses corresponding to cells with symmetric shapes at a low detection frequency (500 kHz) are symmetric. In contrast, the right-hollow cell interior is responsible for inducing asymmetrical impedance pulses at high detection frequencies. The right half of the impedance pulses lasts longer than the left half. This phenomenon can be quantified by the tilt index, which is defined as the ratio of the time spans on either side of the impedance pulse minus one (see Fig. 2b). Specifically, the tilt index is 0.009 for symmetric impedance pulses at 500 kHz, while it is −0.201 for asymmetric impedance pulses at 10 MHz. More detailed information about the tilt index is provided in Fig. S1 in the supplementary information.
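In symbols, writing $T_{left}$ and $T_{right}$ for the durations of the left and right halves of a pulse (the notation used in the Materials and methods section), the tilt index is $$\text{tilt index} = \frac{T_{left}}{T_{right}} - 1,$$ so a perfectly symmetric pulse gives a value near zero, left-tilted pulses give positive values, and right-tilted pulses give negative values.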
Figure 2c shows the dependency of the tilt index on the detection frequency from 100 kHz to 100 MHz. All tilt indices are benchmarked against the value (zero) at a low detection frequency of 100 kHz. An increasing tilt index indicates the increasing impact of the intracellular component distribution on the tilting level of the impedance pulses. As the detection frequency increases, the interior structure of the right-side of the entire cell gradually tilts the impedance pulses to the right. At approximately 10 MHz, the tilt index reaches an extreme value, and after that, the intracellular components start being electrically penetrated (see Fig. 2a). In comparison, the width and magnitude of impedance pulses (see Fig. 2d, e) contain little information about the intracellular component distribution. Both values decrease with increasing detection frequency because of the decreasing resistance of the cell membrane, as has been previously demonstrated in various studies8,24.
### Electrical penetration
Cell interiors are more heterogeneous than their morphologies in a population. Heterogeneous internal structures cause the tilt index to become increasingly decentralized when the electrical field starts penetrating the cell membrane. The simulation results of the frequency dependence of the tilt index measured using four different models, including 10 μm beads, hollow cells, left-hollow cells and right-hollow cells, are presented in Fig. 3a–c. At detection frequencies ranging from 100 kHz to 100 MHz, the current cannot penetrate the nonconductive beads. Thus, the impedance pulses of the beads are always symmetric in shape, and the tilt indices remain zero (see Fig. 3b). In contrast, the shape of the impedance pulses for hollow cells is asymmetric once the current penetrates the cell membrane. For example, at detection frequencies above approximately 100 kHz, the tilt index starts increasing from zero, and it reaches a maximum value at approximately 10 MHz. According to this phenomenon, it is possible that the propagation of current inside the cell could cause the asymmetric shape of the impedance pulses.
Additionally, when there are components inside the cell membrane, the tilting levels of impedance pulses are more noticeable than when the cell interior is hollow (see Fig. 3c). Thus, the tilt index caused by the cell membrane can be ignored for normal cells in practical detection since there are always intracellular components, such as organelles or macromolecules, within the cell membrane. In addition, the tilt index shows a dependency on the intracellular component distribution with increasing detection frequency. There was almost no difference in the values of the tilt index induced by the right-hollow or left-hollow cell model, except for the sign. The tilt index measured for a left-hollow cell model is always positive when the detection frequency is sufficient to penetrate the membrane, since the left-hollow structure causes the impedance pulses to tilt to the left. For right-hollow cells, the tilt indices are always negative. Before intracellular components are polarized by high-frequency electrical fields, the difference in tilt index induced by the left-hollow or right-hollow cell models gradually increases. Moreover, the emergence of this difference indicates that the cell membrane has been electrically penetrated.
Figure 3d, e illustrate the tilt indices induced by E. gracilis cells and 10 μm beads, respectively. Four-frequency impedance cytometry was used to measure single cells or particles at 12 different detection frequencies. Three independent measurements were made, whereby we applied the first set of detection frequencies within 500 kHz (i.e., 100 kHz, 200 kHz, 300 kHz, 400 kHz), the second set between 500 kHz and 10 MHz (i.e., 500 kHz, 4 MHz, 7 MHz, and 10 MHz), and the third set above 10 MHz (11.5 MHz, 13 MHz, 14.5 MHz, and 16 MHz). In the case of nonconductive polystyrene beads, their tilt index varied within a stable range when frequencies lower than 13 MHz were applied. After that, a higher detection frequency resulted in a more decentralized distribution of the tilt index. This may be because the detection frequency (>13 MHz) exceeds the upper limit of our impedance detection system. For the E. gracilis cells, the tilt index starts decentralizing from 7 MHz; thus, the electric penetration of the cell membrane occurs. In comparison with the tilt index of polystyrene beads, device influence can be excluded when frequencies are lower than 13 MHz.
By comparing Fig. 3c and e, we can conclude that the increasing decentralization of the tilt index is indicative of the electrical penetration of the cell membrane. In this work, a frequency of 7 MHz is sufficient to penetrate the cell membrane for intracellular component detection. Our previous findings also support this conclusion18,19.
### Organic nutrients and biomass accumulation
After determining the electrical penetration frequency of the cell membrane, we employed low-frequency impedance metrics (i.e., 500 kHz and 4 MHz) to track the volume changes in E. gracilis cells during photomixotrophic cultivation, as well as high-frequency impedance metrics (i.e., 7 MHz and 10 MHz) to monitor biomass accumulation. We cultured E. gracilis cells in Koren-Hutner (KH) medium for four days, and impedance signals were used to determine the biomass accumulation of E. gracilis cells grown photomixotrophically. The impedance detection of E. gracilis cells is shown in Movie S1 and Fig. S2 in the supplementary information. In this work, a maximum detection frequency of 10 MHz was used, which worked well within our system and is also commonly used for cell interior analysis8. The lowest detection frequency (500 kHz) was utilized in our previous work to characterize the volume and shape of single cells18,19. The two middle frequencies, namely 4 MHz and 7 MHz, were selected based on a 3 MHz spacing.
E. gracilis cells can proliferate rapidly and accumulate paramylon in photomixotrophic cultivation by either photosynthesis or the digestion of organic carbon sources in the cultivation medium (KH medium). The E. gracilis cells cultivated in KH medium over four days are illustrated in Fig. 4a. As shown in Fig. 4b, the number of E. gracilis cells continued to increase over four days of cultivation, from approximately 241 cells/μL to 1936 cells/μL. Additionally, the biomass of E. gracilis cells increased rapidly from 2.4 mg/mL to 8.5 mg/mL. The sudden drop at Day 3 may be due to measurement error.
For the impedance characterization of single cells, shown in Fig. 4c, all dielectric properties of E. gracilis cells were calibrated using the dielectric properties of 10 μm beads. Over a four-day cultivation period, the electrical diameter of cells at 500 kHz increased from approximately 10.89 to 11.46. This is because the low-frequency impedance value depends on the cell volume: a rise in low-frequency electrical diameters indicates an increase in cell volume8,18,19,24,25. At the highest detection frequency (10 MHz), current can freely penetrate the cell membrane and propagate in the cytoplasm between intracellular components (i.e., paramylon and chloroplasts), allowing the high-frequency electrical diameter to be related to the intracellular nonconductive biomass of individual cells. Therefore, increases in both the low- and high-frequency diameters indicate that there were slight increases in the volume and biomass during the first 2 days.
In Fig. 4d, the electrical opacity of the cells (Days 1–4) is nearly identical to that of the precultures (Day 0) since the new cultivation conditions are the same as the preculture conditions. However, a rise in the high-frequency electrical diameter (i.e., 7–10 MHz) occurred on the first day, earlier than the increase in the low-frequency (i.e., 500 kHz) electrical diameter that occurred on the second day (see Fig. 4c). This may be because when the E. gracilis cells were transferred to a fresh medium, adequate organic nutrients and inorganic ions induced the generation of intracellular components prior to cell multiplication. In detail, the proliferation rate of E. gracilis cells can be accelerated by the ions Mg2+, Ca2+, Mn2+, Cu2+, Co2+, and Ni2+ in the medium26, and cell multiplication occurs slightly later than chloroplast multiplication. For E. gracilis cells, the number of chloroplasts in each cell is relatively stable, varying from 10 to 2027. When there are 60 or more chloroplasts per cell, cell multiplication usually occurs28,29. Thus, E. gracilis cells may have increasing intracellular biomass prior to their multiplication, which results in a slightly earlier increase in the high-frequency electrical diameters compared to that of the low-frequency electrical diameters.
### Inorganic ions and cell multiplication
Although some inorganic metal ions are required for E. gracilis cell growth and are stabilized during biomass synthesis26,30, the organic supplies may be insufficient in the natural environment. Thus, E. gracilis cells have to grow photoautotrophically, and most of their biomass has to be produced by photosynthesis using carbon dioxide from the air as the carbon source31,32. To analyze the effects of inorganic ions on cell growth and biomass accumulation, E. gracilis cells were cultured in a 1× PBS solution as a control. The dielectric properties, cell multiplication, and biomass accumulation of the E. gracilis cells cultured in PBS and Cramer-Myers (CM) medium are compared and shown in Fig. 5.
Without organic carbon sources in the growth medium, the E. gracilis cells cultivated in PBS and CM medium over four days are shown in Fig. 5a. Because of the limited carbon sources in the air, the resulting paramylon synthesis was restricted. Thus, when the E. gracilis cells were transferred into CM medium and PBS solution, the cells started to consume stored energy (paramylon), leading to a reduction in biomass. Additionally, the effect of inorganic ions on cell growth is shown in Fig. 5b. Despite insufficient carbon sources, the E. gracilis cells in CM medium divided more frequently than those in PBS solution. This result also supported several research conclusions regarding the promotive effects of inorganic ions on E. gracilis cell multiplication33,34,35.
Although there were more cells in the CM medium than in the PBS medium, the biomasses of the cells were almost the same in both cases. In other words, the individual cells cultured in CM medium might have less biomass than the cells cultured in PBS solution. This conclusion was further confirmed by impedance detection, as illustrated in Fig. 5c. After four days of cultivation, the electrical diameters of the E. gracilis cells in PBS solution were larger than those of the cells in CM medium at four detection frequencies. This indicated that the E. gracilis cells in PBS solution had a larger volume due to a larger low-frequency electrical diameter and had denser intracellular components due to a higher electrical opacity (see Fig. 5d) compared to the cells cultured in CM medium.
Additionally, the electrical diameters and electrical opacities of cells are also good indicators of the change in cultivation medium. When the E. gracilis cells were transferred to fresh cultivation medium, the electrical diameter and opacity of the cells, especially in PBS solution, declined rapidly on the first day of cultivation, which can be related to the changes in the osmosis and pH value of the cultivation medium. After two days of adaptation to these new environments, the electrical opacity and diameter of the E. gracilis cells returned to normal.
Considering the growth conditions of the E. gracilis cells in CM medium, KH medium and PBS solution, we concluded that some inorganic ions may contribute to cell multiplication. Especially when E. gracilis cells are grown in an environment with sufficient organic and inorganic sources, their proliferation rate and cell biomass productivity reach their maximum values. Inorganic ions can accumulate in the cells36,37, and the resultant biomass is valuable as a source of biodiesel.
## Discussion
In this work, we proposed a method employing the distribution of the tilting level of impedance pulses to determine if the detection frequency is sufficient to penetrate cell membranes. At high detection frequencies, the intracellular component distribution could tilt the impedance pulses, which has been verified in simulations and experiments. The experimental results showed that when the electrical penetration of the cell membrane occurs, the distribution of the tilt index is gradually decentralized as the detection frequency increases. Research has found that a detection frequency of 7 MHz is sufficient to penetrate the cell membrane of E. gracilis cells.
In living E. gracilis cells, there are two types of intracellular components that account for most of the biomass, namely, paramylon and chloroplasts. Paramylon, as a biodiesel source38,39, accounts for more than 50% (w/w) of the dry weight of individual E. gracilis cells40. Another source of biodiesel is the membrane of chloroplasts, which accounts for approximately 22% of the biomass. As an environmentally benign and sustainable alternative, biomass energy has sparked considerable interest within both scientific and industrial communities4,41. In Japan, for example, biodiesel made from microalga Euglena gracilis (E. gracilis) cells has been used in shuttle buses and commercial airplanes as a “next-generation renewable fuel”42. Since microalgal cells are heterogenous and their biomass productivity varies with their growth conditions, efficient techniques are needed for monitoring biomass accumulation during cultivation processes. In this work, our study showed that impedance cytometry is applicable for analyzing the cell morphology as well as the intracellular components of E. gracilis cells. Additionally, mammalian cells and other types of cells that are important in the fields of biomedicine and with respect to the environment will be analyzed in the future.
For the intracellular components of E. gracilis cells, individual paramylon molecules are encased by a biomembrane and often exhibit high degrees of crystallinity, which theoretically contributes to the high current resistance33,43. Chloroplasts are double-membrane organelles that result in a higher current resistance at the same detection frequency compared to that of a single membrane cell. For the assessment of intracellular biomass, it is necessary to determine the detection frequency at which the cell interior can be detected. In this work, we reported that 7 MHz is sufficient to penetrate the cell membrane of E. gracilis cells. However, the dielectric properties of various organelles and biomolecules are still unknown. According to the simplified numerical model, intracellular components can be polarized at 10 MHz, but in experiments, this frequency is insufficient. Consequently, the dielectric parameters of intracellular components require additional study for a more accurate simulation.
Multifrequency impedance cytometry has been widely employed in single-cell detection and analysis5,8,44. In addition, it is well known that high-frequency impedance values are related to intracellular components, as high-frequency currents can propagate through the cell membrane24. Since current propagation is not visible in cells, we lack a direct method for determining if the detection frequency is applicable for measuring cell interior components. In this work, we proposed that the distribution of intracellular components can be used to determine the electrical penetration of the cell membrane. Specifically, when the electric penetration of the membrane occurs, the decentralization of the tilt index distribution increases with increasing detection frequency. This is because cell interiors are more heterogeneous than their morphologies in a population. Heterogeneous internal structures would cause the tilt index to become increasingly decentralized.
Herein, we employed low-frequency impedance to conduct a performance analysis of the morphology and volume of single cells and high-frequency impedance to characterize intracellular biomass. The detection mechanism is supported by numerous previous works. Our previous work has shown that low-frequency detection is applicable for analyzing cell morphology, as the low-frequency current mainly propagates around the cell13,18,19. The dependence of low-frequency impedance on cell volume has also been verified8. Additionally, it has been demonstrated that high-frequency impedance can be used to analyze the amount19, distribution18, and density19,45 of intracellular components. Herein, impedance-based biomass analysis is based on the applicability of high-frequency impedance for monitoring the amount and density of intracellular components, which has also been proven by experimental results.
Last, the proposed impedance-based platform has been shown to be applicable for evaluating the effects of the culture conditions on E. gracilis cell growth (volume, opacity and number) and biomass accumulation. High-frequency impedance magnitudes (≥7 MHz) can be applied to characterize biomass accumulation, and low-frequency impedance magnitudes (≤4 MHz) allow the quantification of the volume of single cells. The changes in biomass accumulation and cultivation in different media were successfully monitored over four days. In the future, we suggest extending the application of the tilt index to mammalian cells to track changes in membrane properties regarding cell aging, carcinogenesis, or lysis.
## Materials and methods
### Sample preparation
Experiments were performed on E. gracilis NIES-48 cells provided by the Microbial Culture Collection at the National Institute for Environmental Studies (NIES, Japan). The cultures were grown in culture tubes each with a working volume of 13 mL under continuous illumination (warm white, 130–150 μmol/m2/s) at 28 °C. To study the effects of organic nutrients on the biomass and metabolization of individual cells, E. gracilis cells were grown photomixotrophically using KH medium (pH: 3.5)46. To study the effects of inorganic ions on cell growth, E. gracilis cells were cultivated photoautotrophically using CM medium (pH: 3.9)47. E. gracilis cells were cultivated using 1× phosphate-buffered saline solution (PBS, pH: 6.9) as the control group. The detailed components of the CM and KH media were described by Wang et al.35. Briefly, the CM medium does not include any organic carbon sources, whereas the KH medium contains glucose and various organic acids and amino acids as carbon sources48. Both KH and CM media contain high concentrations of inorganic ions, such as Zn2+, Mn2+, Fe3+, Cu2+, Co2+, and Ni2+, some of which can promote the biomass accumulation and multiplication of E. gracilis cells36,49.
For calibration, 10 μm polystyrene beads (Polysciences, USA) were utilized as reference particles due to their frequency-independent physical properties, which allow them to be considered as perfect insulators. Theoretically, the magnitudes of the four-frequency impedances of beads should be identical44.
All samples were transferred to 1×PBS and injected into microfluidic devices using a syringe pump (NIHON KOHDEN CFV-3200). The sample flow rate was 4.5 μL/min, resulting in a throughput of approximately 1250 samples/s for impedance detection in this work. Three replicates of each experiment were generally performed, i.e., in Fig. S3, there were three cell culture groups for each cell culture state.
### Growth detection of E. gracilis
Experiments were conducted over a 4-day period (i.e., 0–4 days) for the CM-medium, KH-medium, and 1×PBS solution, with each group containing three independent cultures of E. gracilis for robust characterization. The growth of E. gracilis cells was analyzed according to cell number, dry weight, cell volume, and opacity. Herein, the dry weight of E. gracilis cells was determined using 0.4 mL of cultures that had been dried at 100 °C for more than 4 h50. The volume of the cells was determined using low-frequency electrical diameters ($\left| Z_{LF} \right|^{1/3}$), and the volume of intracellular components was determined using high-frequency electrical diameters ($\left| Z_{HF} \right|^{1/3}$)18. The color changes in the culture tubes over four days are shown in Fig. S3 in the supplementary information, and the green color in the culture tubes becomes darker as the number of cells grows.
### Impedance microfluidic devices
The polydimethylsiloxane (PDMS) microchannel in the detection area is 40 μm wide and 35 μm deep. The channel is placed over two pairs of coplanar electrodes, each pair consisting of one source and one detection electrode (dimensions: 30 μm wide, 30 μm edge-to-edge span, and 80 μm pair span). Each electrode is coated with a 70 nm thick layer of gold (Au) over a 70 nm thick layer of chromium (Cr). The complete fabrication procedure was described in detail in our previous work13,18.
### Impedance detection and calibration
As shown in Fig. 1c, a field-programmable gate array (FPGA)-based lock-in amplifier (Digilent Eclypse Z7, USA) was used to generate a 1 V alternating current (AC) signal at four detection frequencies (i.e., 500 kHz, 4 MHz, 7 MHz, and 10 MHz) and perform the real-time processing of the impedance signals13,20 from the detection area. The current signals from two detection electrodes were converted into voltage signals with self-developed transimpedance amplifiers (I/V converters) and then compared with differential amplifiers (Diff). The resulting differential voltage was further processed on the FPGA board to obtain the corresponding impedance signals. All impedance signals were recorded at a sampling rate of 62.5 kHz using a data collection device (USB-6363 BNC, National Instruments, USA) and displayed in real time on a computer through NI DAQExpress (National Instruments, USA). The experimental steps for using this device for impedance detection are shown in Fig. S4 in the supplementary information. In Fig. 1d, the frequency response of the I/V converter shows its ability to stably convert current signals of up to 18.48 MHz. When the detection frequency was greater than 10 MHz, the amplification factor (gain) began to decline.
### Processing and characterization of impedance signals
The impedance signals were processed using custom scripts written in MATLAB (version 2021b, MathWorks, USA). The impedance (|Z|) of each bead or E. gracilis cell was determined using a single-peak Gaussian fit to extract the peak signal amplitude for each applied frequency. The mean impedance magnitudes of the 10 μm beads at four frequencies were determined automatically and then used to calibrate the electrical diameters of the E. gracilis cells. The mean electrical opacity ($|Z_{HF}|/|Z_{500kHz}|$) and diameter ($\left| Z \right|^{1/3}$) of E. gracilis cells were normalized using single linear multipliers to ensure that the mean values of both of the impedance parameters of the beads were at opacity = 1 and diameter = 10 at each frequency. Additionally, the morphology and intracellular distribution of the E. gracilis cells can be assessed at low and high frequencies, respectively, using the tilt index ($T_{Left}/T_{Right} - 1$), which compares the time spans of the left and right halves ($T_{Left}$, $T_{Right}$) of the impedance pulses18.
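The processing scripts themselves are in MATLAB and are not reproduced here; purely as an illustration, a Python sketch of the per-event metrics described above might look like the following (function and variable names are hypothetical, and the bead-based normalization step is omitted):

```python
import numpy as np

def impedance_metrics(pulse, z_lf_peak, z_hf_peak):
    """Illustrative per-event metrics from one impedance pulse.

    pulse     -- 1D array of |Z| samples for a single transit event
    z_lf_peak -- Gaussian-fitted peak magnitude at 500 kHz
    z_hf_peak -- Gaussian-fitted peak magnitude at the high frequency
    """
    diameter = z_lf_peak ** (1.0 / 3.0)     # electrical diameter, |Z|^(1/3)
    opacity = z_hf_peak / z_lf_peak         # electrical opacity, |Z_HF|/|Z_500kHz|
    peak = int(np.argmax(pulse))            # split the pulse at its maximum
    t_left = peak
    t_right = max(len(pulse) - peak, 1)     # guard against a peak at the last sample
    tilt_index = t_left / t_right - 1       # ~0 for a symmetric pulse
    return diameter, opacity, tilt_index
```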
## References
1. Zangle, T. A., Burnes, D., Mathis, C., Witte, O. N. & Teitell, M. A. Quantifying biomass changes of single CD8+ T cells during antigen specific cytotoxicity. PLOS ONE 8, e68916 (2013).
2. Rhind, N. Cell-size control. Curr. Biol. 31, R1414–R1420 (2021).
3. Petrus, L. & Noordermeer, M. A. Biomass to biofuels, a chemical perspective. Green. Chem. 8, 861–867 (2006).
4. Sindhu, R. et al. Biofuel Production From Biomass: Toward Sustainable Development. Current Developments in Biotechnol. and Bioeng: Waste Treat. Processes for Energy Generation 79–92 (2019) https://doi.org/10.1016/B978-0-444-64083-3.00005-1.
5. Mikami, H. et al. Virtual-freezing fluorescence imaging flow cytometry. Nat. Commun. 11, 1162 (2020).
6. Gala de Pablo, J., Lindley, M., Hiramatsu, K. & Goda, K. High-throughput raman flow cytometry and beyond. Acc. Chem. Res. 54, 2132–2143 (2021).
7. Ota, N. et al. Isolating single euglena gracilis cells by glass microfluidics for raman analysis of paramylon biogenesis. Anal. Chem. 91, 9631–9639 (2019).
8. Honrado, C., Bisegna, P., Swami, N. S. & Caselli, F. Single-cell microfluidic impedance cytometry: From raw signals to cell phenotypes using data analytics. Lab on a Chip (2021) https://doi.org/10.1039/d0lc00840k.
9. Zhong, J., Liang, M. & Ai, Y. Submicron-precision particle characterization in microfluidic impedance cytometry with double differential electrodes. Lab on a Chip (2021) https://doi.org/10.1039/D1LC00481F.
10. Kim, J., Li, B., Scheideler, O. J., Kim, Y. & Sohn, L. L. Visco-node-pore sensing: a microfluidic rheology platform to characterize viscoelastic properties of epithelial cells. iScience 13, 214–228 (2019).
11. Haandbæk, N., Bürgel, S. C., Rudolf, F., Heer, F. & Hierlemann, A. Characterization of single yeast cell phenotypes using microfluidic impedance cytometry and optical imaging. ACS Sens. 1, 1020–1027 (2016).
12. Petchakup, C., Li, H. & Hou, H. W. Advances in single cell impedance cytometry for biomedical applications. Micromachines (Basel) 8, (2017).
13. Tang, T. et al. Microscopic impedance cytometry for quantifying single cell shape. Biosens. and Bioelectron. 113521 (2021) https://doi.org/10.1016/j.bios.2021.113521.
14. Xu, Y. et al. A review of impedance measurements of whole cells. Biosens. Bioelectron. 77, 824–836 (2016).
15. Zhao, Y. et al. Development of microfluidic impedance cytometry enabling the quantification of specific membrane capacitance and cytoplasm conductivity from 100,000 single cells. Biosens. Bioelectron. 111, 138–143 (2018).
16. Sui, J., Foflonker, F., Bhattacharya, D. & Javanmard, M. Electrical impedance as an indicator of microalgal cell health. Sci. Rep. 10, 1–9 (2020).
17. Zhong, J., Li, P., Liang, M. & Ai, Y. Label-free cell viability assay and enrichment of cryopreserved cells using microfluidic cytometry and on-demand sorting. Adv. Mater. Technol. 2100906 (2021) https://doi.org/10.1002/ADMT.202100906.
18. Tang, T. et al. Dual-frequency impedance assays for intracellular components in microalgal cells. Lab on a Chip (2022) https://doi.org/10.1039/D1LC00721A.
19. Tang, T. et al. Impedance-based tracking of the loss of intracellular components in microalgae cells. Sens. Actuators B: Chem. 358, 131514 (2022).
20. Tang, T. et al. FPGA-assisted nonparallel impedance cytometry as location sensor of single particle. in 2021 21st Int Conf. on Solid-State Sensors, Actuators and Microsyst. (Transducers) 727–730 (IEEE, 2021). https://doi.org/10.1109/Transducers50396.2021.9495657.
21. Fernandez-de-Cossio-Diaz, J. & Mulet, R. Maximum entropy and population heterogeneity in continuous cell cultures. PLOS Computational Biol. 15, e1006823 (2019).
22. Thuronyi, B. W. et al. Continuous evolution of base editors with expanded target compatibility and improved activity. Nat. Biotechnol. 2019 37:9 37, 1070–1079 (2019).
23. Gong, L. et al. Direct and label‐free cell status monitoring of spheroids and microcarriers using microfluidic impedance cytometry. Small 17, 2007500 (2021).
24. Sun, T. & Morgan, H. Single-cell microfluidic impedance cytometry: a review. Microfluidics Nanofluidics 8, 423–443 (2010).
25. Han, X., van Berkel, C., Gwyer, J., Capretto, L. & Morgan, H. Microfluidic lysis of human blood for leukocyte analysis using single cell impedance cytometry. Anal. Chem. 84, (2012).
26. Ochi, H. Requirement and accumulation of inorganic ions in Euglena gracilis Z. J. Jpn. Soc. Nutr. Food Sci. 43, 54–57 (1990).
27. Nigon, V. & Heizmann, P. Morphology, Biochemistry, and Genetics of Plastid Development in Euglena gracilis. in 211–290 (1978). https://doi.org/10.1016/S0074-7696(08)62243-3.
28. Cook, J. R. Unbalanced growth and replication of chloroplast populations in Euglena gracilis. J. Gen. Microbiol. 75, 51–60 (1973).
29. Davis, E. & Epstein, H. Some factors controlling step-wise variation of organelle number in Euglena gracilis. Exp. Cell Res. 65, 273–280 (1971).
30. Khatiwada, B. et al. Probing the role of the chloroplasts in heavy metal tolerance and accumulation in Euglena gracilis. Microorg. 8, 115 (2020).
31. Yan, R., Zhu, D., Zhang, Z., Zeng, Q. & Chu, J. Carbon metabolism and energy conversion of Synechococcus sp. PCC 7942 under mixotrophic conditions: comparison with photoautotrophic condition. J. Appl. Phycol. 4, 657–668 (2012).
32. Beardall, J. & Raven, J. A. Limits to phototrophic growth in dense culture: CO2 supply and light. Algae for Biofuels and Energy. 91–97 (2013) https://doi.org/10.1007/978-94-007-5479-9_5.
33. Gissibl, A., Sun, A., Care, A., Nevalainen, H. & Sunna, A. Bioproducts from Euglena gracilis: synthesis and applications. Front. in Bioengineering and Biotechnol. 7, 2296–4185 (2019).
34. Einicker-Lamas, M. et al. Euglena gracilis as a model for the study of Cu2+ and Zn2+ toxicity and accumulation in eukaryotic cells. Environ. Pollut. 120, 779–786 (2002).
35. Wang, Y., Seppänen-Laakso, T., Rischer, H. & Wiebe, M. G. Euglena gracilis growth and cell composition under different temperature, light and trophic conditions. PLoS ONE 13, 1–17 (2018).
36. Matsumoto, T., Inui, H., Miyatake, K., Nakano, Y. & Murakami, K. Comparison of nutrients in Euglena with those in other representative food sources. Eco-Eng. 21, 81–86 (2009).
37. Guedes, A. C. & Malcata, F. X. Nutritional value and uses of microalgae in aquaculture. Aquaculture 10, 59–78 (2012).
38. Matos, Â. P. The impact of microalgae in food science and technology. J. Am. Oil Chemists’ Soc. 94, 1333–1350 (2017).
39. Furuhashi, T. et al. Wax ester and lipophilic compound profiling of Euglena gracilis by gas chromatography-mass spectrometry: toward understanding of wax ester fermentation under hypoxia. Metabolomics 11, 175–183 (2014).
40. Sun, A., Hasan, M. T., Hobba, G., Nevalainen, H. & Te’o, J. Comparative assessment of the Euglena gracilis var. saccharophila variant strain as a producer of the β‐1,3‐glucan paramylon under varying light conditions. J. Phycol. 54, 529–538 (2018).
41. Tilman, D., Hill, J. & Lehman, C. Carbon-negative biofuels from low-input high-diversity grassland biomass. Science (1979) 314, 1598–1600 (2006).
42. Kottuparambil, S., Thankamony, R. L. & Agusti, S. Euglena as a potential natural source of value-added metabolites. A review. Algal Res. 37, 154–159 (2019).
43. Barsanti, L., Passarelli, V., Evangelista, V., Frassanito, A. M. & Gualtieri, P. Chemistry, physico-chemistry and applications linked to biological activities of β-glucans. Nat. Prod. Rep. 28, 457 (2011).
44. Spencer, D. & Morgan, H. High-speed single-cell dielectric spectroscopy. ACS Sens. 5, 423–430 (2020).
45. Salahi, A., Honrado, C., Rane, A., Caselli, F. & Swami, N. S. Modified red blood cells as multimodal standards for benchmarking single-cell cytometry and separation based on electrical physiology. Anal. Chem. 94, 2865–2872 (2022).
46. Koren, L. E. High-yield media for photosynthesizing Euglena gracilis Z. J. Portozool 14, 17 (1967).
47. Cramer, M. & Myers, J. Growth and photosynthetic characteristics of euglena gracilis. Arch. f.ür. Mikrobiologie 17, 384–402 (1952).
48. Muramatsu, S. et al. Isolation and characterization of a motility-defective mutant of Euglena gracilis. PeerJ 8, e10002 (2020).
49. Cousins, R. J. A role of zinc in the regulation of gene expression. Proc. Nutr. Soc. 57, 307–311 (1998).
50. Kamalanathan, M., Chaisutyakorn, P., Gleadow, R. & Beardall, J. A comparison of photoautotrophic, heterotrophic, and mixotrophic growth for biomass production by the green alga Scenedesmus sp. (Chlorophyceae). Phycologia 57, 309–317 (2019). https://doi.org/10.2216/17-82.1
## Acknowledgements
This work is supported by JSPS Core-to-Core program; JSPS Grant-in-Aid for Scientific Research (No. 20K15151); Amada Foundation, Japan; Sasakawa Scientific Research Grant, Japan; NSG Foundation, Japan; White Rock Foundation, Japan; Australian Research Council (ARC) Discovery Project (DP200102269), Australia; and the Nara Institute of Science and Technology Support Foundation, Japan; JST Support for Pioneering Research Initiated by the Next Generation program and Nara Institute of Science and Technology Touch stone program.
## Author information
### Contributions
Conceptualization: T.T., Y.Y.; Methodology: T.T., Y.Y.; Simulation: T.T., X.L.; Experiments: T.T.; Data analysis: T.T., X.L.; Device fabrication: T.T., Y.Y., Y.T., Y.Y.; Resource: T.T., R.K., T.Z., K.S., Y.H. and Y.Y.; Writing original draft: T.T.; Writing review & editing: T.T., Y.Y., M.L., Y.Y., Y.T., Y.H.; Funding acquisition: T.T., Y.Y., M.L.
### Corresponding author
Correspondence to Yaxiaer Yalikun.
## Ethics declarations
### Conflict of interest
The authors declare no competing interests.
Tang, T., Liu, X., Yuan, Y. et al. Assessment of the electrical penetration of cell membranes using four-frequency impedance cytometry. Microsyst Nanoeng 8, 68 (2022). https://doi.org/10.1038/s41378-022-00405-y | 2022-08-15 23:19:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4434836208820343, "perplexity": 5465.120162744533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00771.warc.gz"} |
http://nerderati.com/2014/08/19/bartering-for-beers-with-approximate-subset-sums/ | My favourite days are the ones where I get to solve a seemingly difficult everyday problem with mathematics. A few weeks ago, my friend Andrei came to me via IRC with a question, which I have heavily paraphrased:
I've got a list of the beers in my cellar, and each beer has a price associated with it. I'd like to figure out how to generate a list of fair trades between myself and another person who has their own list of beers and prices, for any combination of bottles between the two of us.
Or, more generally: given two lists of items, each item with a fixed value, how may we calculate the possible combinations of “equal” groups of one or more items for each list when comparing to the other list in question?1
## A Computational Complexity Problem
As it turns out, the above question may be neatly encapsulated by the subset sum problem. And, as with most interesting problems in computability, exact solutions to SUBSET-SUM are NP-Complete.
We are in luck, however. SUBSET-SUM may be solved approximately via a deterministic, fully polynomial-time approximation scheme, or FPTAS2. For NP-Complete problems that don't require exact solutions, that's about as good as it gets.
First, let's describe how we can arrive at the approximate subset sum algorithm3, and then we'll look at some python code that will generate a set of results for us.
## The Decision Problem
The decision problem for subset sum is to determine if there exists any subset of a set that adds up exactly to a given value. The naïve, brute-force method of arriving at a solution would be to generate the powerset of our given item list, and then scan through all generated subsets and their respective sums until we find a match (if any). Generating the powerset of a set of N items has a time complexity of $O(2^N)$, which indicates that this naïve method will quickly become unreasonable for modern processors once we go above ~35 input items.
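For concreteness, here is a minimal Python sketch of that naïve decision procedure (an illustration, not the post's original listing):

```python
from itertools import chain, combinations

def naive_subset_sum(data, target):
    """Brute-force decision problem: does any subset of data sum to target?

    Enumerates all 2^N subsets, so it is only practical for small N.
    """
    subsets = chain.from_iterable(
        combinations(data, r) for r in range(len(data) + 1))
    return any(sum(subset) == target for subset in subsets)
```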
## The Optimization Problem
A somewhat obvious improvement to the above Python code is that, when generating the subsets for the powerset, we calculate the partial sums of each as they are created. If the partial sum of a subset goes above our target total at any point, we may discard that subset.
This does not change the worst-case time complexity for the given decision problem, but it does provide us with a framework on which we can build the associated optimization problem4 of finding a subset sum that is within a given tolerance of our target total.
### Pseudocode
The next observation is that we don't really care about the how the partial sums are constructed (we will later, but for the purposes of exposition we'll forget about that for the moment). Thus, we don't really need to keep track of all the elements within the subset, only the relevant partial sum for each. This leads us to the EXACT-SUBSET-SUM algorithm, which simply computes the list of sums of all subsets that do not exceed a specified target total for a given input list:
EXACT-SUBSET-SUM(S, t)
    L_0 = [0]
    for i = 1 to LENGTH(S)
        L_i = MERGE-LISTS(L_{i-1}, L_{i-1} + x_i)
    for j = 1 to LENGTH(L)
        if L_j > t then DELETE L_j
    return MAX(L)
See 5 for an explanation of MERGE-LISTS.
This does not improve the worst-case time complexity when compared to the brute-force generation of the entire powerset, but it does set us up more readily for the approximation algorithm, where we introduce a TRIM procedure that scans the intermediate lists for values that are within δ of each other and combines them into a single value. With a little bit of work6, this can be shown to be an FPTAS:
APPROXIMATE-SUBSET-SUM(S, t, ε)
    L_0 = [0]
    n = LENGTH(S)
    for i = 1 to n
        L_i = MERGE-LISTS(L_{i-1}, L_{i-1} + x_i)
        L_i = TRIM(L_i, ε/(2n))
    for j = 1 to LENGTH(L)
        if L_j > t then DELETE L_j
    return MAX(L)
## The Implementation, in Python
Since the impetus for this discussion was finding a computable solution to a real-world problem, we would be remiss to not include something a bit more tangible than pseudocode. We've also made some slight modifications to the classical solution, in that internally we use a list of hashes instead of simply a list of integers that correspond to the partial sums.
These hashes allow us to track both the current partial sum for that list entry in addition to tracking which list elements were used to construct that particular partial sum. In the context of computing possible groupings of beers to trade, knowing which beers belong in a group is just as essential as their combined total value.
The hashes (or dictionaries, as they're called in Python) are composed of two keys, where value tracks the current partial sum total and partials keeps track of which values were used to arrive at said value:
Note: This is by no means the only way of tracking which list item elements are used to construct a partial sum, nor is it the most efficient (e.g. we trade memory for a slightly smaller constant in the time complexity due to the caching of the partial sum total instead of computing the total on each iteration)
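For example, a partial sum built from two (hypothetical) item values of 650 and 1200 would be represented as:

```python
{'value': 1850, 'partials': [650, 1200]}
```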
First, we implement the TRIM procedure, which merges list elements when they are within a given δ of each other. Note that for the below implementation to work, we must ensure that the input data list is sorted:
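The original listing lives in the Gist linked below; a minimal sketch consistent with the description (sorted input, relative tolerance delta) could be:

```python
def trim(data, delta):
    """Drop partial sums whose values are within a relative delta of the
    last value kept (the kept value represents them); assumes data is
    sorted by ascending 'value'."""
    output = [data[0]]
    last = data[0]['value']
    for entry in data[1:]:
        if entry['value'] > last * (1 + delta):
            # entry is not within delta of the last kept value: keep it
            output.append(entry)
            last = entry['value']
    return output
```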
Next, we implement MERGE-LISTS to combine two lists and sort the result using the most excellent itertools library that leverages generators:
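A sketch of that merge step (the post's version leans on generators; here sorted() simply consumes the chained iterator):

```python
from itertools import chain

def merge_lists(left, right):
    """Combine two lists of partial-sum dictionaries, sorted by total."""
    return sorted(chain(left, right), key=lambda entry: entry['value'])
```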
And finally, we combine TRIM and MERGE-LISTS as described in our APPROXIMATE-SUBSET-SUM algorithm pseudocode, ensuring that we properly track the computed partial sums and their corresponding elements.
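Combining the pieces, a sketch of the full routine (again illustrative rather than the original listing):

```python
def approximate_subset_sum(data, target, epsilon):
    """FPTAS for subset sum that also tracks which elements built each sum."""
    partial_sums = [{'value': 0, 'partials': [0]}]
    n = len(data)
    for value in data:
        # Extend every existing partial sum by the current element ...
        augmented = [{'value': entry['value'] + value,
                      'partials': entry['partials'] + [value]}
                     for entry in partial_sums]
        # ... then merge, trim near-duplicates, and drop sums over target.
        partial_sums = merge_lists(partial_sums, augmented)
        partial_sums = trim(partial_sums, delta=float(epsilon) / (2 * n))
        partial_sums = [e for e in partial_sums if e['value'] <= target]
    return max(partial_sums, key=lambda entry: entry['value'])
```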
The above must be invoked with a list of integers for data, a target total, and an approximation parameter epsilon which must be between 0 and 1 (e.g. epsilon=0.2 indicates that the target total must be within 20% of the optimal answer).
A Gist of the code has been provided for convenience.
## A Test Run
Let's test this out with a bit of real-world data. Assume that I have a cellar consisting of the following beers and their value7:
• Bottle A: $6.50
• Bottle B: $12.00
• Bottle C: $13.50
• Bottle D: $4.50
• Bottle E: $28.75
• Bottle F: $16.25
• Bottle G: $15.00
• Bottle H: $18.75
I've decided that I'd like to make a trade with my friend Andrei for some of his beers, and we've both agreed a priori that we'd like to keep the trade at less than or equal to $34.50 in value. Since SUBSET-SUM and APPROXIMATE-SUBSET-SUM are integer algorithms (think for a moment as to why this is), we multiply all dollar amounts by 100 and put them in a list. Additionally, Andrei and I have determined that a 20% delta on the target total is acceptable (we are friends, after all):
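Using the sketch above (values converted to cents), the run might look something like:

```python
data = [650, 1200, 1350, 450, 2875, 1625, 1500, 1875]  # bottles A through H
result = approximate_subset_sum(data, target=3450, epsilon=0.2)
print(result)
# e.g. {'value': 3350, 'partials': [0, 650, 1200, 1500]}  -> bottles A, B, G
```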
We can discard the trivial 0 item that is an artifact of the above implementation (which could easily be removed in the approximate_subset_sum function itself), and our solution of bottles [A, B, G]8 is well within 20% of the target total. Now all Andrei needs to do is run the same script with the same target but using his own inventory as input data, and we have ourselves a trade.
Or he could just do what normal people do, and pick out a few bottles that he thinks I would enjoy. But where's the fun in that?
1. Incidentally, this kind of question is one that people have been asking themselves for much of recorded human history in the context of bartering: I've got things that you want, you've got things that I want. Let's figure out a fair trade of items based on their perceived value.
2. The difference between a FPTAS and a PTAS is subtle, but an important one: A PTAS is an approximation whose time complexity is polynomial in the input size. An FPTAS guarantees that the time complexity is polynomial in $$\frac 1\epsilon$$ in addition to being polynomial in the input size. Practically, this means that the time complexity for a PTAS may increase dramatically as the approximation parameter $$\epsilon$$ decreases, while an FPTAS will not.
3. For a more in-depth discussion on the approximation algorithm for subset sum optimization problems, please refer to Cormen et. al, 3rd Edition, p.1128-1134
4. Informally, a decision problem is one where we only wish to determine if a solution exists (a yes/no question). An optimization problem is one where we wish to find the best solution.
5. MERGE-LISTS returns sorted merge of both input lists with duplicates removed. A Python code example is provided later on in the post.
6. Cormen et. al., p.1132-1133
7. Note that the "value" of the items might have nothing to do with their actual cost, as is often the case with beer, wine and other aged spirits. As long as both parties agree on the abstract value assigned to all items, then all is well.
8. One small enhancement to our Python implementation would be to allow the tracking of additional item metadata other than the value, for the sake of convenience. | 2017-04-25 22:08:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5839609503746033, "perplexity": 581.984436507325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00506-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://authorea.com/doi/full/10.22541/au.158316291.17581913 | Global dynamics of an nonautonomous SEIRS epidemic model with vaccination and nonlinear incidence
• Long Zhang (Xinjiang University)
• Xiaolin Fan (Xinjiang Institute of Engineering)
• Zhidong Teng (Xinjiang University)
## Abstract
In this paper, a class of nonautonomous SEIRS epidemic models with vaccination and nonlinear incidence is investigated. Under some quite weak assumptions, a couple of new threshold values in integral form, i.e., $R_1$, $R^*_1$, $R_2$ and $R^*_2$, for the extinction and permanence of the disease in the model are established. As special cases of our model, the autonomous, periodic and almost periodic circumstances are discussed respectively. Nearly necessary and sufficient threshold criteria for the extinction and permanence of the disease in the above cases are obtained as well. Numerical examples and simulations are presented to illustrate the analytic results.
#### Peer review status: ACCEPTED
25 Feb 2020: Submitted to Mathematical Methods in the Applied Sciences
01 Mar 2020: Submission Checks Completed
01 Mar 2020: Assigned to Editor
01 Mar 2020: Reviewer(s) Assigned
07 Feb 2021: Review(s) Completed, Editorial Evaluation Pending
08 Feb 2021: Editorial Decision: Revise Minor | 2021-03-07 05:20:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4205327332019806, "perplexity": 7350.779237412133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376144.64/warc/CC-MAIN-20210307044328-20210307074328-00179.warc.gz"} |
http://math.stackexchange.com/questions/215472/how-to-prove-abk-leq-2k-1-akbk?answertab=active | # How to prove $|a+b|^k \leq 2^{k-1} (|a|^k+|b|^k)$?
Assume that you have any two real numbers $a$ and $b$, and $1\leq k <\infty$, $k \in \mathbb{R}$.
How would you prove the inequality $|a+b|^k \leq 2^{k-1} (|a|^k+|b|^k)$?
What about $k$? Is it a positive integer? A positive real? – Patrick Da Silva Oct 17 '12 at 7:16
Edited, $1 \leq k < \infty$. – Rojas Azules Oct 17 '12 at 7:17
Sorry, $k \in \mathbb{R}$. – Rojas Azules Oct 17 '12 at 7:20
Method $1$:
The function $f(x) = x^k$ on $x \geq 0$ is convex for $k \geq 1$. Jensen's inequality with equal weights then gives $$\left(\dfrac{\vert a \vert + \vert b \vert}{2}\right)^k \leq \dfrac{\vert a \vert^k + \vert b \vert^k}{2},$$ and combining this with the triangle inequality $\vert a+b \vert \leq \vert a \vert + \vert b \vert$ yields $$\vert a+b \vert^k \leq \left(\vert a \vert + \vert b \vert\right)^k \leq 2^{k-1}\left(\vert a \vert^k + \vert b \vert^k\right).$$
Method $2$:
Assume WLOG that $\lvert a \rvert \geq \lvert b \rvert$. If $a = 0$, then $b = 0$ and the inequality is trivial, so take $a \neq 0$ and let $t = \dfrac{b}a$, so that $0 \leq \vert t \vert \leq 1$. Dividing both sides by $\vert a \vert^k$, we want to prove that $$\vert 1 + t \vert^k \leq 2^{k-1} \left(1+\vert t \vert^k \right),$$ i.e., $$\left \vert \dfrac{1+t}2 \right \vert^k \leq \dfrac{1+\left \vert t \right \vert^k}2.$$ By the triangle inequality, $$\left \vert \dfrac{1+t}2 \right \vert^k \leq \left(\dfrac{1+ \vert t \vert}{2} \right)^k,$$ so, setting $y = \vert t \vert \in [0,1]$, it suffices to prove that $$\left(\dfrac{1+ y}{2} \right)^k \leq \dfrac{1+y^k}2.$$ Let $$f(y) = \dfrac{1+y^k}2 - \left(\dfrac{1+ y}{2} \right)^k,$$ so that $$f'(y) = \dfrac{ky^{k-1}}2 - \dfrac{k}{2^k}(1+y)^{k-1} = \dfrac{k}2 \left(y^{k-1} - \left(\dfrac{1+y}2 \right)^{k-1} \right).$$ For $y \in [0,1]$, $y \leq \dfrac{1+y}2$, and hence $y^{k-1} \leq \left(\dfrac{1+y}2 \right)^{k-1}$. This means that $f'(y) \leq 0$, so $f$ is non-increasing on $[0,1]$. Hence $$f(y) \geq f(1) = \dfrac{1+1}2 - 1 = 0,$$ which implies $$\left(\dfrac{1+ y}{2} \right)^k \leq \dfrac{1+y^k}2.$$ | 2014-10-23 08:20:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655604958534241, "perplexity": 244.23600599419564}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507452681.5/warc/CC-MAIN-20141017005732-00241-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://tutorme.com/tutors/11342/interview/ | Kailey C.
College Student, Five Years Tutoring Experience
Chemistry
Question:
Consider a sample of calcium carbonate $$(CaCO_3)$$ in the form of a cube measuring 2.005 inches on each side. If the sample has a density of $$2.71 g/cm^3$$, how many oxygen atoms does it contain?
Kailey C.
We can begin by finding the mass of the cube. Since the density is given in grams per cubic centimeter, we must find the volume in those units. The volume of the cube in cubic inches is $$2.005^3 \, in^3 = 8.06015 \, in^3$$. We know that $$1 \, in = 2.54 \, cm$$; therefore, $$1 \, in^3 = 2.54^3 \, cm^3 = 16.387 \, cm^3$$. We multiply the volume in cubic inches by this factor to reach the volume in $$cm^3$$ and can then multiply by the density to find the mass: $$8.06015 \, in^3 \cdot \frac{16.387 \, cm^3}{1 \, in^3}\cdot \frac{2.71 \, g}{1 \, cm^3} = 357.943 \, g$$. Next, we must find the molar mass of $$CaCO_3$$. We add the atomic mass of each atom to find $$Ca \, (40.078) + C \, (12.011) + 3\,O \, (3 \cdot 15.999) = 100.086 \, g/mol$$. Now we can divide the mass of the cube by the molar mass of $$CaCO_3$$ to find the number of moles present, and multiply by Avogadro's number to find the number of molecules of $$CaCO_3$$: $$357.943 \, g \cdot \frac{1\,mol\,CaCO_3}{100.086\, g\,CaCO_3} \cdot \frac{6.022 \cdot 10^{23}\,molecules}{1\,mol}=2.15\cdot10^{24}\,molecules$$. Finally, we know that there are three atoms of oxygen in every molecule of $$CaCO_3$$. So we must multiply the number of molecules by the number of oxygen atoms per molecule, and we find our final answer to be $$2.15 \cdot 10^{24}\,molecules\,CaCO_3 \cdot \frac{3\,oxygen\,atoms}{1\,molecule\,CaCO_3}=6.46\cdot10^{24}\,atoms\,of\,oxygen$$.
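As a quick numeric check (my addition, not part of the original answer), the whole chain of conversions can be reproduced in a few lines of Python:

```python
# Quick numeric check of the chain of conversions in the answer above.
side_in = 2.005                             # cube edge length in inches
volume_cm3 = side_in**3 * 2.54**3           # 1 in = 2.54 cm, so 1 in^3 = 16.387 cm^3
mass_g = volume_cm3 * 2.71                  # density of CaCO3 in g/cm^3
molar_mass = 40.078 + 12.011 + 3 * 15.999   # Ca + C + 3 O, in g/mol
molecules = mass_g / molar_mass * 6.022e23  # moles times Avogadro's number
oxygen_atoms = 3 * molecules                # 3 oxygen atoms per CaCO3 molecule
print(f"{oxygen_atoms:.2e}")                # ~6.46e+24
```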
Biology
Question:
How does lysozyme protect your eyes from infection? Precisely what does it do? Which bacteria are more likely to cause an eye infection, Gram-positive or Gram-negative, and why?
Kailey C.
Lysozyme is found in tears; it protects your eyes from infection by breaking the $$\beta$$-$$1,4$$ glycosidic bonds that lend structure to the peptidoglycan in bacterial cell walls. Gram-positive bacteria rely heavily on a thick, exposed wall of peptidoglycan, so lysozyme disrupts them effectively; Gram-negative bacteria also have a protective outer membrane that shields their peptidoglycan. This makes Gram-negative bacteria better at causing eye infections.
Calculus
Question:
Find the equation of the tangent plane to $$z=2e^{x}+3\sin(y)$$ at the point $$(0, 0, 2)$$.
Kailey C.
We know that $$z-z_{0}=f_{x}(x_{0}, y_{0})(x-x_{0}) + f_{y}(x_{0}, y_{0})(y-y_{0})$$. Differentiating, we find that $$f_{x}=2e^{x}$$ and $$f_{y}=3\cos(y)$$. At the point $$(0, 0, 2)$$, $$f_{x}(0,0)=2$$ and $$f_{y}(0,0)=3$$. By plugging into the formula above, we find that $$z-2=2(x-0)+3(y-0)$$. With some simplification, this becomes $$z=2x+3y+2$$.
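To double-check the partial derivatives and the resulting plane, here is a short SymPy snippet (my addition, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 2*sp.exp(x) + 3*sp.sin(y)

# Partial derivatives evaluated at (0, 0).
fx = sp.diff(z, x).subs({x: 0, y: 0})   # 2
fy = sp.diff(z, y).subs({x: 0, y: 0})   # 3

# Tangent plane: z0 + fx*(x - 0) + fy*(y - 0)
plane = z.subs({x: 0, y: 0}) + fx*x + fy*y
print(plane)   # 2*x + 3*y + 2
```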
| 2018-12-13 00:11:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6983767151832581, "perplexity": 429.3311812779256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824180.12/warc/CC-MAIN-20181212225044-20181213010544-00160.warc.gz"} |