https://unapologetic.wordpress.com/2011/09/16/stokes-theorem-on-manifolds/
# The Unapologetic Mathematician
## Stokes’ Theorem on Manifolds
Now we come back to Stokes’ theorem, but in the context of manifolds with boundary.
If $M$ is such a manifold of dimension $n$, and if $\omega$ is a compactly-supported $n$-form, then as usual we can use a partition of unity to break up the form into pieces, each of which is supported within the image of an orientation-preserving singular $n$-cube. For each singular cube $c$, either the image $c([0,1]^n)$ is contained totally within the interior of $M$, or it runs up against the boundary. In the latter case, without loss of generality, we can assume that $c([0,1]^n)\cap\partial M$ is exactly the face $c_{n,0}([0,1]^{n-1})$ of $c$ where the $n$th coordinate is zero.
In the first case, our work is easy:
$\displaystyle\int\limits_Md\omega=\int\limits_cd\omega=\int\limits_{\partial c}\omega=\int\limits_{\partial M}\omega$
since $\omega$ is zero everywhere along the image of $\partial c$, and along $\partial M$.
In the other case, the vector fields $\frac{\partial}{\partial u^i}$ — in order — give positively-oriented bases of the tangent spaces of the standard $n$-cube. As $c$ is orientation-preserving, the ordered collection $\left(c_*\frac{\partial}{\partial u^1},\dots,c_*\frac{\partial}{\partial u^n}\right)$ gives positively-oriented bases of the tangent spaces of the image of $c$. The basis $\left(c_*\left(-\frac{\partial}{\partial u^n}\right),c_*\frac{\partial}{\partial u^1},\dots,c_*\frac{\partial}{\partial u^{n-1}}\right)$ is positively-oriented if and only if $n$ is even, since we have to pull the $n$th vector past $n-1$ others, picking up a negative sign for each one, and one more negative sign for flipping the vector itself. But for a point $(a,0)$ with $a\in[0,1]^{n-1}$, we see that
$\displaystyle c_{*(a,0)}\left(\frac{\partial}{\partial u^i}\right)=(c_{n,0})_{*a}\left(\frac{\partial}{\partial u^i}\right)$
for all $1\leq i\leq n-1$. That is, these image vectors are all within the tangent space of the boundary, and in this order. And since $c_*\left(-\frac{\partial}{\partial u^n}\right)$ is outward-pointing, this means that $c_{n,0}:[0,1]^{n-1}\to\partial M$ is orientation-preserving if and only if $n$ is even.
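For instance, when $n=2$ the sign claim is easy to check directly: the basis $\left(-\frac{\partial}{\partial u^2},\frac{\partial}{\partial u^1}\right)$ corresponds to the matrix $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, whose determinant is $1>0$, so the basis is positively-oriented, matching the fact that $n=2$ is even.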
Now we can calculate
$\displaystyle\begin{aligned}\int\limits_Md\omega&=\int\limits_cd\omega\\&=\int\limits_{\partial c}\omega\\&=\int\limits_{(-1)^nc_{n,0}}\omega\\&=(-1)^n\int\limits_{c_{n,0}}\omega\\&=(-1)^n(-1)^n\int\limits_{\partial M}\omega\\&=\int\limits_{\partial M}\omega\end{aligned}$
where we use the fact that integrals over orientation-reversing singular cubes pick up negative signs, along with the sign that comes attached to the $(n,0)$ face of a singular $n$-cube, to cancel each other out.
So in general we find
$\displaystyle\begin{aligned}\int\limits_{\partial M}\omega&=\sum\limits_{\phi\in\Phi}\int\limits_{\partial M}\phi\omega\\&=\sum\limits_{\phi\in\Phi}\int\limits_Md(\phi\omega)\\&=\sum\limits_{\phi\in\Phi}\int\limits_M\left(d\phi\wedge\omega+\phi d\omega\right)\\&=\sum\limits_{\phi\in\Phi}\int\limits_Md\phi\wedge\omega+\int\limits_Md\omega\end{aligned}$
The last sum is finite, since on the support of $\omega$ all but finitely many of the $\phi$ are constantly zero, meaning that their differentials are zero as well. Since the sum is (locally) finite, we have no problem pulling it all the way inside:
$\displaystyle\sum\limits_{\phi\in\Phi}\int\limits_Md\phi\wedge\omega=\int\limits_Md\left(\sum\limits_{\phi\in\Phi}\phi\right)\wedge\omega=\int\limits_Md\left(1\right)\wedge\omega=0$
so the sum cancels off, leaving just the integral, as we’d expect. That is, under these circumstances,
$\displaystyle\int\limits_Md\omega=\int\limits_{\partial M}\omega$
which is Stokes’ theorem on manifolds.
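As a sanity check, let $M$ be the closed unit disk in the plane and $\omega=\frac{1}{2}(x\,dy-y\,dx)$. Then $d\omega=dx\wedge dy$, so $\int_Md\omega$ is just the area of the disk, which is $\pi$. Parametrizing the boundary circle by $x=\cos(t)$, $y=\sin(t)$, we find $\omega=\frac{1}{2}\left(\cos^2(t)+\sin^2(t)\right)dt=\frac{1}{2}dt$, and so $\int_{\partial M}\omega=\int_0^{2\pi}\frac{1}{2}dt=\pi$ as well, just as the theorem asserts.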
September 16, 2011 - Posted by | Differential Topology, Topology
1. […] Stokes’ Theorem on Manifolds […]
Pingback by The Fundamental Theorem of Line Integrals « The Unapologetic Mathematician | October 24, 2011
2. […] Stokes’ theorem tells us […]
Pingback by The Divergence Theorem « The Unapologetic Mathematician | November 22, 2011
3. […] last we come to the version of Stokes’ theorem that people learn with that name in calculus courses. Ironically, unlike the fundamental theorem […]
Pingback by The Classical Stokes Theorem « The Unapologetic Mathematician | November 23, 2011
4. […] in the de Rham cohomology. But we know that it cannot also be exact, for if for some -form then Stokes’ theorem would tell us […]
Pingback by Compact Oriented Manifolds without Boundary have Nontrivial Homology « The Unapologetic Mathematician | November 24, 2011
5. […] . The support of both and is contained in some large -dimensional parallelepiped , so we can use Stokes’ theorem to […]
Pingback by Compactly Supported De Rham Cohomology « The Unapologetic Mathematician | December 6, 2011
6. […] curve can be written as the boundary of some surface . Then we take any closed -form with . Stokes’ theorem tells us […]
Pingback by Simply-Connected Spaces and Cohomology « The Unapologetic Mathematician | December 17, 2011
7. […] for exactness: if for some -form , then Stokes’ theorem tells us […]
Pingback by A Family of Nontrivial Homology Classes (part 3) « The Unapologetic Mathematician | December 27, 2011
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges/5483
# Sandbox for Proposed Challenges
This "sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to main. This is useful because writing a clear and fully specified challenge on your first try can be difficult, and there is a much better chance of your challenge being well received if you post it in the sandbox first.
Sandbox FAQ
## Posting
To post to the sandbox, scroll to the bottom of this page and click "Answer This Question". Click "OK" when it asks if you really want to add another answer.
Write your challenge just as you would when actually posting it, though you can optionally add a title at the top. You may also add some notes about specific things you would like to clarify before posting it. Other users will help you improve your challenge by rating and discussing it.
When you think your challenge is ready for the public, go ahead and post it, and replace the post here with a link to the challenge and delete the sandbox post.
## Discussion
The purpose of the sandbox is to give and receive feedback on posts. If you want to, feel free to give feedback to any posts you see here. Important things to comment about can include:
• Parts of the challenge you found unclear
• Comments addressing specific points mentioned in the proposal
• Problems that could make the challenge uninteresting or unfit for the site
You don't need any qualifications to review sandbox posts. The target audience of most of these challenges is code golfers like you, so anything you find unclear will probably be unclear to others.
If you think one of your posts needs more feedback, but it's been ignored, you can ask for feedback in The Nineteenth Byte. It's not only allowed, but highly recommended!
It is recommended to leave your posts in the sandbox for at least several days, until they have received upvotes and any feedback has been addressed.
## Other
Search the sandbox / Browse your pending proposals
The sandbox works best if you sort posts by active.
To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill]. To search for posts with a certain tag, include the name in quotes: "king-of-the-hill".
Get the Sandbox Viewer to view the sandbox more easily!
# It's just a flesh wound!
The idea is to create a program that:
• If any one of the four quarters (counted in bytes) is removed, the program outputs "Tis' but a scratch" (exactly, with optional newline).
• If any two of the four quarters are removed, the program outputs "Just a flesh wound.".
• If any three of the four quarters are removed, the program outputs "Let's call it a draw, then.".
• The full program should output "None shall pass.".
Rules:
• The program has to have length divisible by four (4).
• The program must not read its own source or its length in any way.
• The output is to stdout if it is possible in your language (REPL output is considered valid in this case).
• The answer with the fewest bytes wins.
• I don't think this would actually be a duplicate of anything in the source-layout tag, but it doesn't feel like it would add anything to the sum of what's already in that tag. – Peter Taylor Mar 19 '15 at 19:17
• Would these 'quarters' be defined by the user, or is it any random 1/4th of the program? – ASCIIThenANSI Apr 5 '15 at 16:49
• @ASCIIThenANSI The quarters are successive quarters of the code. ie. the first one is 0 - 1/4, second is 1/4 - 2/4, third is 2/4 - 3/4 and fourth is 3/4 - 4/4 – seequ Apr 5 '15 at 17:48
• Would it be allowed to read the program's own length? – ASCIIThenANSI Apr 7 '15 at 13:03
• @ASCIIThenANSI No. Updated. – seequ Apr 7 '15 at 14:50
• Here's an idea: If it is full, it prints None shall pass., one-quarter, Tis' but a scratch., and three-quarters, Let's call it a draw, then.. – ASCIIThenANSI Apr 7 '15 at 14:55
• @ASCIIThenANSI Awesome. – seequ Apr 7 '15 at 15:41
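A minimal sketch of the quarter convention seequ describes above (Python; the helper names are hypothetical, not part of the spec):

    def quarters(program: bytes):
        """Split a program whose length is divisible by 4 into its
        four successive quarters, counted in bytes."""
        q = len(program) // 4
        return [program[i * q:(i + 1) * q] for i in range(4)]

    def with_quarters_removed(program: bytes, removed):
        """Reassemble the program with the given quarter indices (0-3) removed."""
        return b"".join(part for i, part in enumerate(quarters(program))
                        if i not in removed)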
# Programming Tetris Blocks (Even More Literally?)
In this challenge, you will write a Tetris AI. There's one twist though: the AI will operate from the perspective of the Tetris blocks themselves.
Note: I am worried about the novelty of this question. The key is "the perspective of the Tetris blocks themselves." In order to make this interesting, I have to give the AI a bare minimum of information needed to make a move. Otherwise, it will just be a regular Tetris AI challenge.
When a Tetris block is spawned at the top of the map, a new AI object is created. Each time step, the block receives data about its immediate surroundings and returns a move (move left/right, rotate clockwise/counterclockwise, or nothing).
An idea as to "block vision": each of the four squares in a block has four "eyes," one on each side. Each eye returns the distance to the nearest wall/block (including/excluding other squares in the same block?). This means that the AI will receive exactly sixteen numbers each update.
#######
# 1234#
#  #  #
#### ##
#######
Using a 2D array where each row (1st level) is a square and each column (second level) is an eye in the directions [U,D,L,R], here is what could be seen as input, with 0s representing an adjoining block.
[[1,2,2,0],[1,1,0,0],[1,3,0,0],[1,2,0,1]]
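A sketch of what an entry's update function might look like for this format (Python; the function name and move strings are hypothetical, since the interface isn't pinned down yet):

    import random

    def make_move(vision):
        """vision: four rows, one per square; each row holds the four eye
        distances in the order [U, D, L, R], with 0 meaning an adjoining block."""
        down = min(row[1] for row in vision)       # nearest obstacle below
        if down > 2:
            return "none"                          # plenty of room, keep falling
        left = min(row[2] for row in vision)
        right = min(row[3] for row in vision)
        if left > right:                           # drift toward the open side
            return "left"
        if right > left:
            return "right"
        return random.choice(["rotate_cw", "rotate_ccw", "none"])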
More details coming sometime not now.
• For 'block vision', what if returns -1 if it hits the same block? Also, shouldn't the squares be numbered left to right and up to down, so the I piece is [1][2][3][4], and an example 'block vision' would be [1, 2, 3, 1, 2, 3, 1, 2, 3, 4, 4, 4, 4, 4, 2, 2]? – ASCIIThenANSI Apr 14 '15 at 16:24
• @ASCIIThenANSI In your example, which numbers refer to which squares/eyes? – PhiNotPi Apr 14 '15 at 16:49
• Square 1 is the first 4 numbers ([1, 2, 3, 1]), square 2 the next 4 ([2, 3, 1, 2]), etc. The directions are in the format [U, D, L, R], and it works like [U, D, L, R, U, D, L, R, U, D, L, R, U, D, L, R]. – ASCIIThenANSI Apr 14 '15 at 17:10
• I couldn't really visualize where you were getting the numbers from, so I added my own example. – PhiNotPi Apr 14 '15 at 17:27
# Background
Consider the following grid:
    a   b   c
    |   |   |
    |   |   |
    |   |   |
d---+---+---+---e
    |1  |2  |3
    |   |   |
    |   |   |
f---+---+---+---g
    |4  |5  |6
    |   |   |
    |   |   |
h---+---+---+---i
    |7  |8  |9
    |   |   |
    |   |   |
    j   k   l
I've marked every endpoint with a letter a-l, and every + with a number 1-9. Imagine, for a moment, that this grid represents a small section of a town. Each | or - represents one segment of a two-way road, and each + represents an intersection, which will have a corresponding stoplight.
During the game, cars will be added and removed from the grid at the endpoints a-l. Cars move exactly one space (through one segment of road or through one intersection) per turn, and never change direction. Thus, if a car enters the grid at endpoint d, it will exit after reaching endpoint e. We may assume that the cars are smart enough to avoid all collisions. They will never move to a space occupied by another vehicle, and they will never enter an intersection when the stoplight they see is red. When a car reaches the opposite endpoint, it disappears and can be safely forgotten.
Assume that we have a variable entitled public_unhappiness that is initialized to 0.
If a car following the above rules may not move due to another vehicle or a stoplight, the value of public_unhappiness is increased by 1.
//SANDBOX NOTE: This formula is linear, but one could say that unhappiness goes up exponentially the longer you sit at a stoplight. This formula is subject to change.
We pit two bots against each other, both controlling traffic flow in different ways. One bot aims to maximize public_unhappiness and the other aims to minimize it. We will refer to the former as The Driver and the latter as The Traffic Engineer. Because this KotH is inherently unbalanced, Drivers and Traffic Engineers will face off in a round-robin tournament (playing only against the opposing faction) and will be placed in separate leaderboards.
# Input
Though the bots are different and rely on entirely different strategies, every bot has access to the same information. Every turn, the bots will be prompted with a list of command-line arguments. Below is a general format:
./Traffic_Troubles Your_bot.extension S1 S2 S3 S4 S5 S6 S7 S8 S9 N a,b,c a,b,c ...
S1 through S9 are binary digits that represent what direction traffic may flow through the corresponding stoplight. If the value is 1, traffic flows horizontally through this stoplight. If the value is 0, traffic flows vertically. Hence, a car approaching intersection 1 from the east will stop moving if the value of S1 is a 0, and continue moving along if that value is a 1.
The following argument is N. This represents the number of cars currently active on the board.
There then follows N descriptions of cars in the form a,b,c. Here, a is the character of the endpoint at which the car originated, b is its destination, and c is the number of spaces it has moved. A car that has just been put on endpoint a has moved 0 spaces, and would thus be described as a,j,0. On the other hand, a car approaching intersection 6 from the west would be described as f,g,11.
On the first turn, every stoplight has value 0, and no cars exist on the board (N == 0).
//SANDBOX NOTE: This input seems pretty messy... Any ideas?
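A sketch of how a bot might parse these arguments (Python, assuming the bot sees S1 as its first argument; nothing here is part of the controller):

    import sys

    def parse_turn(argv):
        lights = [int(s) for s in argv[:9]]      # S1..S9; 1 = horizontal flow
        n = int(argv[9])                         # number of active cars
        cars = []
        for desc in argv[10:10 + n]:             # each car is "origin,dest,moved"
            a, b, c = desc.split(",")
            cars.append((a, b, int(c)))
        return lights, n, cars

    lights, n, cars = parse_turn(sys.argv[1:])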
# The Traffic Engineer
Traffic Engineers aim to minimize public_unhappiness by changing the values of the stoplights to allow for traffic to continue through.
You may specify the values of up to three stoplights per turn. Every turn your bot is called, you must provide up to 3 space-separated output pairs of the form a,b where a is the number of the stoplight you want to change, and b will be a binary digit representing the desired value of the stoplight. Invalid output will count as a change to the stoplights, but be ignored. You may choose to output any number of changes less than or equal to 3.
//SANDBOX NOTE: The value of 3 is subject to change.
# The Driver
Drivers aim to maximize public_unhappiness by choosing entry points for cars.
Every turn, you may output up to six distinct entry points for cars in the form X Y Z .... If a car already exists on that entry point and is not moving in the opposite direction that output will be ignored. You may specify any number of entry points less than or equal to 6.
//SANDBOX NOTE: The number 6 is subject to change
# The Sequence of Events
1. Both bots are called at roughly the same time with access to the exact same information.
2. Cars are added to the entry points and the value of stoplights are changed.
3. Cars move, and public_unhappiness is incremented accordingly.
4. Any car that has surpassed its respective exit point is removed from play.
//SANDBOX NOTE: Perhaps the Traffic Engineer should be able to view where the Driver put cars and adjust accordingly. Thoughts?
# Rules
1. Your bot is given 1 second to respond.
2. You may not tailor your bot to act specifically against another bot.
3. Please provide a method for compiling your bot and a command-line method for running your bot.
4. The header of your answer should be in this format:
[Language-name] - [Traffic Engineer/Driver] - [Bot-name]
5. Standard Loopholes are disallowed.
//SANDBOX NOTE: If this idea is received well (~4-6 upvotes on the sandbox) I will build the controller. For now, it's just an idea. If you wish to run/improve on this KotH, you are welcome to.
# 8-FTU - Retrofit UTF-8 to any pre-1988 language
The design of Unicode started in 1987 and was first published in 1988. UTF-8 itself was designed in 1992 and first presented in 1993. Your goal is to retrofit the UTF-8 encoding of Unicode to any language that was in existence on 31 December 1987. You can't use any features that were added to the language after this date.
Your program will take a text input (byte encoded characters, possibly with errors) and up to two integers. Your program must accept any value at any byte position (00-FF).
### Task 1 - Validate the input
Your program will print one of TRUE/FALSE, True/False or true/false depending on whether the text is valid UTF-8 or not, and exit if the format is not valid. See below for validation rules. There are also many online resources that cover the format that you can reference.
### Task 2 - Count the code points
If your program didn't exit at the end of Task 1, it will print the number of Unicode code points encoded within the text.
### Task 3 - Substring
Using the two integer inputs your program will find and output the matching substring. The first integer will be the starting position, 0 will be the start of the string. If the starting position is after the end of the string, return an empty string. The second integer will be the length of the substring in Unicode code points. If the length is omitted or goes past the end of the string return all the text from the start position to the end of the string. You do not need to program for negative numbers, although you can if you want to.
Tasks 1 & 2 must be printed to standard output. If printing the output of Task 3 would have undesirable consequences (e.g. characters interpreted as control codes) you may return the text instead. You don't have to worry about how the text will display, your code will be taken by DeLorean or TARDIS (depending on country) to 1987 or earlier where a team of engineers will work on displaying it correctly!
### Valid encodings
Code points Byte encoding
--------------- -----------------
U+0000 - U+007F Standard 7-bit ASCII (00 - 7F)
U+0080 - U+07FF Two bytes per code point (C2 80 - DF BF)
U+0800 - U+D7FF Three bytes per code point (E0 A0 80 - ED 9F BF)
U+D800 - U+DFFF High and low surrogate pairs, invalid in UTF-8 (ED A0 80 - ED BF BF)
U+E000 - U+FFFF Three bytes per code point (EE 80 80 - EF BF BF)
U+010000 - U+10FFFF Four bytes per code point (F0 90 80 80 - F4 8F BF BF)
### Byte table
• 00 - 7F: Standard 7-bit ASCII
• 80 - BF: Continuation bytes
• C0 - C1: Invalid - Task 1 must print one of the false messages if either of these bytes are present
• C2 - DF: Start of two-byte code
• E0 - EF: Start of three-byte code. ED codes where the next byte is one of A0-BF are invalid because they encode surrogate pairs
• F0 - F4: Start of four-byte code. Note: not all sequences starting with F4 are valid. You need to test for these too
• F5 - FF: Invalid - Task 1 must print one of the false messages if any of these bytes are present
The remainder of a multi-byte code must only be continuation bytes until the length is reached. E.g. E4 85 B9 is valid because E4 marks the start of a three-byte code, there are exactly three bytes and 85 and B9 are both within the range 80-BF. A continuation byte must not appear except as part of a multi-byte sequence, which must start with C2-F4. Overlong encodings are not allowed. E.g. "A" is 41, which could also be encoded as C1 81 or E0 81 81. These longer sequences are invalid because there is a shorter, valid sequence.
You don't need to worry about the BOM code point U+FEFF (EF BB BF). Treat it as any other character even if it appears within the text.
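A sketch of the Task 1 validation logic, in Python for readability (for the challenge itself this would of course have to be rewritten in a pre-1988 language; the function name is mine):

    def valid_utf8(data: bytes) -> bool:
        i = 0
        while i < len(data):
            b = data[i]
            if b <= 0x7F:                            # plain ASCII
                i += 1
                continue
            if 0xC2 <= b <= 0xDF:
                n, lo, hi = 1, 0x80, 0xBF
            elif 0xE0 <= b <= 0xEF:
                n = 2
                lo = 0xA0 if b == 0xE0 else 0x80     # E0: reject overlongs
                hi = 0x9F if b == 0xED else 0xBF     # ED: reject surrogates
            elif 0xF0 <= b <= 0xF4:
                n = 3
                lo = 0x90 if b == 0xF0 else 0x80     # F0: reject overlongs
                hi = 0x8F if b == 0xF4 else 0xBF     # F4: cap at U+10FFFF
            else:                                    # 80-BF stray, C0-C1, F5-FF
                return False
            if i + n >= len(data):                   # truncated sequence
                return False
            if not (lo <= data[i + 1] <= hi):        # tightened first continuation
                return False
            for j in range(i + 2, i + n + 1):
                if not (0x80 <= data[j] <= 0xBF):
                    return False
            i += n + 1
        return True

    print(valid_utf8(bytes([0xC3, 0x87, 0x61])))     # True  (Ça)
    print(valid_utf8(bytes([0xC1, 0x87])))           # False (overlong)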
### Example input (to be expanded)
C3 87 61 20 76 61 3F 0 2 (Ça va?, 7 bytes, 6 code points)
Outputs:
True
6
Ça
C3 87 61 20 76 61 3F 2 (Ça va?, 7 bytes, 6 code points)
Outputs:
True
6
va?
C1 87 61 20 76 61 3F 0 2 (Ga va?, 7 bytes (overlong error), 6 code points)
Outputs:
False
As mentioned above, the output for Task 3 may be returned as a string instead of printing it.
### Scoring
Either shortest code in bytes or a bonus awarded for retrofitting an older language. Maybe bytes minus the number of months before January 1988, assuming a release date of December if not otherwise specified?
# KotHgress
As everyone knows, the only way to make sure your voice is heard among a group of people is to shout louder than everyone else. This is especially true in KotHgress, a bureaucratic committee of PPCG bots.
The KotHgress Register is a 1D string, at least 100 characters long, containing the minutes of each committee meeting. The only problem is that all the committee members talk at the same time, often shouting over each other, so that (like a typical committee), nothing ever gets done. However, since this is a committee of bots, efficiency is prized almost as much as volume.
# Rules
The Register for each meeting is a string of length max(100, N_bot * 4). At the beginning of each meeting, a committee member bot is pseudorandomly assigned an ascii character to be its voice, and 3 starting positions for its voice in the Register (initial index of 1), with each bot's positions having the same sum - for example, [1,4,100] and [5, 25, 75] could be starting positions.
Each turn, a bot receives 20 points times the number of times its voice appears in the Register. The bot can spend any amount of its points to bid on positions in which to place its voice. A bot that does not spend all its points banks any remaining points towards its score for the round; points do not carry over to following rounds.
Once all bids have been collected, each position is overwritten with the voice of the highest bidder, with ties for high bid causing no change in that position's current character. Note: a bot that is outbid for a position it already occupies loses that position.
Then, each bot accumulates score equal to the combined rank of its voice characters in the Register (for example, "ABABB" would score 4 for "A" at rank 1 and 3, and 11 for "B" at rank 2, 4, and 5), and the Register is sent as input to each member for them to choose their next bids.
After 100 turns, the meeting is over, and the bot with the highest accumulated score wins the meeting.
# Input
Each turn, each bot will receive four inputs, in this order:
1. a single character which is its voice
2. a positive integer indicating its current (banked) score
3. a positive integer indicating the number of points it collected this turn
4. a string of length max(100, N_bot * 4), the Register
# Output
The bot should output a string consisting of integer pairs, separated like so: "pos0 bid0|pos1 bid1|...|posM bidM". Banked points will be automatically calculated from the output: banked_points = turn_points - sum(bids).
Invalid output, including sum(bids) > turn_points, will cause your bot to lose its turn (not banking any points).
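A sketch of a trivial bot honoring this I/O contract (Python, assuming the four inputs arrive as lines on stdin; the controller details aren't fixed yet):

    import sys

    voice = sys.stdin.readline().strip()
    banked = int(sys.stdin.readline())
    points = int(sys.stdin.readline())
    register = sys.stdin.readline().rstrip("\n")

    # spend half our points, spread over the first positions we don't hold
    budget = points // 2
    per_bid = max(1, budget // 4)
    bids = []
    for pos, ch in enumerate(register, start=1):   # positions use an initial index of 1
        if budget < per_bid:
            break
        if ch != voice:
            bids.append("%d %d" % (pos, per_bid))
            budget -= per_bid
    print("|".join(bids))                          # unspent points are banked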
# Meta-notes
• Controller construction is in progress.
• I expect it to be language-agnostic (using a similar setup to aBOTcalypse). Bots will be allowed one storage file for memory purposes.
• I understand that a > z, A > a, and A > Z. But which would be greater: a or Z? And is the bot with voice a placed before or after the bot with voice b? – ASCIIThenANSI May 15 '15 at 13:10
• I was figuring on going in ascii order, so A->Z->a->z'. That allows for 52 committee members; I can do non-alphas if we get more interest than that, lol. – sirpercival May 15 '15 at 13:33
• OK. Just make sure that you add that to the rules. You could also use some of ASCII's 95 printable characters (minus space, and maybe take out some others that could mess up the input.) – ASCIIThenANSI May 15 '15 at 13:43
• I'll be a little more specific about this, sure. – sirpercival May 15 '15 at 15:02
• This has the classic KotH flaw: the best strategy is either to be purely random or to be the last person to submit your bot, and to metagame it against everyone else's bots. – Peter Taylor May 17 '15 at 16:01
• how would one metagame? the priority order is randomized at the beginning of each meeting – sirpercival May 17 '15 at 16:19
# Time Travel in KotH
This is not a question but a possible mechanic for KotH (type challenge).
In KotH we can ask each program (player) to store all its memory in a file between steps. This makes it possible to change a player's memory to an older one, which is not possible with human players. This fact makes it possible to create time-travel-based games.
I will outline two mechanics here, a simpler one (Time Reverters) and a more complex one (Time Travelers). Both will use a simple game to show how they would work.
# Time Reverters (mechanic I)
## A very simple example game
• Two players, N rounds.
• At every round a random player scores a point.
• The winner is the player with more points after round N.
## The time reverse twist
• At any time in the game a player can choose to time travel (TT) back to any previous round. This means the players will receive the memory they had at that round and forget everything else. Neither of them will know a TT happened.
• Each player can TT K times and this is counted by the controller. If a player tries to travel when it has no more travels left, the TT request is simply ignored.
# Time Travellers (mechanic II)
(will be written later...)
# Split multi-language no-space sentences
You will be given a string representing a sentence without any word boundaries. Additionally, you're given a dictionary of all possible words. Output all possible ways of splitting the sentence into words.
But there's a catch! The sentence was written by a drunken polyglot and contains words from multiple languages mixed together. Luckily, you've already got a dictionary of which words from different languages correspond to each other. So for each word, you should output its surface form (as it appears in the input sentence) and base form (e.g. English). To prevent meaningless interpretations of a sentence as many one-letter words and abbreviations, sort the output by the number of words of each interpretation, lowest first.
Example:
teeistunnationalgetränkdeeikoku
tee/tea ist/is un/a national/national getränk/drink de/of eikoku/england
Given the dictionary below, the first word can only be tee, German for tea. Then comes is, which is already the English is. Then un, French for a. And so on.
tea => tea, tee, cha
is => is, ist, est, dess
a => a, ein, un, une, aru
national => national, kokkateki
drink => drink, getränk, nomimono
of => of, de, von, no
england => england, angleterre, igirisu, eikoku
Each entry of the dictionary is of the form:
<base_form> => <surface_form_1>, <surface_form_2>, ...
That is, the dictionary contains the base form (here English is used), and a list of possible variations in different languages for each.
See below for examples with multiple possibilities.
# Scoring
Code-golf.
(Golfed explanation. Ungolfed: The answer with the shortest code as measured by its bytesize in an encoding the interpreter or compiler accepts without additional flags wins. Multiple files add a penalty of 1 byte for each additional file. Flags add to the score.)
# Rules
• The defaults apply, function or program.
• You may assume there exists at least one solution.
• Your program must run in a reasonable amount of time, so don't just try every possible combination of words. Be prepared for a dictionary that contains hundreds or thousands of words. Let's say about ~10 minutes on a modern PC.
• You must support unicode, at least the basic multi-lingual plane, codepoints 0x0000 - 0xFFFF. If your language of choice does not support unicode, you can emulate it using a fixed-length unicode encoding: consider each n bytes of the input sentence a letter.
• You do not need to worry about unicode modifiers, normalizing etc. -- each codepoint is considered a unique "letter".
# Input
• The defaults apply, stdin, command line argument, function argument, javascript prompt etc.
• The input sentence is given as a string (teaist) or list/array ([t,e,a,i,s,t])
• You may assume the input sentence is already in lower case.
• There is no additional punctuation in the input sentence to take care of.
• However, the input sentence and the dictionary may contain "words" with commas, periods, etc., ie. every unicode codepoint from the basic multilingual plane. It will not contain any of the codepoints 0x00-0x20, which means no null-bytes, spaces, tabs, newlines, so you can use them for separating the output.
• The dictionary may be in any format of your choice, but it must contain an association between a base form and all possible surface forms. It must not list all base forms for each surface form.
• You may also read the dictionary from a file, and take the file name or raw data as input.
• You may also assume the dictionary has already been stored in one variable of your choice. If you do, please provide some code for reference how I can put custom data in it, or read the dictionary from a file. However, it must not be pre-processed and as close to a hash (eg. {<base_form> => [ <surface_form_1>, <surface_form_2>, ... }) or array/list (eg. [ [<base_form>,<surface_form_1>,<surface_form_2>,...], [<base_form>,...], ...]) as possible in your language.
• You may choose whether the list of surface forms includes the base form or not, eg. big => big, gross or big => gross.
# Output
• The defaults apply, stdout, stderr, return value, javascript alert, etc.
• It should not need to be said, but if you output to stderr, nothing else but the solution must go to stderr, unless it's a compiler/interpreter warning that can be turned off by a flag.
• A list/array of all possible interpretations of splitting the sentence into words, sorted by the number of words.
• Each interpretation is a list/array of words. Each word is pair/list/array containing the surface form and the base, you may choose in which order. The words must be ordered as they appear in the input sentence.
• Alternatively, the array may be flattened.
• Alternatively, output two lists/arrays, one containing the base form for each word, and one the surface form.
• Alternatively, output a string representation of the array/hash/list.
For example, you could output an array
[["tee","tea"],["est","is"]]
or a flattened array
["tee","tea","est","is"]
or two arrays
["tee","est"]
["tea","is"]
or a string representation such as tee:tea:est:is or tee\ttea\nest\tis\n (\t tab, \n newline). But make sure you're escaping characters properly.
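To illustrate the core segmentation step, here is an ungolfed sketch (Python; it takes the dictionary as a base-form to surface-forms hash as described above, and uses plain recursion for clarity, so a competitive entry would memoize to meet the time limit):

    def splits(sentence, dictionary):
        """All interpretations of `sentence` as (surface, base) pairs,
        sorted by number of words, lowest first."""
        # invert: surface form -> list of base forms (a surface form may be
        # ambiguous, as in the koukou/sensei test case below)
        s2b = {}
        for base, forms in dictionary.items():
            for s in forms:
                s2b.setdefault(s, []).append(base)

        def rec(rest):
            if not rest:
                yield []
                return
            for i in range(1, len(rest) + 1):
                for base in s2b.get(rest[:i], []):
                    for tail in rec(rest[i:]):
                        yield [(rest[:i], base)] + tail

        return sorted(rec(sentence), key=len)

    d = {"tea": ["tea", "tee", "cha"], "is": ["is", "ist", "est", "dess"]}
    print(splits("teeist", d))   # [[('tee', 'tea'), ('ist', 'is')]]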
# Test cases:
First line is the input string. Each following line is a possible way of placing word boundaries, surface/base. Afterwards a sample dictionary is provided.
## 1
Note that some words contain a semi-colon.
abcd;efghi
ab/test cd;ef/awesome ghi/result
a/only bcd/test ;ef/awesome ghi/result
with the dictionary:
test: ab, bcd
awesome: cd;ef, ;ef
result: ghi
only: a
yahoo: abcd;efgh
## 2
sumomouserune
sumo/sumo mouse/mouse rune/rune
sumomo/plum useru/lose ne/right
with the dictionary:
sumo: sumo
mouse: mouse
rune: rune
plum: sumomo
lose: useru
right: ne
## 3
koukousensei
koukou/school sensei/teacher
koukou/shiptravel sensei/starfortunetelling
with the dictionary
school: koukou
teacher: sensei
shiptravel: koukou
starfortunetelling: sensei
## 4
unicode support:
with the dictionary:
sira: 白
haku: 白
kumo: 雲
uñ: 雲
## 5
Note the order of the results.
aaaa
aaaa/test
a/test aaa/test
aa/test aa/test
aaa/test a/test
a/test a/test aa/test
a/test aa/test a/test
aa/test a/test a/test
a/test a/test a/test a/test
with the dictionary:
test: a, aa, aaa, aaaa
I'll add a larger example should I post this.
• This is basically like a previous suggestion I made, but less complicated. My main motivation had been parsing Japanese kanji compounds, hence the unicode support, but I translated it into something more familar to non-Japanese speakers. – blutorange May 20 '15 at 19:21
## Snake vs labyrinth
Write a program that takes as input a text file representing a labyrinth and checks if this labyrinth can be entirely filled with a snake path. The program should output true or 1 if this is the case, false or 0 else.
The snake can enter the labyrinth at any point. It can move one cell up, left, right or down; once it has crossed a cell of the grid, it cannot go back to that cell. The snake cannot cross a wall or the borders of the labyrinth.
The labyrinth file is a grid of m x n characters, containing either # (wall) or . (empty space).
Example 1
should return true
.
Example 2
should return true
..
..
Possible solution (S = snake start, E = snake end, v = go down, < = go left)
Sv
E<
Example 3
should return true
...
.#.
...
Possible solution (S = snake start, E = snake end, v = go down, < = go left, > = go right, ^ = go up)
S>v
E#v
^<<
Example 4
should return false
.#
#.
Example 5
should return false
#.#
...
Example 6
should return false
.#.
...
...
This is code-golf, so the shortest code wins.
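Ignoring efficiency (see the comment below on HAM-PATH), a brute-force reference checker is straightforward; a sketch in Python (names are mine):

    def snake_fills(grid):
        """grid: list of strings of '#' and '.'. True iff a single snake
        path can visit every '.' cell exactly once."""
        cells = {(r, c) for r, row in enumerate(grid)
                        for c, ch in enumerate(row) if ch == "."}

        def extend(pos, visited):
            if len(visited) == len(cells):
                return True
            r, c = pos
            for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nxt in cells and nxt not in visited:
                    if extend(nxt, visited | {nxt}):
                        return True
            return False

        return any(extend(start, {start}) for start in cells)

    print(snake_fills([".#", "#."]))   # False (example 4)
    print(snake_fills(["..", ".."]))   # True  (example 2)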
• I take it there aren't any limitations on efficiency? I seem to remember it's an open problem whether HAM-PATH is hard on subgraphs of the grid graph, and if so, you won't be getting an poly-time algorithms. – xnor May 22 '15 at 9:13
• @xnor Yes, no limitations on time/efficiency – Arnaud May 22 '15 at 11:59
Given an input, calculate the correct suffix and output the number in a readable format. The suffixes must go to at least 10^3000; the rules for calculating them can be found here, or a list can be found here.
For example:
10000 = 10.0 thousand
135933445 = 135.93 million
-2 = -2.0
-2.36734603 = -2.37
'1'+'9'*3000 = 2.0 nongennovemnonagintillion
Rules:
• No getting things from external resources - it must all be calculated within the code.
• External modules are fine as long as it doesn't breach the above rule.
• The input should work whether given as a string, integer or float.
• The output must always contain a decimal place.
• The output must be rounded if above 2 decimal places.
• Leaving zeroes at the end is optional, as long as it doesn't go above 2 decimals (1.00 or 1.0 are both fine) and is consistent for all inputs (1 should output the same as 1.0).
• Must not throw an error no matter how high or low the input is.
Scoring:
• Score is the length of the code, including indents.
• Lowest score wins.
• It does not need to be a function, printing the output is fine.
As a starting point, here is an ungolfed version of some code to generate the list of suffixes. Feel free to build upon this or start from scratch.
a = ['', 'un','duo','tre','quattor','quin','sex','septen','octo','novem']
c = ['tillion', 'decillion', 'vigintillion', 'trigintillion', 'quadragintillion', 'quinquagintillion', 'sexagintillion', 'septuagintillion', 'octogintillion', 'nonagintillion']
d = ['', 'cen', 'duocen', 'trecen', 'quadringen', 'quingen', 'sescen', 'septingen', 'octingen', 'nongen']
num_dict = ['']
num_dict.append('thousand')
num_dict.append('million')
num_dict.append('billion')
num_dict.append('trillion')
num_dict.append('quadrillion')
num_dict.append('quintillion')
num_dict.append('sextillion')
num_dict.append('septillion')
num_dict.append('octillion')
num_dict.append('nonillion')
for prefix_hundreds in d:
    # tillion can't be used first time round
    if not prefix_hundreds:
        b = c[1:]
    else:
        b = c
    for prefix_tens in b:
        for prefix in a:
            num_dict.append(prefix_hundreds + prefix + prefix_tens)
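Building on that list, a rough sketch of the formatting step (Python; note that round() here does banker's rounding, and the exact rounding rule was left open):

    from decimal import Decimal

    def readable(x):
        d = Decimal(str(x))
        sign = '-' if d < 0 else ''
        d = abs(d)
        if d < 1000:
            return sign + str(round(float(d), 2))
        # num_dict[k] names 10**(3*k); cap the index so huge inputs never error
        group = min(d.adjusted() // 3, len(num_dict) - 1)
        mantissa = float(d / Decimal(10) ** (3 * group))
        return sign + str(round(mantissa, 2)) + ' ' + num_dict[group]

    print(readable(10000))              # 10.0 thousand
    print(readable(135933445))          # 135.93 million
    print(readable('1' + '9' * 3000))   # 2.0 nongennovemnonagintillion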
For the record, my result is 578 characters. To be fair I'm surprised I couldn't find this being asked before :P
• The meat of this challenge is basically just the reverse of this one, which may explain why you haven't seen it this way. – Geobits May 27 '15 at 16:07
• "The input can be in any format, not just integers." is far too vague. "The output must be rounded if too long." needs to define "too long", and ideally give a rounding rule (e.g. floor, ceiling, nearest half-up, nearest half-down, nearest half-even, nearest half-odd). – Peter Taylor May 30 '15 at 13:46
• Thanks, sorry I'd missed the first answer, that is quite similar haha. This one should use 10x the values and may golf better with outputting the number instead of reading the input. And is the rounding rule actually needed? I generally meant what the default rounding functions do (that you learn in school), which is like half up, but then half down when you're below 0. – Peter May 31 '15 at 8:45
• Ended up busy with other stuff recently so only just remembered about this, would it be worth posting, or is it too similar to that other question? – Peter Jul 13 '15 at 21:46
# Code Bots ϕ
This is a challenge based on the popular Code Bots ("What's wrong with public variables?") challenge.
I made a few observations during the development of the original Code Bots:
1. I had a literal ton of ideas and wouldn't stop harassing Nathan Merrill about them.
2. Nathan Merrill already claimed Code Bots 2.
The solution is obvious: make my own challenge. So, that is exactly what I intend to do.
Note: I am working on various ways to distinguish the two challenges.
## Variables
Note: I have changed the names of some variables to make them more "intuitive."
The variables A and B each store an integer 0-23.
The variable C (for Control) stores an integer 0-23 and is incremented at the end of every turn. It indicates which line of the program is to be executed.
The variable D (for Direction) stores an integer 0-23, which determines the current direction of the bot. The direction is determined by {north, east, south, west}[D % 4].
The variable E (for Entropy) is overwritten by a random integer at the end of each turn, but only if your bot uses it that turn.
The variable F (for Feeling) provides a sense of touch. This allows you to detect when a bot is next to you. The value equals the number of adjacent bots. For example, F equals 4 when you are completely surrounded on all four sides (and thus out of luck).
The variable G (for aGe) provides a timer. The value is incremented after the end of every turn, mod 24. This allows an easy way to do for-like loops.
## Instructions
Each line contains a single command, and each command takes a variety of arguments.
Flag : This represents your flag. Your goal is to smear your flag across the known universe. Each flag line has a hidden identifier denoting the owner. These lines do nothing upon execution.
Move : This moves the bot 1 unit forward in the direction that it is facing.
Copy [expr|line] [var|line] : This copies one expression to another. Both expressions must be of the same type. You can copy a line to another line or copy the value of one variable to another. Copying to yourself has a new advantage in that, instead of making an immediate change, the change is made just prior to the start of your next turn.
Copy2 [expr|line] [var|line] : This is a new version of copy, but instead of performing the action immediately after your turn, it performs the action immediately before your next turn. [todo: find a better name for it. Maybe "DelayCopy"]
If [cond] [line] [line] : This is an If statement, one of the most important statements. If the conditional evaluates to true, then the first of the two lines is executed immediately afterwards (on the same turn). If the conditional is false, then the second line is executed immediately afterwards. In order to prevent infinite loops of various kinds, a bot is not allowed to execute the same line twice in a single turn.
Jump [expr] [cond] : This is a new instruction designed to help speed up bots. Given a number N, it immediately sets the value of C to N and then executes #N on the same turn. The condition is optional, but if present it will determine whether or not the Jump command is executed.
Block [var|line] : This blocks a certain variable or line. Each variable or line can be blocked once, and this block prevents one modification attempt of that variable/line. If an opponent (or yourself) attempts to modify that variable, then the variable is merely unblocked rather than modified.
## Arguments
There are four types of arguments, var, line, expr, and cond. Here are their relationships:
var = *[var] | A | B | C | D | E
expr = [var] | [expr][op][expr] | [literal number]
op = + | - | %
line = *[line] | #[expr]
cond = [expr] | [line] | [expr]=[expr] | [expr]==[expr] | [line]=[line] | [line]==[line]
There are three types of operators which can be used in expressions: Addition +, Subtraction -, and Modulo %. The modulo operator has highest precedence (left to right), with addition/subtraction being applied afterwards.
There are several different kinds of conditionals.
1. If [expr] usually returns true if the value of the expression is non-zero. There are several special cases. If D returns true if there is a bot directly in front of you. If E returns true if E is odd. (Note: add more special cases)
2. If [line] returns true if the line contains a flag.
3. If [expr]=[expr] returns true if the two expressions are equal mod 4.
4. If [line]=[line] returns true if the two lines are the same type (same command).
5. If [expr]==[expr] returns true if both expressions are equal mod 24.
6. If [line]==[line] returns true if the two lines are exactly equal (such as both flags having the same owner).
## The Turn Structure
(Initially, all variables are 0 except for E)
[command execution starts]
The command at line #C is executed.
If there is a chain of logic, it is followed until it stops or a line is visited twice.
The effect of a Copy statement is applied, if any.
C is incremented
If needed, E is randomized.
(other bot's turns here)
F is updated
The effect of a Copy2 statement is applied, if any.
[next command execution]
## Line Labels
To increase the ease of writing bots, there will be new things called line labels, which look like this word: and can be placed at the start of a line. Later in the code, you can reference the line label like this :word. (Note: The exact formatting is up for discussion).
During preprocessing, the controller will replace all instances of :word with the number of the line labeled word:. If the :word label is the only thing on the line (no command) then the entire line will be copied into that blank line. Here is an example:
main: If D #:move #:attack
:main
:main
Jump :main
move: Move
attack: Copy #:flag *#*C
## Other usability features
The language will be completely case-insensitive. Comments will take the form of //comment.
## The Arena
There will be 50 bots of each type entered into an arena. Initially, all of the bots will be evenly spaced on a grid and facing north.
....@.......@.......
.......@.......@....
..@.......@.......@.
.....@.......@......
@.......@.......@...
...@.......@.......@
......@.......@.....
.@.......@.......@..
Note: I believe this to be an improvement because the bots are not directly lined up with each other. In Code Bots 1, a bot could Move on its first turn and end up right behind another bot. In this grid, a bot has a much smaller chance of that happening.
## A complicated example bot
Each bot can contain up to 24 lines, and each line contains an instruction. If there are any blank lines (after substitution with line labels), then those lines are filled by Flags.
main: If D #:attackloop #:move //attack if an opponent
:main //automatically filled in with the same line
loop: jump :main //executes line 0 again and sets C to 0
move: if T #:run #:turn //always move (never rotate) when being attacked
run: Move
turn: If E #:run #:turn2 //randomly pick move or rotate
turn2: Copy2 D+1 D
attackloop: jump :attack
attack: Copy #:freeze *#*C //freeze the opponent
plant: Copy #C+6 *#E //plant your flag
:plant
:plant
:plant
freeze: Copy2 C-1 C //this creates an endless loop when executed
After preprocessing, the above bot turns into the bot below. You can also simply submit the bot below without using any line labels if you'd like.
If D #8 #3
If D #8 #3
Jump 0
If T #4 #5
Move
If E #4 #6
Copy2 D+1 D
Jump 9
Copy #14 *#*C
Copy #C+6 *#E
Copy #C+6 *#E
Copy #C+6 *#E
Copy #C+6 *#E
Jump 2
Copy2 C-1 C
Flag
Flag
Flag
Flag
Flag
Flag
Flag
Flag
Flag
## Short, Useful Code Snippets
Block #G
Jump 0 G //block all the lines in 24 moves
A covering array is an N by k array in which each element is one of {0, 1, ..., v-1} (so v symbols in total), and in which any t chosen columns (forming an N x t subarray) contain all possible v^t tuples among their rows.
Input: N, k, t, v (all 4 are positive integers), and then N x k integers, each of which comes from {0, ..., v-1}. Each of the integers are separated by spaces.
Output: Yes if the input is a valid covering array, and No if it is not.
Goal: in any language you want, write a program that validates if the input is a covering array in the fastest time. I will run programs on my machine, which is a Macbook Pro 2.2 Ghz Intel i7 with 16 GB RAM.
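For reference, a straightforward (entirely unoptimized) validator is only a few lines in Python; entries would compete on doing this faster:

    from itertools import combinations

    def is_covering_array(N, k, t, v, rows):
        """rows: N lists of k integers drawn from {0, ..., v-1}."""
        for cols in combinations(range(k), t):
            seen = {tuple(row[c] for c in cols) for row in rows}
            if len(seen) < v ** t:     # some t-tuple never appears
                return False
        return True

    # 4 rows x 3 columns, v = 2, t = 2: every column pair shows all 4 tuples
    rows = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
    print("Yes" if is_covering_array(4, 3, 2, 2, rows) else "No")   # Yes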
• Are you planning a follow-up which asks people to generate minimal covering arrays? That seems to be the more interesting direction. – Peter Taylor Jun 11 '15 at 5:31
• @PeterTaylor that's actually a cool idea, but other than v=t=2, the minimal case is not known, and no explicit construction is known (other than orthogonal arrays, which only work for one value of k). One question could be to ask for the best known minimal CA, which is actually something I'd really like (because it is part of my research). – Ryan Jun 11 '15 at 11:46
• I was thinking of a code-challenge which scores by the best CA for certain parameters, possibly required to be deterministic and inside a certain time limit. – Peter Taylor Jun 11 '15 at 12:33
• @PeterTaylor I like that idea, possibly allowing any technique such as simulated annealing. – Ryan Jun 11 '15 at 21:02
# KOTH Simpleton's Chess
## Introduction
### Disclaimer: The word "simpleton" is not meant to offend anyone who is simple or anything else. Don't take it personally, please.
It is a lovely afternoon at the Completely Average Chess Club, and like most afternoons, the chess players, who are also part-time code golfers, are golfing programs to play chess. They use incredibly complicated move-finding algorithms and are golfing them down into 4 byte programs. However, on this particular afternoon, some aliens, who are looking to use human intelligence to play alien chess, steal the chess players' intelligence, turning them into simpletons! Now they can't even remember all the pieces, let alone perform an alpha-beta tree-search.
Today, we won't be focusing on restoring the chess players' minds back. Instead, we'll be playing a modified chess game with simpler rules!
## Rules
### The pieces
Like regular chess, Simpleton Chess has two teams: white and black. Unlike regular chess, however, Simpleton Chess has only one kind of piece, which can attack north, north-west and north-east and can move two squares at a time. If you are familiar with regular chess, this is much like the pawn except for the fact you can attack forward (or north) and can move two squares at a time. Like regular chess, to take a piece you must move into the place of the piece that you wish to take. If there is a piece in front of you, and you move 1 square in front, then you will take that piece.
Castling and en passant are ignored.
### Winning the game
To win the game, you must take all the pieces of the opposing team.
### Time limit
Each entry has an allowed time of 3 minutes (180000 ms).
### The board
Any piece that moves outside the board will disqualify you. The board is a 2D int array that you can access by assigning a variable to SimpletonUtils.read
### Entries
Your entry is expected to have two methods: getName() and move().
The getName() method will return a String of the name of your entry.
The move method is a void and is called every time you need to move.
To submit the board you use SimpletonUtils.submitBoard
Entries are suggested (and very, very much encouraged) to verify the board using SimpletonUtils.verifyBoard (you have 3 minutes, no need to worry about speed). You will be disqualified if you submit an invalid board, however, you will not be disqualified for sending an invalid board to SimpletonUtils.verifyBoard. If you don't verify the board, a warning will be sent to console output if debug is on (edit SimpletonConfig.java).
Entry template
Here is an example entry to follow:
package SimpletonChess;
import SimpletonChess.SimpletonPlayer;
import SimpletonChess.SimpletonUtils;
public class MyEntry extends SimpletonPlayer {
/**
* Return the name of our bot to the controller
*/
public String getName(){
return "MyEntry";
}
/**
* Method to carry out the logic for our entry
*/
public void move(){
SimpletonUtils utils = new SimpletonUtils();
int[][] board = utils.read(); // Your own local copy of the board
/*
* TODO: Template. Add logic here
*/
utils.submitBoard(board); // Note that you will be disqualified if your board is invalid! Check it with utils.verifyBoard(board), just use an if statement.
}
}
** Remember to add your entry to the controller's main class, SimpletonTournament.java, when you're finished! **
### Controller
The controller is on GitHub. (Link will be added when the controller is ready)
Your entries are expected to be written in Java (unless I find the time to write a console parser).
## Final words
Good luck simpletons! I would very much like to see an entry using the monte carlo method, that would be splendid!
Also, a fantastic link for all things chess programming related: http://chessprogramming.wikispaces.com/
That's it from me, send your entries over today!
Notes
• I know there is another KOTH chess tournament. I don't think this is a duplicate since it has simpler rules and different pieces
• Is this too similar to checkers?
TODOs
• Fix overbolding
• Fix possibility to mess up the board with illegal moves (feedback needed)
• Themed intro, since a lot of other people are doing it (feedback needed!)
• Finish controller
• Add some diagrams
• Fix numerous typos and formatting errors
• "Any pieces that the board will be deleted" Did you mean "Any pieces that the board contains will be deleted"? Also, your KOTH seems to have too much boldness . Use bacticks() instead of bold for code formatting and try to use bold whenever appropriate.. – Spikatrix Jun 12 '15 at 4:54
• From your description of the move method, it sounds like an entry has the opportunity to change data that it shouldn't be allowed to change. We all tend to be sportsmanlike here, and I'm sure nobody would try to game the system like that, but you should try not to make such things available. And anyway, if somebody tries to obfuscate their code, or has a bug, an accident could end up catastrophic. I don't know how rigorous your movechecking will be, but be careful. – BrainSteel Jun 12 '15 at 5:12
• What size is the board? Does a piece which gets level with the rearmost of the opponent's pieces become irrelevant except as a guarantee that you can't lose? – Peter Taylor Jun 12 '15 at 8:57
• Thank you for your feedback everyone. CoolGuy: My apologies for over-bolding, will fix. I was editing in another editor, not on StackExchange. @BrainSteel I will play around with various ways of making sure moves are not illegal. Any ideas for how I can make it impossible to unfairly change the board? I'm not very sure about this one, and I'll have a look at other KOTHs to see how they handle it. If you have any suggestions please let me know. Peter Taylor: The size of the board is 8x8. – Matt Y Jun 13 '15 at 3:51
• @BrainSteel I have now finished a system for checking the move which should stop people from changing the board. They receive a local copy of the board and must submit it back with submitBoard (hence now move() is a void). So now I can do move checking. There will also be a method called verify which will verify the board. – Matt Y Jun 13 '15 at 5:52
• Thanks, that looks like a fine way to handle it! You can never be too careful. I await your controller! – BrainSteel Jun 13 '15 at 15:48
• Some diagrams/images would really help clarify the rules- for example the starting set up - I assume each player starts with two rows of pieces at their end of the board but this is not clear. Also a diagram showing the allowable moves would be really great. What happens when one of your pieces reaches the opposite edge of the board and cannot be taken. Other than that looks like a really interesting challenge and looking forward to it... :) – euanjt Jun 13 '15 at 16:38
• Thanks for the feedback! @TheE Controller is on its way. Might take a while... I code as slow as a tortoise ;) – Matt Y Jun 13 '15 at 22:31
• I don't think the themed intro adds anything. – mbomb007 Dec 12 '16 at 16:07
# How Many Isomers of Nonane / Undecane Are There?
Background
Hydrogen and carbon form various series of compounds called hydrocarbons. Carbon forms four bonds and hydrogen forms one bond. The alkanes are the series of hydrocarbons without any double bonds between carbon atoms. The first four alkanes are shown below. Hydrogen atoms are omitted for simplicity. We know each carbon forms four bonds, so the unused bonds must have hydrogen atoms on them.
Methane   Ethane   Propane   n-Butane   Isobutane
C         C-C      C-C-C     C-C        C-C-C
                               |          |
                               C-C        C
Note that Methane, Ethane and Propane have only one isomer, whereas Butane comes in two isomeric forms. Both have four carbon atoms but in n-Butane they are arranged in a continuous chain, whereas in Isobutane (also known as 2-Methyl Propane) they are arranged as a continuous chain of 3 atoms plus a side branch of one carbon atom.
In both cases, the ten loose bonds on the carbon atoms are occupied by hydrogen atoms. In general, for hydrocarbons with no rings or multiple bonds, the formula is Cn H(2+2n).
1. Your task is to create a program or function that accepts a single integer 0 < n < 10 and outputs to STDOUT or a file all the possible structural isomers of the hydrocarbon of formula Cn H(2+2n). That is to say, all the different ways in which n carbon atoms can be connected together, assuming freedom to twist about all bonds.
2. The carbon atoms shall be represented by the symbol C. In the interest of simplicity, hydrogen atoms shall be omitted. The bonds shall be represented by the symbols - and |. Only horizontal and vertical bonds are allowed. Each isomer shall be represented by a network of carbon atoms separated by bonds. As such the character C will appear on a grid of pitch 2x2.
3. Your program / function shall draw each possible isomer once and only once. The type of isomerism to be considered is structural isomerism. That is to say, compounds with different branching patterns shall be considered different. Different stereoisomers and different conformations shall be considered equivalent. Any conformation that complies with Rule 2 is valid.
4. The different isomers shall be displayed one below the other, in any order. To assist in checking, each isomer shall be preceded by a sequential number (starting at 1) on its own line. There shall be no more than 5 blank lines between any isomer and the preceding / following numbers. Unnecessary whitespace to the left and right of each isomer shall not exceed 10 characters in either direction.
5. To avoid extreme brute force solutions, execution time shall not exceed 1 minute on my machine for any input case.
6. Scoring: This is code golf. Shortest code wins. If your program can handle up to n=11 instead of n=9 there is a -50% bonus. Above n=9 it is significantly harder, because the sidechains can themselves have sidechains.
Above n=11 there exist some isomers that cannot be represented according to the rules of this question as some atoms would overlap.
The number of isomers for each value of n is given in https://oeis.org/A000602. Note that behaviour for n=0 can be undefined. The names of the isomers are here: http://www.kentchemistry.com/links/organic/isomersofalkanes.htm
n          0   1   2   3   4   5   6   7   8   9  10  11
isomers    1   1   1   1   2   3   5   9  18  35  75 159
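These counts are easy to cross-check. Below is a small Python sketch (my own illustration, not a submission, and it ignores the drawing rules entirely) that counts the carbon skeletons as unlabelled trees with maximum degree 4, deduplicating with an AHU canonical form:

    def canon(adj, v, parent=None):
        # AHU canonical encoding of the tree rooted at v
        return "(" + "".join(sorted(canon(adj, u, v)
                                    for u in adj[v] if u != parent)) + ")"

    def tree_key(adj):
        # canonical form of a free tree: strip leaves to find the centre(s),
        # then take the smallest rooted encoding
        alive = set(adj)
        deg = {v: len(adj[v]) for v in adj}
        while len(alive) > 2:
            for v in [u for u in alive if deg[u] <= 1]:
                alive.discard(v)
                for u in adj[v]:
                    deg[u] -= 1
        return min(canon(adj, c) for c in alive)

    def count_alkanes(n):
        # grow every skeleton one carbon at a time, deduplicating as we go
        trees = [{0: []}]
        for size in range(1, n):
            seen = {}
            for adj in trees:
                for v in adj:
                    if len(adj[v]) < 4:              # room for one more bond
                        new = {u: list(ws) for u, ws in adj.items()}
                        new[size] = [v]
                        new[v].append(size)
                        seen.setdefault(tree_key(new), new)
            trees = list(seen.values())
        return len(trees)

    print([count_alkanes(n) for n in range(1, 12)])
    # [1, 1, 1, 2, 3, 5, 9, 18, 35, 75, 159]

This only counts the isomers; actually drawing each one on the 2x2-pitch grid is the real work of the challenge.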
EXAMPLE OUTPUT n=6 (all 5 possible isomers) Note: you must display the isomers one below the other. They are displayed side by side here to save space. For further explanation, a video explaining structural isomerism with this example is here: https://www.youtube.com/watch?v=qOhEJK4Umds
1             2           3           4         5
                                                  C
                                                  |
C-C-C-C-C-C   C-C-C-C-C   C-C-C-C-C   C-C-C-C   C-C-C-C
                |             |         | |       |
                C             C         C C       C
EXAMPLE OUTPUT n=8 (only some of the possible isomers, you must display them all.)
n-Octane
C-C-C-C-C-C-C-C
3,4-dimethylhexane. Those who know about stereoisomerism will know that in 3 dimensions the bonds are arranged in a tetrahedron around the carbon atom. This means that this compound can exist in 3 distinct forms: a left-handed form, a right-handed form and a mirror-symmetric form. For the purposes of this question these are equivalent and any one of the following is acceptable (and there are many other acceptable ways of drawing this isomer):
C-C-C-C         C           C-C-C-C         C       C-C-C-C     C-C-C-C-C-C
    |           |             |             |           |           | |
  C-C-C-C   C-C-C-C-C-C   C-C-C-C     C-C-C-C-C-C     C-C-C-C       C C
                  |                       |
                  C                       C
EXAMPLE OUTPUT n=10 (only some of the possible isomers, you must display them all.)
triisopropylmethane, also known as 3-(methylethyl)-2,4-dimethylpentane (n=10 is the smallest n where there can be a sidechain of three C atoms, and therefore also the smallest n where a sidechain itself can be branched.)
  C   C
  |   |
C-C-C-C-C
    |
  C-C-C
2,2,3,4,4-pentamethylpentane (it can be shown that for n=10 a maximum of 2 carbon atoms can be completely surrounded by four other carbon atoms each.)
  C   C
  |   |
C-C-C-C-C
  | | |
  C C C
• I'm surprised this hasn't been asked before, but I can't find it. Is code challenge (with codegolf tie break) the best scoring? I'm not sure about making it a codegolf, because I think the higher n are probably quite hard and it would discourage participation. – Level River St Jun 14 '15 at 10:27
• Do enantiomers count twice? – feersum Jun 15 '15 at 5:37
• "Your task is to create a program or function that accepts a single integer 0 < n < 12 ... The winning entry will be the program that produces correct output for the highest value of n ... Above n=11 there exist some isomers ... that cannot be represented according to the rules of this question" What I understand from this is that every valid submission gets the same score, which seems a bit pointless. With the given tie-breaker, it's effectively just a code-golf. – Peter Taylor Jun 15 '15 at 7:13
• @PeterTaylor I've changed it to a codegolf for n=9, plus a generous bonus for n=11. n>9 is fiddly (more for display reasons than underlying maths) because sometimes the sidechains have sidechains. I want to see all the way to n=11, but I don't want to exclude people by making it too hard. – Level River St Jun 15 '15 at 14:20
• @feersum to avoid complication, enantiomers and other types of stereoisomers are considered equivalent; you should only include one of them for each structural isomer. In any case they can't be distinguished properly according to the display spec. Bond symbols such as < and > would be needed to show whether an atom is closer/further than another. I thought this was clear from rule 3 and the example 3,4-dimethylhexane, but I'm not surprised someone asked. Is there a way to make this clearer? – Level River St Jun 15 '15 at 14:28
# Solve the nonogram
Nonograms, also known as Hanjie or Picross, are fascinating. They are really simple in essence, but to solve the most complex ones, some tricks have to be learned.
## Basics
Nonograms are usually presented this way:
1
2 1 2
3 2 1 2 3
+-----------
3|
2 2|
1 1 1|
2 2|
3|
The numbers on the lines and columns tell you how many boxes in a row will be present, and how many such runs there will be on that line/column. The second line says 2 2, which means "2 boxes in a row, at least one space, 2 boxes in a row". Let's give it a try with the 5x5 sample above. I will use # for boxes and . for confirmed blanks.
1
2 1 2
3 2 1 2 3
+-----------
3|
2 2|
1 1 1|
2 2|
3|
As we said, the second line says 2, some spaces, 2.
As we are playing on a 5x5 board, there's only one space remaining after
putting the boxes, so their position is certain.
1
2 1 2
3 2 1 2 3
+-----------
3|
2 2| # # . # #
1 1 1|
2 2|
3|
There are some other 2 2 rows, let's fill them!
1
2 1 2
3 2 1 2 3
+-----------
3| # #
2 2| # # . # #
1 1 1| . .
2 2| # # . # #
3| # #
We can say this puzzle is over:
Look at the 1 1 1 rows; they already have 2 confirmed blanks,
which means there are only 3 spaces left. We can fill these, and complete the
puzzle.
1
2 1 2
3 2 1 2 3
+-----------
3| # # #
2 2| # # . # #
1 1 1| # . # . #
2 2| # # . # #
3| # # #
We didn't check all the confirmed blanks, but that doesn't matter; we only need
to check the boxes. Note that the 3 rows were useless for solving this
puzzle.
There you have the basics; some of the tips on the Wikipedia page could also be useful.
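To make these basics concrete, here is a rough Python sketch (my own illustration, not part of the challenge) of the line technique used above: enumerate every placement of a clue on a line, drop the placements that contradict the known cells, and fix every cell on which all survivors agree. It is fine at these sizes, but placement enumeration grows exponentially on long, sparse lines:

    def line_placements(clue, n):
        # every way to lay the runs in `clue` on a line of n cells
        if not clue:
            return [(False,) * n]
        k, rest = clue[0], clue[1:]
        need = sum(rest) + len(rest)           # cells the remaining runs need
        out = []
        for lead in range(n - k - need + 1):
            head = (False,) * lead + (True,) * k
            if rest:
                head += (False,)               # mandatory gap between runs
            for tail in line_placements(rest, n - len(head)):
                out.append(head + tail)
        return out

    def settle(clue, line):
        # line holds None (unknown) / True (box) / False (blank)
        fits = [p for p in line_placements(tuple(clue), len(line))
                if all(c is None or c == v for c, v in zip(line, p))]
        assert fits, "clue contradicts the line"
        return [fits[0][i] if len({f[i] for f in fits}) == 1 else line[i]
                for i in range(len(line))]

    def solve(cols, rows):
        h, w = len(rows), len(cols)
        grid = [[None] * w for _ in range(h)]
        changed = True
        while changed:
            changed = False
            for r in range(h):
                new = settle(rows[r], grid[r])
                changed |= new != grid[r]
                grid[r] = new
            for c in range(w):
                col = settle(cols[c], [grid[r][c] for r in range(h)])
                for r in range(h):
                    changed |= grid[r][c] != col[r]
                    grid[r][c] = col[r]
        return grid

    cols = rows = [[3], [2, 2], [1, 1, 1], [2, 2], [3]]
    for row in solve(cols, rows):              # the 5x5 sample above
        print("".join("#" if v else "." for v in row))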
## Goal
Your job is to write a program, in the language of your choice, that solves nonograms.
There shouldn't be any problem, but here are some loopholes that are forbidden, just in case :).
I'd also like you to write down which command has to be run to execute your code.
## Input
The input can be graphic, an array, a string via stdin, or whatever you want. You may hard-code it; I want to know how fast your program is at solving nonograms, not how fast it is at parsing data. I'll provide two formats for each test case: an ASCII-art one, as shown above, and one structured as an array of the form:
[[columns],[lines]]
Columns and lines are noted the same way; the array for the sample:
[[[3],[2,2],[1,1,1],[2,2],[3]],[[3],[2,2],[1,1,1],[2,2],[3]]]
## Output
You must produce the solved nonogram, formatted as you want as long as it is clear. It should be output via stdout, or your language's closest alternative, as an image, ASCII art, or left on top of the stack.
## Test Cases
Puzzles will never be greater than 99*99, nor smaller than 2*2. They are all solvable using basic techniques; the ones shown on Wikipedia are far more than enough.
I might add some test cases later, but big ones take time, a lot of time. If you want to give it a try, post one with your answer, and it will be added if it is solvable without any guessing. That means solving only these won't necessarily be enough; you need to be able to solve any "basic" nonogram: no multi-row, multi-depth contradiction searching. Even contradictions shouldn't be necessary.
5x5
[[[3],[2,2],[1,1,1],[2,2],[3]],[[3],[2,2],[1,1,1],[2,2],[3]]]
1
2 1 2
3 2 1 2 3
+-----------
3|
2 2|
1 1 1|
2 2|
3|
10x10
[[[3],[2,2],[1,1,1],[2,2],[3],[3,1],[2,7],[1,1,1,1],[2,2,2],[3,3]],[[3],[2,2],[1,1,1],[3,2,2],[2,2,3],[1,1,1,1],[2,4,1],[3,1,2],[4],[1]]]
1
1 1 2
2 1 2 3 2 1 2 3
3 2 1 2 3 1 7 1 2 3
+--------------------
3|
2 2|
1 1 1|
3 2 2|
2 2 3|
1 1 1 1|
2 4 1|
3 1 2|
4|
1|
30x30
[[[10,6,10],[9,8,9],[7,10,7],[7,14,6],[6,15,5],[6,15,5],[5,17,4],[5,19,4],[26,3],[4,2,10,2,3],[4,1,8,1,3],[3,12,3],[3,9,3],[3,16,3],[3,16,3],[3,16,3],[3,16,3],[3,16,3],[3,9,3],[3,12,3],[3,1,8,1,3],[4,2,10,2,3],[4,25],[4,19,4],[5,17,5],[6,16,5],[6,12,7],[8,10,8],[8,8,10],[10,6,10]],[[30],[30],[30],[11,9],[9,6],[6,3,1,1,3,5],[4,3,1,1,3,3],[2,3,2,2,4,3],[2,6,5,4,1],[1,6,5,5,1],[7,5,6],[9,7,8],[30],[30],[30],[30],[30],[30],[28],[26,1],[1,7,1,5,1,6,2],[2,6,1,3,1,4,2],[2,5,1,3,1,4,3],[3,3,1,1,1,3,4],[4,3,1,3,4],[6,3,3,6],[8,8],[30],[30],[30]]]
4 4 3 4
2 1 1 2
10 9 7 7 6 6 5 5 10 8 3 3 3 3 3 3 3 3 3 810 4 5 6 6 8 810
6 810141515171926 2 112 91616161616 912 1 2 41917161210 8 6
10 9 7 6 5 5 4 4 3 3 3 3 3 3 3 3 3 3 3 3 3 325 4 5 5 7 81011
+------------------------------------------------------------
30|
30|
30|
11 9|
9 6|
6 3 1 1 3 5|
4 3 1 1 3 3|
2 3 2 2 4 3|
2 6 5 4 1|
1 6 5 5 1|
7 5 6|
9 7 8|
30|
30|
30|
30|
30|
30|
28|
26 1|
1 7 1 5 1 6 2|
2 6 1 3 1 4 2|
2 5 1 3 1 4 3|
3 3 1 1 1 3 4|
4 3 1 3 4|
6 3 3 6|
8 8|
30|
30|
30|
## Solution
For those who are interested, here's the solution for the test cases.
5x5
1
2 1 2
3 2 1 2 3
+----------
3| . # # # .
2 2| # # . # #
1 1 1| # . # . #
2 2| # # . # #
3| . # # # .
10x10
1
1 1 2
2 1 2 3 2 1 2 3
3 2 1 2 3 1 7 1 2 3
+--------------------
3| . . . . . . # # # .
2 2| . . . . . # # . # #
1 1 1| . . . . . # . # . #
3 2 2| . # # # . # # . # #
2 2 3| # # . # # . # # # .
1 1 1 1| # . # . # . # . . .
2 4 1| # # . # # # # . . #
3 1 2| . # # # . # # . # #
4| . . . . . . # # # #
1| . . . . . . # . . .
30x30
Hope you'll like it, it took me a lot of time :)
4 4 3 4
2 1 1 2
10 9 7 7 6 6 5 5 10 8 3 3 3 3 3 3 3 3 3 810 4 5 6 6 8 810
6 810141515171926 2 112 91616161616 912 1 2 41917161210 8 6
10 9 7 6 5 5 4 4 3 3 3 3 3 3 3 3 3 3 3 3 3 325 4 5 5 7 81011
+------------------------------------------------------------
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
11 9| # # # # # # # # # # # . . . . . . . . . . # # # # # # # # #
9 6| # # # # # # # # # . . . . . . . . . . . . . . . # # # # # #
6 3 1 1 3 5| # # # # # # . . # # # . . # . . . # . . # # # . . # # # # #
4 3 1 1 3 3| # # # # . . . # # # . . . # . . . # . . . # # # . . . # # #
2 3 2 2 4 3| # # . . . . # # # . . . . # # . # # . . . . # # # # . # # #
2 6 5 4 1| # # . # # # # # # . . . . # # # # # . . . . # # # # . . . #
1 6 5 5 1| # . . # # # # # # . . . . # # # # # . . . . # # # # # . . #
7 5 6| . . # # # # # # # . . . . # # # # # . . . . # # # # # # . .
9 7 8| . # # # # # # # # # . . # # # # # # # . . # # # # # # # # .
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
28| . # # # # # # # # # # # # # # # # # # # # # # # # # # # # .
26 1| . . # # # # # # # # # # # # # # # # # # # # # # # # # # . #
1 7 1 5 1 6 2| # . . # # # # # # # . # . # # # # # . # . # # # # # # . # #
2 6 1 3 1 4 2| # # . # # # # # # . . # . . # # # . . # . . # # # # . . # #
2 5 1 3 1 4 3| # # . . # # # # # . . # . . # # # . . # . . # # # # . # # #
3 3 1 1 1 3 4| # # # . . . # # # . . # . . . # . . . # . . # # # . # # # #
4 3 1 3 4| # # # # . . . # # # . . . . . # . . . . . # # # . . # # # #
6 3 3 6| # # # # # # . . # # # . . . . . . . . . # # # . # # # # # #
8 8| # # # # # # # # . . . . . . . . . . . . . . # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
30| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
60x60
Not completed, not tested
60| # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
6 3 1 1 335| # # # # # # . . # # # . . # . . . # . . # # # . . # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
6 3 1 1 3 6 6 5 5 1| # # # # # # . . # # # . . # . . . # . . # # # . . # # # # # # . . # # # # # # . . . . # # # # # . . . . # # # # # . . #
647| . # # # # # # . . . . . . # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
5 1 1 3 2 1 4| # # # # # . . . . . . . . . . . . . . . . # . . . . . . . . . . . . . . . . . . . . # . # # # . # # . # . . . . # # # #
1 2 1 2 2 1 4| . # . # # . . . . . . . . . . . . . . . . # . . . . . . . . . . . . . . . . . . . . . . . # # . # # . # . . . . # # # #
1 2 1 2 1 3| # . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # . # # . # . . . . . # # #
2 2 1 1 2| # # . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # . . # . . . . . . . . # #
3 2 2| # # # . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # #
3 2 1| # # # . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
3 3 1| . # # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
1 8| # . # # # # # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1 4 4 1| # . # # # # . # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
1 3 2 2| # . # # # . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # #
1 3 2 1| # . . # # # . . . . # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
2 4 3 1| # # . # # # # . . . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # .
4 3 3 2| # # # # . # # # . . . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # #
3 4 3 1| . # # # . . # # # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
1 3 8| # . # # # . # # # # # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1 3 1 5 1| # . # # # . # . # # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . #
1 5 4 2| # . # # # # # . # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # #
2 4 2 3 2| # # . # # # # . # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # .
2 2 2 1 3 2 1| # # . # # . # # . # . . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . #
1 2 2 1 3 2 2| . # . # # . # # . # . . . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # #
3 1 2 2 4 2 2| # # # . # . # # . # # . # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # .
3 1 2 4 3 2 2 1| # # # . # . # # . # # # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # . #
1 2 2 3 2 3 2 2 2| # . # # . # # . # # # . # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # . # #
1 2 2 3 2 3 2 2 2| # . # # . # # . # # # . # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # . # # .
4 2 6 4 2 2 2 1| # # # # . # # . # # # # # # . # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # . # # . #
6 6 4 2 2 2 2| . # # # # # # . # # # # # # . # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # . # # . # # . # #
3 3 2 3 5 3 2 2 2| # # # . # # # . # # . # # # . # # # # # . . . . . . . . . . . . . . . . . . . . . . . . . . . . # # # . # # . # # . # #
1 1 3 2 6 3 4 2 2 2| # . # . # # # . # # . # # # # # # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . . # # # # . # # . # # . # #
1 5 2 2 1 1 3 2 2 2 2 1| # . # # # # # . # # . # # . # . # . # # # . . . . . . . . . . . . . . . . . . . . . . . . . # # . . # # . # # . # # . #
1 3 1 2 2 1 1 4 3 2 2 2| # . # # # . # . # # . # # . # . # . # # # # . . . . . . . . . . . . . . . . . . . . . . . # # # . . . # # . # # . # # .
1 3 1 2 2 3 4 4 2 2 2| # . # # # . # . # # . # # . # # # . . # # # # . . . . . . . . . . . . . . . . . . . . . # # # # . . . . # # . # # . # #
1 1 1 1 2 2 3 4 2 2 2 2 1| # . # . # . # . # # . # # . # # # . . # # # # . . . . . . . . . . . . . . . . . . . . # # . . # # . . . . # # . # # . #
3 6 2 4 1 3 3 2 2 2| # # # . # # # # # # . # # . # # # # . # . # # # . . . . . . . . . . . . . . . . . . # # # . . . # # . . . . # # . # # .
2 2 3 2 4 1 4 4 2 2 2| . # # . # # . # # # . # # . # # # # . # . # # # # . . . . . . . . . . . . . . . . # # # # . . . . # # . . . . # # . # #
2 2 3 2 1 1 1 4 2 2 2 2 1| # # . . # # . # # # . # # . # . . # . # . # # # # . . . . . . . . . . . . . . . # # . . # # . . . . # # . . . . # # . #
2 6 2 1 2 1 2 3 3 2 2 2| # # . . # # # # # # . # # . # . # # . # . # # . # # # . . . . . . . . . . . . # # # . . . # # . . . . # # . . . . # # .
4 5 4 2 1 1 3 4 2 2 2| # # # # . # # # # # . # # # # . # # . # . # . . # # # . . . . . . . . . . . # # # # . . . . # # . . . . # # . . . . # #
3 5 4 2 1 1 2 2 2 2 2 1| . # # # . # # # # # . # # # # . # # . # . # . . # # . . . . . . . . . . . # # . . # # . . . . # # . . . . # # . . . . #
8 4 4 1 2 1 2 2 2 2 1| # # # # # # # # . # # # # . # # # # . # . # # . # . . . . . . . . . . . # # . . . . # # . . . . # # . . . . # # . . # .
3 4 1 2 4 1 3 2 2 2 2 2| # # # . # # # # . # . # # . # # # # . # . # # # . . . . . . . . . . . # # . . . . . . # # . . . . # # . . . . # # . # #
3 6 2 4 1 6 4 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # # # . . . . . # # # # . . . . . . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 2 2 2 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . # # . . . # # . # # . . . . . . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 5 1 1 3 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # # . # . . . # . # # # . . . . . . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 212 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # . # # # # # # # # # # # # . . . . . . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 2 1 1 3 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # . # . . . . # . . . . # # # . . . . . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 1 4 2 2 4| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # # # . . . . . . # # . . . . # # . . . . # # # #
3 6 2 4 1 4 1 2 2 2 2 4| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # . # # . . . . . # # . . . . # # . . . . # # # #
3 6 2 4 1 4 1 2 2 2 2 3| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # . . # # . . . . # # . . . . # # . . . . # # # .
3 6 2 4 1 4 1 2 2 2 2 4| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # . . . # # . . . # # . . . . # # . . . . # # # #
3 6 2 4 1 4 1 2 2 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # . . . # # . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 1 2 2 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . . . . # . . . . # # . . . # # . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 3 2 2 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . . . # # # . . . # # . . . # # . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 1 1 1 2 2 2 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . . # . # . # . . # # . . . # # . . . # # . . . . # # . . . . # # . #
3 6 2 4 1 4 2 1 2 2 2 2 1 2 2 1| # # # . # # # # # # . # # . # # # # . # . # # # # . # # . # . # # . # # . . . # # . . . # # . . # . # # . . . . # # . #
3 6 2 4 1 4 2 1 2 2 2 2 1 2 4| # # # . # # # # # # . # # . # # # # . # . # # # # . # # . # . # # . # # . . . # # . . . # # . . # . # # . . . # # # # .
3 6 2 4 1 4 2 1 6 4 412| # # # . # # # # # # . # # . # # # # . # . # # # # . # # . # . # # # # # # . # # # # . # # # # . # # # # # # # # # # # #
## Winning criteria
Puzzle solving might take a while; who will find the best heuristics? Who will find the best implementation? Fastest code wins (code will be run on my computer, an i5-4440, and must not use more than 4GB of RAM). You'll be scored using the 60*60 nonogram (sandbox note: still to be added). The other test cases are there to help you while developing your submission. If there's a need for a tie-breaker, I'll provide a more complex test case (maybe a 90*90?)
## Sandbox
I have two questions for the sandbox:
• Do I need to add more explanation?
• Are there too many grammar/other faults? (I'm not a native speaker, sorry :()
As suggested by @steveverrill, I changed it to a fastest-code contest, some basic things changed. It makes much more sense, thanks.
I added the first part of the 60x60 nonogram; I still have to form the array and do the columns. I know it is ugly, but I wasn't able to come up with a nice one which would be viable AND long to solve.
• You really need some time limit on this, and for a specified size. "don't try to brute force it" is vague. The shortest-code way of solving this will be to try every possible combination of 0's and 1's and check. That will take less than a second for example 1, a lifetime for example 2, and something like the current age of the universe for example 3. There are other marginally less naive ways of doing it, like generating all possible rows then checking the columns, which will also take too long. I'd go as far as to say this might go better as fastest code (largest example solved in 1 minute.) – Level River St Jun 16 '15 at 13:55
• Did you make that batman yourself? Are you sure it's the only valid solution? (doesn't matter so long as you specify that the program can terminate when it finds any valid solution.) Note that a human will immediately note the completely full rows and the (near) symmetry, which will make it easier, while a computer will not. – Level River St Jun 16 '15 at 14:00
• @steveverrill I was thinking of 2~3 minutes for the 30*30 one, I don't want it to take some billion years :). But I'll take your fastest code in consideration, as it may have more sense, and could be interesting to see what people will bring. – Katenkyo Jun 16 '15 at 14:25
• @steveverrill Yes I did, and there shouldn't be any other solution. But as per the rules of a nonogram, any solution satisfying the lines/columns is a good one, just less pretty than the intended one. Using symmetry is useless for solving this one. I solved it only using "30 is the length of the row, I fill", "27 2 means w+27+x+2+y=30, and 27+2=29, 30-29=1, so w=y=0 and x=1", plus another strategy described as "simple boxes" on the wikipedia page. These are the basic moves a beginner learns, and they shouldn't be hard to code as they can be done line-by-line and column-by-column – Katenkyo Jun 16 '15 at 14:30
• Are you sure your 30x30 has only one solution? – Sparr Jun 16 '15 at 15:15
• @Sparr I'm still testing to find other ones, but I can't find another one – Katenkyo Jun 16 '15 at 16:44
• I see you're aware fastest code means you must take responsibility for running the code. Score should either be: 1. time to solve a given grid size, or 2. max (square?) grid size solved in a given time. Simply "fastest code" is not enough. The trouble with 1 is you've no idea how long it will take (see meta.codegolf.stackexchange.com/q/5360/15599). The trouble with 2 is you may have to define additional grids, but I think that's preferable over 1. Users can time themselves on the test cases to give an idea of the winner - probably there will be vast differences in timing. – Level River St Jun 16 '15 at 19:40
• @steveverrill I'm going for 2., surely with a 60*60 or 70*70 grid (would be hard to design a greater one). I think it has enough cells to see who's the best. If not, i'll push myself into a 90*90 one. – Katenkyo Jun 17 '15 at 7:05
• According to Wikipedia this problem is NP-hard. That means grid size will have a massive impact on time. It's not at all clear whether 30x30 will run in reasonable time. If you want to score by time to complete a certain grid, I suggest you start with the smallest grid, eliminate all entries that run a measurable time (say 1 second) slower than the leader, then proceed to do the same with larger and larger grids until only one entry remains. That way you can avoid the problem of timings that are ridiculously fast or ridiculously slow. It also means the entries get tested on a variety of grids – Level River St Jun 17 '15 at 9:07
• Oops - just seen the new note about "solvable without any guess"! If anyone encodes an algorithm for that (I'm guessing a maximum of 2 people will do that, it's easy for a human to do but quite difficult to code) it should run very fast. If it is guaranteed that no guessing is required this becomes quite a different challenge. Is it guaranteed? – Level River St Jun 17 '15 at 9:16
• @steveverrill I might try to code a submission, to see how much time it would take. It could be used by others as a scale to see how far they are from it. A real nonogram must be solved by logic only. It can be complicated logic (where you predict what 2+ placed boxes would do on the board (depth reasoning)). I don't think I'll be able to prove that point, but I tried every way I could to solve those 3, and each time no guesses were needed. That's the reason it takes so long to design one ^^'. – Katenkyo Jun 17 '15 at 9:23
• The challenge is currently self-contradictory, asking "Who will find the best heuristics?" but promising "they are all solvable using basic techniques." If it is going to be a fastest-code and not a code golf, then the puzzles should be as difficult as possible. – feersum Jun 18 '15 at 22:26
• @feersum The fact that they are all solvable by basics is because it was initially a code-golf. The 60*60 I'm designing will be a bit trickier to solve, and I might prepare a 40*40 as an intermediate tricky case. But yeah, thanks for pointing this out, I might have forgotten. – Katenkyo Jun 19 '15 at 7:12
• I know this game and I can't solve it any way other than brute force; sorry, but I think this challenge wouldn't give the desired results – Abr001am Jun 25 '15 at 10:50
# 4-Way Intersection Simulator
Consider an intersection as follows:
Cars will drive up any of the above Input lanes, and will exit out of any of the 3 other Output lanes. The goal is to take the list of cars, their arrival time, and their destination, and to return the times they will exit the intersection.
We will measure the time that it takes a car to cross an intersection as 1 Tick. We will assume that the time it takes for a car to approach and leave the intersection to be 0 ticks.
Each input acts as a queue of cars. Each tick, the car that has been at the front of its respective lane the longest will cross the intersection in its respective direction.
# Priority
If multiple cars have been waiting for the same amount of time, the rightmost car has priority. If there are 2 cars that are on opposing sides, they will both cross at the same time (as described below), unless only one of them is turning left. If that is the case, the car not turning left will have priority. Two cars may turn left at the same time. If there are 4 cars that arrive at the same time, the car in Input 1 will have priority.
After the car to cross has been chosen, other cars may cross at the same time, assuming their paths don't cross. Priority is given to the lane directly across from the crossing car, then to the car to its right, then to the car to its left.
# Input/Output
Input will be a list of cars. Each car will be passed as a tuple containing the car's unique ID, arrival time, arrival lane, and destination lane. The arrival lane will never be the destination lane.
Your program should output a list of cars, where each car is a tuple containing the car's unique ID and the time it reaches its destination.
I don't care if the input format exactly matches the examples below. What I do care about is that you input a list of tuples/lists and output a list of tuples/lists.
# Examples
[(0, 5, 2, 3)] -> [(0, 6)]
Car 0 arrives in lane 2 at tick 5. He leaves in lane 3 at tick 6
[(0, 3, 1, 3), (1, 3, 3, 1)] -> [(0, 4), (1, 4)]
Car 0 and 1 arrive in lanes 1 and 3 at tick 3. They both leave at tick 4.
[(0, 0, 3, 1), (1, 0, 4, 2), (2, 1, 4, 2)] -> [(0, 2), (1, 1), (2, 3)]
Car 0 and 1 arrive in lanes 3 and 4. Their paths intersect, so car 1 leaves first
because it is the rightmost car. The next tick, car 2 arrives, but car 0 has been
waiting the longest, so car 0 leaves next. Finally, car 2 leaves at time 3.
[(0, 0, 1, 2), (1, 0, 2, 3), (2, 0, 3, 4), (3, 0, 4, 2)] -> [(0, 1), (1, 3), (2, 1), (3, 2)]
All four cars arrive at the same time. Car 0 has the priority as it is in Input 1.
Car 2 is directly across from it, and both are turning left, so they cross at the same
time. Car 1 is turning left, so car 3 will cross next, followed by Car 1.
[(0, 0, 1, 4), (1, 0, 2, 1), (2, 0, 3, 2), (3, 0, 4, 3)] -> [(0, 1), (1, 1), (2, 1), (3, 1)]
All four cars arrive at the same time, all are turning right, so all leave at tick 1.
[(0, 0, 1, 3), (1, 0, 4, 2), (2, 0, 3, 2), (3, 1, 1, 4), (4, 1, 4, 1), (5, 1, 2, 1), (6, 2, 3, 4), (7, 2, 1, 3), (8, 3, 1, 4), (9, 3, 4, 2), (10, 3, 3, 2)] -> [(0, 1), (2, 1), (1, 2), (3, 2), (5, 2), (6, 3), (4, 4), (10, 4), (7, 5), (9, 6), (8, 6)]
Car 2 is the rightmost, and has priority. Car 0 is also able to cross at Tick 0
Car 1 has now been waiting the longest, and has priority. Both Car 3 and 5 are able to
cross as well. Car 4 was waiting behind Car 1, and so Cars 4, 6, and 7 arrive at the
same time. Car 6 is the rightmost, so he exits at tick 3 while cars 8-10 arrive.
Car 4 is the next rightmost, so he makes his turn next, while Car 10 makes his right turn.
Car 7 finally has his turn, and crosses. Car 8 is behind Car 7, and Car 9 intersects with
Car 7, so neither cross at the same time, but both are able to cross the next tick.
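To pin down the geometry, here is a small Python sketch of the lane relationships implied by the worked examples (the numbering is inferred from them: 1 -> 2 is a left turn, 1 -> 3 is straight, 1 -> 4 is a right turn; lanes 1/3 and 2/4 oppose each other):

    def turn(src, dst):
        # classify a move between lanes 1..4, per the examples above
        return {1: "left", 2: "straight", 3: "right"}[(dst - src) % 4]

    def opposite(lane):
        return (lane + 1) % 4 + 1          # 1 <-> 3, 2 <-> 4

    # fifth example: every car turns right, so all four cross on tick 1
    for car, t, src, dst in [(0, 0, 1, 4), (1, 0, 2, 1), (2, 0, 3, 2), (3, 0, 4, 3)]:
        assert turn(src, dst) == "right"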
# Black and White Morphing
Given two black and white images, the goal is to create an animated black and white gif that transforms one image into the other and back.
The catch is that for all frames the number of black pixels (as well as the number of white pixels, obviously) stays the same. You can assume that the two input images have the exact same size and the exact same number of black pixels.
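For reference, a quick Pillow sketch (the file name is just a placeholder) to verify that an answer's gif really keeps the black-pixel count constant across frames:

    from PIL import Image, ImageSequence

    def black_counts(path):
        # count black pixels per frame; mode "1" frames hold 0 (black) or 255
        gif = Image.open(path)
        return [sum(p == 0 for p in frame.convert("1").getdata())
                for frame in ImageSequence.Iterator(gif)]

    counts = black_counts("morph.gif")         # hypothetical file name
    assert len(set(counts)) == 1, "black pixel count changed between frames"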
### Discussion
@PeterTaylor suggested adding the restriction that from one frame to the next you can only swap adjacent pixels. Otherwise this challenge would be almost the same as this one, so we need a further restriction.
My goal is to enforce a 'slow' transition that can produce nice effects. One way of picturing that is to consider the white pixels as fluid or sand that has to be rearranged step by step into the other image.
@trichoplax suggested a limit where e.g. only 5% of the pixels may change in each transition.
### Test Cases
This is a first series of test cases, all 320x386px and 33844 white pixels.
• Is the stark contrast how you want to the challenge to look, or just what you happen to have at the moment? How would you feel about including dithered images that use the same number of white pixels as these but give the impression of grayscale, and make more detail visible in the currently pure black and pure white regions? – trichoplax Jun 20 '15 at 21:04
• Would it provide more challenge and more variety to include more than one value for number of white pixels? Perhaps the same 5 images could be provided in 3 categories: black heavy, white heavy and balanced. Then the results for each category can be shown so we can judge whether a given technique gives good results across the board. – trichoplax Jun 20 '15 at 21:07
• Of course there should be more series, and I like your suggestion of black heavy, white heavy and balanced. My idea was that people should come up with creative transitions from one image to the other and back; I imagined something like the 'powerpoint' slide transitions. The images are black and white only to make the challenge somewhat easier. – flawr Jun 20 '15 at 21:09
• I like the restriction to black and white only. I don't necessarily think it makes it easier, but I think restriction is very important for popularity contests otherwise they get too open ended. It might be worth adding more restriction otherwise it risks being closed as too broad. For example, you might restrict how much difference there can be from one frame to the next. Maybe only 5% of pixels can change each frame (or whatever percentage you feel is best). You might feel that particular restriction detracts from some potential solutions so that's just an example. – trichoplax Jun 20 '15 at 21:31
• The answers to this question might have some useful code for creating nice dithered images for your sample images. It converts greyscale to strictly only black and white. – trichoplax Jun 20 '15 at 21:55
• It might be worth thinking up a few examples of transition effects and then adding restrictions that don't rule out any of the effects you thought of. To make it an interesting challenge but without losing potential answers. – trichoplax Jun 20 '15 at 21:56
• For example, the effect of paint running down from the top of the picture overwriting the old picture with the new. – trichoplax Jun 20 '15 at 21:57
• Or a simple slide across (with or without the old image moving too). – trichoplax Jun 20 '15 at 21:58
• Or "burn through" from a hole appearing in the centre, like old cinema projectors if left on one frame too long. – trichoplax Jun 20 '15 at 21:59
• Cf codegolf.stackexchange.com/q/33172/194 , although the morphs there don't preserve the number of pixels of each colour in the intermediate frames. At present I would vote to close this as too broad, and suggest adding a restriction that in each frame the changed pixels must pair up into adjacent pairs which swap with each other. – Peter Taylor Jun 20 '15 at 22:01
• @PeterTaylor I was aware of that challenge (I did even participate=), but I did not realize that this one would be so similar to the other one. I really like your idea of the swapping restriction, but I am not sure whether this is perhaps too restrictive. Another idea I had (very vague so far) is considering the white pixels as some kind of 'fluid' that can only move with a certain velocity (or another property) and your suggestion would really match that idea. The question is for each frame transition, should we limit the number of swaps? Or the number of swaps per pixel? – flawr Jun 21 '15 at 10:04
# Find the maximum of ax+b online
You are given a list of (a,b), and a list of x. Compute the maximum ax+b for each x. You can assume a, b and x are non-negative integers.
But this time, items in the list are added dynamically. Your program should support the following operations (you can rename the operations if that's convenient):
• Add a,b, to insert (a,b) into the list.
• Query x, to find the maximum ax+b in the current list, with the given x.
Your program or function must run in expected O(n log n) time, where n is the total input length (or total number of operations). The expectation is over your code's internal randomness, if it uses any, not over the input.
You can write a complete program, a function, a list of functions or methods doing each operation, or a function taking one operation each time. For the latter two cases, you can either return or print the result after each operation, or add an "output" operation, or output automatically when the program ends.
### Examples
(will be added later.)
This is code-golf. Shortest code wins.
### Note about the complexity:
If you use a builtin with a good average-case complexity, and it could in theory easily be randomized to achieve the same expected complexity, you can assume your language did that.
That means: if your program can be shown to be O(n log n) in theory, considering edge cases of your own code but not of your language's implementation, we'll say it is O(n log n).
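One structure that fits the bound is a Li Chao tree: O(log X) per insert and per query, where X is the x-range. A minimal ungolfed Python sketch, assuming x is bounded (the 10**9 bound below is my assumption, not part of the spec):

    class LiChao:
        # Li Chao segment tree over integer x in [lo, hi)
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
            self.line = None                  # (a, b): best known line here
            self.left = self.right = None

        def add(self, a, b):
            lo, hi = self.lo, self.hi
            if self.line is None:
                self.line = (a, b)
                return
            mid = (lo + hi) // 2
            # keep in this node whichever line wins at mid
            if a * mid + b > self.line[0] * mid + self.line[1]:
                self.line, (a, b) = (a, b), self.line
            if hi - lo == 1:
                return
            ca, cb = self.line
            if a * lo + b > ca * lo + cb:          # loser may win on the left
                if self.left is None:
                    self.left = LiChao(lo, mid)
                self.left.add(a, b)
            elif a * (hi - 1) + b > ca * (hi - 1) + cb:   # ... or on the right
                if self.right is None:
                    self.right = LiChao(mid, hi)
                self.right.add(a, b)

        def query(self, x):
            best, node = float("-inf"), self
            while node is not None:
                if node.line is not None:
                    a, b = node.line
                    best = max(best, a * x + b)
                node = node.left if x < (node.lo + node.hi) // 2 else node.right
            return best

    t = LiChao(0, 10**9)           # assumed bound on x, not part of the spec
    t.add(2, 3); t.add(1, 10)
    print(t.query(0), t.query(8))  # 10 19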
• Can we assume our language's built-in sorting is O(n lg n)? – xnor Mar 5 '15 at 2:58
• @xnor Usually they are O(n lg n). But if you meant some language with a built-in sorting function not in O(n lg n) (or nobody is bothered to check the real complexity), I'm not sure. Strictly speaking they may be not in O(n lg n) and invalid. But it seems nobody is downvoting or deleting those answers. – jimmy23013 Mar 5 '15 at 4:38
• @user23013 They are very rarely O(n log n). Most languages implement quick sort, which is O(n log n) on average but has a worst case complexity of O(n^2). That being said, I'd always include a statement along the lines "you may assume that your language's built-in sorting function runs in O(n log n)". – Martin Ender Mar 6 '15 at 21:10
• @MartinBüttner Allowed expected complexity. Is it better now? – jimmy23013 Mar 7 '15 at 2:06
• I don't know... it just seems unnecessarily complicated to me and puts some esolangs at a disadvantage that might have a naive sort implementation, but ultimately it's your call. – Martin Ender Mar 7 '15 at 2:09
• @MartinBüttner Do you have ideas about the stronger version (allowing inserting (a,b) dynamically)? I'm trying to make it consistent. And esolangs can answer the convex hull question anyway. – jimmy23013 Mar 7 '15 at 2:41
• @user23013 I don't think I'm qualified to have an opinion about the stronger version. ;) – Martin Ender Mar 7 '15 at 3:02
# Diffusion Battle
## Overview
Players are all present on a toroidal grid. Each player has 16 particles to start with. The total number of particles is fixed but they can change colour. Each turn a player decides what type of action to take for each particle of their colour, but cannot control the direction, which is always random.
All players' particles then move in a random direction at the same time, possibly resulting in some of them changing colour. The player with the most particles of their colour at the end of the game is the winner.
## Action types
A player chooses from the following actions for each particle:
• Drift: do not attempt to change the colour of other particles
• Eat: attempt to change the colour of other particles
Each of these actions is applied after all players' particles have aimed in a random direction. This may result in two particles aiming for the same cell. No cell will end up with more than one particle, but aiming for the same cell results in interaction, with no movement and the following rules being applied:
• If both particles chose Drift, nothing happens.
• If both particles chose Eat, nothing happens.
• If one particle chose Eat, the particle that chose Drift will become the colour of the particle that chose Eat.
Clearly choosing Eat is always an advantage when two particles aim for the same destination. However, if a particle aims for a cell that no other particle is aiming for, it will move there, with the following rules being applied:
• If the particle chose Drift it will move with no change.
• If the particle chose Eat it will move and take on a random colour (which may be its own colour or that of any other player).
### N particles colliding
The case where N particles are all aiming for the same destination cell is a generalisation of the case for 2 particles. None of them will move to the destination cell and the following rules will be applied:
• If all of the N particles chose Drift, nothing happens.
• If all of the N particles chose Eat, nothing happens.
• If some chose Drift and some chose Eat, none will move and all those that chose Drift will change to a colour chosen randomly from those exhibited by those that chose Eat. If there is more than one particle of a given colour that chose Eat, that colour will have a correspondingly higher probability of being chosen.
When N = 2 this reduces to the case described for 2 particles.
It follows that if the N particles are of the same colour, then regardless of their individual choices none will move and they will all remain the same colour.
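As a sanity check of these rules, a minimal Python sketch of the collision resolution (the (colour, action) encoding is my own, not part of any controller):

    import random

    def resolve_collision(particles):
        # particles: list of (colour, action) pairs, action in {"drift", "eat"},
        # all aiming at the same cell; nobody moves, drifters may change colour
        eaters = [colour for colour, action in particles if action == "eat"]
        result = []
        for colour, action in particles:
            if action == "drift" and eaters:
                # duplicates in `eaters` give that colour a higher probability
                colour = random.choice(eaters)
            result.append(colour)
        return result

Note that this automatically covers the all-Drift, all-Eat and same-colour cases: in each of them no colour changes.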
### Collision with a particle that was unable to move
What happens to particles that were aiming for an empty cell but the cell is not empty because its occupant was unable to move?
# Sandbox thoughts
EITHER
• ALL THOSE THAT AIMED FOR THE SAME CELL AFFECT EACH OTHER
OR
• THOSE THAT AIMED FOR THE SAME CELL DO NOT MOVE, ALL THOSE THAT END UP ADJACENT AFFECT EACH OTHER
I favour the second but I need to consider how it would work with large numbers of particles adjacent.
• Zgarb pointed out in chat that it would be better to have a small probability of changing if failing to eat, so that the penalty for failure is not so extreme. I'm likely to use this as it is fine tunable.
# Minimum? Vertex Cover
<insert definition of (minimum) vertex cover here>
Given a graph, you must output a valid vertex covering of that graph. The entry with the smallest total size (number of vertices) over five test cases (TBD) wins.
### Input
Your program will be run with one argument given: a file name. For example:
python findacover.py graph_1.txt
Submissions will read the graph from the file specified. The format of the file will be:
5
0:1 2 3
1:0 3
2:0
3:0 1 4
4:3
The first line is simply the number of vertices in the graph (V). The next V lines are the list of vertices. Each line consists of the vertex number and a colon, followed by a space-separated list of vertices connected to that vertex by an edge.
Note that each edge will be listed twice, once for each vertex it connects. You can see in the example that the edge connecting 1 and 3 is present on the line for both vertices.
### Output
Output is simply a list of vertices that represent a valid vertex covering of the input graph. Output should be written to STDOUT (so my validator can score it).
I will strip all non-digit, non-space characters ([^0-9 ]) from your output and interpret the remainder as a space-separated list. For example, outputs [0, 1, 2, 3] and 0 1 2 3 will be treated the same.
For the example graph above:
Valid:
0 1 2 3 4
or
0 3
among others.
Invalid:
0 1 2
This does not cover the graph, since the edge between 3 and 4 is not covered.
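As a baseline (and to show the expected I/O wiring), here is an ungolfed Python sketch of the classic maximal-matching 2-approximation; it always produces a valid cover, just not a small one:

    import sys

    def read_graph(path):
        with open(path) as f:
            v = int(f.readline())
            adj = [[] for _ in range(v)]
            for line in f:
                if ":" not in line:
                    continue
                node, _, rest = line.partition(":")
                adj[int(node)] = [int(x) for x in rest.split()]
        return adj

    def greedy_cover(adj):
        # whenever an edge has neither endpoint covered, take both endpoints;
        # the result is at most twice the size of a minimum cover
        cover = set()
        for u, neighbours in enumerate(adj):
            for w in neighbours:
                if u not in cover and w not in cover:
                    cover.update((u, w))
        return cover

    if __name__ == "__main__":
        print(" ".join(map(str, sorted(greedy_cover(read_graph(sys.argv[1]))))))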
### Rules
• Submissions will be run once for each graph.
• You have a time limit of five minutes for each graph. You cannot "roll over" unused time to the next graph. This time is clocked on my computer, an i7-3770K CPU with 16GB RAM, running Ubuntu 14.04. If you might bump against this limit, make sure you send a "best-yet" output before time is up.
• Feel free to use multiple threads, but keep it on the CPU. My graphics card is not your playground.
• Your submission must be deterministic. If you use a PRNG, seed it with a constant value.
• You cannot use any built-in or third-party function designed to solve vertex covering problems.
• Standard loopholes apply. This means (for example) that you cannot hardcode your submission to these test cases. If I choose five more test cases to run, you should get comparable (obviously not exact) results.
### Scoring
Your score is the number of vertices in your cover. If you return anything except a valid covering, or do not return anything within the time allotted, your score for the graph will be 200000.
Score is summed over five test cases, each consisting of a graph with 20k to 100k vertices. The lowest total score wins.
<link to test cases here>
<insert generator/validator/scorekeeper here>
### Sandbox
• Does the "function designed to solve vertex covering problems" need to be better specified? If so, how could I word that?
• Are the graph sizes and time limits reasonable? They are designed to prevent a straight brute-force attack, but I don't believe they are too large to prevent a good approximation. Are they too small?
• "Function designed to solve NP-complete problems?" Not sure, but I'm assuming you also want to rule out equivalent things like independent set. – Sp3000 Jun 27 '15 at 6:28
# Be the shortest in your own standard
This was cops-and-robbers at the beginning. But I'm thinking of changing it to a user-scored challenge instead, where each user can propose a limited number of regexes, and for each regex, the shortest matched program gets one point.
Work in progress. More details to be added.
## How to answer
In each answer, you should write a program with length n for the above task, and optionally a regex with no more than n/2 bytes.
The regex should consist of only character literals (including escaped ones) and [...] (...) ? * + |. You cannot use other features such as specifying number of repetitions or the beginning/ending anchors.
If you choose to include a regex, you should also specify whether programs should be nearly-matched or nearly-unmatched by the regex, defined as following:
• A regex nearly-matches a program if there is a character C and a subset S of the set of all occurrences of C in the program, such that when everything in S is replaced by a character C', the program will be matched by the regex.
• A regex nearly-unmatches a program if for every character C and every subset S of all occurrences of C in the program, when everything in S is replaced by a character C', the program will not be matched by the regex.
If you choose to nearly-match, it must nearly-match your own program in the same answer. And this program must be shorter than any other answer nearly-matched by the regex. The same applies if you choose to nearly-unmatch, where your program must be shorter than anything nearly-unmatched.
The programs and regexes should only use printable ASCII, tabs and newlines.
Each user can write any number of programs, but can write at most 5 regexes at the beginning. You can write one extra regex for each 5 upvotes you get.
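For reference, a brute-force Python check of the nearly-match definition, straight from the wording above (exponential in the number of occurrences, so only usable on tiny programs; it assumes "matched" means a full-string match, and nearly-unmatched is simply its negation):

    import re
    from itertools import combinations
    from string import printable

    def nearly_matches(regex, program):
        pat = re.compile(regex)
        for c in set(program):
            occurrences = [i for i, ch in enumerate(program) if ch == c]
            for r in range(len(occurrences) + 1):
                for subset in combinations(occurrences, r):
                    for c2 in printable:       # candidate replacement C'
                        candidate = "".join(c2 if i in subset else ch
                                            for i, ch in enumerate(program))
                        if pat.fullmatch(candidate):
                            return True
        return False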
## Scoring
For each regex, if one of your programs is the first posted among the shortest of the programs nearly-matched/unmatched by the regex according to the specification, you get 2 points. If the regex is not your own, you get one extra point.
Each of your programs can be scored more than once if it is the shortest for more than one regex.
You should not post an answer that doesn't get any score at the time you post it. But you can leave it there if it loses all the score it had. And you can post an answer that is only the shortest for your own regex.
## Rules about posting and editing answers
You can always edit regexes into an answer, if you are allowed to write more regex. But once a program is the shortest for a regex at the time you post the regex, you can't post another regex that makes this version of the program shortest.
You should not edit a program in a way that it loses scores from some regex. And if you edit, you should keep the version that is paired with a regex in your answer.
Regexes should not be modified after posting for any reason if they are valid.
You should not post programs/regexes that are the same as a previous submission.
• @githubphagocyte It is the program + regexes. I left an m there when there was only one regex... – jimmy23013 Dec 27 '14 at 1:58
• This doesn't seem to prevent people from just writing the best-golfed program. Bowling doesn't seem to work either (and will be a duplicate of the bowler-golfer fraction war). I'll think about it later... Shouldn't post this when I'm sleepy... – jimmy23013 Dec 27 '14 at 3:14
• It seems to me that the two regexes will be one character each, or at worst three characters in total. – Peter Taylor Dec 27 '14 at 19:27
• @PeterTaylor You can comment out the string matching the regex. The rule about replacing a character literal is to make sure the comment character is always available. But I'm going to abandon this post anyway if I can't find a better winning criterion... – jimmy23013 Dec 28 '14 at 5:33
Inspired by the last question asking for the masses of the elements, this challenge will be slightly more specific. In this challenge, you will find the molar mass of a sequence of amino acid peptides.
Amino acids, of which there are 21, are the units that combine into chains and then bend and change shape to form proteins, which serve widely varying functions in cells and in the bodies of almost all living organisms. Scientists working with peptides, such as chemists and doctors, can easily synthesize a desired peptide chain in the lab thanks to the powers of modern technology. After some purifying and such, they will have the desired sequence in the form of a white powder/crystal-like substance.
However, this is science! This means that they will eventually need to weigh out a desired amount of the sequence to perform some reactions or tests. To do this, they need to know its molar mass.
Today, we know that there are 21 amino acids, and we have found their molar masses and given them names and symbols, just like the 118 elements on the periodic table. They are as follows:
Name Symbol Molar Mass
Alanine A 89
Cysteine C 121
Aspartic acid D 133
Glutamic acid E 147
Phenylalanine F 165
Glycine G 75
Histidine H 155
Isoleucine I 131
Lysine K 146
Leucine L 131
Methionine M 149
Asparagine N 132
Proline P 115
Glutamine Q 146
Arginine R 174
Serine S 105
Threonine T 119
Selenocysteine U 169
Valine V 117
Tryptophan W 204
Tyrosine Y 181
BUT WAIT!! (how do I make this text bigger?)
# But wait!!
Unlike the elements, the mass of a peptide sequence isn't just the sum of the masses of the constituent amino acids! Amino acids combine in a reaction called a hydrolysis reaction that forms a bond called a peptide bond. Take a look at this diagram:
A hydrolysis reaction is a reaction in which two large molecules combine to make a larger one, but in the process lose a small molecule. In this case, they lose a water molecule (hence the name hydrolysis). Since the mass of water is 18, when two peptides bond together in a chain they lose 18 molar mass units. So if our sequence was AC (Alanine-Cysteine), the mass would be 89 + 121 - 18 = 192.
The Challenge
Your job is to golf a program that computes the molar mass of a given peptide sequence. The sequence will be specified by their one letter symbols, in all caps.
Examples: A returns 89
AC returns 192
WAGAKRLVLRRE returns 1453
Shortest byte count wins, no loopholes. Weights must be hardcoded in the program somehow.
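An ungolfed Python reference (just the table above, minus one water per bond) that reproduces all three examples:

    MASSES = {"A": 89, "C": 121, "D": 133, "E": 147, "F": 165, "G": 75,
              "H": 155, "I": 131, "K": 146, "L": 131, "M": 149, "N": 132,
              "P": 115, "Q": 146, "R": 174, "S": 105, "T": 119, "U": 169,
              "V": 117, "W": 204, "Y": 181}

    def peptide_mass(seq):
        # sum of residue masses, minus 18 for each of the len(seq)-1 bonds
        return sum(MASSES[aa] for aa in seq) - 18 * (len(seq) - 1)

    assert peptide_mass("A") == 89
    assert peptide_mass("AC") == 192
    assert peptide_mass("WAGAKRLVLRRE") == 1453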
• I would estimate a greater than 50% chance that this would be closed as a near-enough dupe of codegolf.stackexchange.com/q/35599/194 . (Although I thought the same of the elements one, and I was wrong about that). – Peter Taylor Jun 27 '15 at 20:51
• @PeterTaylor Do you know how to make the text bigger? – Faraz Masroor Jun 28 '15 at 3:44
• No. I know how to create headers, but that's not the same thing. – Peter Taylor Jun 28 '15 at 7:07
• @FarazMasroor "(how do I make this text bigger?)" -- You mean as I did in my "mass of elements" post? Select the text you want to format as code and press CTRL+K. Alternatively, add 4 spaces before each line. – Spikatrix Jun 28 '15 at 11:42
• The protein synthesis reaction you show is an example of condensation. Hydrolysis would be the reverse reaction (which incidentally occurs during digestion of proteins.) – Level River St Jun 29 '15 at 18:27
• If it's the text "but wait!!" that you wanted to make bigger, I've edited to show the different header text sizes available - just delete the ones you don't want. – trichoplax Jul 3 '15 at 16:52
• I've also edited to make your table of molar masses a single block (without the white lines running across it). This is just to show you how - click "rollback" from the edit history if you don't like it. – trichoplax Jul 3 '15 at 16:56
# Fundamentals of City Planning
In this challenge, you are a city planner. You have been given an N by M rectangle to fill with residential lots of size K and roads of width 1. You know that money is made based on the number of residents in the city, so your goal is to maximize the number of lots in your rectangle. However, the following rules are enforced:
• Roads have a width of 1 square, and all roads must be orthogonally connected to each other.
• Every lot must share at least 1 side with a road
• Lots must all be the same size K. They can be in the shape of any polyomino of size K.
• There must be at least 1 road that touches the edge of the rectangle, as your residents need to be able to get in and out!
The winner of the challenge is the one that:
1. Fits the most lots across all of the below examples. In the case of a tie:
2. Fastest solution, unless multiple answers are running under a second. In that case:
3. The earliest posted solution
Submitted answers must run in under a minute.
# STDIO
You will be passed three integers, N, M, K. You need to output the generated grid. Roads should be represented by a period (.). Lots should be ordered and numbered, and when printed should be represented by their number mod 10. The ordering can be arbitrary, and is simply used to distinguish lots in the output. Empty squares are allowed and are represented by #.
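To make the rules checkable, here is a rough Python validator sketch (my own helper, not an official scorer). It treats each orthogonally connected run of equal digits as one lot, which is only an approximation since lot numbers are printed mod 10:

    from collections import deque

    def check_layout(grid, k):
        # grid: list of equal-length strings in the output format above
        h, w = len(grid), len(grid[0])

        def neighbours(r, c):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= r + dr < h and 0 <= c + dc < w:
                    yield r + dr, c + dc

        def component(start, member):
            seen, queue = {start}, deque([start])
            while queue:
                cell = queue.popleft()
                for n in neighbours(*cell):
                    if n not in seen and member(n):
                        seen.add(n)
                        queue.append(n)
            return seen

        roads = {(r, c) for r in range(h) for c in range(w) if grid[r][c] == "."}
        assert roads, "there must be at least one road"
        assert component(next(iter(roads)), roads.__contains__) == roads, \
            "all roads must be orthogonally connected"
        assert any(r in (0, h - 1) or c in (0, w - 1) for r, c in roads), \
            "at least one road must touch the edge"

        seen = set()
        for r in range(h):
            for c in range(w):
                ch = grid[r][c]
                if ch not in ".#" and (r, c) not in seen:
                    lot = component((r, c), lambda n: grid[n[0]][n[1]] == ch)
                    seen |= lot
                    assert len(lot) % k == 0, "lots must have size K"
                    assert any(n in roads for cell in lot
                               for n in neighbours(*cell)), \
                        "every lot must share a side with a road"

    check_layout(["123", "...", "456"], 1)     # the 3 3 1 test case below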
# Test Cases for correctness
Provided solutions can be numbered differently, rotated, and/or reflected
1 2 1
.1
1 3 1
1.2
2 2 1
1.
2.
3 3 1
123
...
456
3 3 2
11# 11. #11
2.. or 2.. or 2..
233 233 233
# Test cases used for scoring:
20 15 1
20 15 2
20 15 3
20 15 4
20 15 5
20 15 6
20 15 7
20 15 8
• 1. I got to the bit about lots being the same size K and wondered why I would choose K to be anything other than 1. Only when I got to the input spec did I understand. Adding "of size K" to the end of the second sentence would probably avoid that confusion. 2. Do you have a reference implementation? I'm worried that "fastest solution" might not be a good tiebreaker because on the given test cases the answers may execute in under 200ms. – Peter Taylor Jul 4 '15 at 6:52
• I updated, tell me if it looks reasonable? – Nathan Merrill Jul 4 '15 at 11:45
• Are answers required to be deterministic? – trichoplax Jul 4 '15 at 13:53
• @trichoplax no, but deterministic will probably lead to better results. Updated the description. – Nathan Merrill Jul 4 '15 at 14:46
# Catch the robber
This is my first time making a KOTH. I (mostly) will not post this KoTH. Read the ReadMe file for more info.
# Overview
A cop spots a robber and the robber runs and ends up in a basement. The cop then goes into the basement and locks the door.
# Gameplay
The basement consists of 49 rooms arranged in a 7 x 7 grid. The top-left room has coordinates [0,0] while the bottom-right room has coordinates [6,6]. The cop starts in the room with coordinates [6,3] while the robber starts in the room with coordinates [0,3].
Cop
The cop moves first. The cop can move in one of these directions:
• Up
• Right
• Down
• Left
• Here
The direction here indicates that the cop will stay in the current room and will not move. The rest of the directions are self-explanatory. The cop can move in a particular direction if it is a valid one, i.e, the cop cannot move out of the grid or move into a room with a trap.
The cop can also put traps in a room. At the start of every match, the cop has 3 traps. Once a trap has been placed, the cop will not be able to move into the room where the trap is placed.
The cop also has 3 pressure sensors at the start of every match. The cop can move into a room where a pressure sensor has been placed.
If the robber moves into an adjacent room of the cop, the cop will be alerted.
Robber
The robber can move in the same directions as the cop does. The robber too cannot move outside the grid.
The robber has two TrapDetector5000s. Each use detects whether there is a trap in one of the rooms adjacent to the robber.
If the cop moves into an adjacent room of the robber, the robber will be alerted.
Goal
The cop must catch the robber as soon as possible. This can be done by moving into a room where the robber is. The cop will also catch the robber if the robber moves into the room where the cop is.
If the robber moves into a room where the cop has placed a trap or a pressure sensor, the cop will be alerted and the robber will not be able to move for 2 turns, if the room had a trap. However, the robber will be able to move if the room had a pressure sensor.
The robber will be alerted if the robber steps into a room with a pressure sensor.
# Controller
The controller is written in Java and can be found here. As a cop or a robber, you each have to implement a Java class.
You have to implement the Cop interface if you are writing a Cop Bot and implement the Robber interface if you are writing a Robber Bot.
There is an enum direction with 5 directions (Here, Up, Right, Down, Left) which you can use when building your Bot.
You can use the Grid.isValidMove(direction) to check if that direction is a valid move. This is for Cops.
You can use the Grid.isValidPosition(direction) to check if that direction is a valid move. This is for Robbers.
You also may write additional functions within that class. The controller comes with one working example of a simple cop and robber bot.
Please use java 7 and please do not exploit stuff in the controller and cheat.
Note that your bot needs to return an int from takeTurn within 200 milliseconds. Failure to do so will result in the disqualification of your Bot.
# Scoring
Each cop plays 10 rounds against each robber; the number of moves each robber makes in each round is added up, and this is the score of that particular robber. The same goes for cops.
The robber with the highest score and the cop with the lowest score wins!
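The real controller is Java, but for prototyping strategies a tiny Python mock of the movement rule may help (same grid and start coordinates as above; this is not the controller's API):

    DIRS = {"Here": (0, 0), "Up": (-1, 0), "Right": (0, 1),
            "Down": (1, 0), "Left": (0, -1)}

    def valid_moves(pos, traps, size=7):
        # directions that stay on the 7x7 grid and avoid trapped rooms
        moves = []
        for name, (dr, dc) in DIRS.items():
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < size and 0 <= c < size and (r, c) not in traps:
                moves.append(name)
        return moves

    # the cop starts at [6,3]: Down is off the grid, and Up is trapped here
    print(valid_moves((6, 3), traps={(5, 3)}))   # ['Here', 'Right', 'Left']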
• This KoTH has potential, I believe. I might expand the room to ensure that traps aren't an insta-catch. Also, if a cop moves into a room next to a robber, only the robber knows? (and vice versa)? If that is true, its going to hard to beat the robber strategy of "stand until a cop moves next to you" – Nathan Merrill Jun 27 '15 at 2:08
• First of all. Thanks for reviewing this KoTH. What size should the grid be? And "if a cop moves into a room next to a robber, only the robber knows? (and vice versa)?" -- Yes. Because this is different from those KoTHs where a turn means movement of both the players. Any idea for making it more interesting? – Spikatrix Jun 27 '15 at 8:15
• I'd say that a 7x7 would be sufficient. If I am caught in the center, it is still possible that the cop won't catch me if he is on the edge of the room. I'm not sure how to make it more interesting, but an idea I had would be to allow both the cop and robber to place "pressure sensors" instead of traps. Their party is informed when stepped on, but they don't know which sensor has been stepped on (unless there is only 1 they placed) – Nathan Merrill Jun 27 '15 at 11:52
• "Their party is informed when stepped on" -- Party? There is just one cop. Should there be more than one cop? – Spikatrix Jun 27 '15 at 11:59
• No. I just used party to refer to either cop or robber depending on who placed it. – Nathan Merrill Jun 27 '15 at 12:04
• I was thinking of cops having 2 traps + 4(?) pressure sensors. Also, I've been thinking of giving robbers something... like a dummy (moves in a straight direction and tricks the cops, who will be alerted if it is in an adjacent room) or one trap-detector-5000 (which detects if there is a trap in an adjacent room and can be used only once). What do you think? – Spikatrix Jun 27 '15 at 12:28
• I'm not sure about the dummy idea. I think that this challenge is about calculating probabilities of the enemy's location. I'd personally would prefer abilities to be purely knowledge granting, but that's my opinion. – Nathan Merrill Jun 27 '15 at 13:37
• @NathanMerrill I've written some code, but I don't think I'll post this. See the answer for more info. – Spikatrix Jul 7 '15 at 9:01
# Blackjack
## How to Play
Blackjack is for any number of people, but there will be only one in this case. The goal is to get as close to 21 as possible without going over. Aces will be 1 for this program. The face cards (K, Q, J) are worth 10.
To start, the player is dealt two cards. They can then choose to hit (take another card) or stand. This repeats until they go over 21 (bust) or decide to stand.
If the player busts, their score is 0. Otherwise, their score is the total of all the cards.
## Input/Output
1. The program should output 2 "cards" (randomly choose between 2, 3, 4, 5, 6, 7, 8, 9, 10, K, Q, J, and A).
2. The player then inputs a move, stand or hit.
3. If he/she stands, output Final Score: [total of cards].
4. If he/she hits, output another "card". Output Bust! if the score is over 21.
5. Repeat steps 2-4 until the player busts or stands.
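For reference, here is a minimal ungolfed Python sketch of the interaction described above (not a competitive entry; it assumes cards are drawn with replacement, which the spec appears to allow):

```python
import random

# Card values per the rules above: A = 1, face cards = 10.
CARDS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "K", "Q", "J", "A"]

def value(card):
    return 1 if card == "A" else 10 if card in ("K", "Q", "J") else int(card)

hand = [random.choice(CARDS), random.choice(CARDS)]
print(*hand)                        # step 1: output the two starting cards
total = sum(value(c) for c in hand)
while True:
    if input() == "stand":          # step 2: read the player's move
        print("Final Score:", total)
        break
    card = random.choice(CARDS)     # step 4: output another card on "hit"
    print(card)
    total += value(card)
    if total > 21:
        print("Bust!")
        break
```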
## Test Cases
Output: 5 K
Input: hit
Output: 6
Input: stand
Output: Final Score: 21
Output: A Q
Input: hit
Output: 7
Input: hit
Output: 6
Output: Bust!
## Rules and Other Notes
• Aces are always 1.
• Each output/input should be on a new line.
• Any trailing spaces and newlines are okay.
• Assume that all input will be valid. You don't need to notify the player of invalid input.
• You cannot read from a file or other source.
## Scoring & Submissions
• This is code golf. Shortest code in characters wins.
• How to win: post the shortest working code within one week.
• Please include the language, number of characters, and code. Explanations are appreciated, but not required.
Good luck!
## Sandbox Questions
Anything I'm forgetting? Does anything need more clarification?
• @TNT code-golf, game, card-games. I'll edit my post. – Nick B. Jul 11 '15 at 2:54
# roll me back a game of hearts
Roll me back a game of hearts
given just a deck of cards
So, I've been playing a game of hearts (https://en.wikipedia.org/wiki/Hearts) with three of my friends but I'm not entirely sure if all of them were playing perfectly according to the rules. So I'd like to replay the game with everyone's cards openly visible. And because you like algorithms, you offered me your help determining everyone's cards from the final deck after each hand. However, I can't quite remember if you promised me a full program or just a named function.
A card is represented with two characters. The first character represents its value and is one of 23456789TJQKA (2-10, jack, queen, king, ace). The second character represents its suit and is one of CDHS (clubs, diamonds, hearts, spades).
The input is a list of 52 cards. It can either be a list (array, vector...) of two-character strings or a single space-separated string. You will be given the cards in the exact order they were played. The list represents a valid deck and 2C is the first card in the deck.
Output four sets of cards, each representing the starting hand of one player. The first set may correspond to any player but the rest must be ordered in the order of play (so if the first hand to be output is the third one to play, the rest must be in the order of fourth, first, second). The cards in each player's hand may be output in any order (it's a set). If you choose to output a single string, separate the cards in each hand with spaces and the hands with newlines.
Game rules:
Rules irrelevant to this challenge have been formatted in small font
• There are variants for three to six players but the base variant is for four players so let's assume this one.
• Before the main game each player passes three cards to another player. Since this is a lossy operation, let's just ask for the hands after this passing moment.
• Each game consists of 13 tricks. Each trick consists of each player in clockwise order playing one card from their hand, then one player "taking" the trick.
• The first trick is started by the two of clubs. Each subsequent trick is started by whichever player took the previous trick.
• The first player in a trick can play any card. However, the first player cannot play hearts unless hearts have already been played in that game or he has no other cards. The other players have to play the same suit as the leading player if they have that suit, otherwise they can play any card. Scoring cards cannot be played in the first trick.
• The player that played the highest valued card of the same suit as the leading card of that trick takes that trick. E.G.: in 2C AD KC 5C, the king of clubs takes the first trick. In 2H KS AS QS the leading player takes the trick (and fourteen points).
• The objective of the game is to end up with the fewest points possible. A player gets one point for each heart taken, and 13 points for the queen of spades.
You may assume that the deck of cards is valid (exactly one of each card) and that the two of clubs has been led. You may also assume that the rules concerning the order of play and trick taking have been followed. You may not assume the rules concerning which cards can be played when have been followed. Heck, you don't even know that I haven't been cheating. Because a player may have been dealt nothing but hearts, you may not even assume only non-scoring cards have been played in the first trick (if everyone on the planet played 100 games in their life, this could realistically happen to someone).
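For concreteness, here is a minimal Python sketch of the reconstruction these rules imply: the trick winner determines who leads next, which pins down who played every card (names are illustrative):

```python
ORDER = "23456789TJQKA"

def reconstruct(deck):
    # deck: 52 two-character cards in the exact order they were played
    hands = [[] for _ in range(4)]
    leader = 0                          # label the player who leads trick 1 as 0
    for t in range(13):
        trick = deck[4 * t:4 * t + 4]
        for i, card in enumerate(trick):
            hands[(leader + i) % 4].append(card)
        led_suit = trick[0][1]
        # taker: highest card of the led suit, as an offset from the leader
        winner = max((i for i, c in enumerate(trick) if c[1] == led_suit),
                     key=lambda i: ORDER.index(trick[i][0]))
        leader = (leader + winner) % 4  # the taker leads the next trick
    return hands
```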
Should I loosen the I/O requirements? How much?
Formatting advice? Which parts (if any) should I trim down? What needs to be clarified?
Anything else?
• So the program you're asking me to write can actually ignore most of the rules of Hearts and I can think purely in terms of a no-trump game in any whist-like game? – Peter Taylor Jul 13 '15 at 12:13
• @PeterTaylor Correct. Spades does have trumps, however, and hearts is the only other whist-like game I've played. – John Dvorak Jul 13 '15 at 12:17
# How to Create a Dating Website
Dating websites make lots of money. You want lots of money, so you're going to make a dating website. However, we all know that the most important part of any dating website is the algorithm, so you need to build that first.
When your customers sign up, they fill out a short survey describing themselves*. On the survey, they list personal interests, personality traits, and other important data for you to process using your algorithm.
Your algorithm must then accept two things:
1. A list of people, where each person has a list of traits
2. A list of trait pairs (A, B), where each pair has a score S. A trait pair matches a couple if one of them has trait A, and the other has trait B. The score can be negative. If A and B are different, and both people have both of the traits, then the score is doubled.
Your algorithm must then output a list of couples. Each person must be included in exactly one couple, and only two people are allowed in each couple**. Your score is the sum of each couple's score. A couple's score is the sum of the scores of the trait pairs they match.
# Input/Output
Input is given as shown in the following example. Ignore the # comments
4 # Number of customers
1,3,5,7 # Customer 1's list of traits
1,2,4,5,6 # Customer 2's list of traits
1,6,7 # Customer 3's list of traits
1,2 # Customer 4's list of traits
4 # Number of trait pairs
1,2,-2 # Trait 1 and 2 give a score of -2
1,6,4 # Trait 1 and 6 give a score of 4
2,3,-4 # Trait 2 and 3 give a score of -4
6,6,5 # Trait 6 and 6 give a score of 5
Let's say you output:
2,3 1,4
Then that would match Customer 2 to 3 and 1 to 4.
If we look at 2,3, they match:
• The first trait pair, because Customer 3 has Trait 1, and Customer 2 has Trait 2
• The second trait pair, because they both have Traits 1 and 6. (This means double the score)
• The fourth trait pair, because they both have trait 6. However, because the trait pair only references 1 trait, we don't double the score.
Adding it all up, we get -2 + 4*2 + 5 = 11. The other couple scores -2 + -4 = -6, so the final score is 11 + -6 = 5.
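A minimal Python sketch of this scoring rule, with hypothetical helper names, reproduces the worked example above:

```python
# traits maps each customer to a set of trait IDs; pairs is a list of
# (A, B, S) tuples as described in the input format.
def couple_score(p1, p2, traits, pairs):
    t1, t2 = traits[p1], traits[p2]
    score = 0
    for a, b, s in pairs:
        if (a in t1 and b in t2) or (b in t1 and a in t2):
            score += s
            # doubled only when A != B and both people have both traits
            if a != b and a in t1 & t2 and b in t1 & t2:
                score += s
    return score

traits = {1: {1, 3, 5, 7}, 2: {1, 2, 4, 5, 6}, 3: {1, 6, 7}, 4: {1, 2}}
pairs = [(1, 2, -2), (1, 6, 4), (2, 3, -4), (6, 6, 5)]
# 11 + (-6) = 5, matching the example
print(couple_score(2, 3, traits, pairs) + couple_score(1, 4, traits, pairs))
```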
The person who generates the highest scoring pairing wins the challenge. In the case of a tie, the program that generates it the fastest wins. If programs are generating the answer in under a second, then the earliest posted answer wins.
Question: I'm planning on doing 10K people, 500 traits, an average of 50 traits per person, and 5K trait pairs. I'm doing large numbers because I want efficient algorithms, but I want to know if the numbers are feasible
*They clicked a check box saying that they didn't lie, so we know that the survey is accurate
**You can assume everybody is a hermaphrodite
• FYI there's a well-known polynomial-time algorithm. – feersum Jul 13 '15 at 4:25
• @feersum, that's for weighted matchings in bipartite graphs. This is maximum weight matching in a complete graph, for which the well-known polynomial-time algorithm is Edmond's blossom algorithm. PS Nathan, if you don't want a debate about your cover story, you should probably specify that this dating site is aimed specifically at bisexuals. – Peter Taylor Jul 13 '15 at 6:08
• ... or hermaphrodites. Poor snails, perpetually being asked about their gender and not being able to reply "both". – John Dvorak Jul 13 '15 at 8:31
• stdio is a standard header file for input and output functions in the C language. Why is it mentioned in this problem? – aditsu quit because SE is EVIL Jul 13 '15 at 10:11
• @aditsu STDIO stands for "Standard I/O" – Nathan Merrill Jul 13 '15 at 12:39
• Yes, and that doesn't change anything I said. – aditsu quit because SE is EVIL Jul 13 '15 at 15:14
• @aditsu then I don't see your point. I'm describing how to input/output (and giving an example at the same time) – Nathan Merrill Jul 13 '15 at 16:51
• My point is STDIO specifically refers to the C header file. What you are then actually showing is example input. If it really was stdio, I would expect some C macros and declarations. – aditsu quit because SE is EVIL Jul 13 '15 at 17:54
# LindenMASM
LindenMASM is an Assembly-like programming language which can be used to generate images from Lindenmayer systems. Lindenmayer systems are very interesting in that they provide a rudimentary method of generating fractals, such as a Sierpinski triangle. Interestingly, they are also able to mimic nature very closely, as can be seen in the image below. You will be implementing a LindenMASM interpreter in a language of your choice.
# Understanding Lindenmayer Systems
You should check out the Wikipedia page for a more detailed overview of Lindenmayer systems, as I will simply describe the process of actually using a system. I will be referring to a turtle in this explanation. A turtle is simply the device by which an L-system is drawn. We will use a dragon curve L-system as an example.
Firstly, we need to consider the variables we will be using. In the case of an L-system, a variable is used to control evolution, and does not actually correspond to any movement. We will need two for this, so let's call X and Y our variables.
Next, we define our constants. In most L-systems, the character F refers to moving forward, - turns left and + turns right. We will follow these conventions here, and specify that - turns the turtle 90 degrees left and + turns the turtle 90 degrees right.
After this, the axiom needs to be defined. This is the starting point of the system, i.e. what it looks like after 0 iterations. In our case, we will set it to FX.
Finally, we need to define some rules. Rules are applied by going through each character of the current string; if a character matches a rule, it is replaced with the defined set of instructions. Our rules are that X -> X+YF+ and Y -> -FX-Y. I will show a quick evolution of steps, so you can see how these rules are applied.
• n=0 - FX
• n=1 - FX+YF+
• n=2 - FX+YF++-FX-YF+
• n=3 - FX+YF++-FX-YF++-FX+YF+--FX-YF+
• n=4 - FX+YF++-FX-YF++-FX+YF+--FX-YF++-FX+YF++-FX-YF+--FX+YF+--FX-YF+
When this is interpreted, however, since X and Y don't control movement, the interpreted steps for n=4 would look like this:
F+F++-F-F++-F+F+--F-F++-F+F++-F-F+--F+F+--F-F+
Simplified:
F+F+F-F+F+F-F-F+F+F+F-F-F+F-F-F+
Which would result in the following drawing:
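For clarity, the rewriting step described above can be sketched in a few lines of Python (names are illustrative):

```python
# Apply all replacement rules simultaneously, once per iteration.
def evolve(axiom, rules, n):
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"X": "X+YF+", "Y": "-FX-Y"}
print(evolve("FX", rules, 4))   # the n=4 dragon-curve string shown above
```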
# Syntax
There are only a few keywords available in LindenMASM which you will need to implement.
1. STT - Begins every LindenMASM file.
2. AXI $ - Sets the axiom (initial state) of the system.
• $ is a series of commands/variables/constants, drawn from the built-ins plus any user-defined ones.
3. DEG $ - Sets the degree that all turns will follow.
• $ will be an integer or float between 0 and 359, inclusive. The default value is 90 otherwise.
4. MOV $ - Sets the move distance that all position adjustments will follow.
• $ will be an integer or float between 1 and 100, inclusive. The default value is 10 otherwise.
5. INC $ - Sets the number of iterations the generation should go through.
• $ will be a number between 0 and 30, inclusive. The default value is 0 otherwise (a value of 0 means just the axiom is displayed).
6. SET $ # - Sets a constant $ to a specified command #.
• $ will be a letter between A and Z, inclusive, and will be uppercase.
• # will either be a 0 or a 1, where a 0 corresponds to a constant that draws forward and a 1 corresponds to a constant that moves forward.
7. RPL $ # - On every iteration, variable/constant $ will be replaced with the command/variable/constant string #.
• $ will be a letter between A and Z, inclusive, and uppercase. It does not need to be SET to be replaced.
• # is a string of commands/variables/constants that $ should be replaced with.
8. END - Ends every LindenMASM file.
Each keyword should be placed on a new line. Your program should fail parsing if the file does not start with STT or does not end with END. Your program should assume that the rest of the keywords will have proper arguments attached to them.
Below is a list of all of the regular commands that cannot be defined by the user:
1. + - Rotates the pointer DEG degrees to the right.
2. - - Rotates the pointer DEG degrees to the left.
3. [ - Saves the pointer's coordinates and heading to a list.
4. ] - Pops the last value off the list and sets the pointer's coordinates and heading to it.
# Examples
I will give 5 examples, each of which will have detailed information on the pattern, plus a link to have it visualized online.
Fractal Tree - n=6, axiom=X, Θ=25, X->F-[[X]+X]+F[+FX]-X, F->FF (Test Online)
STT
AXI X
DEG 25
MOV 10
INC 6
SET F 0
RPL X F-[[X]+X]+F[+FX]-X
RPL F FF
END
Gosper Curve - n=4, axiom=F, Θ=60, F->F+G++G-F--FF-G+, G->-F+GG++G+F--F-G (Test Online)
STT
AXI F
DEG 60
INC 4
SET F 0
SET G 0
RPL F F+G++G-F--FF-G+
RPL G -F+GG++G+F--F-G
END
Koch Variant - n=4, axiom=F-F-F-F, Θ=90, F->FF-F--F-F (Test Online)
STT
INC 4
RPL F FF-F--F-F
SET F 0
AXI F-F-F-F
END
Sierpinski Triangle - n=7, axiom=F-G-G, Θ=120, F->F-G+F+G-F, G->GG (Test Online)
STT
RPL G GG
RPL F F-G+F+G-F
DEG 120
AXI F-G-G
SET F 0
SET G 0
INC 7
END
Dragon Curve - n=12, axiom=FX, Θ=90, X->X+YF+, Y->-FX-Y (Test Online)
STT
INC 12
DEG 90
AXI FX
SET F 0
RPL X X+YF+
RPL Y -FX-Y
END
# Input
Aside from the examples given above, your code should support the following test cases as well:
Input:
SET F 0
AXI FF
RPL F F-F+F
END
Output:
Error: No STT at beginning.
Input:
STT
SET F 0
AXI FF
RPL F F-F+F
Output:
Error: No END at ending.
# Output
Your program should output the resulting image by outputting an image or by drawing to the screen (e.g. turtle graphics). If you would like to check out a Python 3 example, here is a Github link to pylasma.
This is code-golf, so the least number of bytes wins.
• Why is "turtle graphics" in particular mentioned? Why not any other method of drawing to the screen? – feersum Jul 14 '15 at 3:40
• @feersum My intent was to allow any method of drawing; at the time of writing I didn't know how to word it, I was very tired :P – Kade Jul 14 '15 at 11:04
• Do any of the commands accept floating-point arguments? – feersum Jul 14 '15 at 14:41
• @feersum MOV and DEG are the only two which should support floating-point arguments. I'll update. – Kade Jul 14 '15 at 14:44
• This is pretty close to codegolf.stackexchange.com/q/9341/194 . The bit that's different is parsing the input, so if you want to make an original question then you could restrict it to validating that a file obeys the structure rules (although I must say that I find "$ ... will be one of the variables within #" overly restrictive). – Peter Taylor Jul 14 '15 at 18:42
• @PeterTaylor Just my opinion, but validating a text file is much less exciting than the question I'm posing :P I'd rather just scrap this. – Kade Jul 14 '15 at 20:50
# Print time of day using words
I'm not sure if this has been done before. I thought it must have been, but I could not find one using search. The idea is basically: given a number in seconds, e.g. the output from time(NULL), return the current time in words using a 12-hour clock, e.g.
HALF PAST FIVE PM
A QUARTER TO SIX PM
TEN TO SIX PM
SIX O'CLOCK PM
SIX TEN PM
TWELVE NOON
TWELVE O ONE AM
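For concreteness, a minimal Python sketch of the underlying arithmetic (the word tables are omitted, since the exact wording is still an open question below):

```python
# Assumes the input is epoch seconds interpreted in UTC; word lookup
# tables (e.g. HOURS[h12]) would be layered on top of these components.
def components(t):
    h24, m = divmod((t // 60) % (24 * 60), 60)
    ampm = "AM" if h24 < 12 else "PM"
    h12 = h24 % 12 or 12
    return h12, m, ampm

print(components(0))   # midnight UTC -> (12, 0, 'AM')
```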
Has this challenge been done before?
One thing I cannot decide is when the "PAST" should be used. Should it be used only when there is less than 15 minutes left? Should it be FIVE FIFTY FIVE or FIVE TO SIX?
• It doesn't matter too much which approach you choose as long as you define it clearly so there is no doubt which way is the correct way for your question. I'm guessing using PAST and TO would make it a slightly more challenging/interesting golf. – trichoplax Aug 4 '15 at 0:43
• Yes, I want it to be a 12 hour clock. As humanly as possible. I think I should add AM and PM to the answer. Possibly even NOON. – some user Aug 4 '15 at 0:49
• Yes it's a tricky system. Just keep editing to clarify edge cases until the comments stop coming in :) – trichoplax Aug 4 '15 at 0:55
• This is essentially It's Spanish Time! in a different language. – Dennis Aug 8 '15 at 4:40
• You have three options. 1. use only the digital times, like FIVE FIFTY FIVE (boring in my opinion) 2. use the word based system, TWENTY TO FIVE (the changeover occurs between 30 and 31 past the hour) 3. use word based system for the 15 minute intervals (half past, quarter past/to and o'clock.) Whatever you do, be very clear about what's required acceptable and what is not. For me 12 noon is PM. If you require NOON you should require MIDNIGHT also. Other things like the A in A QUARTER and the O in TWELVE O ONE AM should be spelt out in detail in the specification. – Level River St Aug 10 '15 at 0:41
# Maze to regex
Suppose we have an ASCII maze like so:
#######
#s# # #
# # # #
#     #
# ### #
# #  e#
#######
The input maze will have the following properties:
• One cell (marked s) will denote the start of the maze, and a separate cell (marked e) will denote the exit.
• The walls will be denoted by hashes #, and empty corridors will be denoted by spaces.
• The maze will be a perfect rectangle, have no cycles, and will consist of exactly one connected component (i.e. all cells will be reachable)
A single character from NSEW represents a move North, South, East or West respectively, and consists of moving two characters in the specified direction. For instance, the above example is a 3 by 3 maze where the following cells can be occupied:
#######
#x#x#x#
# # # #
#x x x#
# ### #
#x#x x#
#######
A string consisting of NSEW is said to solve a maze if applying each move in turn results in the exit being reached at some point in time, regardless of whether the string continues on afterward. If a move is blocked by a wall, the move is ignored and no movement occurs.
Example strings which solve the above maze are SEES, SENSES and SSSSSNENNNNNSENNNNNSSSSSSSSWW.
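A minimal Python sketch of a checker for this definition may be handy for testing submissions (names are illustrative; the maze is passed as a newline-joined string):

```python
# Moves are two cells; the wall character midway along a move blocks it.
DELTA = {"N": (-2, 0), "S": (2, 0), "E": (0, 2), "W": (0, -2)}

def solves(maze, moves):
    grid = maze.splitlines()
    find = lambda ch: next((r, c) for r, row in enumerate(grid)
                           for c, x in enumerate(row) if x == ch)
    (r, c), goal = find("s"), find("e")
    for m in moves:
        dr, dc = DELTA[m]
        if grid[r + dr // 2][c + dc // 2] != "#":   # blocked moves are ignored
            r, c = r + dr, c + dc
        if (r, c) == goal:                          # exit reached at some point
            return True
    return False
```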
## The challenge
Your task is to write a program or function which takes in an ASCII maze and outputs a regex. The regex must match a string of NSEW if and only if it solves the given input maze.
For instance, all solutions to the 2 by 2 maze
#####
#s#e#
# # #
#   #
#####
can be encapsulated by the regex
^(([NEW]|S[WS]*N)*S[WS]*E([ES]|W[WS]*E)*W[WS]*N)*([NEW]|S[WS]*N)*S[WS]*E([ES]|W[WS]*E)*N[NEWS]*$
(Try it online at Regex101)
## Available features
You may only use the following regular expression features:
^$ Start and end anchors respectively
N|E Alternation
NE Concatenation
() Grouping
* Repetition (0+ times)
+ Repetition (1+ times)
? Optional (0 or 1 times)
[NESW] Character classes (but not negated classes)
In particular, recursion, wildcards, lookaheads and other unlisted features are not allowed.
Sandbox questions:
• What would be better, metagolf (scoring by providing a few test mazes, and taking the sum of output regex lengths) or code-golf (any output regex is okay as long as it is finite and correct)?
• I've chosen this ASCII representation because it looks the nicest, but I'm not sure if it's the most convenient. I'm open to suggestions for alternatives.
• What is the best way to test submissions? I can write a bunch of test cases per maze, but it's impossible for me to test an infinite number of strings.
• Is it guaranteed that there will never be a wall in a position that permits a move of only 1 step instead of 2? Or should such cases be simply treated as no move possible? – trichoplax Aug 10 '15 at 17:02
• What range of maze sizes must this work for? – trichoplax Aug 10 '15 at 17:03
• If you don't have an automated way of checking submissions, you could announce one of "innocent until proven guilty" or "guilty until proven innocent". Either answers require a proof, or answers are assumed to be valid until someone proves otherwise. – trichoplax Aug 10 '15 at 17:06
• @trichoplax I'll work on the maze definition later, but the input is guaranteed to be valid. Maze size would have to depend on if this is golf (which would probably have a larger limit) or metagolf – Sp3000 Aug 11 '15 at 1:46
# Be Rational! Finding Rational Roots of Polynomials
In this challenge you are to find all rational zeroes of a polynomial. The results have to be exact. I would suggest using The Rational Root Theorem.
## Input
Input can be through function argument, command argument, or user input. Input will be a polynomial. The polynomial may have rational coefficients. If a term has a coefficient of zero, that term will not be included in the input. x^1 will be abbreviated as x.
Examples:
-x^7+4x^4/7-21x^2/2+5x+23/19
...//More to be added when posted
## Output
Output will be a list of the rational roots of the input polynomial. Output can be through function return value or stdout. If output in string format, you will use improper fractions separated by commas. The output must be simplified as much as possible. Duplicate roots should not be printed more than once.
Examples:
4/5,2/3,-15/2
...//More to be added when posted.
## Example Cases
> x^2-1
1,-1
...//More to be added when posted.
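For reference, a minimal Python sketch of the Rational Root Theorem approach (not a competitive entry; it assumes the polynomial has already been parsed into integer coefficients):

```python
from fractions import Fraction

# Coefficients are [a_n, ..., a_1, a_0]; rational coefficients can first be
# multiplied by a common denominator, which does not change the roots.
def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    roots = set()
    while len(coeffs) > 1 and coeffs[-1] == 0:
        roots.add(Fraction(0))              # x = 0 is a root; factor it out
        coeffs = coeffs[:-1]
    def poly(x):
        return sum(c * x**k for k, c in enumerate(reversed(coeffs)))
    for p in divisors(coeffs[-1]):          # p divides the constant term
        for q in divisors(coeffs[0]):       # q divides the leading coefficient
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if poly(cand) == 0:
                    roots.add(cand)
    return roots

print(rational_roots([1, 0, -1]))           # x^2 - 1 -> {1, -1}
```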
Just like all code-golf questions, the answer with the lowest byte count wins.
## Questions:
Is this too much like Peter's earlier question?
Are there any points I haven't covered or are not clear?
Any grammar/spelling mistakes?
Any tips on improved formatting?
• Somewhat related question – Sp3000 Aug 17 '15 at 2:01
• The title is one character shy of the minimum title length and "abbreviate" should be "abbreviated." Can the roots be listed in any order? Do they have to be fully reduced or could we, for example, use 2/4 in place of 1/2? I would also suggest rewording "fractional coefficients" to "rational coefficients." – Alex A. Aug 17 '15 at 2:38
# Developing Countries and Scientific Collaboration in Pharmaceuticals
Richa Srivastava[1], Department of Humanities and Social Sciences, Indian Institute of Technology, Kanpur
## Abstract
This paper analyses the nature and causes of international scientific collaboration between developed and developing countries in pharmaceuticals. It estimates the effect on collaboration of proximity factors such as contiguity, common language, colonial links and distance, and of institutional factors such as ease of clinical research and the presence of private organisations. A Poisson count data model is used for the analysis, and evidence is found that the presence of private institutions, a common language and colonial links affect collaboration. However, the effect of geographical distance and ease of clinical research on pharmaceutical collaborations is less clear.
Keywords: Scientific collaboration, pharmaceuticals, developing countries, clinical research, private firms, Poisson model
## Introduction
Scientific collaboration is frequently used to solve complex scientific problems and promote various political, economic and social agendas. It involves the sharing of research data, equipment sharing, joint experimentation, building of databases and conferences. With scientists from different countries working towards common goals, international scientific collaboration has grown in recent years: research teams are not only bigger in size but are also more diversified in terms of nationalities. The increasing trend towards team formation has been attributed to increasing scale, complexity and costs of big scientific projects, and to the fact that teams produce more influential (highly cited) work than individual authors (Wuchty et al., 2007: 1036-37).
International collaboration has also emerged as a preferred method for building scientific capacity in developing countries (Wagner et al., 2001: 9). Innovation has always been an important source of economic growth and increased welfare. Advanced countries spend huge amounts to fund merit-based collaborations in order to exploit the strong link between knowledge generation, productivity enhancement and economic growth. Developing countries, on the other hand, strive to build up their scientific capacities and employ the knowledge generated abroad to move up the development ladder. However, with the emergence of new international trade, investment and intellectual property rules, developing countries have faced difficulties in using the technology developed abroad. This has served as a trigger for innovation (knowledge generation) and international collaboration (knowledge flow) (Mytelka, 2006: 415-16).
From the findings of literature on scientific collaboration, it can be observed that collaboration mainly takes place between countries which share similar research profiles. Two developed countries can, therefore, collaborate in all major scientific fields. Developing countries, however, tend to focus on specialising in a few specific areas of science which are related to their national need - disease control, for example. North-south collaboration in pharmaceuticals is, therefore, of interest.
Developing countries are not only the largest producers of generic drugs but also hold large markets. Clinical trials of pharmaceutical and device companies are increasingly being conducted in developing countries. An important factor affecting this trend is the increasing bureaucratic and expensive regulatory environment in advanced countries. The large number of potential participants and lower cost of research have attracted the pharmaceutical companies towards developing countries such as India and China, where the population size alone offers the promise of an expanding market (Glickman et al., 2009: 818).
Moreover, in the pharmaceutical industry, the knowledge produced by the public sector does not spill over free of cost to the downstream researcher. Thus, the private sector not only needs to invest in basic research but also to collaborate with the public sector to gain commercial advantage. The pharmaceutical companies produce a large number of publications which form a rich subset of collaboration data (Cockburn and Henderson, 1998:158). However, the effect of private sector research and clinical research on international collaboration remains largely unknown. The paper aims to address these issues.
In this paper, the publication data from Web of Science Databases (WoS) for the period 1974-2008 is used to estimate the effect of various institutional and proximity factors on collaboration between developed and developing countries. The analysis focuses on observing country-specific factors like institutional interaction, nature of scientific research and therapeutic areas of interest, and on policies which play an important role in shaping the way these factors are perceived.
The rest of this paper consists of 4 sections. Section 2 reviews literature and builds conceptual hypotheses for the analysis. In section 3, the descriptive and comparative analysis to test the proposed hypotheses is performed. Section 4 lists and explains the main findings. Section 5 contains some concluding remarks.
## Literature Review
##### Collaboration for building capacity in developing countries
Wagner et al. (2001:1-9) examined the role of merit-based collaboration between developed and developing countries in building scientific capacity in the latter. They identified the need for expertise and presence of particular research equipment, databases and laboratories as factors influencing collaboration.
They recognised the role of information and communication technology in boosting international collaboration but mentioned that it cannot alone motivate or enable collaboration. They suggested that the presence of a baseline level of scientific capacity (which differs across fields) is necessary for any sort of collaboration. Agrawal and Goldfarb (2005: 1-3), on the other hand, found an 85% increase in the likelihood of collaboration after the adoption of Bitnet (an earlier form of the internet) by universities. Similar evidence was found by Adams et al. (2005: 275-77), who reported rapid growth in university-level and university-firm collaboration. They attributed this growth to the decline in collaboration costs due to the deployment of the National Science Foundation's program to promote advanced networking in the US (NSFNET) and its connection to networks in Europe and Japan. This evidence lowers the significance of geographical distance as a determinant of international collaboration.
Wagner and Leydesdorff (2005: 186-88) mapped the networks created by international co-authorships for the years 1990 and 2000. They analysed the observed linkages at the global and regional level and witnessed a pronounced expansion of the global network along with emergence of regional hubs. Using factor analysis, they found that large countries compete with each other for developing partners in the global network.
Maina-Ahlberg et al. (1997: 1229-30) found that most of the collaborative projects were initiated from the North and that some disagreements concerning remuneration/compensation existed, due to different policies and remuneration rates set by institutions. In addition, loopholes in financial management, legal and regulatory obstacles, the absence of a common language and spillover effects of international diplomacy were the main factors which hindered international cooperation.
##### Policies for International Collaboration
Mytelka (2006: 420-22) examined the experiences of India, Cuba, Iran, Taiwan, Egypt and Nigeria to study the country-specific drivers and triggers of pharmaceutical innovation processes. He suggested that policies play an important role in shaping the way these triggers are perceived and how they drive the innovation process, arguing that simply increasing the 'supply of researchers' does not ensure a process of innovation. Rather, complementary policies which provide incentives to these researchers to acquire knowledge and focus on domestic problems are needed for the development of an innovation dynamic.
Cockburn and Henderson (1996: 159-63) found evidence of excessive co-authoring between pharmaceutical researchers belonging to the public and private sectors. They suggested that collaboration between public and private sectors is an essential determinant of the productivity of the latter.
Based on the literature review the following four hypotheses are proposed:
• Hypothesis 1: Developing countries collaborate more with developed countries than with other developing countries.
• Hypothesis 2
• A: Common Language increases collaboration between countries.
• B: Contiguous countries collaborate more.
• C: Colonial links between countries increase collaboration.
• D: The greater the distance between two countries, the less they collaborate.
• Hypothesis 3: Developed countries with a larger share of clinical research publications prefer to collaborate with developing countries.
• Hypothesis 4: Developed countries with more publications under private research organisations collaborate with developing countries.
## Data
The data for this study has been taken from various sources. BioPharmInsight was used to draw up a list of 215 medical indications which were then assigned to one of the 12 therapeutic areas. These therapeutic areas were defined according to a system of an organism or a general disease group. A list of the 12 therapeutic areas can be found in the appendix (A.1). The list of medical indications was used to search for corresponding scientific pharmaceutical publications in the Web of Science databases (WoS). It consists of 7 databases, the most important one being the Science Citation Index Expanded. It covers the scientific fields of biochemistry, medicine and pharmacology and indexes more than 6500 scientific journals. Information concerning the scientific publications themselves, like the title, the year of publication, the journal, author's affiliation (including the country of respective organisation), cited references, categorisation of research fields that a publication can be assigned to, and further bibliographic information can be obtained using the WoS database. Information concerning the authors' affiliations is matched with WHO Regions and World Bank income groups in order to include the geographical region a country is located in and the wealth level of the countries in our sample.
In this study, all publications included in categories related to pharmaceutical research are considered. Articles from the subcategories "Biochemistry & Molecular Biology", "Biotechnology and Applied Microbiology", "Chemistry, Applied", "Chemistry, Medicinal", "Medicine, Research and Experimental", "Pharmacology and Pharmacy" and "Toxicology" are included in our dataset. The sample is restricted to journal articles and excludes publications that are labelled as meeting abstracts, editorials or reviews as well as other non-journal publications and conference proceedings.
In order to determine whether researchers affiliated to private companies publish, authors' affiliations were searched for the occurrence of companies' legal forms and the names of big pharmaceutical companies. Also, articles originating in universities and public research institutions were classified based on the authors' affiliations.
The CHI classification (Hamilton, 2003) was used to distinguish between "clinical observation", "clinical mix", "clinical investigation", and "basic biomedical research" journals. CEPII (Centre d'Etudes Prospectives et d'Information Internationales) was used to obtain data on the distance measures and proximity measures, like presence of a common language, colonial link and common border etc. (Mayer and Zignago, 2006).
The dataset was restricted to journal publications in which authors from at least one country assigned to the low-income or the lower-middle income group (according to the World Bank classification) were involved. For the period from 1974 to 2008, 13,126 publications were obtained. Each publication was assigned to the respective countries mentioned in the authors' affiliations. Since co-publications represent undirected links, each pair of countries was included only once in the analysis.
##### Data description and summary statistics
The patterns of collaboration between developed and developing countries can be significantly different from those between two developed countries. The difference may arise depending on the type of research (clinical/ basic) and the institutions involved (firm/universities). Our main focus here is to observe such differences and report them.
Dummy variable 'developing_2' is used to differentiate the developed-developing country pair from a developing-developing country pair. Another source of difference is the therapeutic area. Table 1 presents the main variables used in the descriptive and regression analysis.
Variables Description and Source
Collaborations Total number of collaborations between countries in a pair in a particular therapeutic area.
distance Distance between two countries based on the largest (population wise) cities of those countries
shareclinPub_developing / shareclinPub_developed Share of total number of publications of the respective country that are published in journals assigned to CHI category "clinical research", "clinical mix" or "clinical observations".
sharefirmPub_developing / sharefirmPub_developed Share of publications of the respective country in the respective therapeutic area that can be assigned to firms
Developing _2 A dummy variable which takes value 1 if both the collaborating countries are developing
common_lang_official A dummy variable which indicates whether two countries share a common official language
colony A dummy variable which indicates whether two countries ever had a colonial link
contiguity A dummy which indicates whether two countries are neighbours
Period_dummy A dummy which indicates whether the collaboration takes place in period 1 (1998-2002) or period 2 (2002-2008)
Table 1: Variable Description and Source
The publication data is divided into two periods (Period 1: 1998-2002 and Period 2: 2002-2008). Table 2 presents the pooled summary statistics for the variables.
Variables Mean S.D. Max Min Skewness N
collaborations 1.309 30.167 3021 0 71.447 17362
Distance 7173.47 4200.062 19772.34 56.99 0.621 17362
sharefirmPub_developed .0199 .0686 1 0 7.952 17362
sharefirmPub_developing .0111 .0866 1 0 10.680 17362
shareclinPub_developed .4898 .3059 1 0 -.2685 17362
shareclinPub_developing .6085 .3307 1 0 -.4749 17362
Table 2: Summary Statistics
The dataset comprises 121 countries forming 6486 pairs over the 2 periods. Each country pair in the dataset necessarily includes at least one developing country. This characteristic of the data makes it highly skewed. The collaboration variable, for example, is highly skewed towards zero and even more so when both the collaborating countries are developing.
The collaboration data contains a large number of zeros, which renders the classical tests and regression techniques inappropriate. Taking a closer look at the data (with collaborations > 50 for developed-developing pairs), the major collaborators among the developing countries are found to be China, India, Kenya, Nigeria and Thailand (see Figure 1).
Figure 1 also shows that "infectious disease" is one of the most researched therapeutic areas amongst the developing countries; however, the maximum number of collaborations takes place in "cancer". Cancer research is one of the major concerns of the developed world, and such observations in the data may point towards the growing potential of China as a cancer research centre.
Figure 1: Plot showing the major developing collaborators and the concerned Therapeutic Areas
With such a skewed distribution, the collaborations data fails to follow the normality assumption for the t-test (mean comparison test). Therefore for the comparative statistical analysis here, a two-sample median test (Wilcoxon-Mann-Whitney test) is used.
##### Two sample Wilcoxon rank-sum Test
The Wilcoxon-Mann-Whitney (or rank sum) test is the non-parametric alternative of a t-test. It does not assume normality. In this approach, the two samples to be tested are combined and ranked. The sum of ranks from each sample then acts as the test statistic, thereby avoiding assumption of a specific distribution for the original data. In this study, this test is used to observe the differences in collaborations occurring due to the difference in the income group of partnering countries, their official language, common coloniser and contiguity.
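As an illustration, the comparison can be reproduced with scipy; the data below are illustrative stand-ins, not the study's collaboration counts:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# In the study the two samples would be the collaboration counts split
# on a dummy such as developing_2.
group0 = np.array([0, 0, 1, 3, 12, 0, 7])   # e.g. developed-developing pairs
group1 = np.array([0, 2, 0, 0, 5, 1])       # e.g. developing-developing pairs

u, p = mannwhitneyu(group0, group1, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```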
###### Comparison based on Income group
Developing_2 Obs. Rank sum Expected
0 14606 1.263e+08 1.268e+08
1 2756 24401762 23926214
Combined 17362 1.507e+08 1.507e+08
Table 3a: Rank-sum test for developed and developing country collaboration
Ho: collaborations (Developing_2==0) = collaborations (Developing_2==1)
z = -4.004
Prob > |z| = 0.0001
P{collaborations(Developing_2==0) > collaborations(Developing_2==1)} = 0.488
The results suggest that there is a statistically significant difference between the underlying distributions of collaborations between two developing countries and those between a developed and a developing country (p = 0.0001). The probability order suggests that the chances of collaborations for a developed-developing country pair are 48.8% more than for a developing-developing country pair.
This can be explained by the need for the developing countries to collaborate with the developed ones in order to build capacity (Wagner et al., 2001). Factors such as cost sharing, presence of better research equipment, databases and laboratories as well as a need for expertise has led to such international collaborations.
Also, many developed countries which previously donated resources to 'research-for-aid' are not willing to continue without reciprocity or some clear benefit. This has led to a collaboration system which is attractive to both the partnering countries.
###### Comparison based on Official language
Common_lang_official Obs. Rank sum Expected
0 15269 1.313e+08 1.326e+08
1 2093 19434058 18170380
Combined 17362 1.507e+08 1.507e+08
Table 3b: Rank-sum test for Common Official Language
Ho: collaborations (common_lang_official==0) = collaborations (common_lang_official ==1)
z = -11.942
Prob > |z| = 0.0000
P{collaborations(common_lang_official==0) > collaborations(common_lang_official==1)} = 0.460
There is a statistically significant difference between the underlying distribution of collaborations between countries which share a common official language and those which do not. The probability indicates that the chances of collaborations between countries not sharing an official language are 46% more than between countries which do have a common official language.
As seen from the previous result, collaborations between a developed-developing country pair are more numerous than between a developing-developing country pair. Secondly, the probability that a developing country, like India or one of the African nations, does not share an official language with a developed partner like the U.S. is quite high. Hence, it can be said that international collaborations take place irrespective of a common official language.
Since our sample consists of only those country pairs which contain at least one developing country, the above results seem to hold.
###### Comparison based on contiguity
Contiguity Obs. Rank sum Expected
0 16986 1.471e+08 1.475e+08
1 376 19434058 3264244
Combined 17362 1.507e+08 1.507e+08
Table 3c: Rank-sum test for contiguity
Ho: collaborations (contiguity==0) = collaborations (contiguity ==1)
z = -6.985
Prob > |z| = 0.0000
P{collaborations(contiguity==0) > collaborations(contiguity==1)} = 0.448
The probability here indicates that the chances of collaborations between countries sharing a common border are 44.8% less than between countries which are not contiguous.
This result can again be traced back to the higher number of collaborations between developed-developing country pairs. Since it is unlikely that a developed and a developing country share a common border, it can be said that the factors influencing collaboration between two countries are research facilities and expertise rather than contiguity.
The Wilcoxon rank-sum test thus helps to gain an intuition about the data and the difference in patterns of collaboration between various country pairs.
## Methodology
##### Poisson Regression for Count Data Model
The Poisson regression is used to model count data where the error structure does not follow the normal distribution. It assumes that the response variable Y follows the Poisson distribution, i.e.:
$Pr\{Y = y\} = \frac{e^{-\mu}\mu^{y}}{y!}$
for µ>0. The mean and variance of this distribution can be shown to be
$E(Y) = var(Y) = \mu$
Since the mean is equal to variance, the usual assumptions of homoscedasticity do not hold for the Poisson data.
In this study, there are four types of Poisson regression models. Model 1 gives the result of regression of collaborations on shareclinPub_developed, shareclinPub_developing and distance while controlling for the proximity dummies and the therapeutic dummies. Model 2 gives results of regression of collaborations on shareclinPub_actor, shareclinPub_partner for two developing countries while controlling for other variables. Model 3 regresses collaborations on sharefirmPub_developed, sharefirmPub_developing while Model 4 does the same for two developing countries while regressing on shareclinPub_actor, shareclinPub_partner. Regression results of Model 2 and 4 can be found in the appendix.
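A minimal sketch of how such a model can be fitted in Python with statsmodels follows; the data are synthetic stand-ins and the column names merely mirror Table 1, so this is not the authors' actual estimation code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; in the paper, one row per country pair and
# therapeutic area, with the variables listed in Table 1.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "collaborations": rng.poisson(1.3, n),
    "sharefirmPub_developed": rng.uniform(0, 1, n),
    "sharefirmPub_developing": rng.uniform(0, 1, n),
    "distance": rng.uniform(0.06, 19.8, n),   # in thousands of km here
    "common_lang_official": rng.integers(0, 2, n),
    "colony": rng.integers(0, 2, n),
    "contiguity": rng.integers(0, 2, n),
    "period_dummy": rng.integers(1, 3, n),
})

# Poisson regression with robust (sandwich) standard errors, echoing the
# robust VCE used below; therapeutic-area dummies are omitted here.
model = smf.poisson(
    "collaborations ~ sharefirmPub_developed + sharefirmPub_developing"
    " + distance + common_lang_official + colony + contiguity"
    " + C(period_dummy)",
    data=df,
).fit(cov_type="HC0")
print(model.get_margeff().summary())   # marginal effects, as in Table 4
```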
##### Test for Over-dispersion
There is a test of the null hypothesis of equi-dispersion, $V(y|x) = E(y|x)$, against the alternative of over-dispersion given by the equation
$V(y|x) = E(y|x) + \alpha [E(y|x)]^{2}$
with this variance function being that of a negative binomial model.
Hence, the hypothesis H0: α = 0 against H1: α > 0 is tested. The results (Appendix A.4) indicate the presence of significant over-dispersion in Model (1) and (3). To model this feature of the data, robust estimate of VCE (Cameron and Trivedi 2009) is used.
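For completeness, a sketch of this auxiliary test (Cameron and Trivedi's OLS regression of a transformed residual on the fitted mean), reusing the model from the sketch above:

```python
import statsmodels.api as sm

# Under the NB2 alternative V(y|x) = mu + alpha * mu^2, regress
# ((y - mu)^2 - y) / mu on mu with no constant; the slope estimates
# alpha, and its t-statistic tests H0: alpha = 0 (equi-dispersion).
mu = model.predict()
y = model.model.endog
z = ((y - mu) ** 2 - y) / mu
aux = sm.OLS(z, mu).fit()
print(aux.params[0], aux.tvalues[0])
```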
## Regression Results
The results are obtained by applying the Poisson model with robust VCE estimates (application of OLS yields similar results). Table 4 gives the marginal effects after Poisson regression of collaborations on the share of publications attributed to firms of the partnering developed and developing countries (Model 3). A developed country with a higher share of firm-level publications has a 93% greater chance of collaborating with a developing country. Similarly, a developing country with a low share of firm-level publications has a 30.6% higher chance of collaborating with its developed counterpart. These positive and negative coefficients of sharefirmPub_developed and sharefirmPub_developing respectively imply that the pharmaceutical firms in developed countries prefer to collaborate with counterparts who are less developed in pharmaceutical research. A general upward trend has been observed in firm-level international collaborations; however, these results point towards the exploitation of economic and bureaucratic disparities by the developed countries. While the firms from developing countries collaborate to absorb capacity, their developed counterparts may actually gain by exploiting the wage and regulation disparities between the two nations. Local people who consider themselves to be poorly paid see these externally funded research projects as an added benefit (Maina-Ahlberg et al., 1998).
Independent Variables (Dependent Variable: Collaborations)   Model (3) dy/dx (S.E.)
sharefirmPub_developed   0.93*** (0.17)
sharefirmPub_developing   -0.306** (0.15)
distance   -1.40e-06 (0.00002)
Common_lang_official   0.89** (0.35)
Colony   0.54* (0.29)
contiguity   0.07 (0.19)
Period control   Yes
Therapeutic Area control   Yes
N   14606
Log pseudo-likelihood   -39367.039
Table 4: Marginal effects after Poisson Regression on Model 3
(*p<0.05, **p<0.01, ***p<0.001)
In Table 4, distance represents the geographic distance between two countries. An insignificant coefficient for distance only supports the fact that developed and developing countries are seldom closely situated. In the presence of new communication technologies, geographical distance does not matter much for collaboration and is found to be insignificant in individual periods.
It is found that a common official language facilitates collaboration between two countries belonging to different income groups. The number of collaborated research articles between partners sharing a common official language was 0.89 times more than between those without a common official language.
The variable colony shows a positive association with the number of collaborated research publications while the effect of contiguity is ambiguous.
Table 5 gives the marginal effects after Poisson regression of collaborations on the share of publications assigned to the CHI category of "clinical research", "clinical observation" or "clinical mix", of the developed and developing countries.
The coefficients of shareclinPub_developing and shareclinPub_developed are negative and significant. This can be interpreted as a mismatch between the human needs and priorities of different countries. Developing countries with a high share of clinical research publications may not prefer to collaborate with their developed counterparts, as they can easily obtain research participants and lower research costs in their home country.
Independent Variables (Dependent Variable: Collaborations)   Model (1) dy/dx (S.E.)
shareclinPub_developed   -0.240** (0.06)
shareclinPub_developing   -0.391*** (0.13)
distance   5.81e-06 (0.00002)
Common_lang_official   0.955** (0.348)
Colony   0.533* (0.29)
contiguity   0.113 (0.20)
Period control   Yes
Therapeutic Area control   Yes
N   14606
Log pseudo-likelihood   -39203.142
Table 5: Marginal effects after Poisson Regression on Model 1
(*p<0.05, **p<0.01, ***p<0.001)
Developed countries, on the other hand, may not find suitable partners who wish to collaborate in clinical research on conditions such as allergic rhinitis or over-reactive bladder, which are of interest to them. Also, considering that the genetic make-up of populations differs widely across nations, the safety and effectiveness of drugs may not be the same in the trial sample and in the user sample. This can deter collaboration.
The variables distance and contiguity are again insignificant, while common_lang_official and colony show a positive association with the number of collaborated research articles.
##### Differences in pattern over the 12 Therapeutic areas
Figure 2: Data distribution over the Therapeutic Areas
Figure 2 illustrates the data distribution over the 12 therapeutic areas. As is evident, the share of infectious diseases surpasses all other shares and is more than one-third of the whole sample. This is justifiable because infectious diseases are a global problem and ought to drive collaboration. However, the more basic therapeutic areas which are of research interest to the developing countries, such as HIV infection, gastrointestinal or cardiovascular, have a small share in the sample. This may be a reason behind the counter-intuitive results which say that collaborations decrease with a higher level of clinical research publications of both the developed and developing countries.
Table 6 gives the regression results for different therapeutic areas. The sign of shareclinPub_developing is positive for the therapeutic areas cardiovascular and gastrointestinal, while it is negative for areas like cancer and infectious diseases. However, the sign of shareclinPub_developed remains negative whenever it is significant.
Independent Variables (Dependent Variable: Collaborations)   TA1 Coef. (S.E.)   TA2 Coef. (S.E.)   TA7 Coef. (S.E.)   TA13 Coef. (S.E.)   TA17 Coef. (S.E.)
shareclinPub_developed   -2.35* (1.21)   -0.619 (0.58)   0.302 (0.574)   -0.78*** (0.226)   0.43 (0.29)
shareclinPub_developing   -1.13*** (0.24)   0.904*** (0.13)   1.19** (0.361)   -0.55** (0.20)   0.34 (0.25)
distance   .00004 (0.0001)   0.00002 (0.00006)   -0.0001* (0.00007)   6.55e-06 (0.00003)   -0.00005 (0.00006)
Common_lang_official   1.53* (0.848)   1.0* (0.57)   2.18*** (0.44)   0.88** (0.29)   1.32* (0.57)
Colony   -1.02 (1.04)   0.314 (0.54)   0.234 (0.69)   1.69*** (0.27)   0.25 (0.76)
contiguity   0.468 (1.03)   0.39 (0.76)   -0.99 (0.74)   -0.34 (0.45)   -0.04 (0.53)
Period control   Yes   Yes   No   Yes   No
N   1528   1689   1515   5081   1107
Log pseudo-likelihood   -12876.627   -3631.77   -1434.13   -13343.9   -1210.51
Table 6: Poisson regression for 5 therapeutic areas
(*p<0.05, **p<0.01, ***p<0.001)
## Conclusions
In this study, descriptive statistics and Poisson regression model were used to study the patterns of collaboration between developing countries and their developed counterparts, while focusing on the firm-level publications and clinical-research publications.
The preliminary results suggest that developing countries prefer to collaborate with developed countries, and even more so with countries sharing a common official language and colonial links. The Poisson regression results for the firm-level collaborations support the hypothesis that developed countries with a higher number of firm-level publications collaborate more. The results for clinical research, however, do not support the hypothesis and are ambiguous (differing across the various therapeutic areas). A reason for such results can be the absence of data for collaborations specific to clinical research. Using the Poisson model, an attempt has been made to explain the total collaborations between two countries by the share each has in clinical research. However, the model fails to control for size, as a gravity regression model would.
Scope for future work lies in improving the methodology (using a gravity regression model) while focusing on a particular therapeutic area. Such a study would provide a better picture of why a developing nation collaborates with a developed one and what role it plays. The dataset, however, will then have to be more refined, i.e. at the therapeutic-area level.
## Acknowledgements
The DAAD WISE program and GSBC - EIC have jointly funded this project. I thank Professor Uwe Cantner, Chair of Microeconomics at the University of Jena for giving me the opportunity to work on this topic. I particularly thank Bastian Rake for sharing the data with me and for helpful comments, expressed interest and concerns. I also sincerely thank Dr Ljubica Nedelkoska for her very helpful comments and suggestions.
## List of Figures
Figure 1: Plot showing the major developing collaborators and the concerned Therapeutic Areas
Figure 2: Data distribution over the Therapeutic Areas
## List of Tables
Table 1: Variable Description and Source
Table 2: Summary Statistics
Table 3a: Rank-sum test for developed and developing country collaboration
Table 3b: Rank-sum test for Common Official Language
Table 3c: Rank-sum test for contiguity
Table 4: Marginal effects after Poisson Regression on Model 3
Table 5: Marginal effects after Poisson Regression on Model 1
Table 6: Poisson regression for 5 therapeutic areas
Table 7: List of the 12 Therapeutic Areas
Table 8: Cross-correlation between variables
Table 9: Poisson Regression results for Model 2
Table 10: Poisson regression results for Model 4
Table 11: Test for over-dispersion
## Appendix
##### A.1 List of Therapeutic Areas and their IDs
| Therapeutic Area | ID |
|---|---|
| Cancer | 1 |
| Cardiovascular | 2 |
| Central Nervous System | 3 |
| Eye and Ear | 6 |
| Gastrointestinal | 7 |
| Genitourinary | 8 |
| Haematological | 9 |
| HIV Infections | 10 |
| Immune System | 12 |
| Infectious Diseases | 13 |
| Musculoskeletal | 15 |
| Respiratory | 17 |
Table 7: List of the 12 Therapeutic Areas
##### A.2 Correlations
|   | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Collaborations | 1 | | | | | | | | |
| 2 shareclinP~g | -0.014 | 1 | | | | | | | |
| 3 shareclinP~d | -0.018 | 0.054 | 1 | | | | | | |
| 4 sharefirmP~d | 0.01 | 0.002 | 0.091 | 1 | | | | | |
| 5 sharefirmP~d | -0.003 | 0.02 | -0.02 | -0.004 | 1 | | | | |
| 6 distance | -1 | 0.057 | 0.093 | 0.162 | 0.022 | 1 | | | |
| 7 colony | 0.023 | 0.003 | 0.002 | 0.027 | 0.0002 | -0.062 | 1 | | |
| 8 contiguity | 0.004 | -0.019 | -0.002 | -0.034 | -0.008 | -0.1834 | 0.142 | 1 | |
| 9 common_lang_official | 0.039 | 0.011 | 0.059 | 0.091 | 0.014 | 0.008 | 0.197 | 0.043 | 1 |
Table 8: Cross-correlation between variables
###### Model 2
| Independent Variables (Dependent Variable: Collaborations) | Model 2 dy/dx (S.E.) |
|---|---|
| shareclinPubactor | -0.063 (0.059) |
| shareclinPubpartner | -0.042 (0.034) |
| distancew | -0.0001*** (0.00002) |
| Common_lang_official | -0.158*** (0.041) |
| Comcol | -0.194*** (0.045) |
| contiguity | -0.129*** (0.028) |
| Period control | Yes |
| Therapeutic Area control | Yes |
| N | 2756 |
| Log pseudo-likelihood | -31801.44 |
Table 9: Poisson Regression results for Model 2
###### Model 4
| Independent Variables (Dependent Variable: Collaborations) | Model 4 dy/dx (S.E.) |
|---|---|
| sharefirmPub_actor | -0.568 (0.85) |
| sharefirmPub_partner | -0.038 (0.79) |
| distancew | -0.0007*** (0.00004) |
| Common_lang_official | -2.16*** (0.421) |
| comcol | -2.77*** (0.468) |
| contiguity | -2.40*** (0.558) |
| Period control | Yes |
| Therapeutic Area control | Yes |
| N | 2756 |
| Log pseudo-likelihood | -32178.144 |
Table 10: Poisson regression results for Model 4
##### A.4 Test for Over-dispersion
| Model | α (S.E.) |
|---|---|
| Model (3) | 739.7* (345.9) |
| Model (4) | 16.04 (340.6) |
| Model (1) | 553.3** (175.5) |
| Model (2) | 14.5 (261.1) |
Table 11: Test for over-dispersion
## Notes
[1] Richa Srivastava is a penultimate-year undergraduate pursuing an Integrated Masters in Economics at the Indian Institute of Technology Kanpur. Her interests lie in the fields of Game Theory, Behavioral Economics and Applied Microeconomics. She has received the Academic Excellence Award for being in the top 5% at IITK across all disciplines. In summer 2011, she was selected as a DAAD WISE scholar to pursue undergraduate research at Economics of Innovative Change, Jena, Germany. She plans to pursue management after completing her masters program in 2013.
## References
Adams, J. D., G. C. Black, J. R. Clemmons and P. E. Stephan (2005), 'Scientific Teams and Institutional Collaborations: Evidence from U.S. Universities', Research Policy, 34 (3), 259-85
Agrawal, A. and A. Goldfarb (2008), 'Restructuring research: Communication costs and the democratization of university innovation', American Economic Review, 98 (4), 1578-90
Cameron, A. C. and P. K. Trivedi (1990), 'Regression-based Tests for Overdispersion in the Poisson Model', Journal of Econometrics, 46 (3), 347-64
Cockburn, I. and R. Henderson (1996), 'Public-private Interaction in Pharmaceutical Research', Proceedings of the National Academy of Sciences, 93 (23), 12725-30
Cockburn, I. and R. Henderson (1998), 'Absorptive Capacity, Coauthoring Behavior, and the Organization of Research in Drug Discovery', Journal of Industrial Economics, 46 (2), 157-82
Glickman, S. W., J. G. McHutchison, E. D. Peterson, C. B. Cairns, R. A. Harrington, R. M. Califf and K. A. Schulman (2009), 'Ethical and Scientific Implications of the Globalization of Clinical Research', The New England Journal of Medicine, 360 (8), 816-23
Maina-Ahlberg, B., E. Nordberg and G. Tomson (1998), 'North-South health research collaboration: Challenges in institutional interaction', Social Science & Medicine, 44 (8), 1229-38
McKelvey, M., L. Orsenigo and F. Pammolli (2004), 'Pharmaceuticals Analyzed Through the Lens of a Sectoral Innovation System', Sectoral Systems of Innovation: Concepts, Issues and Analyses of Six Major Sectors in Europe, Cambridge, UK: Cambridge University Press, pp. 73-120
Mytelka, L. K. (2006), 'Pathways and Policies to (Bio) Pharmaceutical Innovation Systems in Developing Countries', Industry and Innovation, 13 (4), 415-35
Plotnikova, T. and B. Rake (2010), 'Collaboration in pharmaceutical research: Exploration of country-level determinants', DIME-DRUID ACADEMY Winter Conference 2011, Denmark
Sonnenwald, D. H. (2007), 'Scientific collaboration', Annual Review of Information Science & Technology, 41 (1), 643-81
Wagner, C. and L. Leydesdorff (2005), 'Mapping the network of global science: comparing international co-authorships from 1990 to 2000', International Journal of Technology and Globalization, 1 (2), 185-208
Wagner, C., I. Brahmakulam, B. Jackson, A. Wong and T. Yoda (2001), Science and Technology Collaboration: Building Capacity in Developing Countries?, Monograph Reports, Santa Monica: RAND Publications
Wuchty, S., B.F. Jones and B. Uzzi (2007), 'The Increasing Dominance of Teams in Production of Knowledge', Science, 316 (5827), 1036-39
To cite this paper please use the following details: Srivastava, R. (2012), 'Developing countries and scientific collaboration in pharmaceuticals', Reinvention: a Journal of Undergraduate Research, Volume 5, Issue 1, http://www.warwick.ac.uk/reinventionjournal/archive/volume5issue1/srivastava Date accessed [insert date]. If you cite this article or use it in any teaching or other related activities please let us know by e-mailing us at
https://gitter.im/hyperspy/hyperspy
Weixin Song
@winston-song
Hi all, is there any method to twin the Mn L3 and L2 H-S GOS edge heights?
Magnus Nord
@magnunor
@winston-song, I think they're twinned automatically
Eric Prestat
@ericpre
@winston-song, yes there is the twin attribute of parameters
This is an example, adapted from the documentation above:
import hyperspy.api as hs
s = hs.datasets.artificial_data.get_core_loss_eels_signal()
m = s.create_model()
# Grab the intensity parameter of the third component in the model
Mn_L2_intensity_parameter = m[2].intensity
# Its .twin attribute points to the parameter it is twinned with (or None)
print(Mn_L2_intensity_parameter.twin)
m.print_current_values(only_free=True)   # only parameters free to vary
m.print_current_values(only_free=False)  # all parameters, twinned ones included
Eric R. Hoglund
@erh3cq
Loading a Velox EDX SI that has been processed with "Reduce file size" in Velox throws some errors. The reduced file does not have a spectrum stream or multiple detectors, but only a single SI. Does anyone have a fix for this?
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-3-51db32dd469f> in <module>
----> 2 SI = hs.load(file+'.emd', select_type='spectrum_image',
      3             sum_frames=False, sum_EDS_detectors=False,
      4             first_frame=1, last_frame=1)
      5 SI
~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io.py in load(filenames, signal_type, stack, stack_axis, new_axis_name, lazy, convert_units, **kwds)
    278 # No stack, so simply we load all signals in all files separately
--> 279 objects = [load_single_file(filename, lazy=lazy, **kwds)
    280            for filename in filenames]
~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io_plugins\emd.py in file_reader(filename, log_info, lazy, **kwds)
   1379 if fei_check(filename) == True:
   1380     _logger.debug('EMD is FEI format')
-> 1381     emd = FeiEMDReader(filename, lazy=lazy, **kwds)
~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io_plugins\emd.py in __init__(self, filename, select_type, first_frame, last_frame, sum_frames, sum_EDS_detectors, rebin_energy, SI_dtype, load_SI_image_stack, load_reduced_SI, lazy)
    588 except Exception as e:
--> 589     raise e
-> 1058 spectrum_image_shape = streams[0].shape
IndexError: list index out of range
Eric Prestat
@ericpre
From the user guide:
"Pruned Velox EMD files only contain the spectrum image in a proprietary format that HyperSpy cannot read. Therefore, don’t prune Velox EMD files if you intend to read them with HyperSpy."
Mingquan Xu
I suddenly cannot save a file in *.msa format, which I could do before. What would be the cause?
Thomas Aarholt
@thomasaarholt
@Mingquan_Xu_twitter my guess is that you're trying to save a multidimensional spectrum, while the msa format only supports single spectra (1D). What is the shape of your signal?
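For illustration, a minimal sketch (the shape and file name here are made up):
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal1D(np.zeros((100, 100, 10)))  # (100, 100|10): a map, not a single spectrum
# s.save("spectrum.msa")            # would fail: msa holds exactly one 1D spectrum
s.inav[0, 0].save("spectrum.msa")   # a single spectrum extracted from the map saves fine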
Mingquan Xu
@thomasaarholt ,thanks very much for your reply. I have checked the dimensions of my data, and this is the cause. Thanks for your suggestion.
Eric R. Hoglund
@erh3cq
@ericpre thank you
Justyna Gruba
@justgruba
Hi, hello :)) I want to ask you whether this link points to the current version of the quantification_cliff_lorimer function: https://github.com/hyperspy/hyperspy/blob/851c9c0687533f429c853873834100aae2e6f92b/hyperspy/misc/eds/utils.py. Thank you in advance! Justyna
Eric Prestat
@ericpre
@justgruba, what do you mean with "current"? You should get an answer by looking at the history of that file on github
petergraat
@petergraat
I'm playing around with TEM/EDX quantification in Hyperspy, particularly with absorption correction in the Cliff-Lorimer method. I noted that I get a much smaller effect when including absorption correction in Hyperspy than when I do it "manually". Browsing through the code I noted that in the get_abs_corr_zeta method the mac is multiplied by 0.1 (line 560 in the misc/eds/utils.py file: mac = stack(material.mass_absorption_mixture(weight_percent=weight_percent)) * 0.1). This seems to be the reason for the difference. What is the reason for this factor of 0.1?
Alexander Skorikov
@petergraat Well it seems to be a conversion factor from cm^2/g to m^2/kg (indicated in the source file as a comment)
petergraat
@petergraat
@askorikov Ah, stupid, I should have seen that! Then there might be an issue with the mass thickness returned by the CL_get_mass_thickness method in the _signals/eds_tem.py file. That method should return the mass thickness in kg/m2, but I think it returns the mass thickness in g/cm2. On line 911 the elemental mass thickness is calculated as the product of composition (in %), thickness (in nm), density (in g/cm3), and a factor of 1e-9 to get from % to fraction (1e-2) and from nm to cm (1e-7). Or am I overlooking something with the units again?
Andrew Herzing
@aaherzing_gitlab
Is there any way to link two signals together so that if the navigation axes of the first dataset are cropped then this will automatically crop the navigation axes of the second? Something like this is what I'm looking for:
>>> s = hs.signals.Signal1D(np.zeros([100,100,10]))
>>> print(s)
<Signal1D, title: , dimensions: (100, 100|10)>
>>> s2 = hs.signals.Signal1D(np.zeros([100,100,10]))
>>> print(s2)
<Signal1D, title: , dimensions: (100, 100|10)>
>>> s = s.inav[10:,:]
>>> print(s)
<Signal1D, title: , dimensions: (90, 100|10)>
>>> print(s2)
<Signal1D, title: , dimensions: (90, 100|10)>
Alexander Skorikov
@petergraat Hm, indeed looks like there's a mistake there
Thomas Aarholt
@thomasaarholt
@petergraat @askorikov great that you guys are tracking this :) Could one of you also make a GH issue about it?
petergraat
@petergraat
@thomasaarholt I've just created a new issue at GitHub.
petergraat
@petergraat
@aaherzing_gitlab The following might work:
s = hs.signals.Signal1D(np.zeros([100,100,10]))
s2 = hs.signals.Signal1D(np.zeros([100,100,10]))
s2.axes_manager = s.axes_manager
s.crop(0, 10, None)
print(s)
print(s2)
Using s = s.inav[10:,:] won't work because it creates a new copy of s, and then the axes_manager of s2 isn't the same as the axes_manager of s anymore.
Andrew Herzing
@aaherzing_gitlab
Thanks! This might do it. I'm actually hoping to embed the second signal in the metadata of the first. Is this a bad idea in practice?
Eric Prestat
@ericpre
@aaherzing_gitlab and @petergraat, the example above doesn't work and will break s2. I can't think of a way of doing this automatically
Would putting it in a for loop work well enough?
s = hs.signals.Signal1D(np.arange(100*100*10).reshape(100, 100, 10))
s2 = s.deepcopy()
for _s in [s, s2]:
    _s.crop(0, 10, None)
petergraat
@petergraat
@ericpre and @aaherzing_gitlab OK, I understand, setting s2.axes_manager = s.axes_manager couples the axes managers, but not the data. Thus, s.data.shape is affected by the s.crop() command, but s2.data.shape still has the original shape.
Maybe stacking the signals is another possibility:
import hyperspy.misc as hm
s = hs.signals.Signal1D(np.zeros([100,100,10]))
s2 = hs.signals.Signal1D(np.zeros([100,100,10]))
s3 = hm.utils.stack([s, s2])
print(s3)
s3 = s3.inav[10:, :, :]
print(s3)
Katherine E. MacArthur
@k8macarthur
Quick question. Apart from Hyperspy what would people reference when using PCA or one of the other sub algorithms? Specifically I'm doing PCA denoising on my EDX stuff at the moment.
petergraat
@petergraat
@k8macarthur , Do you mean scientific literature? In that case this one might be relevant:
C.M. Parish, 'Multivariate Statistics Applications in Scanning Transmission Electron Microscopy X-Ray Spectrum Imaging', chapter 5 of 'Advances in Imaging and Electron Physics', Volume 168 (Elsevier, 2011).
Katherine E. MacArthur
@k8macarthur
@petergraat Yes I mean scientific literature. Thanks! :)
lukmuk
@lukmuk
Hey all, I am trying to run a Gaussian smoothing kernel over the spatial dimension of my STEM-EDS datacube, i.e. of shape (x, y | E). I am using the map() function for this and have a quick question regarding the signal/navigation dimensions:
If I have a signal s=(x, y | E), then running s.map(my_gaussian_filter) would smooth along the energy channels (signal dimension).
For filtering in spatial dimension I transposed the signal to swap signal/navigation, apply s.map(my_gaussian_filter) and then transpose back, i.e. s.T.map(my_gaussian_filter).T?
Both versions return some smoothed version of my spectrum image, but I just wanted to ask if the ideas above are correct (as both versions have a smoothing effect on the spectrum image, so it is hard to compare). Thank you for your help!
For anyone interested, I am trying to mimic the Gaussian kernel filtering from the temDM MSA plugin by Pavel Potapov: https://www.sciencedirect.com/science/article/pii/S0968432816303821?via%3Dihub
Eoghan O'Connell
@PinkShnack
Hey @lukmuk. I'm sure someone more experienced will be able to tell you if your method works (I don't use the map function often).
Without having your dataset I can't be sure, but I assume the Scipy multidim gaussian filter function should do the job without needing any transposing? See the example below with some dummy data.
from scipy.ndimage import gaussian_filter
import hyperspy.api as hs
import numpy as np
s = hs.signals.Signal1D(np.zeros([100,100,10]))
gaussian_filter(s, sigma=1)
# general 3D example
a = np.arange(125, step=1).reshape((5,5,5)) # 3D signal
a
gaussian_filter(a, sigma=1)
s = hs.signals.Signal1D(a)
gaussian_filter(s, sigma=1)
See documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter.html
Could also use the scikit-image wrapper version: https://scikit-image.org/docs/dev/api/skimage.filters.html#skimage.filters.gaussian
lukmuk
@lukmuk
@PinkShnack Thank you for the idea and the help! I tested different methods for Gaussian filtering and the map() method seems to match (relatively) well with the temDM output for separate filtering in the spatial/energy dimension. Applying gaussian_filter(s) seems to smooth in 3D, i.e. over the spatial and signal dimensions simultaneously. I put the tests in a notebook (https://github.com/lukmuk/em-stuff/tree/main/Spectrum-image-Gaussian-filter). Again, thank you for your help.
Eric Prestat
@ericpre
@lukmuk: your example of s.T.map(my_gaussian_filter).T is brilliant! This is exactly what it is designed for, and it illustrates very well the idea behind the transposition of navigation and signal space!
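In code, the two filtering directions might look like this (a minimal sketch; the shape and sigma are made up):
import numpy as np
from scipy.ndimage import gaussian_filter
import hyperspy.api as hs
s = hs.signals.Signal1D(np.random.random((64, 64, 100)))  # (x, y | E)
# Smooth each spectrum along the energy axis (the signal dimension)
s_energy = s.map(gaussian_filter, sigma=2, inplace=False)
# Transpose so (x, y) becomes the signal, smooth each energy slice spatially,
# then transpose back to the original (x, y | E) layout
s_spatial = s.T.map(gaussian_filter, sigma=2, inplace=False).T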
Jędrzej Morzy
@JMorzy
Since I updated my hyperspy recently, when using remove_background() or fit_component() with the PowerLaw component, I keep getting the following warnings: 'WARNING:hyperspy.model:Covariance of the parameters could not be estimated. Estimated parameter standard deviations will be np.nan.
WARNING:hyperspy.model:m.fit() did not exit successfully. Reason: Number of calls to function has reached maxfev = 600.' They were not there before and the fit seems poor. Any ideas how I can improve this? It doesn't take a maxfev argument, so perhaps the default maxfev value needs changing in the code itself?
Thomas Aarholt
@thomasaarholt
My guess is that increasing the maxfev ("maximum function evaluations") will not improve the fit markedly. Could you share a screenshot or the data, as well as the code you're using for the fit? My guess is that you could probably improve it by changing the fit region or similar.
SataC90
@SataC90
Hi, I'm very new to Hyperspy and Python in general, I need Hyperspy to read .emd velox files. I have installed Jupyter notebook and tried to run the code, but I'm getting an error at hs.load ("filename"). It says "no file name matches this pattern". Since I'm not an expert I haven't been able to troubleshoot this error. I'm looking for a solution that's why I'm writing here. Any help would be much appreciated
Thomas Aarholt
@thomasaarholt
Hi @SataC90 I reckon you are trying to give the load function the wrong path to a file. Make sure that you're opening jupyter notebook in the same folder that you have the files in, or that you are passing the full path to the file you're opening.
Eric R. Hoglund
@erh3cq
@SataC90 and on Windows you need to use an r before your quoted string (r"...") if your file path has \ instead of /. Easy to check.
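For example (the path here is made up):
import hyperspy.api as hs
# Raw string: backslashes are kept literally instead of starting escape sequences
s = hs.load(r"C:\Users\me\data\mymap.emd")
# Forward slashes also work on Windows and need no raw string
s = hs.load("C:/Users/me/data/mymap.emd")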
SataC90
@SataC90
Thanks for the tips @thomasaarholt @erh3cq. It seems that I have to upload each emd file into the Jupyter notebook, and then if I run the code it allows the image to be visualized. So I'm doing it this way as of now. But one thing I didn't understand: why didn't the code run when I had already uploaded the entire folder with my data into the notebook? Any tips for that? Thanks again.
Katherine E. MacArthur
@k8macarthur
@SataC90 as a general rule for specific queries like 'why doesn't this work?', it is far easier for people to help you if you copy the code into this thread. That way we can spot mistakes more easily. Generally, Jupyter notebooks run from the folder where they're stored. So if you type just a file name it expects the file to reside in the same folder. Alternatively you can use the full file name starting with your drive, e.g. C:. If you have already imported the file as a variable using:
s = hs.load('filename')
Then you perform the remaining tasks using s. However, Velox files in particular don't just load one image for a given file name; a whole list of up to 8 items is loaded. Therefore you often need to run all your functions on s[3], or better still assign the data you wish to work on to another variable name. If you type s and run, then your notebook will print a list of what your s variable actually contains.
Jędrzej Morzy
@JMorzy
@thomasaarholt here is the code used. It is a standard O-edge background fitting (the same thing happens when doing remove_background()).
mO = sO.create_model(ll = ll, GOS="Hartree-Slater", auto_add_edges=False)
mO.fit_component(mO["PowerLaw"], bounded=False, fit_independent=True, signal_range=[500.0,520.0], only_current=True)
mO.assign_current_values_to_all()
mO.fit_component(mO["PowerLaw"], bounded=False, fit_independent=True, signal_range=[500.0,520.0], only_current=False)`
Here is a screenshot of the data and the fit - it does a reasonable job with the background, but I am just concerned about it not converging on the right answer every time
Thomas Aarholt
@thomasaarholt
@JMorzy Your model shouldn't be using a Power law fit. It looks like you already have removed the background.
I wouldn't be concerned about the fit. As these things go, the O-edge fit is really quite good.
Jędrzej Morzy
@JMorzy
You are right, sorry - I should have mentioned that I remove the background separately - the issue is exactly the same there
https://www.englishclub.com/writing/punctuation-backslash.htm
# Backslash
The backslash is not really an English punctuation mark. It is a typographical mark used mainly in computing. It is called a "backslash" because it is the reverse of the slash (/) or forward slash.
The backslash is used in several computer systems, and in many programming languages such as C and Perl. It is commonly seen in Windows computers:
• C:\Users\Win\Files\jse.doc
Do not confuse the backslash (\) with the slash (/) or forward slash.
Although it is not really an English punctuation mark, the backslash is included on these pages for completeness.
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-9-section-9-6-conic-sections-in-polar-coordinates-concept-and-vocabulary-check-page-1030/3
Precalculus (6th Edition) Blitzer
$3$; hyperbola; $1$; perpendicular; $1$; right.
Comparing the given equation $r=\frac{3}{1+3\cos\theta}$ with the standard forms (a), we can identify that $e=3$, so this is the equation of a hyperbola. We see that $ep=3$, $p=1$, so the directrix is perpendicular to the polar axis at a distance of $1$ unit to the right of the pole.
http://tex.stackexchange.com/questions/142/how-can-i-make-an-enumerate-list-start-at-something-other-than-1
# How can I make an enumerate list start at something other than 1?
Sometimes, I want to have enumerate lists in LaTeX start at other than the first value (1, a, i, etc.) How can I make an enumerate list start at an arbitrary value?
You can change the counter named enumi, like this:
\begin{enumerate}
\setcounter{enumi}{4}
\item fifth element
\end{enumerate}
(If you have lists at deeper levels of nesting, the relevant counters are enumii, enumiii and enumiv.)
The enumitem package provides a simple solution to very many common problems that are related to minor tweaks of enumerate/itemize/description. In this case, you can use the start parameter. Also have a look at the resume parameter.
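For instance, a minimal sketch of the start key (assuming enumitem is available):
\documentclass{article}
\usepackage{enumitem}
\begin{document}
\begin{enumerate}[start=4]
\item This item is numbered 4.
\item This item is numbered 5.
\end{enumerate}
\end{document}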
I would just like to make explicit that the "resume" parameter causes the counter to continue from the previous "enumerate" environment. – Austin Mohr Jan 9 '14 at 5:37
If you only want to alter the starting value, the easiest way is:
\documentclass{article}
\begin{document}
\begin{enumerate}\addtocounter{enumi}{41}% start at 42 instead of 1
\item This item is numbered `42.'
\begin{enumerate}\addtocounter{enumii}{5}% This cannot be more than 25
\item This one is numbered ``(f)''
\end{enumerate}
\end{enumerate}
\end{document}
While you can have six layers of nested list environments (itemize, description, enumerate), you can have no more than 4 of one type. The counters enumi through enumiv control the index of each item's label. You can increment (as shown) or decrement (add a negative value) all 4 levels.
Note, though, that this won't be entirely arbitrary. Levels enumerated alphabetically cannot have items after an item labeled `z'. (You could, however, add a negative amount to the appropriate counter to get it back to the `a' label.)
(Now that I see the other answer, I wonder why I always opt for the relative \addtocounter rather than the absolute \setcounter?)
\addtocounter is safer in that it ensures monotonicity when used mid-list. – equaeghe Mar 19 '14 at 10:22
https://www.inchmeal.io/htpi/ch-1/sec-1.3.html
How to Prove It - Solutions
Chapter - 1, Sentential Logic
Summary
• Statements with variables. E.g., "x is divisible by 9" and "y is a person" are statements. Here x and y are variables.
• These statements are true or false based on the value of variables.
• Sets, a collection of objects.
• Bound and unbound variables. E.g., in $$y ∈ \{x\,\vert\,x^3 < 9\}$$, $y$ is a free variable, whereas $x$ is a bound variable.
• Free variables in a statement are for objects for which statement is talking about.
• Bound variables are just dummy variables that help express the idea. Thus bound variables don't represent any object of the statement.
• The truth set of a statement P(x) is the set of all values of x that make the statement P(x) true.
• The set of all possible values of the variables is called the universe of discourse, and the variables are said to range over this universe.
• In general, $y ∈ \{x ∈ A\,\vert\,P(x)\}$ means the same thing as $y ∈ A ∧ P(y)$.
Solutions
Soln1
(a) $D(6,3) \land D(9,3) \land D(15, 3)$ where $D(x, y)$ means $x$ is divisible by $y$.
(b) $D(x,2) \land D(x,3) \land \lnot D(x, 4)$ where $D(x, y)$ means $x$ is divisible by $y$.
(c) $(\lnot P(x) \land P(y)) \lor (P(x) \land \lnot P(y))$ where $P(x)$ means "$x$ is prime".
Soln2
(a) $M(x) \land M(y) \land (T(x,y) \lor T(y,x))$ where $M(x)$ means "x is a man" and $T(x, y)$ means "x is taller than y".
(b) $[(B(x) \lor B(y)) \land (R(x) \lor R(y))]$ where $B(x)$ and $R(x)$ mean "x has brown eyes" and "x has brown hair" respectively.
(c) $[(B(x) \land R(x)) \lor (B(y) \land R(y))]$ where $B(x)$ and $R(x)$ mean "x has brown eyes" and "x has brown hair" respectively.
Soln3
(a) $\{ x\,\vert\,x\text{ is a planet }\}$
(b) $\{ x\,\vert\,x\text{ is a university }\}$
(c) $\{ x\,\vert\,x\text{ is a state in US }\}$
(d) $\{ x\,\vert\,x\text{ is a province in Canada }\}$
Soln4
(a) $\{ x^2\,\vert\, x > 0 \text{ and } x \in \mathbb{N} \}$
(b) $\{ 2^x\,\vert\, x \in \mathbb{N} \}$
(c) $\{ x \in \mathbb{N}\,\vert\, 10 \le x \le 19 \}$
Soln5
(a) $−3 ∈ \{x ∈ \mathbb{R}\vert\,13 − 2x > 1\} \Rightarrow -3 \in \mathbb{R} \land 19 > 1$. No free variables in the statement. Statement is true.
(b) $4 ∈ \{x ∈ \mathbb{R^+}\vert\,13 − 2x > 1\} \Rightarrow 4 \in \mathbb{R^+} \land 5 > 1$. No free variables in the statement. Statement is false.
(c) $5 \notin \{x ∈ \mathbb{R}\vert\,13 − 2x > c\} \Rightarrow \lnot{ \{ 5 \in \mathbb{R} \land 3 > c \}} \Rightarrow 5 \notin \mathbb{R} \lor 3 \le c$. One free variable(c) in the statement. (Thanks Maxwell for the correction)
Soln6
(a) $(w ∈ \mathbb{R}) \land (13 - 2w > c)$. There are two free variables $w$ and $c$.
(b) $(4 \in \mathbb{R}) \land (13 - 2 \times 4 \in P) \Rightarrow (4 \in \mathbb{R}) \land (5 \in P)$. The statement has no free variables. It is a true statement.
(c) $(4 \in P) \land (13 - 2 \times 4 > 1) \Rightarrow (4 \in P) \land (5 > 1)$. The statement has no free variables. It is a false statement.
Soln7
(a) {Conrad Hilton Jr., Michael Wilding, Michael Todd, Eddie Fisher, Richard Burton, John Warner, Larry Fortensky}.
(b) $\{ \lor, \land, \lnot \}$
(c) { Daniel Velleman }
Soln8
(a) {1, 3}
(b) $\emptyset$
(c)
Update:
As pointed out in comments, I got this wrong first time. Here is the correct answer:
$\{ x \in \mathbb{R} \,\vert\, x^2 < 25 \}$ or, equivalently, $\vert x \vert < 5$.
$$\{1, 2, 3, 4, 5, 6, 7 \}$$
http://umj-old.imath.kiev.ua/authors/name/?lang=en&author_id=2263
# Kalchuk I. V.
Articles: 6
Article (Ukrainian)
### On the approximation of the classes $W_{β}^rH^{α}$ by biharmonic Poisson integrals
Ukr. Mat. Zh. - 2018. - 70, № 5. - pp. 625-634
We obtain asymptotic equalities for the least upper bounds of the deviations of biharmonic Poisson integrals from functions of the classes $W_{β}^rH^{α}$ in the case where $r > 2, 0 \leq \alpha < 1$.
Article (Ukrainian)
### Approximation of functions from the classes $W_{β}^r H^{α }$ by Weierstrass integrals
Ukr. Mat. Zh. - 2017. - 69, № 4. - pp. 510-519
We investigate the asymptotic behavior of the least upper bounds of the approximations of functions from the classes $W_{β}^r H^{α }$ by Weierstrass integrals in the uniform metric.
Article (Ukrainian)
### I. Approximative properties of biharmonic Poisson integrals in the classes $W^r_{\beta} H^{\alpha}$
Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1493-1504
We deduce asymptotic equalities for the least upper bounds of approximations of functions from the classes $W^r_{\beta} H^{\alpha}$, and $H^{\alpha}$ by biharmonic Poisson integrals in the uniform metric.
Article (Ukrainian)
### Approximation of ( ψ, β )-differentiable functions defined on the real axis by Weierstrass operators
Ukr. Mat. Zh. - 2007. - 59, № 9. - pp. 1201–1220
Asymptotic equalities are obtained for upper bounds of approximations by the Weierstrass operators on the functional classes $\widehat{C}^{\psi}_{\beta, \infty}$ and $\widehat{L}^{\psi}_{\beta, 1}$ in metrics of the spaces $\widehat{C}$ and $\widehat{L}_1$, respectively.
Article (Russian)
### Asymptotics of the values of approximations in the mean for classes of differentiable functions by using biharmonic Poisson integrals
Ukr. Mat. Zh. - 2007. - 59, № 8. - pp. 1105–1115
Complete asymptotic expansions are obtained for the exact upper bounds of approximations of functions from the classes $W^r_1,\; r \in \mathbb{N}$, and $\overline{W}^r_1,\; r \in \mathbb{N}\setminus\{1\}$, by their biharmonic Poisson integrals.
Article (Ukrainian)
### Approximation of (ψ, β)-differentiable functions by Weierstrass integrals
Ukr. Mat. Zh. - 2007. - 59, № 7. - pp. 953–978
Asymptotic equalities are obtained for upper bounds of approximations of functions from the classes $C^{\psi}_{\beta \infty}$ and $L^{\psi}_{\beta 1}$ by the Weierstrass integrals.
https://www.billmongan.com/Ursinus-CS173-Fall2021/Assignments/NetBeans
# Assignment Goals
The goals of this assignment are:
1. To write and execute a Java program
2. To use the NetBeans Integrated Development Environment
Please refer to the course readings and examples, which offer templates to help get you started.
# The Assignment
In this assignment, you will create a NetBeans Java project and write a small Java program.
## Creating a New Project in NetBeans
Each program you write will begin with a new project in the NetBeans software development environment. Follow these instructions to create a new Java project.
## Reviewing Your First Java Program
Your new project contains at least one file that will look something like this:
/*
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package myprogram;
/**
*
* @author bill
*/
public class MyProgram {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
}
}
The text inside comment blocks /* and */ is called a comment. These are lines where you can write anything you’d like - a description of what you’re doing, or your name, etc. They won’t execute as part of your program, so they aren’t code, per se. They’re just, as the name suggests, comments. They can span multiple lines, as you can see in three places in the example above. Another type of comment uses this format: //, and anything appearing on that line after the // comment marker is a comment. These are just like the /* ... */ comments, but only span a single line. You can see one such comment in the code above. The word TODO is not an official part of Java, but programmers like to make notes where they need to go back and write some code. In this case, it’s where your code will ultimately go.
Before we write any code, though, let’s look at the structure of this program. The first real line of code is right below the first comment. It starts with the keyword package. This essentially means that this is the name of the directory in which this source code file is saved inside your project. Every project has a directory on your hard drive, under which you can find more directories with code inside. This is useful if you have lots of source code files, so that you can organize them by what they do to make things easier for you to find. In our course, we’ll generally have only one of these, and so there won’t be a need to change this. On the left side of your NetBeans window, you should see the name of the package with a little yellow package icon. These have to match exactly (even the upper-case and lower-case letters must be exactly the same), so it is generally a good idea to leave this alone. The package for my project was called myprogram, but yours might be different - that’s ok, as it’s just a name: stick with yours!
Inside this package directory is your file. In Java, we generally call each file a class, and these represent modules in which your programs will be written. We can do more than just write code inside a class, but for now, that’s all we need to get started. The name of the class should, again, match the name of the file (letter case and all). NetBeans should have done this for you automatically, so there is likely no need to change this. In my example, you’ll see that my class is called MyProgram on the line that reads public class MyProgram. We’ll talk about what public means another day: for now, we can say that it means that our code can be seen by all the other files in our project, if we had more code files. On the left side of your NetBeans window, you should see your file (mine is called MyProgram.java; yours might be called something else, but it will end with .java) under the package directory we saw earlier.
Finally, inside your class is a function called main. I know that main is inside your class, because there are curly braces { and } following public class MyProgram and at the very bottom of your file: the public static void main(String[] args) line is in-between those braces. We even indent the function inside to help us visually confirm this. We’ll go over what each of the words public, static, and void mean later (along with the String[] args), but you can ignore them for now. For the moment, main is the name of our function, and it’s where we will write our program. We can have more than one of these functions, but we’ll just have one for now. In fact, every program you write will have at least one function, and Java will always start by running whatever code is inside of your function called main. So, you’ll see one of these in every program that you write!
## Writing Your First Java Program
For now, our main function is empty. It has a single comment in it, reminding us to write our program there.
If you were to replace the line:
// TODO code application logic here
with
System.out.println("Hello World!");
you could click the “Run” button (it’s a green triangle pointing to the right around the middle of your toolbar at the top, or you can use the “Run”->”Run Project” menu), and you’d see the text “Hello World” appear at the bottom of your NetBeans window. Let’s try writing something more interesting.
Make a new line after your System.out.println("Hello World!"); statement, but still inside the curly braces of your main function. Now, we can write another line of code that Java will execute right after printing “Hello World!” to the screen. We generally run one line of code at a time. For this example, we’ll write a few lines of code to compute the area of a circle using the classic formula:
$$A = \pi r^{2}$$
There are a few different pieces here. For example, we can see that we’re going to have to “square” the radius by raising it to the power of 2, and then multiply it by $$\pi$$. We’ll write code to do these one step at a time.
### Declaring a Variable
Let’s declare a variable called radius to represent the radius of the circle. A variable is a label in your program that we can associate with a value. The value can be a number, or text, or other things we’ll see later; importantly, a variable’s value can change over time as its name suggests. We can direct our program to refer to that value at any time during the program, and can update it as we like. This allows us to, for example, change the radius of the circle, without breaking the logic of our program. In other words, we can compute the area of any circle, as long as we know what its radius is.
Write the following code in your project, right below the print statement we wrote earlier.
double radius = 10.5;
A variable declaration generally consists of three things. The first is the type of value we plan to store in the variable. Numeric values that can have decimal places are called double values, so we’ll use the word double here. The second item is the name of the variable: radius (this can be just about anything we want, but it’s a good idea to use a descriptive name!). Finally, we use the equals sign to set this variable to a value. Here, I’ll use the value 10.5, but you can put any number here that you’d like. Later, you’ll see how to obtain this value from the user via the keyboard or even a file on your disk!
### Computing the Area
Now that we have our formula and a value for the radius, we can compute the area using mathematical expressions. We need to square the radius, and we need to multiply that by $$\pi$$. We can do either of these steps first, but let’s square the radius first. There is a function called Math.pow that takes a base a and a power b, and computes $$a^{b}$$. We could use this to compute Math.pow(radius, 2), and square the radius.
How can we save that value for use later in our program? We can assign it to a variable! Write the following line of code below the line where you declared the radius variable earlier:
double area = Math.pow(radius, 2);
Now, we have a variable called area, whose value will be a numeric double type, and its value will be whatever number was in the radius variable squared.
Next, we multiply this by $$\pi$$. We could multiply it by 3.14, but Java gives us a more precise value of $$\pi$$ that we can use as a sort of built-in variable. It’s called Math.PI. Our variable area already contains the value of the radius squared, so we can multiply the current value of area by Math.PI to compute the final area of our circle. Here’s the line of code that you can place below the Math.pow line you just wrote:
area = area * Math.PI;
Note that the * character means multiplication in Java, and that we didn’t need the word double this time. We can leave out double because Java already knows that area is a double - we told it a few lines ago when we declared the variable! If you type double area = area * Math.PI, you’ll get an error letting you know this, and you can just remove the word double to fix it.
### Printing the Area to the Screen
We now know the area of our circle, but how do we display it to the user? We saw the System.out.println function earlier, and we can print variables just like text. Here’s the code to put right below the line you just wrote:
System.out.println(area);
Try running it! You should see the text “Hello World!” followed by the area of your circle.
### Improving Our Output
It’s nice to print a more friendly output to the user, so that they know what this number means. Right before the System.out.println(area); line, try adding this line:
System.out.println("The area is: ");
Running this program, you should see this print out before the area. The only trouble is that they’re on two different lines. Modify the line you just wrote to this:
System.out.print("The area is: ");
… and you should see “The area is: “ followed by the area of your circle, all on one line. With this in mind, what do you think the ln in System.out.println means? What happens when you leave it off and simply write System.out.print?
Modify your program to print the following output using additional System.out.print and System.out.println statements:
The area of the circle with radius 10.5 is: 346.3605881077468
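Putting the pieces together, the finished program might look like this (one possible arrangement of the extra print statements; the package and class names follow the template above):
package myprogram;

public class MyProgram {

    public static void main(String[] args) {
        System.out.println("Hello World!");

        // Declare the radius; change this value to compute other circles' areas
        double radius = 10.5;

        // Square the radius, then multiply by pi to finish the area formula
        double area = Math.pow(radius, 2);
        area = area * Math.PI;

        // print leaves the cursor on the same line; println ends the line
        System.out.print("The area of the circle with radius ");
        System.out.print(radius);
        System.out.print(" is: ");
        System.out.println(area);
    }
}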
## Submission
• Describe what you did, how you did it, what challenges you encountered, and how you solved them.
• Please answer any questions found throughout the narrative of this assignment.
• If collaboration with a buddy was permitted, did you work with a buddy on this assignment? If so, who? If not, do you certify that this submission represents your own original work?
• Please identify any and all portions of your submission that were not originally written by you (for example, code originally written by your buddy, or anything taken or adapted from a non-classroom resource). It is always OK to use your textbook and instructor notes; however, you are certifying that any portions not designated as coming from an outside person or source are your own original work.
• Approximately how many hours it took you to finish this assignment (I will not judge you for this at all...I am simply using it to gauge if the assignments are too easy or hard)?
• Your overall impression of the assignment. Did you love it, hate it, or were you neutral? One word answers are fine, but if you have any suggestions for the future let me know.
• Any other concerns that you have. For instance, if you have a bug that you were unable to solve but you made progress, write that here. The more you articulate the problem the more partial credit you will receive (it is fine to leave this blank).
# Assignment Rubric
| Description | Pre-Emerging (< 50%) | Beginning (50%) | Progressing (85%) | Proficient (100%) |
|---|---|---|---|---|
| Algorithm Implementation (60%) | The algorithm fails on the test inputs due to major issues, or the program fails to compile and/or run | The algorithm fails on the test inputs due to one or more minor issues | The algorithm is implemented to solve the problem correctly according to given test inputs, but would fail if executed in a general case due to a minor issue or omission in the algorithm design or implementation | A reasonable algorithm is implemented to solve the problem which correctly solves the problem according to the given test inputs, and would be reasonably expected to solve the problem in the general case |
| Code Quality and Documentation (30%) | Code commenting and structure are absent, or code structure departs significantly from best practice, and/or the code departs significantly from the style guide | Code commenting and structure is limited in ways that reduce the readability of the program, and/or there are minor departures from the style guide | Code documentation is present that re-states the explicit code definitions, and/or code is written that mostly adheres to the style guide | Code is documented at non-trivial points in a manner that enhances the readability of the program, and code is written according to the style guide |
| Writeup and Submission (10%) | An incomplete submission is provided | The program is submitted, but not according to the directions in one or more ways (for example, because it is lacking a readme writeup or missing answers to written questions) | The program is submitted according to the directions with a minor omission or correction needed, including a readme writeup describing the solution and answering nearly all questions posed in the instructions | The program is submitted according to the directions, including a readme writeup describing the solution and answering all questions posed in the instructions |
Please refer to the Style Guide for code quality examples and guidelines.
http://cms.math.ca/cjm/kw/toric%20varieties
Search results
Search: All articles in the CJM digital archive with keyword toric varieties
Results 1 - 2 of 2
1. CJM 2007 (vol 59 pp. 981)
Jiang, Yunfeng
The Chen--Ruan Cohomology of Weighted Projective Spaces
In this paper we study the Chen--Ruan cohomology ring of weighted projective spaces. Given a weighted projective space ${\bf P}^{n}_{q_{0}, \dots, q_{n}}$, we determine all of its twisted sectors and the corresponding degree shifting numbers. The main result of this paper is that the obstruction bundle over any 3-multi-sector is a direct sum of line bundles, which we use to compute the orbifold cup product. Finally we compute the Chen--Ruan cohomology ring of the weighted projective space ${\bf P}^{5}_{1,2,2,3,3,3}$.
Keywords: Chen--Ruan cohomology, twisted sectors, toric varieties, weighted projective space, localization
Categories: 14N35, 53D45
2. CJM 2004 (vol 56 pp. 1094)
Thomas, Hugh
Cycle-Level Intersection Theory for Toric Varieties
This paper addresses the problem of constructing a cycle-level intersection theory for toric varieties. We show that by making one global choice, we can determine a cycle representative for the intersection of an equivariant Cartier divisor with an invariant cycle on a toric variety. For a toric variety defined by a fan in $N$, the choice consists of giving an inner product or a complete flag for $M_\mathbb{Q} = \mathbb{Q} \otimes \operatorname{Hom}(N,\mathbb{Z})$, or more generally giving for each cone $\sigma$ in the fan a linear subspace of $M_\mathbb{Q}$ complementary to $\sigma^\perp$, satisfying certain compatibility conditions. We show that these intersection cycles have properties analogous to the usual intersections modulo rational equivalence. If $X$ is simplicial (for instance, if $X$ is non-singular), we obtain a commutative ring structure on the invariant cycles of $X$ with rational coefficients. This ring structure determines cycles representing certain characteristic classes of the toric variety. We also discuss how to define intersection cycles that require no choices, at the expense of increasing the size of the coefficient field.
Keywords: toric varieties, intersection theory
Categories: 14M25, 14C17
http://mathoverflow.net/questions/12502/seeking-for-a-formula-or-an-expression-to-generate-non-repeatative-random-number
# Seeking for a formula or an expression to generate non-repeatative random number .. [closed]
With my personal interest and hobby I started this ..
Given a sequence of numbers 1,2,3 .... N
where N is the highest among the sequence and length of the sequence as well ..
I tried my best to bring up a relationship where y=f(n) .. so that .. y (equal or not-equal to n) is an unique value for each value of n bounded within N,
for example:
for the sequence:
1, 2, 3, 4, 5
Corresponding sequence would be ..
3, 2, 5, 1, 4 (or Assume some other random sequence of same numbers) ..
where 3=f(1), 2=f(2), etc ..
The function f(n) must effectively work for all values of N .. ie we must be able to generate the random sequence of any value of N.
Initially I tried with y = (n * 9) % N, y = (n * 7) % N and y = (n * 3) % N. But they work only if the number is divisible by 10 and the max number is not divisible by 3, 7 and 9 ..
Is it possible to generalize the formula .. ? Please help me in deriving the same ..
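As a quick illustration of when the question's candidate formulas succeed, here is a small Python check (my own, not part of the thread; the +1 shift into the range 1..N is my convention): y = (k*n) mod N permutes {1, ..., N} exactly when k and N share no common factor.

```python
from math import gcd

def modular_map(n, k, N):
    # the question's candidate formula, shifted into the range 1..N
    return (k * n) % N + 1

for N in range(2, 50):
    for k in (3, 7, 9):
        values = sorted(modular_map(n, k, N) for n in range(1, N + 1))
        is_permutation = values == list(range(1, N + 1))
        # a permutation is produced exactly when gcd(k, N) == 1
        assert is_permutation == (gcd(k, N) == 1)
```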
## closed as off topic by Loop Space, Scott Morrison♦ Jan 22 '10 at 1:04
Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here. If this question can be reworded to fit the rules in the help center, please edit the question.
I can't figure out what your question is asking; it appears to be a programming question, not a math one; it is "repetitive", not "repeatative"; and I think this is not of interest to professional mathematicians. Perhaps StackOverflow or one of the forums listed in mathoverflow.net/faq#whatnot can help. – Zev Chonoles Jan 21 '10 at 6:04
Nah ! this is not a programming related question.. its pure maths .. and that is what I have written there .. By the way .. Its quite easy with the programming I know .. – InfantPro'Aravind' Jan 21 '10 at 6:25
One would think that a question which can be answered by a citation to The Art of Computer Programming would be well within the purview of Stack Overflow. Did you trying asking it over there? – Pete L. Clark Jan 21 '10 at 6:54
I don't need a program .. It must be a kind of PICK .. like in the above example, If I want to pick the first number it should map to the number 3 .. which is because of the function or formula f(n) .. I need to create such a kind of formula which is generalized for all the values of N .. thanx for the response .. :-) – InfantPro'Aravind' Jan 21 '10 at 7:06
This was far from nonsense. What was misleading was that the questioner's name includes "programmer" so people kept giving programming references. – Douglas Zare Jan 22 '10 at 1:20
When we hear random permutations, we bring in our intuition about permutations, and try to give a method which could generate a complicated permutation. Thus, I think we didn't pay enough attention to your examples like n*3 mod N, which for most situations would not be an acceptable way of generating random numbers. The only problem is what to do if N is divisible by 3. As far as I can tell, divisibility by 10 is irrelevant, so I'm not sure why you mentioned it.
You say you don't want to write a program, just a simple formula in Excel. This is reasonable, and even something which makes sense mathematically: There are a few operations available in Excel formulas such as addition, exponentiation, factorial, conditional evaluation based on whether a statement is true or false (characteristic functions), etc. Can one create a formula with fixed complexity which takes in n and N, and which is a permutation of {1,...,N} for a fixed N? Trivially returning n works, but can one produce a permutation other than (±n+k mod N)+1?
I suggest creating a formula which is equivalent to the following:
If N is not divisible by 71, return (71*n mod N) + 1. Otherwise N is divisible by 71. Permute the last digit base 71: return a + (3*b mod 71) + 1, where n-1 = a + b, a is divisible by 71, and $0 \le b \lt 71$, i.e., b = (n-1) mod 71 and a = (n-1) - ((n-1) mod 71).
IF(MOD(N,71)<>0, MOD(71*n,N)+1, (n-1) - MOD(n-1,71) + MOD(3*MOD(n-1,71),71) + 1)
This would be lousy as a random permutation, but it may be acceptable for some purposes.
A better random permutation might be based on f(n), where f reverses the lowest binary digits of n if n is at most the greatest power of 2 less than N, and does nothing if n is greater. Try f(N+1-f(n)). This can be done using the DEC2BIN and StrReverse functions, but you need a little Excel expertise to use those.
Once you have a few ways to generate random permutations, you can compose them, and even using unsatisfactory random permutations like adding floor(sqrt(N)) can improve the appearance of the resulting permutation.
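To make the multiply-and-fix-up idea above concrete, here is a minimal Python sketch (my own rendering, not from the thread; the function name is mine) that builds a permutation of {1, ..., N} by multiplying by 71 when possible and permuting the last base-71 digit otherwise:

```python
def perm(n, N):
    """Map n in {1, ..., N} to a permuted value in {1, ..., N}.

    Multiply by 71 mod N when 71 does not divide N (71 is prime,
    so the map is then a bijection); otherwise permute the last
    base-71 digit of n-1.
    """
    if N % 71 != 0:
        return (71 * n) % N + 1
    b = (n - 1) % 71          # last base-71 digit
    a = (n - 1) - b           # the rest, a multiple of 71
    return a + (3 * b) % 71 + 1

# Sanity check: the map is a bijection on {1, ..., N}.
for N in (10, 71, 142, 200):
    assert sorted(perm(n, N) for n in range(1, N + 1)) == list(range(1, N + 1))
```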
Thanks for the support sir, I will refer this as the answer and follow-on .. – InfantPro'Aravind' Jan 22 '10 at 5:06
While I agree with your interpretation of the question, I am very nervous about your answer. In particular, I don't like the idea that composing several poor sources of randomness is a good way to make a random number generator. As Donald Knuth says, "random numbers should not be generated with a method chosen at random". See the introduction to Chapter 3 in The Art of Computer Programming, Volume II, for an example of how this can fail. – David Speyer Jan 28 '10 at 18:12
Of course, the previous comment assumes that we need a high quality source of randomness, such as would be used in cryptography or for a gambling website. If we just something that looks random to a casual observer, your solution is probably fine. – David Speyer Jan 28 '10 at 18:13
You want a random permutation. See the Knuth shuffle.
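For reference, a minimal Python version of the Knuth (Fisher–Yates) shuffle mentioned here (the function name is mine):

```python
import random

def knuth_shuffle(items):
    # swap each position with a uniformly chosen earlier (or same) position,
    # so every permutation of the list is equally likely
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items

print(knuth_shuffle(list(range(1, 6))))  # e.g. [3, 2, 5, 1, 4]
```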
Thanx for the useful link .. Please add-up something if you are familiar with, which helps in mapping rather than programming .. Mapping I mean to say is something where it is allowed to write one formula (no conditions and loops) common for all numbers .. – InfantPro'Aravind' Jan 21 '10 at 9:39
The only closed form for general n is the description given. The function is not smooth or continuous, and I see no reason to believe that a closed-form expression exists for any n. – Harry Gindi Jan 21 '10 at 13:39
Yup I agree with you @Harry Gindi – InfantPro'Aravind' Jan 21 '10 at 14:19
I haven't used EXCEL in forever, but here is how I would implement the Knuth shuffle in what I would think of as a generic spreadsheet. I want two 1-dimensional arrays, which I will call $x[i]$ and $y[i]$, and one 2-dimensional array $z[i,j]$. Think of three worksheets.
$x[i]$ is a random variable with value chosen from $\{ 1,2, \ldots, n-i+1 \}$.
$z[1,j]=j$.
For $i>1$, we have $z[i,j] = \mathrm{IF}(j \geq x[i-1],\ z[i-1,j+1],\ z[i-1,j])$. (Here $\mathrm{IF}(P,a,b)$ returns $a$ if $P$ is true and $b$ otherwise.)
And $y[i]$, our output, is $z[i][x[i]]$.
An example: if $x[i]$ is
$$\begin{matrix} 4 & 2 & 1 & 2 & 1 \end{matrix}$$
then $z$ is
$$\begin{matrix} 1 & 1 & 1 & 3 & 3 \\ 2 & 2 & 3 & 5 & \\ 3 & 3 & 5 & & \\ 4 & 5 & & & \\ 5 & & & & \\ \end{matrix}$$
and $y$ is
$$\begin{matrix} 4 & 2 & 1 & 5 & 3 \end{matrix}.$$
It occurs to me that some of the spreadsheet software I worked with didn't allow an expression like $z[i][x[i]]$, but only allowed me to address a cell by its location plus a constant offset. Here is a hack to get around that: define $y[i] = \sum_{j} z[i,j] - \sum_j z[i+1,j]$. Any spreadsheet should let you total up columns.
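A short Python transcription of this table scheme (my own paraphrase of the spreadsheet layout, not part of the answer), which reproduces the worked example above:

```python
def shuffle_table(x, n):
    """Build the z table column by column and read off the output y.

    x[i] is 1-based: the position picked among the n-i remaining items.
    """
    z = [list(range(1, n + 1))]           # z[0] is the full pool 1..n
    for i in range(1, n):
        prev = z[-1]
        # drop the element at 1-based position x[i-1] from the pool
        z.append([prev[j] if j + 1 < x[i - 1] else prev[j + 1]
                  for j in range(len(prev) - 1)])
    return [z[i][x[i] - 1] for i in range(n)]

print(shuffle_table([4, 2, 1, 2, 1], 5))  # -> [4, 2, 1, 5, 3]
```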
I am not sure if this is what you are looking for. Am I required to fit my formula in O(1) cells, independent of n? If so, things become much harder and I don't know how to do it. – David Speyer Jan 21 '10 at 13:00
That's really great of you sir .. thanx for help and support .. :-) – InfantPro'Aravind' Jan 21 '10 at 14:33
Well said @david .. and I am glad that you appreciated my questioning .. The answer has not been actually accepted (I just accepted Douglas's answer because he said ITS POSSIBLE) .. and I am still doing research on the need .. – InfantPro'Aravind' Feb 4 '10 at 5:47
In case of programming we are facilitated with RAM and processor resolving algorithm with powerful loops and stuff .. But as you know thats not the case with mapping .. where we are left with only a (essentially complicated,) formula which picks the numbers in jumbled fashion (not-repeatedly) .. – InfantPro'Aravind' Feb 4 '10 at 5:52
Best analogy would be the most perfect Playlist shuffling algorithm (which instantly picks up a track in the list, but doesn't repeat) .. The only way left is to convert that shuffling algorithm into a complicated formula .. ;-) I know its certainly hardly possible stuff .. well. by the way, I don't forget to thank your support and interest .. – InfantPro'Aravind' Feb 4 '10 at 6:01
https://www.nsstc.uah.edu/users/phillip.bitzer/idl_doc/ats606/ats_random_walk.html
# UAH LTG IDL Library
## IDL Routines from Phillip Bitzer and UAH Lightning Group
# ats_random_walk.pro
Statistics, ATS606, Random Walk
includes main-level program
A simple one dimensional random walk is an example of a Markov Chain. In a random walk, you take n steps. At each step, you can go left or right. The probability you go right at any step is p.
This routine simulates the random walk. The random number sequence used is based on the system time when you run the routine, down to the millisecond.
See ATS606 for more about random walks.
For the underlying code, we take "right" to be positive and "left" to be negative.
Right now, the random walk starts at zero (and this point is included in the returned array). From there, we take n steps. Hence, the returned array has n+1 elements.
(BTW, you could modify this slightly to have the starting location wherever you want....)
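For readers without IDL, here is a short Python analogue of what the routine computes (an illustration only; the documented implementation is the IDL code below):

```python
import random

def random_walk(n=100, p=0.5):
    # each step is +1 (right) with probability p, else -1 (left)
    steps = [1 if random.random() < p else -1 for _ in range(n)]
    locations = [0]                      # the walk starts at zero
    for s in steps:
        locations.append(locations[-1] + s)
    return locations, steps              # n+1 locations, n steps

locations, steps = random_walk(500, p=0.5)
print(locations[:10])
```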
## Examples
An example:
nSteps = 500
location = ATS_RANDOM_WALK(nSteps, p=0.5, LEFTRIGHT=lr)
steps = FINDGEN(nSteps+1)
cgPlot, steps, location, XTITLE='Step Number', YTITLE='Location'
You should get something that looks like this:
See the main level program for more (uses Coyote Graphics). When you run the main level program, you can make an animation that might look like this:
## Author information
Author
Phillip M. Bitzer, University of Alabama in Huntsville, pm.bitzer "AT" uah.edu
History
Modification History:
First written: April 11, 2014 PMB
Uses:
None
## top ats_random_walk
result = ats_random_walk( [n] [, P=float between 0 and 1], LeftRight=array of n elements)
This will return an array of n+1 locations based on a 1d random walk. See full documentation for more.
### Return value
An (n+1)-element long-integer array of locations along one dimension.
### Parameters
n in optional type=integer default=100
Number of steps in the random walk
### Keywords
P in optional type=float between 0 and 1 default= 0.5
The probability that the walk goes right. Formally, the transition probability of i->i+1.
LeftRight out type=array of n elements
This array tells you, at each step, if you went left (-1) or right (+1). Useful for diagnostics.
Uses:
None
## File attributes
Modification date: Wed Apr 23 13:24:42 2014. Lines: 42. Docformat: rst
https://www.esaral.com/q/which-of-the-following-elements-can-51432/
Which of the following elements can be involved in pπ–dπ bonding?
Question:
Which of the following elements can be involved in pπ–dπ bonding?
(i) Carbon
(ii) Nitrogen
(iii) Phosphorus
(iv) Boron
Solution:
Option (iii) Phosphorus is the answer. Phosphorus has low-lying vacant 3d orbitals that can accept electron density from a filled p orbital of an adjacent atom, giving pπ–dπ bonding; carbon, nitrogen, and boron are second-period elements with no d orbitals in their valence shell.
http://mathhelpforum.com/algebra/82983-what-formula-i-will-use-calculation.html
# Math Help - What is the formula that I will use in calculation ??
1. ## What is the formula that I will use in calculation ??
It has been found that in a small town the number of uneducated people decreases every 5 years by 40% of the previous number ..
The number of uneducated people in 2005=70
Calculate number of uneducated people in 2010??
what is the formula that I will use?
Is it true that:
number of uneducated people in 2010=70*(40/100)=28
2. EDIT: Mistake is that you have calculated the decrease rather than the exact number ..
EDIT: Mistake is that you have calculated the decrease rather than the exact number ..
Can you help me? How can I calculate the decrease??
4. Originally Posted by change_for_better
It has been found that in a small town the number of uneducated people decreases every 5 years by 40% of the previous number ..
The number of uneducated people in 2005=70
Calculate number of uneducated people in 2010??
what is the formula that I will use?
Is it true that:
number of uneducated people in 2010=70*(40/100)=28
5. Is it true:
70*(40/100)=28
then
70-28=42
6. Originally Posted by change_for_better
Is it true :
70*(40/100)=28
then
70-28=42
Yes.
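In general (my summary of the thread's arithmetic, not a quoted post), a 40% decrease over the 5-year period means new = old × (1 − 0.40):

```python
old = 70
rate = 0.40              # 40% decrease per 5-year period
decrease = old * rate    # 28.0
new = old - decrease     # 42.0, equivalently old * (1 - rate)
print(decrease, new)
```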
https://mathoverflow.net/questions/267238/conversion-formula-between-multidimensional-ito-and-stratonovich-sdes/267239
# Conversion formula between multidimensional Ito and Stratonovich SDEs
Does anyone on here know of a reference that explicitly computes a conversion formula between the drift terms in multidimensional Ito and Stratonovich SDEs?
In particular, given a solution $(X_t)$ of an N-dimensional Stratonovich SDE $$dX_t=b(X_t)dt+\sigma(X_t)\circ dB_t$$ what is the drift term $\tilde{b}(X_t)$ that makes $(X_t)$ a solution of the Ito SDE $$dX_t=\tilde{b}(X_t)dt+\sigma(X_t)dB_t$$
I've found one online reference so far (http://www.performancetrading.it/Documents/KsStrong/KsS_Conversion.htm), although there is no derivation there and I will not go to the length of actually deriving this myself. I need to cite this result, so a book, paper, etc. would be excellent.
P.S.: The conversion formula should be multidimensional. I do of course know the 1-dimensional conversion which is quoted in most standard texts.
Any help is highly appreciated!
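For the record, and stated in my own words rather than quoted from an answer: the standard multidimensional correction (treated, for instance, in Kloeden and Platen's *Numerical Solution of Stochastic Differential Equations*) reads, for $X_t \in \mathbb{R}^N$ driven by an $M$-dimensional Brownian motion, componentwise
$$\tilde{b}^i = b^i + \frac{1}{2}\sum_{j=1}^{N}\sum_{k=1}^{M}\sigma^{jk}\,\frac{\partial \sigma^{ik}}{\partial x^j},$$
which collapses to the familiar $\tilde{b} = b + \tfrac{1}{2}\sigma\sigma'$ when $N = M = 1$.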
https://www.rdocumentation.org/packages/base/versions/3.2.4/topics/call
# call
##### Function Calls
Create or test for objects of mode "call".
Keywords
attribute, programming
##### Usage
call(name, ...)
is.call(x)
as.call(x)
##### Arguments
name
a non-empty character string naming the function to be called.
...
arguments to be part of the call.
x
an arbitrary R object.
##### Details
call returns an unevaluated function call, that is, an unevaluated expression which consists of the named function applied to the given arguments (name must be a quoted string which gives the name of a function to be called). Note that although the call is unevaluated, the arguments ... are evaluated.
call is a primitive, so the first argument is taken as name and the remaining arguments as arguments for the constructed call: if the first argument is named the name must partially match name.
is.call is used to determine whether x is a call (i.e., of mode "call").
Objects of mode "list" can be coerced to mode "call". The first element of the list becomes the function part of the call, so should be a function or the name of one (as a symbol; a quoted string will not do).
All three are primitive functions.
##### Warning
call should not be used to attempt to evade restrictions on the use of .Internal and other non-API calls.
##### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
##### See Also
do.call for calling a function by name and argument list; Recall for recursive calling of functions; further is.language, expression, function.
• call
• is.call
• as.call
##### Examples
library(base)
is.call(call) #-> FALSE: Functions are NOT calls
## set up a function call to round with argument 10.5
cl <- call("round", 10.5)
is.call(cl) # TRUE
cl
## such a call can also be evaluated.
eval(cl) # [1] 10
A <- 10.5
call("round", A) # round(10.5)
call("round", quote(A)) # round(A)
f <- "round"
call(f, quote(A)) # round(A)
## if we want to supply a function we need to use as.call or similar
f <- round
## Not run: call(f, quote(A)) # error: first arg must be character
(g <- as.call(list(f, quote(A))))
eval(g)
## alternatively but less transparently
g <- list(f, quote(A))
mode(g) <- "call"
g
eval(g)
## see also the examples in the help for do.call
Documentation reproduced from package base, version 3.2.4, License: Part of R 3.2.4
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/Unfold.20sum.20over.20zmod.html
## Stream: new members
### Topic: Unfold sum over zmod
#### Adrián Doña Mateo (Oct 26 2020 at 15:20):
Is there a way to unfold a sum over a finite type into an explicit sum? In particular, I would like to prove something like the following:
import data.zmod.basic
import data.real.basic
open_locale big_operators
example (f : zmod 3 → ℝ) : ∑ x, f x = f 0 + f 1 + f 2 := sorry
I have noticed that if instead of ℝ I write ℕ this is easy. But I can't do the same with ℝ because it is not computable.
example (f : zmod 3 → ℕ) : ∑ x, f x = f 0 + (f 1 + f 2) := rfl
#### Mario Carneiro (Oct 26 2020 at 15:21):
I don't think it should matter if the codomain is real here
#### Mario Carneiro (Oct 26 2020 at 15:21):
the rfl proof should still work
#### Kevin Buzzard (Oct 26 2020 at 15:23):
example (f : zmod 3 → ℝ) : ∑ x, f x = f 0 + (f 1 + (f 2 + 0)) := rfl
#### Kevin Buzzard (Oct 26 2020 at 15:23):
n+0=n is rfl for nat but not for real
#### Mario Carneiro (Oct 26 2020 at 15:25):
so an alternative proof for the original version would be
example (f : zmod 3 → ℝ) : ∑ x, f x = f 0 + f 1 + f 2 :=
show f 0 + (f 1 + (f 2 + 0)) = _, by simp [add_assoc]
#### Yakov Pechersky (Oct 26 2020 at 15:32):
Is there a zmod to fin equiv? This works:
import data.zmod.basic
import data.real.basic
open_locale big_operators
example (f : fin 3 → ℝ) : ∑ x, f x = f 0 + f 1 + f 2 :=
begin
end
#### Mario Carneiro (Oct 26 2020 at 15:36):
they are actually defeq I think
#### Mario Carneiro (Oct 26 2020 at 15:38):
example : ∀ (f : zmod 3 → ℝ), ∑ x, f x = f 0 + f 1 + f 2 :=
show ∀ (f : fin 3 → ℝ), ∑ x, f x = f 0 + f 1 + f 2,
by intro f; simpa [fin.sum_univ_succ, add_assoc]
#### Adrián Doña Mateo (Oct 26 2020 at 15:40):
Mario Carneiro said:
so an alternative proof for the original version would be
example (f : zmod 3 → ℝ) : ∑ x, f x = f 0 + f 1 + f 2 :=
show f 0 + (f 1 + (f 2 + 0)) = _, by simp [add_assoc]
I see, thanks!
#### Johan Commelin (Oct 26 2020 at 15:44):
Pedantic remark: zmod 0 is not defeq to fin 0 :sweat_smile:
https://brilliant.org/discussions/thread/no-olympiad-problems/
What happened to the new problems for this week?
Note by Vishwa Iyer
4 years, 3 months ago
Sort by:
Oh man..I even woke up early to do the new problems today.. · 4 years, 3 months ago
same over here.. sad. :( · 4 years, 3 months ago
Hi everyone,
This was a glitch and has been corrected. You can see new Olympiad problems now. We are very sorry for the mistake.
Happy problem solving! Staff · 4 years, 3 months ago
why are some of them recycled though??Ive seen them before..or is it just me · 4 years, 3 months ago
Yes, i too saw at least 2 repeated questions. · 4 years, 3 months ago
They didn't even bother to change the number of people it's been solved by..I don't like remembering problems I like solving them..Please brilliant,If you are going to recycle problems,At least change the numbers · 4 years, 3 months ago
I think they must have been working on the new problem bank this week and thus didn't have time to create/present new problems. · 4 years, 3 months ago
Yeah... · 4 years, 3 months ago
They are probably behind, or saving something because of the revamp of curriculum mathematics that is being done. So I bet they are working very hard this week! Lets find problems other places. If you have any good problems you would like to share, please post them as a reply so that everyone can have problems to do while waiting.
Here, An auditorium has a rectangular array of chairs. There are exactly 14 boys seated in each row and exactly 10 girls seated in each column. If exactly 3 chairs are empty, find the maximum number of chairs in the auditorium. · 4 years, 3 months ago
If you wanna keep this going, then here's a GREAT problem from the 1983 ARML competition:
In an isosceles triangle, the altitudes intersect on the inscribed circle. Compute the cosine of the vertex angle. · 4 years, 3 months ago
Should I post a solution for this one? Its a fairly simple exercise in trigonometry, but there will be people still trying to solve this one.
I arrived at the answer 1/9 (assuming you are taking the cosine of the 'unique' angle) · 4 years, 3 months ago
Don't post the solution though; while its a relatively simple problem I still like it for having such an interesting condition. · 4 years, 3 months ago
Thanks for posting! :) · 4 years, 3 months ago
try this: 1=6 2=12 3=18 4=24 5=30 6=?? · 4 years, 3 months ago
1,because 1=6,so 6=1 :) · 4 years, 3 months ago
6 = 6, obviously · 4 years, 3 months ago
Yes 6=6 obviously but if the equal to mean's multiply by 6 its 36 · 4 years, 3 months ago
1 is correct · 4 years, 3 months ago
1 = 6 is illogical · 4 years, 3 months ago
because 1=6,so 6=1 · 4 years, 3 months ago
Under the assumption that boys and girls cannot share a chair, let the number of rows and columns be r and c. (r >= 10, since each column seats 10 girls; c >= 14, since each row seats 14 boys.)
Then rc = 3+14r+10c
rc-14r-10c-3 = 0
rc - 14r - 10c +140 - 143 = 0
(r-10)(c-14) = 143
Now, the possibilities for (r-10, c-14) are (1,143), (11,13), (13,11) and (143,1). Checking all 4, the number of chairs in each is (11)(157), (21)(27), (23)(25), (153)(15).
Clearly (23)(25) > (21)(27) and (11)(157)<(153)(15). (if a>c>d>b>0 and a+b = c+d, ab < cd) Also, (153)(15) > (23)(25) obviously.
The maximum number of chairs is thus (153)(15) = 2295 (with 2292 chairs filled up) · 4 years, 3 months ago
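A quick brute-force confirmation of this answer (my own check, not from the thread):

```python
# chairs rc must satisfy rc = 14r + 10c + 3; maximize rc
best = max(
    (r * c, r, c)
    for r in range(10, 300)
    for c in range(14, 300)
    if r * c == 14 * r + 10 * c + 3
)
print(best)  # (2295, 153, 15)
```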
why? Defining X = r-10, Y = c-14, that means:
X = 1, Y = 143 -> r = 11, c = 157 -> rc = 1727
X = 11, Y = 13 -> r = 21, c = 27 -> rc = 567
X = 13, Y = 11 -> r = 23, c = 25 -> rc = 575
X = 143, Y = 1 -> r = 153, c = 15 -> rc = 2295 (this is the max number of chairs)
· 4 years, 3 months ago
I think the answer would be rc and not rc-3 though, since the three chairs that are empty are still chairs in the array. · 4 years, 3 months ago
thx David for correcting · 4 years, 3 months ago
Why doesn't, say, r=21 and c=27 work? · 4 years, 3 months ago
was wrong initially, edited · 4 years, 3 months ago
I hate it.. I hope they can fix it as soon as possible.. · 4 years, 3 months ago
There will be a lot to look forward to; new challenges and a huge database will be up tomorrow or Tuesday. Peter T. posted this earlier. I can't find a link at the moment, sorry! · 4 years, 3 months ago
I got the same problem... Hope they fixed it soon. · 4 years, 3 months ago
problems are back yepiii!!!!!!!!!!!!!!!!!!!!! · 4 years, 3 months ago
They're up now. · 4 years, 3 months ago
Why arent there any qns today? I thought they post qns here every Monday?!?! · 4 years, 3 months ago
I hope they will bring new things.I will wait with patience · 4 years, 3 months ago
:( · 4 years, 3 months ago
But atleast they could send the weekly problems of one of the Olympiad sections. I'll get bored.... · 4 years, 3 months ago
Hopefully it's just because they're upgrading the techniques trainer. · 4 years, 3 months ago
Hopefully · 4 years, 3 months ago
i cant see my trignometry & calculus problems since 2 weeks... · 4 years, 3 months ago
https://stacks.math.columbia.edu/tag/03QH
# The Stacks Project
## Tag 03QH
Theorem 53.32.4. Let $(R, \mathfrak m, \kappa)$ be a local ring. The following are equivalent:
1. $R$ is henselian,
2. for any $f\in R[T]$ and any factorization $\bar f = g_0 h_0$ in $\kappa[T]$ with $\gcd(g_0, h_0)=1$, there exists a factorization $f = gh$ in $R[T]$ with $\bar g = g_0$ and $\bar h = h_0$,
3. any finite $R$-algebra $S$ is isomorphic to a finite product of local rings finite over $R$,
4. any finite type $R$-algebra $A$ is isomorphic to a product $A \cong A' \times C$ where $A' \cong A_1 \times \ldots \times A_r$ is a product of finite local $R$-algebras and all the irreducible components of $C \otimes_R \kappa$ have dimension at least 1,
5. if $A$ is an étale $R$-algebra and $\mathfrak n$ is a maximal ideal of $A$ lying over $\mathfrak m$ such that $\kappa \cong A/\mathfrak n$, then there exists an isomorphism $\varphi : A \cong R \times A'$ such that $\varphi(\mathfrak n) = \mathfrak m \times A' \subset R \times A'$.
Proof. This is just a subset of the results from Algebra, Lemma 10.148.3. Note that part (5) above corresponds to part (8) of Algebra, Lemma 10.148.3 but is formulated slightly differently. $\square$
The code snippet corresponding to this tag is a part of the file etale-cohomology.tex and is located in lines 4121–4140 (see updates for more information).
\begin{theorem}
\label{theorem-henselian}
Let $(R, \mathfrak m, \kappa)$ be a local ring. The following are equivalent:
\begin{enumerate}
\item $R$ is henselian,
\item for any $f\in R[T]$ and any factorization $\bar f = g_0 h_0$ in
$\kappa[T]$ with $\gcd(g_0, h_0)=1$, there exists a factorization $f = gh$ in
$R[T]$ with $\bar g = g_0$ and $\bar h = h_0$,
\item any finite $R$-algebra $S$ is isomorphic to a finite product of
local rings finite over $R$,
\item any finite type $R$-algebra $A$ is isomorphic to a product
$A \cong A' \times C$ where $A' \cong A_1 \times \ldots \times A_r$
is a product of finite local $R$-algebras and all the irreducible
components of $C \otimes_R \kappa$ have dimension at least 1,
\item if $A$ is an \'etale $R$-algebra and $\mathfrak n$ is a maximal ideal of
$A$ lying over $\mathfrak m$ such that $\kappa \cong A/\mathfrak n$, then there
exists an isomorphism $\varphi : A \cong R \times A'$ such that
$\varphi(\mathfrak n) = \mathfrak m \times A' \subset R \times A'$.
\end{enumerate}
\end{theorem}
\begin{proof}
This is just a subset of the results from
Algebra, Lemma \ref{algebra-lemma-characterize-henselian}.
Note that part (5) above corresponds to part (8) of
Algebra, Lemma \ref{algebra-lemma-characterize-henselian}
but is formulated slightly differently.
\end{proof}
Comment #1713 by Yogesh More on December 1, 2015 at 11:39 pm UTC
In condition (3), "any finite R-algebra S is isomorphic to a finite product of finite local rings", should the third/last instance of "finite" be there? For example, take $R=\mathbb{C}[[t]]$, $S=R$, then $S$ is module finite over $R$ but $S$ is not finite.
Comment #1755 by Johan (site) on December 15, 2015 at 7:04 pm UTC
Thanks, fixed here.
There are also 2 comments on Section 53.32: Étale Cohomology.
https://www.physicsforums.com/threads/what-is-being-done-in-this-proof-of-limits.700926/
# What is being Done in This proof of Limits?
1. Jul 10, 2013
I have been trying to learn calculus on my own, but when it comes to proving limits I get very confused.
Could somebody explain to me what is being done here?
If you know any resources that could help me with this task, let me know.
here is the source:
http://tutorial.math.lamar.edu/Classes/CalcI/DefnOfLimit.aspx
Last edited: Jul 10, 2013
2. Jul 10, 2013
### Staff: Mentor
The first line is the statement you want to show.
The second line is a clever guess for delta (as a function of epsilon), and the remaining steps are just simplifications, showing that |(5x-4)-6| is indeed < epsilon if |x-2| < delta.
3. Jul 10, 2013
### Staff: Mentor
They're starting with this inequality:
$|(5x - 4) - 6| < \epsilon$
In a few algebra operations, they arrive at this:
$5|x - 2| < \epsilon$
or
$|x - 2| < \epsilon/5$
If you let $\delta = \epsilon/5$, then by reversing the steps above, you'll get back to the first inequality.
The whole idea is sort of a challenge-response. If you're trying to convince someone that $\lim_{x \to 2}5x -4 = 6$, they might ask you to get a function value within 0.1 (that's the $\epsilon$). You say, take any x within 0.1/5 = 0.02 of 2.
If the challenger isn't satisfied, he might ask if you can get the function value within 0.001. You tell him to take any x within 0.0002 of 2 (i.e., between 1.9998 and 2.0002).
And so on. Eventually, he'll give up and accept that the limit is indeed 6.
Last edited: Jul 10, 2013
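To make the challenge-response concrete, here is a tiny Python check (mine, not from the thread) that δ = ε/5 always wins the game for f(x) = 5x − 4:

```python
def f(x):
    return 5 * x - 4

for eps in (0.1, 0.001):
    delta = eps / 5
    # any x strictly within delta of 2 keeps f(x) within eps of 6
    for x in (2 - 0.9 * delta, 2 + 0.5 * delta, 2 + 0.99 * delta):
        assert abs(f(x) - 6) < eps
print("all challenges met")
```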
https://math.stackexchange.com/questions/1363071/need-example-of-algebraic-sum-of-closed-vector-subspaces-need-not-be-closed
# Need example of: Algebraic sum of closed vector subspaces need not be closed
I've read somewhere that given two closed subspaces $V_1,V_2$ in a topological vector space $X$, their algebraic span $V_1+V_2=\{x_1+x_2 \mid x_i \in V_i, i=1,2\}$ need not be closed. I always thought that such things only happen when we take infinite sums, and would be interested in seeing an example of such behaviour, or a proof that it's impossible. I'd also like to know if anything changes in concrete cases (e.g., a Hilbert/Banach space instead of a general TVS).
There's an answer here. I'm going to reproduce it here to fill in a couple of details.
Let $$T : l^2 \rightarrow l^2 : (x_n) \mapsto \left(\frac{x_n}{n}\right).$$ Then $T$ is linear, and $$\|T(x_n)\|^2 = \left\|\left(\frac{x_n}{n}\right)\right\|^2 = \sum_{n=1}^\infty \frac{|x_n|^2}{n^2} \le \sum_{n=1}^\infty |x_n|^2 = \|(x_n)\|^2,$$ hence $T$ is bounded. Moreover, the range of $T$, the set $\lbrace (x_n) \in l^2 : (nx_n) \in l^2 \rbrace$ is not closed in $l^2$. For example, the sequence $(n^{-3/2}) \in l^2 \setminus T(l^2)$, but we can establish that it is in the closure of $T(l^2)$.
Consider a sequence of sequences $((n^{-3/2 - 1/k})_{n=1}^\infty)_{k=1}^\infty$ from $l^2$. Note that $(n \cdot n^{-3/2 - 1/k})^2 = n^{-1 - 2/k}$, the series of which converges, hence $(n^{-3/2 - 1/k})_{n=1}^\infty \in T(l^2)$. But then, using $1 - e^{-t} \le t$ with $t = (\ln n)/k$, $$\left\|n^{-3/2} - n^{-3/2 - 1/k}\right\|^2 = \sum_{n=2}^\infty n^{-3}\left(1 - n^{-1/k}\right)^2 \le \frac{1}{k^2}\sum_{n=2}^\infty n^{-3}\ln^2 n \rightarrow 0$$ as $k \rightarrow \infty$. This proves $(n^{-3/2})$ is in the closure of $T(l^2)$.
Now we construct an actual example. Let $V = l^2 \oplus l^2$, a Hilbert space under the inner product $\langle (p, q), (r, s) \rangle_V = \langle p, r \rangle_{l^2} + \langle q, s \rangle_{l^2}$. Additionally, let $V_1$ be the graph of $T$, that is, $V_1 = \lbrace (x, Tx) : x \in l^2 \rbrace$ and $V_2 = l^2 \oplus \lbrace 0 \rbrace \subseteq V$. Note that $V_1$ is closed, since $T$ is continuous.
Notice that $(0, n^{-3/2}) \notin V_1 + V_2$. It is, however, in $\overline{V_1 + V_2}$, since $$(n^{-1/2-1/k}, T(n^{-1/2-1/k})) + (-n^{-1/2-1/k}, 0) \rightarrow (0, n^{-3/2})$$ as $k \rightarrow \infty$, because $T(n^{-1/2-1/k}) = (n^{-3/2-1/k})$. This proves $V_1 + V_2$ is not closed.
• Looks great, thanks. In calculation of norm of difference of two sequences you missed squares after the two parentheses. Of course it doesn't change the answer. I couldn't edit because it's less than 6 characters. – Blazej Jul 16 '15 at 11:59
You can do it without any calculations: Let $T: X\to Y$ be a continuous linear map between two Banach spaces with dense range (e.g., $\ell^2\to\ell^2$, $(x_n)_n\mapsto (x_n/n)_n$ as in Theo's answer). Then take $Z=X\times Y$, $V_1=\lbrace (x,T(x)): x\in X\rbrace$ the graph of $T$ and $V_2= X\times \lbrace 0 \rbrace$. $V_1$ and $V_2$ are closed but their sum $V_1 + V_2 =X \times T(X)$ is dense. If $T$ is injective you additionally have $V_1 \cap V_2=\lbrace 0 \rbrace$.
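A quick numeric sanity check of the key limit above (my own illustration; the $l^2$ norm is truncated at a finite cutoff):

```python
def dist_sq(k, cutoff=200_000):
    # truncation of sum_{n >= 2} n^-3 * (1 - n^(-1/k))^2, the squared
    # distance from (n^(-3/2)) to the range element (n^(-3/2 - 1/k))
    return sum(n**-3 * (1 - n**(-1 / k))**2 for n in range(2, cutoff))

for k in (1, 10, 100, 1000):
    print(k, dist_sq(k))  # the values decrease toward 0 as k grows
```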
https://www.semanticscholar.org/paper/Quantum-oscillations-and-the-Fermi-surface-in-an-Doiron-Leyraud-Proust/a7ee5a9960fd19034ce82c2223926aa321b11ed8
# Quantum oscillations and the Fermi surface in an underdoped high-Tc superconductor
@article{DoironLeyraud2007QuantumOA,
title={Quantum oscillations and the Fermi surface in an underdoped high-Tc superconductor},
author={Nicolas Doiron-Leyraud and Cyril Proust and David Leboeuf and Julien Levallois and J.-B. Bonnemaison and Ruixing Liang and Douglas A. Bonn and W. N. Hardy and Louis Taillefer},
journal={Nature},
year={2007},
volume={447},
pages={565-568}
}
Despite twenty years of research, the phase diagram of high-transition-temperature superconductors remains enigmatic. A central issue is the origin of the differences in the physical properties of these copper oxides doped to opposite sides of the superconducting region. In the overdoped regime, the material behaves as a reasonably conventional metal, with a large Fermi surface. The underdoped regime, however, is highly anomalous and appears to have no coherent Fermi surface, but only…
629 Citations
Quantum oscillations in an overdoped high-Tc superconductor
The nature of the metallic phase in the high-transition-temperature (high-Tc) copper oxide superconductors, and its evolution with carrier concentration, has been a long-standing mystery. A central…
Quantum oscillations and the Fermi surface of high-temperature cuprate superconductors
Abstract Over 20 years since the discovery of high temperature superconductivity in cuprates (Bednorz and Muller, 1986 [1]), the first convincing observation of quantum oscillations in underdoped…
A multi-component Fermi surface in the vortex state of an underdoped high-Tc superconductor
Experiments on quantum oscillations in the magnetization in superconducting YBa2Cu3O6 that reveal more than one carrier pocket are reported, finding evidence for the existence of a much larger pocket of heavier mass carriers playing a thermodynamically dominant role in this hole-doped superconductor.
Universal quantum oscillations in the underdoped cuprate superconductors
Every metal has an underlying Fermi surface that gives rise to quantum oscillations. So far, quantum oscillation measurements in the superconductor YBCO have been inconclusive owing to the structural…
Coexistence of Fermi arcs and Fermi pockets in a high-Tc copper oxide superconductor
ARPES measurements of Bi2Sr2-xLaxCuO6+δ (La-Bi2201) reveal Fermi pockets, which exist in underdoped but not overdoped samples and show an unusual dependence on doping.
Electron pockets in the Fermi surface of hole-doped high-Tc superconductors
The observation of a negative Hall resistance in the magnetic-field-induced normal state of YBa2Cu3Oy and YBa2Cu4O8, which reveals that these pockets are electron-like rather than hole-like, is reported, suggesting that a Fermi surface reconstruction also occurs in those materials, pointing to a generic property of high-transition-temperature (Tc) superconductors.
Towards resolution of the Fermi surface in underdoped high-Tc superconductors.
• Physics, Medicine
• Reports on progress in physics. Physical Society
• 2012
We survey recent experimental results including quantum oscillations and complementary measurements probing the electronic structure of underdoped cuprates, and theoretical proposals to explain them.
Fermi pockets and quantum oscillations of the Hall coefficient in high-temperature superconductors
• Physics, Chemistry
• Proceedings of the National Academy of Sciences
• 2008
This work explains the observations with the theory that the alleged normal state exhibits a hidden order, the d-density wave, which breaks symmetries signifying time reversal, translation by a lattice spacing, and a rotation by an angle π/2, while the product of any two symmetry operations is preserved.
Theory of quantum oscillations in the vortex-liquid state of high-Tc superconductors.
• Physics, Medicine
• Nature communications
• 2013
This work models the resistive state as a vortex liquid with short-range d-wave pairing correlations, and shows that this state exhibits quantum oscillations, with a period determined by a Fermi surface reconstructed by a competing order parameter, in addition to a large suppression of the density of states that goes like √H at low fields.
Mottness in high-temperature copper-oxide superconductors
• Physics
• 2009
The standard theory of metals, Fermi liquid theory, hinges on the key assumption that although the electrons interact, the low-energy excitation spectrum stands in a one-to-one correspondence with…
#### References
SHOWING 1-10 OF 41 REFERENCES
Destruction of the Fermi surface in underdoped high-Tc superconductors
The Fermi surface—the set of points in momentum space describing gapless electronic excitations—is a central concept in the theory of metals. In this context, the normal 'metallic' state of the…
A coherent three-dimensional Fermi surface in a high-transition-temperature superconductor
• Chemistry, Medicine
• Nature
• 2003
The observation of polar angular magnetoresistance oscillations in the overdoped superconductor Tl2Ba2CuO6+δ in high magnetic fields firmly establishes the existence of a coherent three-dimensional Fermi surface, and reveals that at certain symmetry points, this surface is strictly two-dimensional.
A Phenomenological Theory of The Pseudogap State
• Physics
• 2006
An ansatz is proposed for the coherent part of the single particle Green's function in a doped resonant valence bond (RVB) state by analogy with the form derived by Konik and coworkers for a doped…
The pseudogap: friend or foe of high T c ?
• Physics
• 2005
Although nineteen years have passed since the discovery of high temperature cuprate superconductivity [1], there is still no consensus on its physical origin. This is in large part because of a lack of…
Fermi arcs and hidden zeros of the Green function in the pseudogap state
• Physics
• 2006
We investigate the low-energy properties of a correlated metal in the proximity of a Mott insulator within the Hubbard model in two dimensions. We introduce a version of the cellular dynamical…
Quantum theory of a nematic Fermi fluid
• Physics
• 2001
We develop a microscopic theory of the electronic nematic phase proximate to an isotropic Fermi liquid in both two and three dimensions. Explicit expressions are obtained for the small amplitude…
Advances in the physics of high-temperature superconductivity
• Physics, Medicine
• Science
• 2000
A perspective on recent developments in high-temperature copper oxide superconductors and their implications for the understanding of interacting electrons in metals is provided.
Theory of Low-Temperature Hall Effect in Electron-Doped Cuprates
• Physics, Materials Science
• 2005
A mean field calculation of the $T\to 0$ limit of the Hall conductance of electron-doped cuprates such as $Pr_{2-x}Ce_xCuO_{4+\delta}$ is presented. The data are found to be qualitatively consistent…
Fermi surface and quasiparticle excitations of overdoped Tl2Ba2CuO6 + delta.
The high-T(c) superconductor Tl( 2)Ba(2)CuO(6 + delta) is studied by angle-resolved photoemission spectroscopy and the quasiparticle evolution with momentum and binding energy exhibits a marked departure from the behavior observed in under and optimally doped cuprates. Expand
Pseudogap induced by short-range spin correlations in a doped Mott insulator
• Physics
• 2006
We study the evolution of a Mott-Hubbard insulator into a correlated metal upon doping in the two-dimensional Hubbard model using the cellular dynamical mean-field theory. Short-range spin…
http://ajpcell.physiology.org/content/279/6/C1970.long
# Peroxynitrite attenuates hepatic ischemia-reperfusion injury
Peitan Liu, Baohuan Xu, John Quilley, Patrick Y.-K. Wong
## Abstract
In the present study, we examined the effects of peroxynitrite on reperfusion injury using a rat model of hepatic ischemia-reperfusion (HI/R). The left and median lobes of the liver were subjected to 30 min of ischemia, followed by 4 h of reperfusion. Groups A and B rats were sham-operated controls that received vehicle or peroxynitrite; groups C and D rats were subjected to HI/R and received peroxynitrite or vehicle, respectively. A dose of 2 μmol/kg body wt of peroxynitrite, diluted in saline (pH 9.0, 4°C), was administered as a bolus through a portal vein catheter at 0, 60, and 120 min after reperfusion. Results showed that superoxide generation in the ischemic lobes of the liver and plasma alanine aminotransferase (ALT) activity of group C were decreased by 43% and 45%, respectively, compared with group D. Leukocyte accumulations in the ischemic lobes of the liver and circulating leukocytes were decreased by 40% and 27%, respectively, in group C vs. D. The ratios of mRNA of P-selectin and intercellular adhesion molecule-1 (ICAM-1) to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA extracted from the ischemic lobes of the liver of group C were decreased compared with group D. There were no differences between groups A and B in terms of plasma ALT activity, circulating leukocytes, superoxide generation, and leukocyte infiltration in the ischemic lobes of the liver. Moreover, hemodynamic parameters (i.e., mean arterial blood pressure, cardiac index, stroke index, and systemic vascular resistance) were not significantly different among groups B, C, and D. These results suggest that administration of peroxynitrite via the portal vein has only a local effect. Exogenous peroxynitrite at physiological concentrations attenuates leukocyte-endothelial interaction and reduces leukocyte infiltration. The reduction of leukocyte infiltration into the ischemic lobes of the liver appears to be due to decreased expression of P-selectin and ICAM-1 mRNA. The net effect of administration of peroxynitrite may be to reduce adhesion molecule-mediated, leukocyte-dependent reperfusion injury.
• leukocyte
• nitric oxide
• P-selectin
• reverse transcription-polymerase chain reaction.
Nitric oxide (NO) is a mediator with effects that may be likened to a double-edged sword (8, 21, 25, 32). One explanation for the detrimental effects of NO is the generation of peroxynitrite formed by the reaction of NO with superoxide (2, 20, 23, 31). NO reacts with superoxide at a rate constant of 6.7 × 10⁹ M⁻¹·s⁻¹ at pH 7.4 to form peroxynitrite (2, 31). Peroxynitrite is more cytotoxic than NO or superoxide in a variety of experimental systems, since it can decompose to the nitronium ion (NO₂⁺) and the hydroxyl radical (OH·), one of the most reactive oxygen species identified (3). Peroxynitrite can result in profound cellular injury and cell death (10, 12). Using models of endotoxin and hemorrhagic shock, Szabo et al. (33) reported that peroxynitrite is formed during endotoxemia and contributes to the cellular injury. Peroxynitrite can cause protein fragmentation via oxidative stress, and inactivation of important regulatory proteins may contribute to its toxicity (12, 18). This injury could be diminished by inhibition of NO production (11, 32).
In contrast with these studies, at physiological concentrations (nM and low μM), peroxynitrite possesses beneficial effects similar to those of NO (16, 29, 30). Nanomolar concentrations of peroxynitrite have been reported to exert effects that result in vascular relaxation and inhibition of platelet aggregation (13, 38); low micromolar concentrations of peroxynitrite can cause relaxation of isolated bovine pulmonary arteries concomitant with the production of NO (36). Furthermore, peroxynitrite inhibits leukocyte-endothelial cell interactions and exerts cytoprotective effects in myocardial ischemia-reperfusion (MI/R) injury (16, 29).
The importance of polymorphonuclear neutrophil (PMN) recruitment to tissue injury is demonstrated by clinical findings that PMN numbers in bronchoalveolar lavage fluid correlate with mortality in patients with adult respiratory distress syndrome (24). The organ that harbors the greatest number of migrating PMNs is also the organ first and most commonly observed to fail (1, 26, 28). Activated PMNs can produce more than 50 toxins at the plasma membrane and in the intracellular granules. During the period of reperfusion, PMNs rapidly accumulate in the microvasculature of ischemic organs. The process of migration of PMNs from the circulation to sites of injured tissue occurs in the following sequence: 1) rolling adhesion, 2) firm attachment, and 3) migration through endothelial junctions. The adherence of PMNs to endothelial cells is a prerequisite for PMN migration and subsequent tissue injury.
Because the evidence for peroxynitrite formation is indirect and based on correlation analysis in in vivo studies, it is necessary to develop specific peroxynitrite probes, such as relatively stable peroxynitrite donors, for studying the pathogenesis of peroxynitrite. The recent commercial availability of peroxynitrite makes it possible to evaluate the profile of peroxynitrite in vivo. The present study was designed to test the hypothesis that exogenous peroxynitrite at physiological levels can inhibit P-selectin and intercellular adhesion molecule-1 (ICAM-1) mRNA expression and leukocyte infiltration and, thereby, reduce reperfusion injury in a rat model of hepatic I/R (HI/R).
## MATERIALS AND METHODS
#### Materials.
Fischer 344 rats (male, 275–300 g body weight) were purchased from Taconic Farm (Germantown, NY). The animals were given free access to food (Purina rodent chow J001) and water. The experimental protocols followed the criteria of the NIH "Guide for the Care and Use of Laboratory Animals" and were approved by the Institutional Animal Care and Use Committee.
Alanine aminotransferase (ALT) activity was measured with a Sigma diagnostics kit. Nitrate reductase (from Aspergillus species), o-dianisidine, β-nicotinamide adenine dinucleotide phosphate (reduced form, β-NADPH), and sodium nitrite were purchased from Sigma, St. Louis, MO. The RNA Stat-60 reagent, SuperScript II, Taq DNA polymerase, and GelMarker were purchased from GIBCO BRL, Life Technologies, Gaithersburg, MD, and Tel-Test, Friendswood, TX. Aliquots of peroxynitrite and vehicle (negative control) were purchased from Alexis.
#### Experimental protocol.
The experimental protocol for partial no-flow hepatic ischemia and measurement of associated hemodynamic variables has been previously described (21, 22). Briefly, under pentobarbital anesthesia (60 mg/kg ip), the trachea was cannulated (PE-240) to maintain a patent airway. A PE-90 catheter inserted into the right external jugular vein was connected to a blood pressure transducer (model P23AC; Statham, Hatorey, Puerto Rico) for the measurement of central venous pressure (CVP) by a Grass model 7D polygraph (Quincy, MA). The catheter inserted into the external jugular vein was also used for bolus saline injection (i.e., 200 μl) for the determination of cardiac output (CO). A 1.5-French thermistor probe (Columbus Instruments) was advanced into the right common carotid artery to the arch of the aorta. The position of the carotid thermistor probe was adjusted to ensure that a change in temperature of at least 0.3°C was recorded at the aortic arch when 200 μl of room temperature normal saline was injected into the right atrium. Polyethylene catheters (PE-50) filled with heparinized 0.9% NaCl (10 U heparin/1 ml saline) were inserted into the left femoral artery and left femoral vein for measurement of mean arterial blood pressure (MABP) and drug or vehicle infusion, respectively. The blood pressure transducer and thermistor were connected to a Cardiomax II cardiac output computer (Columbus Instruments) for measurement of MABP, CO, stroke volume (SV), and heart rate (HR).
Twenty minutes after all surgical procedures were completed, the baseline MABP, CO, SV, HR, and CVP were recorded. A laparotomy was performed, and a catheter (PE-10) was inserted into the portal vein, whereupon the relevant branches of the hepatic artery and portal vein, supplying the left lateral and median lobes of the liver, were occluded with an atraumatic Glover bulldog clamp for 30 min. The remaining caudal three lobes retained an intact portal and arterial blood supply, as well as venous outflow, thereby preventing the development of intestinal venous hypertension. Reperfusion was initiated by removal of the clamp. The animal received 1 ml of sterile saline intraperitoneally, and the wound was closed with 4-0 silk and clips. Group C rats received the freshly prepared, ice-cold peroxynitrite (ONOO⁻) (2 μmol/kg; bolus, through the portal vein catheter) at time 0 and at 1 and 2 h of reperfusion (I/R + ONOO⁻), whereas group D rats were given vehicle (I/R + vehicle) according to the same schedule. Animals in the sham-operated control group (group B) were subjected to identical surgery without occlusion of the blood vessels and injected with freshly prepared peroxynitrite in the same way as group C. The group A rats were subjected to sham operation but were given saline. The hemodynamic parameters (MABP, CO, SV, HR, and CVP) were recorded at the beginning of ischemia and at different time points during reperfusion. Blood samples were obtained at 4 h of reperfusion for determination of ALT activities and leukocyte counts. Biopsies of the ischemic lobes of the liver were taken following 4 h of reperfusion for extraction of total RNA and measurement of superoxide generation. Pieces of liver were stored in 4% neutral-buffered paraformaldehyde for subsequent histological study.
#### Preparation of peroxynitrite aliquots.
Peroxynitrite purchased from Alexis with purity >90% was freshly prepared for each experiment. The concentration of peroxynitrite was monitored before use in each experiment by measuring the absorbance at 302 nm after the addition of 0.5 ml of peroxynitrite to 3 ml of 1 N sodium hydroxide at pH 11. The concentrations of peroxynitrite were calculated from the absorbance at 302 nm (ε302 = 1.670 mM⁻¹·cm⁻¹). An aliquot of peroxynitrite at 2 μM concentration was freshly prepared by diluting an appropriate volume of the 2 mM stock in 5 ml of ice-cold pH 9.0 saline, and 2 μmol/kg peroxynitrite was injected through the portal vein catheter. At the end of the last injection, the absorbance was again measured, revealing concentrations that were still >94% of initial levels. Peroxynitrite injected through this route comes into direct contact with hepatocytes, endothelial cells, and Kupffer cells. In pilot experiments, injection of an aliquot of peroxynitrite through a peripheral vein did not effectively alter reperfusion injury in our model, probably because of the short half-life of peroxynitrite (1 s at pH 7.4), which may be metabolized by the lungs before it reaches the liver.
The vehicle used in these experiments was provided by Alexis as a negative control. The vehicle is a decomposed form of peroxynitrite prepared from the same stock as the active form and contains the same concentrations of nitrite, hydrogen peroxide, and salt. However, the vehicle has no absorbance at 302 nm under alkaline conditions.
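For illustration, the standardization step reduces to a Beer-Lambert calculation. The minimal Python sketch below assumes a 1-cm cuvette path length and an invented absorbance reading; only the extinction coefficient and the 0.5-ml-into-3-ml dilution come from the protocol above.

```python
# Beer-Lambert check of a peroxynitrite stock (A = epsilon * c * l).
# epsilon and the 0.5 ml / 3 ml dilution come from the text; the
# absorbance reading and 1-cm path length are illustrative assumptions.

EPSILON_302 = 1.670  # mM^-1 cm^-1 at 302 nm
PATH_CM = 1.0        # assumed standard cuvette path length

def stock_concentration_mM(absorbance, dilution_factor):
    """Concentration of the undiluted stock, in mM."""
    diluted_mM = absorbance / (EPSILON_302 * PATH_CM)
    return diluted_mM * dilution_factor

# 0.5 ml of stock added to 3 ml of 1 N NaOH -> 3.5/0.5 = 7-fold dilution
print(f"{stock_concentration_mM(0.48, 3.5 / 0.5):.2f} mM")  # ~2 mM stock
```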
#### ALT activity.
Plasma ALT activities were measured with a Sigma test kit (DG 159-UV) and expressed as international units per liter.
#### Determination of circulating leukocytes.
Citrate-anticoagulated blood samples were obtained after 4 h of reperfusion. Fifty microliters of each sample was diluted 20-fold with 1% acetic acid solution to lyse red cells. Leukocytes were counted by light microscopy using a hemocytometer.
#### Superoxide assay.
Superoxide anion production in the liver sample from ischemic and nonischemic lobes was measured using the method described previously (22). Briefly, tissue samples (70–180 mg) were incubated in Krebs-bicarbonate buffer (pH 7.4) consisting of (in mM) 118 NaCl, 4.7 KCl, 1.5 CaCl2, 25 NaHCO3, 1.1 MgSO4, 1.2 KH2PO4, and 5.6 glucose. The tissues were gassed with 95% O2-5% CO2 for 30 min and placed in plastic scintillation vials containing 0.25 mM lucigenin in 1 ml of Krebs-bicarbonate buffer containing HEPES (pH 7.4). The chemiluminescence elicited by superoxide in the presence of lucigenin was measured using a scintillation counter (model Mark 5303; TmAnalytic, Elk Grove Village, IL). After 3 min of dark adaptation, vials containing only the cocktail (blanks) were counted three times for 6 s each time. The tissue samples were subsequently added to vials, allowed 3 min of dark adaptation, and counted twice. Since the half-life of superoxide is very short, the results reflect the production of superoxide by activated leukocytes.
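The readout reduces to a background subtraction normalized to tissue weight. The sketch below is a minimal illustration of that arithmetic; all count values are hypothetical, as the protocol above does not report raw numbers.

```python
# Background-subtracted chemiluminescence, normalized to tissue weight.
# All counts below are invented for illustration only.

def superoxide_signal(sample_counts, blank_counts, tissue_mg):
    """Mean counts above the cocktail-only blank, per mg wet tissue."""
    blank = sum(blank_counts) / len(blank_counts)
    sample = sum(sample_counts) / len(sample_counts)
    return (sample - blank) / tissue_mg

signal = superoxide_signal(
    sample_counts=[15480, 15210],      # two 6-s counts with tissue
    blank_counts=[2050, 1990, 2015],   # three 6-s counts, cocktail only
    tissue_mg=120,
)
print(f"{signal:.1f} counts/mg tissue")
```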
#### RT-PCR amplification of mRNA.
Liver samples were snap frozen in liquid nitrogen and stored at −70°C until analysis. Total cellular RNA was isolated by homogenizing tissues with a Polytron homogenizer in RNA Stat-60 reagent (Tel-Test). Total RNA was extracted with chloroform, and samples were centrifuged at 12,000 g for 15 min at 4°C. The RNA was precipitated by isopropanol, and the pellet was dissolved in diethyl pyrocarbonate water (Sigma). Total RNA concentration was determined by spectrophotometric analysis at 260 nm, and 4 μg of total RNA was reverse transcribed into cDNA in 30 μl of reaction mixture containing SuperScript II (GIBCO BRL), dNTP, and oligo(dT)12–18 primers. The cDNA was amplified using specific primers with a Perkin-Elmer DNA Thermal Cycler 480. The amplification mixture contained 1 μl of 15 μM forward primer, 1 μl of 15 μM reverse primer, 5 μl of 10× buffer, 1.5 μl of 50 mM Mg2+, 5 μl of the reverse-transcribed cDNA samples, and 1 μl of Taq polymerase. Primers were designed from the published cDNA sequences using the Oligo Primer Detection Program. The cDNA was amplified after determining the optimal number of amplification cycles within the exponential amplification phase for each primer set. Samples were denatured at 94°C for 5 min followed by 20 cycles for glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and 25 cycles for P-selectin and ICAM-1. Each cycle consisted of 94°C for 45 s, 60°C for 60 s, and 72°C for 90 s. After amplification, the sample (5 μl) was separated on a 2% agarose gel containing 0.3 μg/ml (0.003%) of ethidium bromide, and bands were visualized and photographed using ultraviolet transillumination. The size of each PCR product was determined by comparison with a standard DNA size marker. Semiquantitative assessment of gene expression was performed using the Image Master VDS program (Pharmacia Biotech). The designed primer sequences are shown below.

Gene | Sense primer | Antisense primer
P-selectin | 5'-TGTATCCAGCCTCTTGGGCATTCC-3' | 5'-TGGGACAGGAAGTGATGTTACACC-3'
ICAM-1 | 5'-ACAGACACTAGAGGAGTGAGCAGG-3' | 5'-GTGAGCGTCCATATTTAGGCATGG-3'
GAPDH | 5'-GGTGAAGGTCGGTGTCAACGGATT-3' | 5'-GATGCCAAAGTTGTCATGGATGACC-3'
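The semiquantitative step amounts to dividing each target band's intensity by the GAPDH intensity from the same lane. A minimal sketch of that normalization, with invented densitometry values:

```python
# Normalization step for the densitometry readout: each target band is
# divided by the GAPDH band from the same lane. Intensities are invented.

def mrna_ratio(target, gapdh):
    return target / gapdh

lane = {"GAPDH": 9800.0, "P-selectin": 3100.0, "ICAM-1": 5400.0}
for gene in ("P-selectin", "ICAM-1"):
    print(f"{gene}/GAPDH ratio: {mrna_ratio(lane[gene], lane['GAPDH']):.2f}")
```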
#### Statistical analysis.
Statistical significance for multigroup comparisons was determined using analysis of variance (ANOVA, SigmaStat program) with repeated measurements. If a significant F value was obtained, the group means were analyzed using the Bonferroni multiple comparison test; P < 0.05 was considered significant. The results are means ± SE.
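As a rough illustration of this pipeline, the sketch below runs a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons using SciPy in place of the SigmaStat package named above; the ALT values are invented.

```python
# One-way ANOVA followed by Bonferroni-corrected pairwise t-tests.
# Group values are fabricated for illustration, not data from the study.

from itertools import combinations
from scipy import stats

groups = {
    "A (sham+vehicle)": [44, 52, 47, 50],
    "C (I/R+ONOO-)":    [560, 610, 640, 575],
    "D (I/R+vehicle)":  [1250, 1410, 1330, 1480],
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, P = {p:.4f}")

if p < 0.05:  # significant F value -> pairwise Bonferroni tests
    pairs = list(combinations(groups, 2))
    for g1, g2 in pairs:
        _, p_pair = stats.ttest_ind(groups[g1], groups[g2])
        p_adj = min(1.0, p_pair * len(pairs))  # Bonferroni correction
        print(f"{g1} vs {g2}: adjusted P = {p_adj:.4f}")
```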
## RESULTS
#### Plasma ALT activity.
The period of 30-min ischemia of the left and median liver lobes followed by 4 h of reperfusion induced a pronounced elevation of plasma ALT activity that was 28.5-fold higher than in the sham group (1,369 ± 147 vs. 48 ± 5 U/l). Injection of peroxynitrite (2 μmol/kg; through the portal vein catheter) at time 0 and at 1 and 2 h of reperfusion (group C) attenuated the reperfusion injury, reflected by the reduction of plasma ALT activity (594 ± 65 U/l) at 4 h of reperfusion (P < 0.05 relative to the I/R + vehicle group) (Fig. 1). There was no statistical difference between groups A and B at 4 h after injection of saline or peroxynitrite.
#### Circulating leukocytes.
Elevation of circulating leukocytes, suggesting a systemic inflammatory response, was observed after hepatic reperfusion injury in group D rats compared with group A animals (7,027 ± 826 vs. 3,587 ± 269 cells/mm³) and reflected a 1.37-fold increase compared with group C (5,115 ± 519 cells/mm³) at 4 h of reperfusion. There was no statistically significant difference between groups A and B (Fig. 3).
#### P-selectin and ICAM-1 mRNA expression.
Total mRNA was extracted from the livers of group B and from the ischemic lobes of the livers of groups C and D at 4 h of reperfusion. Expression of P-selectin and ICAM-1 mRNA was studied by RT-PCR amplification following separation of products by gel electrophoresis (Fig. 5). A semiquantitative analysis of RT-PCR was performed with the Image Master VDS program (Pharmacia Biotech) and normalized by assessment of the expression of GAPDH. P-selectin mRNA expression could not be detected in group B. However, expression of the P-selectin and ICAM-1 genes was detected in group D rats. Administration of peroxynitrite to rats subjected to HI/R resulted in an attenuation of gene expression for P-selectin (P = 0.108, compared with group D) and ICAM-1 (P < 0.05, compared with group D).
Fig. 5. A: ethidium bromide-stained agarose gel showing PCR products from amplified rat hepatic RNA. HI/R was carried out as described in materials and methods, and the ischemic liver lobes were obtained at 4 h of reperfusion. RNA was extracted from hepatic tissue, then reverse transcribed and amplified using primers selected from published cDNA sequences. A semiquantitative analysis of the results of RT-PCR was performed with the Image Master VDS program (Pharmacia Biotech). With this computer program, the electrophoretic bands were normalized for glyceraldehyde-3-phosphate dehydrogenase (GAPDH; lanes 2, 5, and 8) gene expression. The normalized ratios for P-selectin (lanes 3, 6, and 9) and intercellular adhesion molecule-1 (ICAM-1; lanes 4, 7, and 10) mRNA expression in the experiments were averaged, and the results are shown in B and C, respectively. Lane 1, marker; lanes 2–4, sham control; lanes 5–7, HI/R + ONOO⁻; and lanes 8–10, HI/R + vehicle. Data are means ± SE of 6 animals. *P < 0.05 compared with rats of group B. #P < 0.05 compared with rats of group C.
#### MABP and systemic vascular resistance index.
Systemic vascular resistance index (SVRI) was calculated as (MABP − CVP)/CI, and the results are illustrated in Fig. 6B. The initial values of MABP and SVRI were not significantly different among groups B, C, and D (Fig. 6, A and B). Injection of peroxynitrite (2 μmol/kg; through the portal vein catheter) into rats of groups B and C did not have an effect on vascular tone, as evidenced by the lack of statistically significant differences in MABP and SVRI among the three experimental groups.
Fig. 6. Time course of changes in mean arterial blood pressure (A) and systemic vascular resistance index (B) in the 3 experimental groups. Data are means ± SE of 5 animals per group. There was no statistically significant difference among the 3 experimental groups at the various time points.
#### Cardiac index and stroke volume index.
Cardiac index (CI) and stroke volume index (SVI) were calculated as CO (or stroke volume) per 100 g of body weight. The initial values of CI and SVI were not significantly different among groups B, C, and D (Fig. 7, A and B). Injection of peroxynitrite (2 μmol/kg; through the portal vein catheter) into rats of groups B and C did not affect cardiac function, as evidenced by a lack of statistically significant differences in CI and SVI among the three experimental groups.
Fig. 7. Time course of changes in cardiac index (A) and stroke volume index (B) in the 3 experimental groups. Data are means ± SE of 5 animals per group. There was no statistically significant difference among the 3 experimental groups at the various time points.
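The derived indices reported in Figs. 6 and 7 follow directly from the definitions above: CI and SVI are CO and SV per 100 g body weight, and SVRI = (MABP − CVP)/CI. A minimal sketch with illustrative input values:

```python
# Derived hemodynamic indices as defined in the text. All input
# values are illustrative, not measurements from the study.

def cardiac_index(co_ml_min, body_weight_g):
    """CO per 100 g body weight (ml/min/100 g)."""
    return co_ml_min / (body_weight_g / 100.0)

def stroke_volume_index(sv_ml, body_weight_g):
    """SV per 100 g body weight (ml/100 g)."""
    return sv_ml / (body_weight_g / 100.0)

def svri(mabp_mmhg, cvp_mmhg, ci):
    """Systemic vascular resistance index: (MABP - CVP) / CI."""
    return (mabp_mmhg - cvp_mmhg) / ci

ci = cardiac_index(co_ml_min=75.0, body_weight_g=290.0)
svi = stroke_volume_index(sv_ml=0.20, body_weight_g=290.0)
print(f"CI = {ci:.1f}, SVI = {svi:.3f}, SVRI = {svri(110.0, 2.0, ci):.2f}")
```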
## DISCUSSION
In the present study, we examined the hypothesis that peroxynitrite at physiological levels (nM or low μM) can inhibit mRNA expression of P-selectin and ICAM-1 (Fig. 5) and may, thereby, reduce leukocyte-dependent reperfusion injury. Peroxynitrite administered via the portal vein to the ischemic lobe of liver reduced plasma ALT activity (Fig. 1), hepatic superoxide generation (Fig. 2), and leukocyte accumulation in the ischemic lobes of the liver (Fig. 4). The attenuation of leukocyte accumulation in the ischemic lobes of the liver may be related to inhibition of P-selectin and ICAM-1 mRNA expression by peroxynitrite. In contrast, no statistically significant differences were found between sham rats + vehicle and sham rats + peroxynitrite in terms of plasma ALT activity, circulating leukocyte counts, superoxide generation and leukocyte infiltration in the liver, indicating an effect of peroxynitrite limited to rats subjected to reperfusion injury. Moreover, the beneficial effect of peroxynitrite cannot be attributed to hemodynamic changes because MABP, SVRI, CI, and SVI were not different among the various groups of rats.
NO has a number of physiological roles, including smooth muscle relaxation and cellular communication. In the vascular system, biosynthesis of NO at physiological concentrations (nM to low μM) regulates organ blood flow and leukocyte-endothelial interactions (9), inhibits platelet aggregation (38), and inhibits neutrophil infiltration (9, 14). On the other hand, activation of inducible NO synthase (iNOS) produces a sustained elevation of NO levels (high μM) (2, 8, 31). Pathological concentrations of NO exert detrimental effects possibly via the generation of peroxynitrite formed by a reaction of NO with superoxide (19, 23, 31).
At sites of inflammation, increased production of NO is an essential compensatory response for maintaining organ blood flow (4, 22) and inhibiting leukocyte adhesion and platelet aggregation (14, 15, 21, 22). The simultaneous increase in NO and superoxide enhances the formation of peroxynitrite, which induces thiol group inactivation, protein fragmentation (33), and alteration of DNA synthesis leading to cell death. Additionally, peroxynitrite is highly bactericidal to Escherichia coli (3) and can cause oxidation of sulfhydryl groups, as well as protein strand breakage (10) and cell apoptosis (18), at high micromolar concentrations.
The results of the present study are consistent with those of other studies showing that peroxynitrite at physiological concentrations (nM to low μM) has several beneficial effects similar to those of NO (16, 29, 30). Nossuli et al. (29), for example, have reported that infusion of peroxynitrite at a concentration of 1 μM reduces myocardial infarct size and preserves the coronary endothelium in a cat model of myocardial ischemia and reperfusion. Our previous work demonstrated that administration ofN ω-nitro-l-arginine methyl ester (l-NAME) attenuated peroxynitrite formation but enhanced reperfusion injury in a rat model of HI/R, supporting the idea that peroxynitrite may play an anti-inflammatory role similar to NO in acute inflammation (21). Additionally, Wu et al. (37) have demonstrated that peroxynitrite relaxes pulmonary arteries in vitro concomitant with the production of NO. Thus peroxynitrite, at low micromolar concentrations, inhibits platelet aggregation in vitro (38) and produces vasorelaxation in dog coronary arteries (19). Furthermore, peroxynitrite can produce S-nitrosothiols, which can stimulate guanylyl cyclase and release NO (5, 34, 37). Physiologically relevant concentrations of peroxynitrite significantly attenuated neutrophil-endothelium interactions and decreased the extension of necrotic tissue in an in vivo model of MI/R injury (16,30).
Peroxynitrite can influence the expression of adhesion molecules, such as P-selectin and ICAM-1. P-selectin is stored in Weibel-Palade bodies of endothelial cells, but it rapidly mobilizes to the surface of endothelial cells following exposure to complement fragments, thrombin, or histamine. The expression of P-selectin peaks within 10 min after stimulation. P-selectin recognizes sialyl Lewisˣ as a ligand and binds to specific carbohydrates on neutrophils. The binding results in deceleration of leukocyte flow and rolling along the endothelial cells. Our results indicate that administration of peroxynitrite (2 μmol/kg) attenuates the expression of P-selectin and ICAM-1 mRNA. These findings are in agreement with the work reported by Nossuli et al. (30) that administration of peroxynitrite (1 μmol/l) intraventricularly significantly reduced adherence of neutrophils to the ischemic-reperfused left anterior descending coronary endothelium in a cat model of MI/R. Immunochemical staining indicated that the percentage of coronary venules staining positive for P-selectin was significantly reduced in animals with MI/R + ONOO⁻, compared with MI/R + vehicle (30). Lefer et al. (16) also reported that peroxynitrite reduced PMN adhesion to thrombin-stimulated superior mesenteric artery segments by 58% at a concentration of 100 nM, and by 63% at a concentration of 1 μM, compared with thrombin alone in vitro. ICAM-1, a member of the immunoglobulin supergene family, is an important ligand/receptor for CD11b/CD18. ICAM-1 is expressed constitutively on endothelial cells and is upregulated in response to tumor necrosis factor-α, interleukin-1β, and lipopolysaccharide (7, 35). Antibodies to either CD18 (17) or ICAM-1 (35) have been shown in in vivo experiments to attenuate neutrophil-dependent injury.
Our results show that hemodynamic parameters (e.g., MABP, SVRI, CI, and SVI) (Figs. 6 and 7) were not different among the various groups during the period of reperfusion. These results suggest that there was no direct or indirect effect of peroxynitrite via release of NO on the cardiovascular system. In our previous study, animals received S-nitroso-N-acetylpenicillamine (SNAP), an NO donor, which increased plasma NO concentrations and therefore improved hemodynamics in the HI/R model (22). In the present studies, plasma NO levels, based on the measurement of nitrate/nitrite decomposed from NO, could not be measured, because synthesis of peroxynitrite from the reaction of superoxide with NO leaves high levels of residual nitrite/nitrate in the peroxynitrite aliquot.
The previously reported deleterious effects of peroxynitrite, including oxidation of sulfhydryl groups, DNA strand and protein breakage, and cell apoptosis, may reflect the use of high concentrations. Thus much of this previous work is based on the exposure of different types of cultured cells to high micromolar to millimolar concentrations of peroxynitrite (50- to 150-fold higher than pathophysiological levels). These high levels of peroxynitrite and the absence of antioxidants may have limited relevance to the in vivo situation, because the formation of peroxynitrite in vivo is probably limited by several mechanisms. First, Miles et al. (23) have shown that peroxynitrite forms optimally from equimolar concentrations of NO and superoxide. An imbalance in either precursor greatly limits production of peroxynitrite. NO is formed normally at 1–20 nM. Although in pathophysiological conditions higher levels of peroxynitrite can be formed (27, 31), the elevated levels are still limited to the nanomolar range. For example, Wang and Zweier (36) have demonstrated that rat hearts subjected to I/R produce <100 nmol/l peroxynitrite. Therefore, in pathological conditions in vivo, the formation of peroxynitrite is unlikely to achieve high micromolar to millimolar concentrations. Second, many antioxidants and anti-inflammatory mediators are present in vivo. The concentrations and toxicity of peroxynitrite are strongly influenced by the presence of these compounds. In the presence of plasma, protein, glucose, and glutathione, peroxynitrite will form intermediates with low toxicity. The detoxifying importance of these compounds was demonstrated by Denicola et al. (6), who reported that peroxynitrite interacts with the normal bicarbonate buffer system in human plasma and becomes much less toxic even at high micromolar concentrations. In addition, peroxynitrite can be scavenged by many compounds, including uric acid, cysteine, glutathione, ascorbic acid, deferoxamine, and vitamin E. Finally, with a half-life of about 1 s under physiological conditions, peroxynitrite is unlikely to accumulate in vivo at concentrations greater than 1–5 μM (23, 31). Therefore, the concentrations of peroxynitrite in vivo would not exceed the low micromolar range (i.e., 2–5 μM).
In conclusion, peroxynitrite at nanomolar and low micromolar concentrations attenuated reperfusion injury, in a manner similar to NO, in a rat model of HI/R (21, 22). The mechanism for the effects exerted by peroxynitrite may involve its ability to modulate adhesion molecule expression in the ischemic lobes of the liver and thereby attenuate leukocyte-dependent I/R injury.
## Acknowledgments
We are very grateful to Dr. Carl E. Hock for helpful discussion during the course of these experimental studies.
## Footnotes
• Address for reprint requests and other correspondence: P. Liu, Dept. of Cell Biology, UMDNJ-School of Osteopathic Medicine, 2 Medical Center Drive, Stratford, NJ 08084 (E-mail: liupe{at}umdnj.edu).
• The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
http://copeman.ning.com/blog
# Blog
### Stepping stone jobs
Subject: Economics
Case Study: South Africa Inc (unemployment, inequality, growth model)
Put yourselves in the shoes of someone advising the President.
The problem we are dealing with is 29% unemployment, 10 million South Africans, with the rate climbing to 60% for black youth.
Treasury has suggested a plan to create 1 million jobs in 3 years. This proposal is made in the full knowledge that 500,000 new job seekers enter the South African economy every year. Treasury's plan is thus a plan to INCREASE unemployment by a minimum of 500,000 (1.5 million new entrants against 1 million new jobs). The Copeman Academy challenges its users to come up with a comprehensive solution to create 10 million jobs in 3 years and thus reduce unemployment to near zero.
Stepping Stone Jobs
Proposal to create 1 Million jobs.
Stepping stone jobs are jobs like car guards, waiters, cleaners, caddies, personal assistants, seasonal agricultural labour, and security guards. Talented young people start these jobs at a very early age and at very bad pay, do them for a few months, and then move on to other jobs. (I started doing these jobs when I was 13.) These jobs are not so much about earning revenue as about learning the way the employment process works: be polite to customers even if they are wrong, always report on the outcome of tasks, complete all tasks before stopping for the day, never steal the company's assets, always look to make a profit for the company.
Typically in South Africa, stepping stone jobs get taken up by African immigrants and middle-aged South Africans. This has a devastating effect on the economy. The first problem is that middle-aged people try to "make a career" out of stepping stone jobs. They make a career out of a job that should never have been one. No amount of unionization or protest can solve the fact that the job simply does not produce enough to deliver a decent wage. Great unhappiness ensues.
Worse is the fact that in a year, that low-paying stepping stone job would have employed 4 youths temporarily, who would then leave and move on to take up better employment. If that stepping stone job is held for 5 years by a middle-aged person, 20 youths miss the opportunity to enter the market. Great unhappiness ensues.
BEE and the employment act start out with the good intention of helping youth employment, but have the exact opposite effect. Instead of offering a future to twenty youths who could be ground through the rough and tumble of low-paid employment, we swap this for one unhappy person stuck forever in a poverty trap.
When you start rambling about minimum wages, employment brokers, casualisation, employment equity and employment controls, you destroy stepping stone jobs and you destroy the future.
250,000 stepping stone jobs, shared among 1.25 million people, creates 1 million jobs.
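The arithmetic behind that line is simple turnover, as the sketch below shows; the five stints per job per year is my assumption, based on the "few months" per stint mentioned above.

```python
# Turnover arithmetic behind the proposal: each stepping-stone job is
# rotated through several youths a year instead of being held by one
# person. Jobs figure is from the post; stints per year is assumed.

JOBS = 250_000
YOUTHS_PER_JOB_PER_YEAR = 5  # assumed: ~a few months per stint

people_reached = JOBS * YOUTHS_PER_JOB_PER_YEAR
extra_entrants = people_reached - JOBS  # beyond one career holder per job
print(f"{people_reached:,} people share the jobs; "
      f"{extra_entrants:,} additional youths get a first job")
```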
### South African 2019 Q1 GDP Growth Tanks
South Africa posted 2019 Q1 GDP growth of minus 3.2%.
Economic apologists tout this as the worst performance in 10 years. Worse! Back up 33 years to find self-inflicted stupidity of this magnitude. In the Court of Economic History, President Ramaphosa and Finance Minister Mboweni stand in the dock alongside PW Botha and Barend du Plessis as the worst economic executives to have held office. Go back 190 years to Lord Charles Somerset and Sir Harry Smith to find comparable incompetence.
Breathtaking: minus 3.2%.
A small number, yet, considered properly, the enormity of the R300 bn GDP gap comes sharply into focus. Next come 300,000 pink slips: actual jobs lost, or bright young #feesmustfall graduates who don't move out of University digs into a company apartment but stay at home, sharing a room with baby Katlego and the twins. Revised career prospects are now poker games, soccer pools, traditional weddings.
The 2008 implosion of world markets and the dying Apartheid regime cranking interest rates to 24% in 1992 were acts of desperation. Go back to the Rubicon Speech of 1986 or the 1976 Soweto Riots: 33 years to match 2019 Q1, made spectacular by the fact that it happened while the rest of the world marched on to maximum growth, stock market highs and full employment.
How?
EWC - Expropriation Without Compensation
In the boozy gloom of 2018, Ramaphosa meets his election advisory team. The flash of inspiration: by putting EWC at the forefront of the election campaign, the ANC heads off the EFF land challenge. After the election, back to business, forget the promises, not a peep about EWC.
EWC worked! Who can forget the old woman in Ekurhuleni, visited by a campaigning President, bursting into tears of joy as he explained on national television how her ANC vote would get her a plot of land, a bond to build a house and security for an R150,000 overdraft to start the business she has always been denied. A big jump from a lifetime of poverty. In Africa, that promise is a vote in the bag.
There is however a cost side.
Decrying EWC is deemed counter-revolutionary. The pungent argument that EWC is merely a ploy with no real Land Reform is dismissed. The joy gushes from the left as they watch spooked landowners fret about losing their career savings. A political triumph!

In Africa, the rich not only dominate the economy. They ARE the economy. When they tank, we all tank. Investors disappear like virgins at a matric dance. Property buyers scatter like rats up a drainpipe. Bank managers cut bonuses, cancel vacations, call in loans. Last year's pestering call centre agents offering overdrafts? Retrenched or moved to collections. Standard Bank closes 100 branches.

The Q1 GDP figure is the first bill; court summons and blacklisting judgment are still coming: GDP implosion with multipliers.

Land reform, more urgent than ever, is now at what cost? The ANC election campaign cost R300 bn! R30,000 a vote exceeds even the per-guest budget of the Gupta wedding. R300 bn is more than double (nay triple) the total amount spent on Land restitution since 1994!
Not even a single farm has been taken yet.
Land Reform - where South Africa loses capital - offers nothing. Say "Expropriation Without Compensation" to a landowner. It matters not what follows: she is already checking European bond rates online, not shopping, opening a new factory or employing more staff. Plain to see: GDP growth of minus 3.2%.
South Africa, it is time for a serious rethink on where EWC is going.
### AP Maths
English Speaking countries do not do well in Pisa rankings. The Copeman Academy strives to give the English speaking student a boost in Maths education. We do this by supplementing analytical training with Deep Learning based Pattern Recognition. We cover South Africa, UK, USA, Canada, Australia.
South Africa is ranked 138 out of 140 countries at Maths education. What is less known is that South Africa is one of the most unequal Maths societies in the world. Our top 1% of students rank as well as the Asians (not rigorously proven; people get a bit prickly about admitting the advantages of privileged students). One of the reasons for this success is that, thanks to the IEB, we have one of the best developed Advanced Maths Programs. A student following Core Maths and AP Maths in South Africa comes out ahead of GED and AS Level students in the US and UK respectively.
AP Maths is an innovative exam offered by the IEB. Every serious Grade 12 student should take AP Maths. This is because it is an opening link to A Level Maths and University Maths 1 and 2.
Here is a list of topics covered.
Here is an example of an exam (Paper 1). Work your way through these questions with Video answers to establish how good you are. If you are heading for University then APMATHS will never be a waste of your time and will improve your core maths.
A great option is coming: Stats, Financial Models, Graph Theory and Matrices.
South Africa
If you are in grade 11 or grade 12, find out from your school what your options are for doing AP Maths. Either way, doing AP Maths now will save you doing these topics later. The assessment has been benchmarked by UK NARIC (National Academic Recognition Information Centre), the UK equivalent of the SA Qualifications Authority, and is considered equivalent to UK A-levels.
UK/Cambridge International
If you are a UK A-Levels student then APMaths coincides 90% with your A-Level Core and Further Maths Syllabi.
These are all core concepts to A-Levels Maths and A-Levels Further Maths.
Cambridge A-Level Maths : https://www.cambridgeinternational.org/Images/329554-2019-syllabus.pdf
Cambridge A-Level Further Maths : https://www.cambridgeinternational.org/Images/414957-2020-2022-syllabus.pdf
US GED
GED students have 4 modules: Algebra 1 and 2, Geometry, and Precalculus. AP Maths and Grade 12 Core Maths cover around 90% of your topics and introduce about 10% of topics not required (notably financial mathematics).
https://www.futureschool.com/united-states-curriculum/texas/
will shortly reconcile these differences. I would recommend AP Maths for GED students, because it will prepare you for the inevitable confrontation with College Maths.
There is a basic canon of maths: Geometry, Trig, Algebra, Calculus, and everyone needs an introduction to Mechanics, Stats and Applied Maths. Adults have the luxury of not having to follow a rigorous syllabus. AP Maths is a gentle introduction to tertiary Maths if you have high school maths waxed.
--
Those who light their candle with mine, do not diminish my flame.
Thomas Jefferson
Those of you that have read my education blogs will have picked up my disdain for the classroom system. My claim is that online working can "increase your education productivity by 5 times". Failure at Mathematics is rarely about the ability of the student but usually about the inability to handle the classroom system. It may seem a crazy claim that can't easily be measured, but let me prove it by illustration.
Don't just take my word for it.
A typical student attending a typical classroom lecture at a typical brick and mortar institution is thrown each morning into a multitasking nightmare. Find your way to class - argue about your seat - deal with the ADD kid in the seat next to you - watch the lecturer make announcements - write down an equation - cockblock the dreamy kid looking at you two rows down - listen to an inane question from some dumbass - write down another equation - listen to an unintelligible esoteric question from some boy genius - more from the ADD kid - another equation - focus on holding in a pee - more homework announcements - check your cell phone messages for announcements on the next science lecture - another equation - rush to the toilet. The hyper-busy environment of the classroom regime occurs because it is a sausage factory trying to balance the needs of the teachers with the needs of the students. It is a mess.
OK, so what is so bad about multitasking? You have all heard mum brag about her multitasking skills, and some of the busiest people at work seem to be those that are best at multitasking.
You are much more efficient if you shut everything else out and focus on the issue at hand. I am going to prove it to you:
Consider the multitasking problem of writing down three symbols and incrementing them, like this:
Arabic numeral Alphabet Roman numeral
1 A I
then write the next row
2 B II
and then
3 C III
Keep going
4 D IV
5 E V
6 F VI
.
.
.
10 J X
That's multitasking. Now put away multitasking and focus on one column at a time:
1
2
3
4
5
6
7
8
9
10
and then
1 A
2 B
3 C
4 .
5 .
6 .
7
8
9
10
Can you see (even before taking on Roman numerals) that the column approach, the one that uses focus, is much easier? We should stop calling it multi-tasking and call it "switch-tasking." It is 5 times less efficient, which is what we find when measuring up the classroom against managed online learning.
That is why multitasking is only an illusion of productivity and why the classroom system sucks.
### South Africa's Low Rank in Maths is misleading.
We are often reminded that South Africans are on the bottom rungs of world maths education, by some estimates beating only Yemen. This does not tell the whole story.
Rounding out the numbers, around 1.6% of South African learners get a distinction for Grade 12 Maths (8,000 out of 500,000). Compare this with the UK GCSE, where 3% of learners get a distinction. However, our NSC Grade 12 Maths exam is a level more difficult than GCSE, including areas of financial maths, statistics and calculus.
What is interesting is that when you examine Private Schools in South Africa, the distinction rate jumps closer to 20% at top schools. Far from producing the worst results, South African private schools produce closer to the best maths results in the world. The result is that 15% of top University entrants come from IEB schools. Maths distinction graduates are nearly fully employed whereas the national average of young unemployed is over 50%.
The limiting factor is the quality of teachers and the access to technology. Quality teachers are hard to come by and will always be a limiting factor. However, an opportunity awaits us. Adopting technology should be a priority. Machines simply make better maths teachers. We have an advantage in that, in more developed countries, the maths teachers' unions are more militant and resist replacing human teachers with machines. Ironically, the second worst country in the world is best placed to do this. Small studies are already showing that access to broadband video and deep learning techniques can raise the distinction rate dramatically. Machines are tireless and almost unlimited in productivity. A determined roll-out of maths teaching technology can raise our distinction rate tenfold.
That is why at the Copeman Academy we put our emphasis on Creative Commons and AI Pattern recognition. So far so good. We have a 100% distinction success rate.
### The Land Issue
Consider that the Land Issue may never be solved.
Land is simply a proxy for wealth and when we say people are dispossessed of Land, what we really mean is they are dispossessed of wealth.
When we mean - give Land - what we really mean is - give wealth - to those that do not have enough. The problem with the concept is that in order to do that, you have to take it away from someone else first. Few people feel that they have too much wealth and would like it all taken away, to release the burden of owning wealth.
Thorstein Veblen dealt extensively with this problem 100 years ago. While Marx predicted the demise of capitalism, Veblen predicted that the masses would emulate the ruling classes. Unlimited wants with limited resources is the fundamental problem of economics and no amount of indignity, self justification or unsupported aggression can solve it.
https://en.wikipedia.org/wiki/Thorstein_Veblen
### The Consumer Surplus
The consumer surplus is one of the most beautiful things in market capitalism, something that is shared by all of us. Yes, rich people get more of it than poor people, so it is shared unequally, but it is there for everyone to take.
If you understand that prices are determined by supply and demand in markets and that the marginal principle determines the final price, the consumer surplus is the integral of the demand curve from zero to the quantity supplied, minus the market price times that quantity (the area between the demand curve and the price line in the usual diagram).
For example, as a rich person I would be prepared to pay $10 for a loaf of bread if that were the price. The market price is actually $6. Then the consumer surplus for me is $4. I am getting a bargain on bread because I am paying $4 less than the $10 I would have been prepared to pay.
Like I said, one of the most beautiful things in market capitalism: we all benefit.
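To make the integral concrete, here is a short sketch using a hypothetical linear demand curve calibrated to the bread example above (willingness to pay $10, market price $6); the demand curve itself is my assumption, not something given in the post.

```python
# Consumer surplus = area under the demand curve up to the quantity
# sold, minus price * quantity. The linear demand curve is assumed.

from scipy import integrate

def demand_price(q):
    return 10 - 0.4 * q  # willingness to pay for the q-th unit

market_price = 6.0
q_star = (10 - market_price) / 0.4  # quantity where demand meets price

area_under_demand, _ = integrate.quad(demand_price, 0, q_star)
consumer_surplus = area_under_demand - market_price * q_star
print(f"consumer surplus = ${consumer_surplus:.2f}")  # $20.00 here
```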
### A Gini for Grade 12 Maths
South Africa ranks among the highest for economic inequality (Gini 0.67) and among the lowest for mathematics and science education (WEF 137). But there's the rub …
South African society is ruthless. While our system consigns the masses to a future mired in misery and poverty, our 1% are destined for greatness. If you lie awake at night worrying about egalitarian equality for all African children, don't read any further ...
While much economic analysis has been done on the poverty of nations by looking at the 99% and the analysis of the Gini coefficient, we should suppress the urge to fit a function to the Lorenz curve and then integrate that function. I do it anyway (simple non-linear regression on a published dataset): in South Africa I estimate the Lorenz curve for Grade 12 Mathematics outcomes to be y = 30 + exp(x ^ 2.7/59000) – see graph.
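For the curious, the integration step can be sketched as below. The function is the fitted curve above; note that, as printed, it does not pass through the origin the way a true Lorenz curve must, so the resulting number is illustrative only.

```python
# Fit-then-integrate step for a Gini-style coefficient. The curve is
# the post's fitted function (x, y both on a 0-100 scale); the result
# is illustrative, since the printed curve is not a valid Lorenz curve.

import math
from scipy import integrate

def lorenz(x):
    return 30 + math.exp(x ** 2.7 / 59000)

area, _ = integrate.quad(lorenz, 0, 100)
gini = 1 - 2 * (area / (100 * 100))  # rescale both axes to [0, 1]
print(f"implied Gini-style coefficient: {gini:.2f}")
```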
At the Copeman Academy we focus on the 1%, and there you get a completely different picture. When we pit our top 1% of students against the 1% of the leading Maths nations (Singapore, Finland), you find that our 1% leads them, not follows them!
The bottom line is that students willing to leave the 20th century “teacher focused” methodology and move to a tech focused AI technology have the advantage, no matter what country they are in. The best maths teaching countries are still geared to produce mediocrity - in buckets!
How does this happen?
Top performing Maths countries, with a higher level of Maths education and a lower Gini coefficient of inequality, may produce a higher overall level of Maths and Science, but their use of "better teachers" actually works against their 1%. The top students get lumped in for longer with the 99%. In South Africa, because our Government education is so weak, the 1% are forced to leave the grid early. They actually end up ahead of the students in the developed countries who stick to the grid.
This is essentially because those institutions in South Africa teaching Maths and Science to the 1% are independent of the failing Government system. Here at the Copeman Academy we take no Government funding and are not beholden to bind the 99% to the grid in a lifetime of mathematical poverty.
The unexpected outcome is that those students in developed countries construed to be more successful (better teachers) have their 1% dragged down to the level of their 99%. Whatever time you spend in the system for the 99% (whatever country) is essentially wasted. In South Africa we spend 92% of our education budget on teachers' salaries and 8% on infrastructure and technology. At the Copeman Academy we spend 90% of our budget on technology and 10% on teachers' salaries, because technology is the most important component of Maths education. Those that break with the Grid and focus on technology-based outcomes actually exceed the outcomes achieved by the 1% in the more egalitarian societies.
This leads to a dark scenario for the 99% and a bright one for the 1% who make the jump. Like I said, we are a ruthless society.
Open source and Economic Inequality
World Maths Ranking
Primer on the Lorenz Curve
Play with the code
### Stagnant Jobs growth
Read through the recent announcement by Pali Lehohla, the Statistician-General, about stagnant job growth in South Africa and you see a gloomy future. Blips and outliers aside, jobs are not growing. However, each year around 500,000 new entrants seek employment. The greatest pressure is on youth, and that means black youth.
Jolly as he is, the DG falls into racist thinking that can damage us structurally. Believing that unemployment is caused by race leads to tilting at windmills: employment equity quotas, BEE, NSFAS, RET and any number of initiatives that have yielded, well, nothing (the DG's results, not mine). White youth are largely fully employed and the other groups have issues. Yes, but that is not the cause of the problem.
You can't legislate against the 1% because their skills are simply too mobile. The reason that racist legislation fails to increase employment in the 99% is that race is not the cause.
To understand why race is the result and not the cause, you have to understand Granger causality. Yes, there may be a correlation between race and unemployment, but the cause is lack of mathematics. If you leave race out of it, you will find that the 5,000 (the 1% of the 500,000) that achieved a distinction for Maths have a golden future. While others discuss hustling for a living, the quants are looking at racking up jobs at R50,000 a month. The demand-supply balance is so skewed that you can't even find the candidates to fill the outstanding vacancies. The hustlers will join the unemployment queues or at best be paid intern wages.
Welcome to the rollerball economy, or knowledge economy to those that seek to avoid the ugly truth: the knowledge economy is upon us. Machines work better than people. The best role for people is programming the machines. Did anyone notice that the value of Naspers now exceeds the entire capitalization of the South African mining industry? This, while it employs a fraction of the workers.
Take the data and rearrange it. Instead of using race as a criterion, use a Grade 12 Maths distinction pass, and I will wager a cyber-wallet of crypto that you will see a 90% correlation, and Granger causation (a sketch of such a test follows below). This is why I have spent such a great deal of the last year developing a MOOC that starts with Grade 12 Maths.
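Here is a hedged sketch of such a test using statsmodels; both series are fabricated purely to show the mechanics, and nothing below is real data.

```python
# Toy Granger-causality test: does a maths-distinction series help
# predict an employment series? Both series are fabricated.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
maths = rng.normal(size=40).cumsum()          # distinction-rate proxy
noise = rng.normal(scale=0.3, size=40)
employment = 0.8 * np.roll(maths, 2) + noise  # lags maths by 2 periods

# column order: [series to predict, candidate cause]
data = np.column_stack([employment, maths])
results = grangercausalitytests(data, maxlag=3)

f_stat, p_value = results[2][0]["ssr_ftest"][:2]
print(f"lag-2 F = {f_stat:.1f}, p = {p_value:.4f}")
```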
At a macro level, our languishing at the bottom of the WEF rankings on Maths education is why the rest of the world has recovered from the recession while we are plummeting.
If you have a child or know a child in or approaching Grade 12, the best thing you can do for them is put them on the right path to getting a distinction for Mathematics. It may be the most important exam of their lives. That, more than any factor other than inheriting a family fortune, is their gateway to the 1%.
### Diamonds and Rhodes
E101 Economics: Land
To illustrate where Land turns into Capital and Entrepreneurs, we should discuss the diamond business.
Diamonds go back 2 Billion years.
What gets squirted out in the Kimberley pipe is a molten belch. Current geo-economic theory says that there are probably diamonds the size of cars driving around in the mantle that could take a wrong turn and pop out at any moment. Neither Nature, Australopithecus, Homo erectus, the Khoisan, the Dutch, the Griquas, the Basters, the Afrikaners nor the German and British settlers were able to commercialize this resource. For 2 billion years less 140, no one turns the idea into money, until a brilliant strategist, Cecil John Rhodes, comes up with the idea that he is going to control the flow of diamonds onto the market and take millions of dollars off the Jews and homosexuals, years in advance of him actually taking the stones out of the ground!
The money does not come from Africa, Africans, African traders, African workers, Settler greed, Rhodes or the Oppenheimers. The money comes from the Ghettos of Europe via the Merchant Bankers of the USA.
Generations of Griquas had stepped over the shiny stones until, in 1866, Schalk van Niekerk purchased the Eureka Diamond from the Erasmus family. Opportunists piled into New Rush, grabbing at the alluvial deposits, levelling Colesberg Kopje, shovelling out the ever deeper sand. By the time their overcrowded, waterlogged claims hit blue-stone, the diamond industry was in turmoil.
In nineteenth century Europe, Jews are not welcome at the Club and homosexuality is a jailable offense.
Rhodes has to sell the idea that diamonds have lasting value. He is faced with the immediate record that, under the Afrikaners and Settlers (Kimberley's 1880 version of white monopoly capital), the diamond business has the attention span of a two year old. Under their watch, the De Beers brothers sell Dutoitspan for £8,000 and by 1875 most diamond traders are bankrupt. The stressed-out participants left are seeking to get out. Swimming against the tide of popular opinion, Rhodes is able to raise enough capital in Europe to buy out the bit players and make De Beers the first really important public Company in Africa.
Using the charm and showmanship of a nineteenth century Larry Page and Elton John, Rhodes becomes the richest man in the world in a little over five years. The Jews and the homosexuals believe him, the Jewish homosexuals bankroll him, and the anti-Semites and homophobes are gobsmacked. The Afrikaners become cattle farmers, the Settlers move off to Joburg in search of gold, and the Sothos show little interest in diamond mining beyond getting together enough savings to buy a rifle and head for the rurals.
Rhodes is the ultimate outsider. Reviled for his sexuality, he turns to other victims of prejudice and they build an Empire with little more than tenacity and perseverance. How you get from Rhodes juggling sausages in 1875 to pulling down his statue in 2015 parallels the modern political history of South Africa.
### The Trouble with Modern Universities
The Neoliberal working world that we have come to believe in no longer exists (see The Rollerball Economy).
Universities are scattered over 5 levels: denial, anger, bargaining, depression, acceptance.
Modern brick and mortar Universities that dominate today's tertiary education tend to move slower than events around them. This is not to say that there are not great places offering great opportunities and great futures. There are. However, the very term "Ivy League" conjures up social images of balmy afternoons spent languishing on the lawns of campus in the company of students from a wide cross-section of disciplines. The Business Student should be cautioned against the pitfalls that exist in such a world.
Modern Universities, with the campus community, social architecture and "all round education" offered, tend to have been built in the 20th century, or are run by academics trained in the 20th century. That means that when they were formed the Internet did not exist, the knowledge economy as we know it was theoretical, and open source was not a mainstream culture. Where there is no such thing as a free lunch, you find closed-access journals publishing obscure pseudo-mathematical digests, lecturers earning royalties from copyright texts, and a financial system of stipends put in place to lock any prospective graduate into a prohibitive debt cycle. Students following this path are likely to spend the majority of their twenties mired in debt and their thirties placed in the grey world of the working 90%, with an outside chance of ascending in their forties to the 10%. In this world the worst lack all conviction and the best are full of passionate intensity.
We have not seen a world as unequal as ours since 1913. 100 years ago the outcome was disastrous. Some say the War is imminent. Some say the War has already begun. It's a new War fought between the 1%, where the 99% are irrelevant to the outcome and, rather than marched to the front, they face a far worse fate: they are ignored and sent to watch football. The War will not wait for you. Dorothy, if you spend your twenties skating around in the twentieth century, do not be surprised when you wake up to find that Kansas has gone bye-bye and you are watching from the sidelines in the cheap seats.
In a constant state of denial, most Universities are more interested in preserving the known traditions, preserving the employment status of their staff and providing safe pathways for the young minds entrusted to them. Business students are particularly susceptible to the dangers associated with this approach. The Rollerball economy tells us that the safe paths lead to the football stadium and that the vast majority of people will be excluded from labor in the near future. Only a very few (Performers, Capital Managers and Capitalists) will be disruptors. The rest will live in shallows and misery.
To their credit, Universities have tried rapidly to adjust to the new world. Online facilities are available and use of open source is encouraged. Yet no matter how hard they try, Universities are run by bureaucrats. The regulations set up by the Education Departments that oversee tertiary education add an evaluative layer of red tape that ties down real innovation. In a world where courses take upwards of three years to approve from concept to delivery, Universities, no matter how they try, are by definition as much as 5 years out of date. 5 years ago, Uber did not exist, ISIS did not exist, newspapers still had a future and reality TV stars did not win elections.
To the Business student, the future is particularly important, even if that is a future that holds a gripping darkness for 99% of the people!
Modern Universities are notoriously retrospective. The 1-to-60 lecture, in which a "Font of Wisdom" Professor repeats the same lecture year after year to a less and less relevant audience, can no longer compete with the MOOC, which uses Internet delivery of the state-of-the-art production of the topic under consideration. Combined with the input of peers from multiple countries and cultures, the MOOC (Mass Open Online Course) provides not only a competitive alternative, but a superior alternative. A first class graduate from a well designed MOOC will outperform all comers in the knowledge economy. This lean, mean apparatus does not suit the bloated administrations.
If that is not enough, here comes the grimy underbelly. In most multi-discipline Universities and Colleges, the Business Students are usually the majority and they subsidize the more specialized courses! In a financial model that no longer works, Universities have begun to cannibalize themselves. Business careers, the only outcomes that can lead to a lucrative future, sacrifice quality to pay for their neighbors to pretend that they are studying for careers that will create a return. Now this may have worked for prospective Capitalists in the 20th century, where we subsidized the poor people who would spend the rest of their lives working for us. In the Rollerball Economy, labor loses value for all but the few performers. The current system has already begun to break down and the anger is palpable. Riots plague the less privileged, and the privileged blanket themselves in a fee structure that is designed more to induce the illusion of financial advantage than to address the pressing problems of Post Capitalism.
To their credit, many Universities have begun bargaining with the Rollerball Economy. They are using "blended learning" as a supplement to their brick-and-mortar activities, but however well-intentioned their online efforts are, they are trapped in the old model. Carrying the old-fashioned, debt-heavy package may work for the staff, but it consigns the graduates to the underclass. The sclerotic, Government-supported, debt-packaged offering is no longer financially justifiable, and as this reality sets in, depression is inevitable.
The MOOC brings the opportunity of free education. It also brings new challenges. The MOOC replaces the Font-of-Wisdom with the world's best presenter, and replaces the clockwork system of industrial organization design with an open online collaboration that never sleeps. The worlds of business and learning blend naturally, and the new 1% begin to emerge.
https://socratic.org/questions/how-do-you-write-both-an-explicit-equation-and-a-recursive-equation-for-the-sequ#317297
# How do you write both an explicit equation and a recursive equation for the sequence: 5,8, 11, 14, 17,...?
Oct 3, 2016
$u_r = 3r + 2$
$u_r = u_{r-1} + 3$, where $u_1 = 5$
#### Explanation:
The sequence goes up in 3s, so compare it with the three-times table we are all familiar with: 3, 6, 9, 12, ... (that is, 3×1, 3×2, 3×3, ...). Each term of the given sequence is 2 more than the corresponding multiple of 3, which gives the explicit formula $u_r = 3r + 2$; the recursive form just says that each term is the previous one plus 3, starting from $u_1 = 5$.
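A quick way to sanity-check the two forms (an illustrative Python sketch, not part of the original answer) is to generate the first few terms from each and compare:

```python
def explicit(r):
    return 3 * r + 2              # u_r = 3r + 2

def recursive(r):
    u = 5                         # u_1 = 5
    for _ in range(r - 1):
        u += 3                    # u_r = u_{r-1} + 3
    return u

print([explicit(r) for r in range(1, 6)])                       # [5, 8, 11, 14, 17]
print(all(explicit(r) == recursive(r) for r in range(1, 100)))  # True
```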
https://math.stackexchange.com/questions/880788/axis-dimensions-of-oval-around-inscribed-rectangle
# Axis dimensions of oval around inscribed rectangle
I have a rectangle of known width and height. This rectangle is inscribed in an oval.
So, to be clear, the corners of the rectangle are just touching the oval, making the oval as small as possible while still covering the rectangle.
How can I get the height and width of the circumscribed oval?
EDIT: The oval has a curvature radius of max(rectHeight, rectWidth). Sorry, forgot to mention that before.
Here's my full problem:
I am an Android developer. I am making an animation. The goal: A box morphs into an oval while expanding and fills up the entire screen (the rectangle).
By the end of the animation, the corners of the box have a corner radius of either half the screen's height or half the screen's width, whichever is larger. This effectively makes the former box an oval (right?). I need that oval to expand to cover the entire screen (assume always a rectangle).
I apologize for the strange and unclear question. My solution might make the problem clearer (for myself, as well). First, I guess I actually just meant a circle, since I wanted a constant radius for my morphed box's corners. I set my morphing box's final corner radius to [pseudocode] $\sqrt{\left(\frac{\text{Screen Width}}{2}\right)^2 + \left(\frac{\text{Screen Height}}{2}\right)^2}$, where the Screen Width is the width of the inscribed rectangle. Then, the width and height of the morphed box (that finally becomes a full circle) is just twice the (corner) radius.
So, it was actually a super simple problem. I just really had to talk about it to understand what I was really trying to accomplish. Thank you for your patience with me!
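In code, the computation described above amounts to taking half the screen diagonal as the final corner radius. Here is a minimal sketch in Python rather than Android Java; the function name and the screen dimensions are my own, hypothetical choices:

```python
import math

def covering_circle_radius(screen_width, screen_height):
    # The smallest circle covering a w-by-h rectangle is centered on it,
    # with radius equal to half the rectangle's diagonal.
    return math.hypot(screen_width / 2, screen_height / 2)

r = covering_circle_radius(1080, 1920)
print(r, 2 * r)   # final corner radius, and width/height of the morphed box
```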
• Yeah, sorry again, I am just having a really hard time explaining what I am trying to accomplish. Thank you for your patience. – Nightly Nexus Jul 28 '14 at 18:54
• It's definitely better, but what might really help is a picture. (Nothing too detailed, but enough to make sure your readers are visualizing what you want them to.) – Semiclassical Jul 28 '14 at 18:56
This appears to be an ill-defined problem: there is a whole family of ellipses which circumscribe (that's the word you want) a given rectangle. For example, suppose the rectangle is a square of side length 2. Its corners are at $(\pm1,\pm1)$, so any ellipse of the form $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=\frac{1}{a^2}+\frac{1}{b^2}$$ (with $a,b$ necessarily positive) will circumscribe the square.
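To see the non-uniqueness concretely, here is a small check (my own addition, not part of the answer) that several members of this family all pass through the square's corners $(\pm 1, \pm 1)$:

```python
def on_ellipse(x, y, a, b):
    # Point (x, y) lies on x^2/a^2 + y^2/b^2 = 1/a^2 + 1/b^2
    return abs(x**2 / a**2 + y**2 / b**2 - (1 / a**2 + 1 / b**2)) < 1e-12

for a, b in [(1.0, 2.0), (2.0, 3.0), (0.5, 5.0)]:
    assert all(on_ellipse(sx, sy, a, b) for sx in (-1, 1) for sy in (-1, 1))
print("every ellipse in the family circumscribes the square")
```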
https://redicec.com/8pwu4f/d99b5d-getting-closer-definition
## getting closer definition

Get Closer is a Grammy Award-winning album by singer/songwriter/producer Linda Ronstadt. Getting Closer! is an album by guitarist Phil Keaggy, released in 1985 on Nissi Records; all songs were written by Phil Keaggy unless otherwise noted. "Getting Closer" is also a rock song by the British–American band Wings, Paul McCartney's post-Beatles band, released on the album Back to the Egg.

close: to (cause something to) change from being open to not being open; to shut; to bring to an end ("It's time to close the meeting"); to bring the parts or edges of a wound together. To "close the door on" someone means to exclude them from something in a total or peremptory manner ("A lot of banks closed the doors on me when I applied for a loan because of my criminal record").

closer (noun): one that closes, especially a relief pitcher who specializes in finishing games; a person who brings something, such as a business deal, to a successful conclusion ("In our organization, the VP of Sales usually acts as the closer"). In masonry, a closer is the last stone in a horizontal course, if smaller than the others, or a piece of brick finishing a course.

closer (adverb, the comparative of "near" or "close"): within a shorter distance ("come closer, my dear!"; "getting nearer to the true explanation"). Because "closer" is already comparative, "more closer" is ungrammatical.

get closer: to approach, move toward, or draw near ("As you get closer to that primary voting day, people weigh that more heavily than they may earlier on"). Getting closure, by contrast, means resolving a negative experience or traumatic event, which may take years; make sure that you allow yourself to experience your emotions rather than cover them up with alcohol or drugs, and celebrate your successes along the way.

Related: in mathematics, an asymptote is a horizontal, vertical, or slanted line that a graph approaches but never touches, i.e. a value that you get closer and closer to but never quite reach. As numerology trivia, the source page gives the value of "get closer" as 8 in Chaldean Numerology and 5 in Pythagorean Numerology.
https://codeforces.com/problemset/problem/1486/B
B. Eastern Exhibition
time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output
You and your friends live in $n$ houses. Each house is located on a 2D plane, at a point with integer coordinates. Different houses may be located at the same point. The mayor of the city is asking you for places for the building of the Eastern exhibition. You have to find the number of places (points with integer coordinates) for which the total distance from all the houses to the exhibition is minimal. The exhibition can be built at the same point as some house. The distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is $|x_1 - x_2| + |y_1 - y_2|$, where $|x|$ is the absolute value of $x$.
Input
First line contains a single integer $t$ $(1 \leq t \leq 1000)$ — the number of test cases.
The first line of each test case contains a single integer $n$ $(1 \leq n \leq 1000)$. Next $n$ lines describe the positions of the houses $(x_i, y_i)$ $(0 \leq x_i, y_i \leq 10^9)$.
It's guaranteed that the sum of all $n$ does not exceed $1000$.
Output
For each test case output a single integer - the number of different positions for the exhibition. The exhibition can be built in the same point as some house.
Example
Input
6
3
0 0
2 0
1 2
4
1 0
0 2
2 3
3 1
4
0 0
0 1
1 0
1 1
2
0 0
1 1
2
0 0
2 0
2
0 0
0 0
Output
1
4
4
4
3
1
Note
Here are the images for the example test cases. Blue dots stand for the houses, green — possible positions for the exhibition.
(The images for the first five test cases are omitted here.)
Sixth test case. Here both houses are located at $(0, 0)$.
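Since the Manhattan distance $|x_1 - x_2| + |y_1 - y_2|$ separates into independent $x$ and $y$ terms, the total distance along each axis is minimized exactly at a median of that axis's coordinates: a unique point when $n$ is odd, and every integer point between the two middle values when $n$ is even. The answer is the product of the counts for the two axes. Below is a minimal Python sketch of this standard approach (my own implementation, not an official solution); running it on the sample input reproduces the expected output 1, 4, 4, 4, 3, 1.

```python
import sys

def count_positions(coords):
    # Along one axis, the sum of |c - p| over all houses is minimized
    # when p is a median of the coordinates.
    coords.sort()
    n = len(coords)
    if n % 2 == 1:
        return 1  # unique median
    # Any integer point between the two middle values works.
    return coords[n // 2] - coords[n // 2 - 1] + 1

def main():
    data = sys.stdin.read().split()
    idx = 0
    t = int(data[idx]); idx += 1
    out = []
    for _ in range(t):
        n = int(data[idx]); idx += 1
        xs, ys = [], []
        for _ in range(n):
            xs.append(int(data[idx])); ys.append(int(data[idx + 1]))
            idx += 2
        out.append(str(count_positions(xs) * count_positions(ys)))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```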
https://utistop.co.uk/breyers-mint-yraiovl/what-is-eigen-value-and-eigen-function-in-chemistry-088597
# what is eigen value and eigen function in chemistry
To obtain specific values for physical parameters, for example energy, you operate on the wavefunction with the quantum mechanical operator associated with that parameter. It is mostly used in matrix equations. Eigen value operations are those equations in which on operation on a function X by an operator say A , we get the function back only multiplied by a constant value(say a). Eigenvalues are the special set of scalars associated with the system of linear equations. Marketing. Answers and Replies Related Quantum Physics News on Phys.org. If a property is quantized, what possible results will measurements of such a property yield? So for example, a function like this, where v is passed by value: void my_function(Eigen::Vector2d v); needs to be rewritten as follows, … A representation of a generalized … Leadership. Value of the property A can be predicted theoretically by operating with the operator . $$Here, K ( x, s) is a function (or matrix function) of two groups of variables x and s … The Hamiltonian operates on the eigenfunction, giving a constant the eigenvalue, times the same function. The values of that satisfy the equation are the generalized eigenvalues and the corresponding values of are the generalized right eigenvectors. In that context, an eigenvector is a vector—different from the null vector—which does not change direction after the transformation (except if the transformation turns the vector to the opposite direction). So in the example I just gave where the transformation is flipping around this line, v1, the vector 1, 2 is an eigenvector of our transformation. The vector may change its length, or become zero ("null"). The wave functions which are associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors. With Eigen, this is even more important: passing fixed-size vectorizable Eigen objects by value is not only inefficient, it can be illegal or make your program crash! Note that we subscripted an $$n$$ on the eigenvalues and eigenfunctions to denote the fact that there is one for each of the given values of $$n$$. The computation of eigenvalues and eigenvectors for a square matrix is known as eigenvalue decomposition. (A) a" (B) xa" (C) a"e* (D) a" / e* menu. explain what is eigen values and eigen functions and explain with an example. The Eigenvalues are interpreted as their energies increasing downward and angular momentum increasing across. Wave functions yields values of measurable properties of a quantum system. 4. Chemistry Q&A Library What is the eigen value when the eigen function e* is operated on the operator d" I dx" ? The operation is the process described by the Hamiltonian, which you should recall from the first session. The value of 2 that (in this case) is multiplied times that function is called the eigenvalue. Computations of eigenfunctions such like the eigenbasis of angular momentum tells you that something is intrinsic and a ground state of it is sufficient to form a normalizing eigen function. and also define expectation values, operator formalism. Thus if we have a function f(x) and an operator A^, then Af^ (x) is a some new function, say ˚(x). In MATLAB, the function eig solves for the eigenvalues , and optionally the eigenvectors . And it's corresponding eigenvalue is 1. These questions can now be answered precisely mathematically. Management. Products. Eigenfunction is a related term of eigenvalue. why are both eigen values and poles equivalent? ( A ) α" (В) а" (С) а * (C) c (D) na. 
When Schrodinger equation is solved for Hydrogen and other particles, it gives the possible value of energies which corresponds to that energy levels which the electrons of an atom can occupy. If is nonsingular, the problem could be … For example, once it is known that 6 is an eigenvalue of the matrix = [] we can find its eigenvectors by … One can also show that for a Hermitian operator, (57) for any two states and . 4. with a matching … Engineering . The roots of the characteristic equation are the eigen values of the matrix A. The operator associated with energy is the Hamiltonian, and the operation on the wavefunction is the … They have many uses! In this case the eigenfunction is itself a function of its associated eigenvalue. It's, again, … A. The eigenvalue is the value of the vector's change in length, and is typically … For eigenfunctions we are only interested in the function itself and not the constant in front of it and so we generally drop that. 1.2 Eigenfunctions and eigenvalues of operators. For a square matrix A, an Eigenvector and Eigenvalue make this equation true: We will see how to find them (if they can be found) soon, but first let us see one in action: Example: For this matrix −6. Question. As the wave function depends on quantum number π so we write it ψ n. Thus. 5 B. What is the eigen value when the eigen function e* is operated on the operator d" I dx" ? 3. Image Transcriptionclose. Usually, for bound states, there are many eigenfunction solutions (denoted here by the index ). help_outline. This is the case of degeneracy, where more than one eigenvector is associated with an eigenvalue. Exceptionally the function f(x) may be such that ˚(x) is proportional to f(x); then we have Af^ (x) = af(x) where ais some constant of … Solving eigenvalue problems are discussed in most linear algebra courses. He's also an eigenvector. The time-independent Schrodinger equation in quantum mechanics is an example of an Eigenvalue equation. Therefore, the term eigenvalue can be termed as characteristics value, characteristics root, proper values or latent roots as well. We have repeatedly said that an operator is de ned to be a mathematical symbol that applied to a function gives a new function. What is the eigen value when the eigen function e* is … C. -2 and -2 . Eigenvector and Eigenvalue. The minimum and the maximum eigen values of the matrix are –2 and 6, respectively. 3 C. 1 D. –1 Solution: QUESTION: 13. If a function does, then $$\psi$$ is known as an eigenfunction and the constant $$k$$ is called its eigenvalue (these terms are hybrids with German, the purely English equivalents being "characteristic function" and "characteristic value", respectively). Finance. Similarly the Eigen function is from "Eigen funktion" which means "proper or characteristic function". Now look at Schrödinger's equation again. Eigenvalues and Eigenfunctions The wavefunction for a given physical system contains the measurable information about the system. Energy value or Eigen value of particle in a box: Put this value of K from equation (9) in eq. D. +2 … The generalized eigenvalue problem is to determine the nontrivial solutions of the equation. This is the wave function or eigen function of the particle in a box. ‘Eigen’ is a German word which means ‘proper’ or ‘characteristic’. where both and are n-by-n matrices and is a scalar. 
The eigenvalue and eigenfunction problems for a Fredholm integral operator consist of finding the complex numbers $\lambda$ for which there is a non-trivial solution (in a given class of functions) of the integral equation

$$\tag{1} \lambda A \phi = \lambda \int\limits_{D} K(x, s)\, \phi(s)\, ds = \phi(x).$$

Here, $K(x, s)$ is a function (or matrix function) of the two groups of variables $x$ and $s$. Linear algebra describes such operators as linear transformations, and the same eigenvalue language applies to them.

For the particle in a box, the eigenfunctions themselves are the wavefunctions $\psi_n = A \sin(n\pi x/L)$ for $0 < x < L$.
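A minimal numerical sketch of equation (1), assuming the illustrative kernel $K(x, s) = xs$ on $D = [0, 1]$ (for this kernel the integral operator has a single nonzero eigenvalue $1/3$, so $\lambda = 3$):

```python
import numpy as np

# Nystrom discretization of (A phi)(x) = integral_0^1 K(x, s) phi(s) ds
m = 400
s = (np.arange(m) + 0.5) / m        # midpoint quadrature nodes on [0, 1]
w = np.full(m, 1.0 / m)             # midpoint quadrature weights

K = np.outer(s, s)                  # kernel K(x, s) = x * s
A = K * w                           # A[i, j] ~ K(x_i, s_j) * w_j

mu = np.linalg.eigvals(A).real
mu_max = mu.max()
print(mu_max)        # ~ 1/3, the eigenvalue of the integral operator A
print(1.0 / mu_max)  # ~ 3.0, the lambda of equation (1), since lambda * mu = 1
```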
|
2021-09-22 06:26:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7865343689918518, "perplexity": 936.4066892546774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057329.74/warc/CC-MAIN-20210922041825-20210922071825-00207.warc.gz"}
|
https://math.stackexchange.com/questions/422445/products-in-a-set
|
# Products in a Set
Let:
$$S := \{1,2,3,\dots,1337\}$$
and let $n$ be the smallest positive integer such that the product of any $n$ distinct elements in $S$ is divisible by $1337$. What are the last three digits of $n$?
I'm having a bit of trouble with this problem: the context is that my prof gave it to me as an 'extra' exercise to do for fun.
Any help would be appreciated, thanks!
• I find it interesting that your prof gave it as an exercise, yet only asks for the "last three digits of $n$", as opposed to the value of $n$. This is very similar to a Brilliant problem that I just posed this week, where 1337 is replaced with 2013. – Calvin Lin Jun 18 '13 at 0:04
• That is pretty interesting, I didn't think to ask where he got the question from - I'll ask him next time I see him. – kvmu Jun 18 '13 at 6:02
$1337=7\times191$ and there are $190$ numbers with a factor of $7$ before $1337$ and $6$ numbers with a factor of $191$ before $1337$.
Suppose you've chosen all the numbers in the set except those that have a factor of either $7$ or $191$. That'd be $1337$ minus $1$ (for $1337$ itself) minus $6$ (for the numbers with a factor of $191$) minus $190$ (for the numbers with a factor of $7$): $1337-1-6-190=1140$. Now the worst possible situation is that you've also chosen all $190$ numbers that have a factor of $7$ but no number with a factor of $191$. This means the product contains plenty of $7$s but no factor of $191$ to pair them with, so it is still not a multiple of $1337$. That brings you to $1140+190=1330$ numbers. Now any one of the remaining $7$ numbers (the six multiples of $191$, or $1337$ itself) makes the product divisible by $1337$. So $n = 1330+1=1331$, and the last $3$ digits of $n$ are $331$.
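A quick sanity check of the counting in Python (not part of the original argument, just a verification sketch):

```python
S = range(1, 1338)

mult7 = [x for x in S if x % 7 == 0 and x % 191 != 0]
mult191 = [x for x in S if x % 191 == 0 and x % 7 != 0]
neither = [x for x in S if x % 7 != 0 and x % 191 != 0]

print(len(mult7), len(mult191), len(neither))  # 190 6 1140

# Largest subset whose product is NOT divisible by 1337 = 7 * 191:
# avoid every multiple of 191 (and 1337 itself), keep everything else.
worst = len(neither) + len(mult7)   # 1330
n = worst + 1                       # 1331
print(n % 1000)                     # 331
```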
Hint: A first step is to factor $1337$. Why does that help?
• So we factor 1337 to get (7)(191) — this helps us determine a lower bound for $n$. But the problem requires certainty that if we pick $n$ numbers from the set, the product will definitely be divisible by $1337$. I can also determine that there are 190 numbers before 1337 that are divisible by 7, and 6 numbers before 1337 that are divisible by 191. For the product to be divisible by 1337, I need to pick at least one number divisible by 7 and at least one divisible by 191. – kvmu Jun 17 '13 at 3:37
|
2020-08-11 13:25:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539345264434814, "perplexity": 85.8418242246085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00242.warc.gz"}
|
https://www.semanticscholar.org/paper/Numerical-Study-of-Zakharov%E2%80%93Kuznetsov-Equations-in-Klein-Roudenko/133f7f0f81aebffc8cdafe43892c4cea7eb8527e
|
# Numerical Study of Zakharov–Kuznetsov Equations in Two Dimensions
@article{Klein2020NumericalSO,
title={Numerical Study of Zakharov–Kuznetsov Equations in Two Dimensions},
author={Christian Klein and Svetlana Roudenko and Nikola M. Stoilov},
journal={Journal of Nonlinear Science},
year={2020},
volume={31}
}
• Published 18 February 2020
• Materials Science
• Journal of Nonlinear Science
We present a detailed numerical study of solutions to the (generalized) Zakharov–Kuznetsov equation in two spatial dimensions with various power nonlinearities. In the $L^{2}$-subcritical case, numerical evidence is presented for the stability of…
|
2023-02-04 09:28:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6707493662834167, "perplexity": 2141.33382998144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00776.warc.gz"}
|
https://www.cryptologie.net/home/1/13
|
Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs on the Mina cryptocurrency; previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant on the Cryptography Services team of NCC Group. This is my blog about cryptography, security, and other related topics that I find interesting.
# A history of end-to-end encryption and the death of PGP posted January 2020
This is where everything starts: we now have an open peer-to-peer protocol that everyone on the internet can use to communicate.
• 1991
• The US government introduces the 1991 Senate Bill 266, which attempts to allow "the Government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law" from "providers of electronic communications services and manufacturers of electronic communications service equipment". The bill fails to pass into law.
• Pretty Good Privacy (PGP) - released by Phil Zimmermann.
• 1993 - The US Government launches a criminal investigation against Phil Zimmermann for sharing a cryptographic tool to the world (at the time crypto exporting laws are a thing).
• 1995 - Zimmermann publishes PGP's source code in a book via MIT Press, dodging the criminal investigation by using the First Amendment's protection of books.
That's it, PGP is out there, people now have a weapon to fight government surveillance. As Zimmermann puts it:
PGP empowers people to take their privacy into their own hands. There's a growing social need for it. That's why I wrote it.
• 1995 - The RSA Data Security company proposes S/MIME as an alternative to PGP.
• 1996
• 1997
• GNU Privacy Guard (GPG) - version 0.0.0 released by Werner Koch.
• PGP 5 is released.
The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997
• 1997 - PGP Inc is acquired by Network Associates
• 1998 - RFC 2440 - OpenPGP Message Format
OpenPGP - This is a definition for security software that uses PGP 5.x as a basis.
• 1999
• GPG version 1.0 released
• Extensible Messaging and Presence Protocol (XMPP) is developed by the open source community. XMPP is a federated chat protocol (users can run their own servers) that does not have end-to-end encryption and requires communications to be synchronous (both users have to be online).
• 2002 - PGP Corporation is formed by ex-PGP members and the PGP license/assets are bought back from Network Associates
• 2004 - Off-The-Record (OTR) is introduced by Nikita Borisov, Ian Avrum Goldberg, and Eric A. Brewer as an extension of the XMPP chat protocol in "Off-the-Record Communication, or, Why Not To Use PGP"
We argue that [...] the encryption must provide perfect forward secrecy to protect from future compromises [...] the authentication mechanism must offer repudiation, so that the communications remain personal and unverifiable to third parties
We now have an interesting development: messaging (which is seen as a different way of communication for most people) is getting the same security treatment as email.
• 2006 - GPG version 2.0 released
• 2007 - RFC 4880 - OpenPGP Message Format
• 2010 - Symantec purchases the rights for PGP for $300 million.
• 2011 - Cryptocat is released.
• 2013 - The TextSecure (now Signal) application is introduced, built on top of the TextSecure protocol with Axolotl (now the Signal protocol with the double ratchet) as an evolution of OTR and SCIMP. It provides asynchronous communication unlike other messaging protocols, closing the gap between messaging and email.
• 2014
PGP becomes increasingly criticized, as Matt Green puts it in 2014:
It’s time for PGP to die.
Another unexpected development: security professionals are now giving up on encrypted emails and moving to secure messaging. Is messaging going to replace email, even though it feels like a different means of communication?
Moxie's quotes are quite interesting:
In the 1990s, I was excited about the future, and I dreamed of a world where everyone would install GPG. Now I’m still excited about the future, but I dream of a world where I can uninstall it.
In addition to the design philosophy, the technology itself is also a product of that era. As Matthew Green has noted, “poking through an OpenPGP implementation is like visiting a museum of 1990s crypto.” The protocol reflects layers of cruft built up over the 20 years that it took for cryptography (and software engineering) to really come of age, and the fundamental architecture of PGP also leaves no room for now critical concepts like forward secrecy.
In 1997, at the dawn of the internet’s potential, the working hypothesis for privacy enhancing technology was simple: we’d develop really flexible power tools for ourselves, and then teach everyone to be like us. Everyone sending messages to each other would just need to understand the basic principles of cryptography. [...]
The GnuPG man page is over sixteen thousand words long; for comparison, the novel Fahrenheit 451 is only 40k words. [...]
Worse, it turns out that nobody else found all this stuff to be fascinating. Even though GPG has been around for almost 20 years, there are only ~50,000 keys in the “strong set,” and less than 4 million keys have ever been published to the SKS keyserver pool ever. By today’s standards, that’s a shockingly small user base for a month of activity, much less 20 years.
• 2018
• the first draft of Messaging Layer Security (MLS) is published, a standard for end-to-end encrypted group chat protocols.
• The EFAIL researchers release damaging vulnerabilities against the most popular PGP and S/MIME implementations.
In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.
• 2019 - Latacora - The PGP Problem
Why do people keep telling me to use PGP? The answer is that they shouldn’t be telling you that, because PGP is bad and needs to go away.
EFAIL is the straw that broke the camel's back. PGP is officially dead.
• 2019
• Matrix is out of beta and working on making end-to-end encryption the default.
• Moxie gives a controversial talk at CCC arguing that advancements in security, privacy, censorship resistance, etc. are incompatible with slow moving decentralized protocols. Today, most serious end-to-end encrypted messaging apps use the Signal protocol (Signal, Facebook Messenger, WhatsApp, Skype, etc.)
• XMPP's response: Re: the ecosystem is moving
• Matrix's response: On privacy versus freedom
did you like this? This will be part of a book on cryptography! Check it out here.
# Difference between shamir secret sharing (SSS) vs Multisig vs aggregated signatures (BLS) vs distributed key generation (dkg) vs threshold signatures posted December 2019
That title is a mouthful! But so is the field.
Let me introduce the problem: Alice owns a private key which can sign transactions. The problem is that she has a lot of money, and she is scared that someone will target her to steal all of her funds.
Cryptography offers some solutions to avoid this being a key management problem.
The first one is called Shamir Secret Sharing (SSS), which is simply about splitting the signing private key into n shares. Alice can then distribute the shares among her friends. When Alice wants to sign a transaction, she has to ask her friends to give her back the shares, which she can use to recreate the signing private key. Note that SSS has many, many variants: for example, VSSS allows participants to verify that malicious shares are not being used, and PSSS allows participants to proactively rotate their shares.
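Here is a minimal sketch of the k-of-n splitting idea over a prime field (a toy illustration with parameters and names of my own choosing, not a hardened implementation or any particular library's API):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789
```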
This is not great though, as there is a small timeframe in which Alice is the single point of failure again (the moment she holds all the shares).
A logical next step is to change the system, so that Alice cannot sign a transaction by herself. A multi-signature system (or multisig) would require n participants to sign the same transaction and send the n signatures to the system. This is much better, except for the fact that n signatures means that the transaction size increases linearly with the number of signers required.
We can do better: a multi-signature system with aggregated signatures. Signature schemes like BLS allow you to compress the n signatures into a single signature. Note that BLS is currently much slower than popular signature schemes like ECDSA and EdDSA, so there is a trade-off between speed and size.
We can do even better though!
So far one still has to maintain a set of n public keys so that a signature can be verified. Distributed Key Generation (DKG) allows a set of participants to collaborate on the construction of a key pair, and on signing operations. This is very similar to SSS, except that there is never a single point of failure. This makes DKG a Multi-Party Computation (MPC) algorithm.
The BLS signature scheme can also aggregate public keys into a single key that will verify their aggregated signatures, which allows the construction of a DKG scheme as well.
Interestingly, you can do this with schnorr signatures too! The following diagram explains a simplified version of the scheme:
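Here is a toy sketch of that simplified scheme in code (naive Schnorr key and signature aggregation over a small multiplicative group; the parameters are illustrative only, and this naive aggregation is vulnerable to rogue-key attacks, which deployed schemes like MuSig address):

```python
import hashlib, secrets

q = 89            # prime order of the subgroup (toy size!)
p = 2 * q + 1     # 179, a safe prime
g = 4             # generator of the order-q subgroup of Z_p*

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

msg = "pay Bob 1 coin"
sks = [secrets.randbelow(q - 1) + 1 for _ in range(3)]   # secret keys
pks = [pow(g, sk, p) for sk in sks]                      # public keys

apk = 1                       # aggregated public key = product of pks
for pk in pks:
    apk = apk * pk % p

nonces = [secrets.randbelow(q - 1) + 1 for _ in sks]
R = 1                         # aggregated nonce commitment
for r in nonces:
    R = R * pow(g, r, p) % p

c = H(R, apk, msg)            # shared challenge
s = sum(r + c * sk for r, sk in zip(nonces, sks)) % q

# Verification uses only the single aggregated key and signature (R, s):
# g^s = g^(sum r_i + c * sum sk_i) = R * apk^c
assert pow(g, s, p) == R * pow(apk, c, p) % p
```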
Note two things:
• All these schemes can be augmented to become threshold schemes: we don't need n signatures from the n signers anymore, but only a threshold m of n. (Having said that, when people talk about threshold signatures, they often mean the threshold version of DKG.) This way if someone loses their keys, or is on holiday, we can still sign.
• Most of these schemes assume that all participants are honest and by default don't tolerate malicious participants. More complicated schemes made to tolerate malicious participants exist.
Unfortunately all of this is pretty new, and as this is still an active field of study, no single algorithm has been standardized so far.
That's the difference!
One last thing: there have been some recent ideas to use zero-knowledge proofs (ZKP) to do what aggregated signatures do, but for multiple messages (all the previous solutions signed the same message). The idea is to release a proof that you have verified all the signatures associated with a set of messages. If the zero-knowledge proof is shorter than all the signatures combined, it did its job!
did you like this? This will be part of a book on cryptography! Check it out here.
EDIT: thanks to Dowhile and bascule for pointing out errors in the post.
# Writing a book is hard posted October 2019
I am now halfway through writing my book (I have written 8 chapters out of 16) and I am already exhausted. It doesn't help that I started writing right before accepting a new position on a very challenging (and interesting) project. But here I am, halfway there, and I think I'm onto something. I can't wait to get there and look at the finished project as a real paper book :)
To give you some insight into this process, let me share some thoughts.
Writing is hard. I have realized that I need at least a full day to write something. It does take time to get into the zone, and writing in the morning before work just doesn't work for me (and writing after work is even worse). As JP Aumasson put it (about his process of writing Serious Cryptography):
I quickly realized that I didn’t know everything about crypto. The book isn’t just a dump of my own knowledge, but rather the fruit of hours of research—sometimes a single page would take me hours of reading before writing a single word.
So when I don't have a full day ahead of me, I use my limited time to read articles and do research on topics that I don't fully understand. This is useful, and I make more progress during the weekend once I have time to write.
Revising is hard. If writing a chapter takes some effort X, revising a chapter takes effort X^3. After each chapter, several people at Manning, and in my circle, provide feedback. At the same time, I realize that there's much more I want to write about subject Y, and I start piling up articles and papers that I want to read before I revise the chapter. I end up spending a TON of effort revising and re-visiting chapters.
Getting feedback is hard. I am lucky: I know a lot of people with different levels of knowledge in cryptography, which is very useful when I want to test how different audiences read different chapters. Unfortunately, people are good at providing positive feedback and bad at providing negative feedback, and only the negative feedback ends up being useful. If you want to help, [the first chapters are free to read](https://www.manning.com/books/real-world-cryptography?a_aid=Realworldcrypto&a_bid=ad500e09) and I'm ready to buy you a beer for some constructive negative feedback.
Laying out a chapter is hard. Writing a blog is relatively easy. It's short, self-contained, and often something I've been thinking about for weeks, months, before I put it into writing. Writing a chapter for a book is more like writing a paper: you want it to be perfect. Knowing a lot about the subject makes this even more difficult: you know you can make something great and not achieving that would be disappointing. One strategy that I wish I would have more time to spend on is the following one:
• create a presentation about the subject of a chapter
• give the presentation and observe what diagrams need revisiting and what parts are hard for an audience to understand
• after many iterations put the slides into writing
I'm convinced this is the right approach, but I am not sure how I could optimize for it. If you're in SF and want me to give you a presentation on one of the chapters of the book, leave a comment here :)
# Algorand's cryptographic sortition posted September 2019
There are several cryptocurrencies that are doing really interesting things, and Algorand is one of them. Their breakthrough was to make a leader-based BFT algorithm work in a permissionless setting (and I believe they are the first ones who managed to do this). At the center of their system lies a cryptographic sortition algorithm. It's quite interesting, so I made a video to explain it!
PS: I've been doing these videos for a while, and I still don't have a cool intro, so if you want to make me a cool intro please do :D
# What's my birthday? posted September 2019
My colleague Mesut asked me if using 128-bit random identifiers would be enough to avoid collisions.
I've been asked similar questions, and every time my answer goes something like this:
you need to calculate the number of outputs you need to generate in order to get good odds of finding collisions. If that number is impressively large, then it's fine.
The birthday bound is often used to calculate this. If you crypto, you must have heard of something like this:
with the SHA-256 hash function, you need to generate at least 2^128 hashes in order to have more than 50% chance of finding collisions.
And you know that usually, you can just divide the exponent of your domain space by two to find out how much output you need to generate to reach such a collision.
Now, this figure is a bit deceiving when it comes to real world cryptography. This is because we probably don't want to define "OK, this is bad" as someone reaching the point of having 50% chance of finding a collision. Rather, we want to say:
someone reaching one in a billion chance (or something much lower) to find a collision would be bad.
In addition, what does it mean for us? How many identifiers are we going to generate per second? How much time are we willing to keep this thing secure?
To truly answer this question, one needs to plug in the correct numbers and play with the birthday bound formula. Since this is not the first time I had to do this, I thought to myself "why don't I create an app for this?" and voila.
Thanks to my tool, I can now answer Mesut's question:
If you generate one million identifiers per second, in 26 years you will have a one in a billion chance of generating a collision. Is this enough? If identifier generation is not adversary-controlled, or if it is rate-limited, you will probably not generate millions of identifiers per second, but rather thousands; in that case it will take 265 centuries to reach the same odds.
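For the curious, here is roughly the computation behind those numbers (a sketch of the approximate birthday bound; the rate and duration are the assumptions from above):

```python
import math

def collision_probability(k: float, bits: int) -> float:
    """Approximate probability of at least one collision after k
    uniformly random draws from a space of size 2**bits."""
    n = 2.0 ** bits
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * n))

rate = 1_000_000                      # identifiers generated per second
seconds = 26 * 365.25 * 24 * 3600    # 26 years
k = rate * seconds

print(collision_probability(k, 128))  # ~1e-9, one in a billion
```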
# My book Real World Cryptography is out in pre-access posted June 2019
Manning Publications reached out to me last year with an opportunity for a book. I had been thinking of a book for quite some time, as I felt that the landscape lacked a book targeted at developers, students, and engineers who do not want to learn about the history of cryptography or sort through too many technical details and mathematical formulas, and who want an up-to-date survey of modern applied cryptography. In addition, I love diagrams. I don't understand why most books underuse them. When you think of AES-CTR, what do you think about? I bet the excellent diagrams from Wikipedia just flash in your mind.
The book Real World Cryptography was released today in pre-access. This means you'll be able to read new chapters as I write them, to provide feedback on topics you wish I would include, and to ask questions you wish I would answer.
# Libra: a usable cryptocurrency posted June 2019
At 2am this morning Libra was released.
and it seems to have broken the internet (sorry about that ^.^")
I've never worked on something this big, and I'm overwhelmed by all this reception. This is honestly pretty surreal from where I'm standing.
Libra is a cryptocurrency that is on par with other state-of-the-art blockchains, meaning that it attempts to solve a lot of the problems Bitcoin originally had:
• Energy Waste. The biggest reproach people have about Bitcoin is that it wastes a lot of our electricity. Indeed, because of the proof-of-work mechanism, people constantly use machines to hash useless data in order to find new blocks. Newer cryptocurrencies, including Libra, make use of Byzantine Fault Tolerance (BFT) consensus protocols, which are pretty green by definition.
• Efficiency. Bitcoin is notably slow, with a block being mined every 10 minutes, and a minimum confirmation time of one hour. BFT allows us to "mine" a new block every 3 seconds (in reality it can even go much faster).
• Safety. Another problem with Bitcoin (or proof-of-work-based cryptocurrencies) is that it forks, constantly, and then re-organizes itself around the "main chain". This is why one must wait several blocks to confirm that their transaction has been included. This concept is not great at all, as we've seen with Ethereum Classic, which (not so long ago) was reorganized more than 100 blocks into the past! BFT protocols never fork once they commit a block. What you see on the chain is the final chain, always. This is why it is so fast (and so sexy).
• Stability. This one is pretty self-explanatory. Bitcoin's price has been anything but stable. Gamblers actually thrive on that. But for a global currency to be useful, it has to keep a stable rate for people to use it safely. Libra uses a reserve of real assets to back the currency. This is the most conservative way to achieve stability, and it is probably the most contentious point about Libra, but one needs to remember that it is all in the service of stability. Stability is required if we want this to be useful for everyone.
• Adoption. This final point is the most important in my opinion, and this is the reason I've joined Facebook on this journey. Adoption is the largest problem for all cryptocurrencies right now: even though you hear about them in the news, very few people use them to actually transact (and most people use them to speculate instead). The mere size of the association (which is planned to reach 100 members from all around the world) and the user base of Facebook is going to be a big factor in adoption. That's the most exciting thing about the project.
On top of that, it is probably one of the most interesting projects in cryptography right now. The codebase is in Rust, it uses the Noise Protocol Framework, it will include BLS signatures and formally verified smart contracts. And there's a bunch of other exciting stuff to discover!
If you're interested you should definitely check the many papers we've published:
I've read many comments about this project, and here's how I would summarize my point of view: this is a crazy and world-scale project. There are not many projects with such an impact, and we'll have to be very careful about how we walk towards that goal. How will it change the world? Like a lot of global projects, it will have its ups and downs, but I believe that this is a positive net worth project for the world (if it works). We're in a unique position to change the status quo for the better. It's going to be exciting :)
If you're having trouble understanding why this could work, think about it this way. You currently can't transact money easily as soon as you're crossing a border, and actually, for a lot of countries (like the US) even intra-border money transfers are a pain. Currently the best banks in the world are probably Monzo and Revolut, and they're not available everywhere. Why? Because the banking system is very complex. By using a cryptocurrency, you are skipping past decades of legacy and setting up an interoperable network. Any banks and custody wallets can now use this network. You literally get the same thing you would get with your normal bank (same privacy, same usability, etc.) except that now banks themselves have access to a cheap and global network. The cherry on top is that normal users can bypass banks and use it directly, and you can monitor the total amount of money on the network. No more random printing of money.
A friend compared this project to nuclear energy: you can debate it at length, but there's no doubt it has advanced humanity. I feel the same way about this one. This is a clear improvement.
# A book in preparation posted June 2019
I started writing a book on applied cryptography at the beginning of 2019, and I will soon release a pre-access version. I will talk about that soon on this blog!
(picture taken from the book)
The book's audience is students, developers, product managers, engineers, security consultants, curious people, etc. It tries to avoid the history of cryptography (which seems to be unavoidable in any book about cryptography these days) and to shy away from mathematical formulas. Instead, it relies heavily on diagrams! A lot of them! As such, it is a broad introduction to what is useful in cryptography and how one can use the different primitives if seen as black boxes. It attempts to serve just the right amount of detail to satisfy the reader's curiosity. I'm hoping for it to be a good book for quickly getting introduced to different concepts, going from TLS to PAKE. It will also include more modern topics like post-quantum cryptography and cryptocurrencies.
I don't think there's anything like this yet. The classic Applied Cryptography is quite old now and did not do much to encourage best practices or discourage rolling your own. The excellent Serious Cryptography is more technical and has more depth than what I'm aiming for. My book will be something in between, or something that would (hopefully) look like Matthew Green's blog if it were a book (minus a lot of the humor, because I suck at making jokes).
More to come!
|
2023-02-08 03:34:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2664448916912079, "perplexity": 1586.899020014025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00504.warc.gz"}
|
http://www.hometrainer.in/b5ycpbgv/if-more-scn-is-added-to-the-equilibrium-mixture-24362d
|
Consider the following system at equilibrium, in which Fe3+(aq) is pale yellow, SCN-(aq) is colorless, and the FeSCN2+(aq) complex ion is deep red:

Fe3+(aq) + SCN-(aq) ⇌ FeSCN2+(aq)

According to Le Chatelier's principle, the system will react to minimize any stress placed on it. A system at equilibrium is happy; think of a pendulum sitting at the bottom of its swing. If you push it to the left, it will move back to the right to try to reach equilibrium. The equilibrium position is determined by the concentrations of each reactant, [Fe3+(aq)] and [SCN-(aq)], and of the product, [FeSCN2+(aq)]. At equilibrium, the rate at which Fe3+(aq) and SCN-(aq) react to produce FeSCN2+(aq) is the same as the rate at which FeSCN2+(aq) breaks apart to produce Fe3+(aq) and SCN-(aq).

There are several ways to stress such an equilibrium:

• Adding more Fe3+. When iron(III) chloride solution is added, the amount of iron(III) ions in the system increases. Since Fe3+ is on the reactant side, the rate of the forward reaction increases in order to "use up" the additional reactant; the equilibrium position shifts to the right, more of the FeSCN2+ complex forms, and the reaction mixture becomes a darker red. Once equilibrium has re-established itself, the value of Keq is unchanged.

• Adding more SCN-. Adding NaSCN (or KSCN) is the same as adding SCN- ions to the solution. Adding this reactant likewise drives the equilibrium to the right, turning the solution a deeper red.

• Adding more FeSCN2+. When additional product is added, the equilibrium shifts toward the reactants to reduce the stress: the reverse reaction is favored, and [Fe3+] and [SCN-] increase.

• Removing a substance. Concentration can also be changed by removing a substance from the reaction; this is often accomplished by adding another substance that reacts (in a side reaction) with something already in the mixture. If AgNO3 is dissolved in this solution, the Ag+ ions remove SCN- (as insoluble AgSCN), and the equilibrium position shifts to the left: with SCN- gone, more of the deep red complex decomposes into the reactants, and the solution slowly changes from deep red to pale yellow. The other reactant can be removed the same way: the HPO4^2- ion forms a complex with Fe3+, and if NaOH is added, [OH-] rises high enough that the Ksp of Fe(OH)3 is exceeded and a precipitate of Fe(OH)3 forms; both remove Fe3+ and shift the equilibrium to the left. Likewise, a mixture of the Fe3+(aq), SCN-(aq), and Cit3-(aq) ions has the characteristic color of the Fe(Cit) complex; adding a strong acid protonates the citrate, freeing Fe3+ and shifting things back toward the red FeSCN2+.

• Heating. The forward reaction is exothermic, so heat can be treated as a product. Heating the mixture shifts the equilibrium to the left, giving a yellow solution.

A typical lab procedure: divide the equilibrium mixture into 2 mL portions in seven labeled test tubes; add reactants to tubes 1–6 according to Table 2; place test tube 7 into a hot water bath for 1–2 min; shake to mix every time a species is added, and record any observations. For the standard solution, Fe3+ is supplied in large excess, so essentially each mole of SCN- initially added is converted to one mole of the FeSCN2+ complex; note, however, that the total volume of the standard solution is five times larger than the initial volume of the KSCN solution that was added, so the concentration must be scaled accordingly.

Questions:

a. Write the equilibrium constant expression for the reaction, given that the equilibrium constant is 78.

Keq = [FeSCN2+] / ([Fe3+][SCN-]) = 78

b. Does the equilibrium mixture contain more products or reactants? Since Keq is greater than 1, the products are favored at equilibrium.

c. If more SCN- is added to the equilibrium mixture, will the red color of the mixture intensify or lessen? It will intensify: the added SCN- pushes the equilibrium to the right, producing more of the red FeSCN2+ complex.

The same reasoning applies to other equilibria (and it is also possible to have multiple equilibria occurring simultaneously):

• H2O(g) + CO(g) ⇌ H2(g) + CO2(g). If H2O gas is added to an equilibrium mixture of these gases, the equilibrium position shifts toward the products, producing more H2 and CO2.

• 2SO2(g) + O2(g) ⇌ 2SO3(g). If sulfur dioxide is added to the mixture, the equilibrium shifts to the right, producing more SO3.

• H2(g) + I2(g) ⇌ 2HI(g). If more HI is added to an established equilibrium, it will shift to the left to remove the extra HI and form more H2 and I2.

• If additional formic acid (a reactant) is added to its ionization equilibrium, the equilibrium will shift to make more products.

• heat + Co2+(aq) + 4Cl-(aq) ⇌ CoCl4^2-(aq), pink on the left (as the Co(H2O)6^2+ complex) and blue on the right. If the equilibrium mixture is heated, it shifts to the right and turns blue; if it is cooled or diluted with water, it will go red as more Co(H2O)6^2+ is formed.

• The equilibrium constant for the production of carbon dioxide from carbon monoxide and oxygen is Kc = 2×10^11; since this is very large, the reaction mixture at equilibrium consists almost entirely of product.

• If more ice is added to an ice–water mixture at equilibrium, the vapor pressure of the water will remain constant: as long as both phases are present, the mixture stays at the melting point, so neither the temperature nor the vapor pressure changes.

Finally, note that in any chemical reaction, the rate of the reaction can be increased by increasing the concentrations of the reactants.
shift to make more products d. not shift. The distinction is subtle but important, and causes some confusion between students, so it should be made clear. The decrease in the SCN ... either by decreasing the volume of the system or by adding more of one of the components of the equilibrium mixture, we introduce a stress by increasing the partial pressures of one or more of the components. Does the equilibrium mixture contain more products or reactants? onec aliquet. 15.8: The Effect of a Concentration Change on Equilibrium, [ "article:topic", "showtoc:yes", "transcluded:yes", "source-chem-47580" ], https://chem.libretexts.org/@app/auth/2/login?returnto=https%3A%2F%2Fchem.libretexts.org%2FCourses%2Fcan%2Fintro%2F15%253A_Chemical_Equilibrium%2F15.08%253A_The_Effect_of_a_Concentration_Change_on_Equilibrium, There are a few different ways to state what happens here when more Fe. The reaction will produce more reactants, in this case, $$Fe(SCN)^{2+}$$. Adding a product (SCN-) will drive the equilibrium to the right turning the solution red. Lorem i. trices ac mag u dictu ic amet, i. ctum vitae odio. This problem has been solved! The resulting solution when reaches the equilibrium should have the same colour since the proportion of the concentrations of products and … ° The color change is caused by the production of more FeSCN 2+. C)the temperature will decrease somewhat. The student who asked this found it Helpful . If you push it to the left, it will move to the right to try to reach equilibrium. Equilibrium gets displaced to the left because with removal SCN- ions from the equilibrium mixture, more of the deep red complex (Product) will decomposes into the reactants. The reaction is exothermic. Thus, this concentration FeSCN2+ complex in the Place test tube 7 into a hot water bath for 1 – 2 min. Explain. If more SCN is added to the equilibrium mixture, will the red color of the mixture intensity or lessen? c. If more SCN- is added to the equilibrium mixture, will the red color of the mixture intensify or lessen? Once equilibrium has re-established itself, the value of Keq will be unchanged. A solution containing a mixture of these three ions therefore has the characteristic color of the Fe(Cit) complex. If H 2 is added to the reaction mixture at equilibrium, then The equilibrium of the reaction is disturbed. Equilibrium will shift to replace SCN - - the reverse reaction will be favored because that is the direction that produces more SCN -. To add NaSCN is the same to add SCN mononegative ions to the solution. Therefore, the resulting solution would be more red. Unb... A: In the given reaction, hydrogen is balanced by adding 5water molecule in reactant side and excess of... *Response times vary by subject and question complexity. In Part C, we look at the following reaction: Fe3+ (aq) + SCN- (aq) ⇆ FeSCN2+ (aq) a. Legal. The equilibrium constant for the production of carbon dioxide from carbon monoxide and oxygen is Kc=2×1011 This means that the reaction mixture at equilibrium is likely to consist of twice as much starting material as product. The decrease in the SCN ... either by decreasing the volume of the system or by adding more of one of the components of the equilibrium mixture, we introduce a stress by increasing the partial pressures of one or more of the components. In any chemical reaction, the rate of the reaction can be increased by . Calculate the heat of the reaction Using the following information... Q: the solubility (in M) of cobalt(II) hydroxide, Co(OH)2(s) in H2O. 
Next, add reactants to tubes 1 – 6 according to Table 2 below. in the following equilibria when the indicated stress is applied: a. heat + Co 2 + ( aq ) + 4 Cl − ( aq ) ⇄ CoCl 4 − ( aq ) ; pink colorless blue The equilibrium mixture is heated. In this step, you added additional KSCN dropwise to one of the wells containing the colored equilibrium mixture. A system at equilibrium is happy, think of a pendulum sitting at the bottom. The equilibrium shifts to the right. Increasing the concentration of either Fe3+ (aq) or SCN- (aq) will result in the equilibrium position moving to the right, using up the some of the additional reactants … C(graphite) 2H2(g) 1/2 O2(g)> CH30H() Recipient B. I can understand but I can't agree. • Fe3+(aq) + SCN-(aq) <==> FeSCN2+(aq) If AgNO3 is dissolved in this solution, then the equilibrium position will shift to the left. × Figure 1. 15.7: Disturbing a Reaction at Equilibrium: Le Châtelier’s Principle, 15.9: The Effect of a Volume Change on Equilibrium, information contact us at info@libretexts.org, status page at https://status.libretexts.org, Since this is what was added to cause the stress, the concentration of $$\ce{Fe^{3+}}$$ will increase. At equilibrium, the rate at which Fe 3+ (aq) and SCN-(aq) react to produce FeSCN 2+ (aq) is the same as the rate at which FeSCN 2+ (aq) breaks apart to produce Fe 3+ (aq) and SCN-(aq). Divide this mixture into 2 mL portions in seven labeled test tubes. The reaction mixture will become more dark red as when iron(III) chloride solution is added,the amount of iron(III) ions in the system is increased.By Le Chaterlier's Principle,the equilibrium position will shift to the right as forword reaction involves in decrease in amount of iron(III) ions. Q: Calculate the cell potential (Ecell) for the following lead concentration cell at 298 K. A: Given that,The concentration at negative pole = 0.005 MThe concentration at positive pole = 1.75 MOv... Q: The heat of formation of Fe2O3(s) is -826.0 kJ/mol. When the SCN-ion is added to an aqueous solution of the Fe 3+ ion, the Fe(SCN) 2+ and Fe(SCN) 2 + complex ions are formed, and the solution turns a blood-red color. When additional product is added, the equilibrium shifts to reactants to reduce the stress. B)the vapor pressure of the water will decrease. The water will decrease experts are waiting 24/7 to provide step-by-step solutions in as as... Record any observations: what type of bonding would you expect in of! Following reaction: a to provide step-by-step solutions in as fast as 30 minutes!.! Twice as much if more scn is added to the equilibrium mixture as … Divide this mixture into 2 mL in! Thus, the equilibrium mixture contain more products or reactants add or remove a product ( SCN- ) drive. Reaction ) with something already in the reaction mixture at equilibrium, then the equilibrium shift! \ ) is added to the solution for 1 – 6 according to if more scn is added to the equilibrium mixture Chatelier 's,! You expect in each of the equilibrium mixture contain more products or reactants unchanged... Scn- ions initially added to the solution will slowly change from deep red to pale yellow mixtures are! Move to the reaction, what happens to the reaction or check out our status at! Ac mag u dictu ic amet, i. ctum vitae odio at equilibrium is happy, think of a sitting. Others have decreased the stress consequently, each mole of FeSCN2+ complex a system at equilibrium concentration of reaction! 
Which was added is caused by the production of more FeSCN 2+ more reactants content is licensed by CC 3.0! Answers to questions asked by student like you really mean you adding more product the! Reaction will be favored because that is the direction that produces more SCN- to be 78 ) something! A substance from the reaction if the equilibrium makes more product ( SCN- will... Ions initially added to the right, which will use up the additional FeSCN2+ to. To replace SCN - + O2 ( g ) + O2 ( g ) 2SO3 ( g ) equilibrium., which will use up the reactants can understand but I ca n't agree laoreet ac,,. More SCN - - the reverse reaction will use up the reactants have same. Reduce the stress pressure of the reaction go in the reaction waiting to! An iron thiocyanate control added if more scn is added to the equilibrium mixture effects of adding hydrochloric acid to this equilibrium add NaSCN is direction! Substance that reacts ( in a chemical reaction, what will happen reach. Smallest equilibrium constant is given to be 78 larger than the initial aside. The resulting solution would be more red reach equilibrium minutes! * right if more scn is added to the equilibrium mixture try reach! Are waiting 24/7 to provide step-by-step solutions in as fast as 30 minutes!.... In the forward or reverse direction to reach equilibrium ) + O2 g! Reaction will be converted to one mole of FeSCN2+ complex and product equilibrium... 24/7 to provide step-by-step solutions in as fast as 30 minutes! * system at equilibrium, then equilibrium... Recipient B. I can understand but I ca n't agree answers to questions asked by student like you ) drive... Reactant is added, the equilibrium mixture, will the red color of the water decrease... This make if more scn is added to the equilibrium mixture reactant side heavier 2 below M Fe 3+ and SCN-ions, the. Sulfur dioxide is added right to try to reach equilibrium that are not at equilibrium a., LibreTexts content is licensed by CC BY-NC-SA 3.0 in the reaction mixture at equilibrium SCN-!, we look at the bottom if you push it to the right producing. Multiple equilibria occurring simultaneously red to pale yellow also be changed by removing a from... \ ( Fe^ { 3+ } \ ) is added, the equilibrium contain. Pressure of the reaction mixture at equilibrium if more SCN- is added, causes! 2So2 ( g ) + O2 ( g ) 2SO3 ( g ) the vapor pressure of the will. I. trices ac mag u dictu ic amet, i. ctum vitae odio mag u dictu ic,. Be unchanged some confusion between students, so it should be made clear at the bottom increased by KSCN to! Occurring simultaneously one way is to add or remove a product ( SCN- ) will the. Expression for the reaction if the equilibrium mixture, will the red color the... Or reverse direction to reach equilibrium Keq will be favored because that is the same:! Move to the tube that are not at equilibrium, then the equilibrium mixture, what will to... Mixture intensity or lessen try to reach equilibrium as … Divide this mixture into 2 mL in! Of more FeSCN 2+ volume of the water will remain constant ac s. Twice as much product as … Divide this mixture into 2 mL portions in labeled. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0 previous National Science Foundation support grant. Made clear longer for new subjects diagram depicts the reaction as much product as … Divide this into...
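A small worked form of the equilibrium-constant statement above (the value 78 is the constant quoted in the text; the concentrations are invented for the illustration):

$$K_{\text{eq}} = \frac{[\ce{FeSCN^{2+}}]}{[\ce{Fe^{3+}}][\ce{SCN^{-}}]} = 78$$

For a mixture with [Fe3+] = [SCN-] = 0.10 M and [FeSCN2+] = 0.30 M,

$$Q = \frac{0.30}{(0.10)(0.10)} = 30 < 78 = K_{\text{eq}}$$

so such a mixture is not at equilibrium, and the net reaction proceeds in the forward direction (producing more of the red complex) until Q has risen to Keq.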
Categories: Blogs
|
2021-04-20 23:37:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6005221605300903, "perplexity": 2393.3975134320767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00265.warc.gz"}
|
http://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=1278&lang=en
|
Time Limit : sec, Memory Limit : KB
# Problem D: Lowest Pyramid
You are constructing a triangular pyramid with a sheet of craft paper with grid lines. Its base and sides are all of triangular shape. You draw the base triangle and the three sides connected to the base on the paper, cut along the outer six edges, fold along the three edges of the base, and assemble the sides into a pyramid.
You are given the coordinates of the base's three vertices, and are to determine the coordinates of the other three. All the vertices must have integral X- and Y-coordinate values between -100 and +100 inclusive. Your goal is to minimize the height of the pyramid satisfying these conditions. Figure 3 shows some examples.
Figure 3: Some craft paper drawings and side views of the assembled pyramids
## Input
The input consists of multiple datasets, each in the following format.
X0 Y0 X1 Y1 X2 Y2
They are all integral numbers between -100 and +100 inclusive. (X0, Y0), (X1, Y1), (X2, Y2) are the coordinates of three vertices of the triangular base in counterclockwise order.
The end of the input is indicated by a line containing six zeros separated by a single space.
## Output
For each dataset, output a single number on a separate line. If you can choose three vertices (Xa, Ya), (Xb, Yb) and (Xc, Yc) whose coordinates are all integral values between -100 and +100 inclusive, such that the triangles (X0, Y0)-(X1, Y1)-(Xa, Ya), (X1, Y1)-(X2, Y2)-(Xb, Yb), (X2, Y2)-(X0, Y0)-(Xc, Yc) and (X0, Y0)-(X1, Y1)-(X2, Y2) do not overlap each other (in the XY-plane) and can be assembled as a triangular pyramid of positive (non-zero) height, output the minimum height among such pyramids. Otherwise, output -1.
You may assume that the height is, if positive (non-zero), not less than 0.00001. The output should not contain an error greater than 0.00001.
## Sample Input
0 0 1 0 0 1
0 0 5 0 2 5
-100 -100 100 -100 0 100
-72 -72 72 -72 0 72
0 0 0 0 0 0
## Output for the Sample Input
2
1.49666
-1
8.52936
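The geometric heart of the problem, not spelled out in the statement, is that folding a flap about its base edge preserves the flap apex's distance to both endpoints of that edge; the assembled apex is then recovered by trilateration. A minimal Python sketch of that step (function and variable names are mine; it ignores the non-overlap test and the search over integer flap vertices):

import numpy as np

def fold_height(base, flaps, eps=1e-9):
    # base[i]  : (x, y) of base vertex i, in counterclockwise order
    # flaps[i] : (x, y) of the flap apex drawn over edge (base[i], base[(i+1) % 3])
    P = [np.asarray(p, float) for p in base]
    A = [np.asarray(a, float) for a in flaps]
    # Folding keeps each flap apex at a fixed distance from both endpoints
    # of its edge, so the 3-D apex X must satisfy |X - P[i]| = r[i].
    r = [float(np.linalg.norm(A[i] - P[i])) for i in range(3)]
    # Each base vertex is shared by two flaps; both must give the same radius.
    for i in range(3):
        if abs(np.linalg.norm(A[i - 1] - P[i]) - r[i]) > 1e-6:
            return None
    # Trilateration: subtracting the sphere equation at P[0] from those
    # at P[1] and P[2] leaves a 2x2 linear system for (x, y).
    M = 2.0 * np.array([P[1] - P[0], P[2] - P[0]])
    b = np.array([r[0]**2 - r[1]**2 + P[1] @ P[1] - P[0] @ P[0],
                  r[0]**2 - r[2]**2 + P[2] @ P[2] - P[0] @ P[0]])
    xy = np.linalg.solve(M, b)
    z2 = r[0]**2 - float((xy - P[0]) @ (xy - P[0]))
    return float(np.sqrt(z2)) if z2 > eps else None

Minimizing the height then amounts to enumerating admissible integer flap vertices and keeping the smallest positive value returned.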
|
2020-02-28 12:27:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7235296368598938, "perplexity": 1273.4062771668953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147154.70/warc/CC-MAIN-20200228104413-20200228134413-00521.warc.gz"}
|
https://www.ias.ac.in/describe/article/jess/129/0145
|
• Transport behaviour of different metal-based nanoparticles through natural sediment in the presence of humic acid and under the groundwater condition
• # Fulltext
https://www.ias.ac.in/article/fulltext/jess/129/0145
• # Keywords
Silver; iron oxide; titanium dioxide; zinc oxide; nanoparticles; humic acid; colloid filtration; deposition; groundwater.
• # Abstract
The production of nanoparticles (NPs) has increased significantly, given that they have numerous commercial and medical applications. There may, however, be risks associated with the release of these NPs into the environment. To assess the possible risk of releases of NPs into groundwater, it is important to evaluate the fate and transport behaviour of NPs through porous media. The objective of this study is, therefore, to evaluate the transport behaviour of widely used NPs [i.e., silver (Ag), iron oxide (Fe$_{\rm{x}}$O$_{\rm{y}}$), titanium dioxide (TiO$_{2}$) and zinc oxide (ZnO)] through porous media in the presence and/or absence of organic matter [i.e., humic acid (HA)] under controlled de-ionized and natural groundwater conditions. To achieve this objective, detailed characterizations of the NPs are first carried out in the presence and absence of HA. Column transport experiments were performed using a 1-D sand-packed column. The different NPs were injected from one end of the column with a flow rate of 0.0054 cm/sec. The results suggest that nAg, nTiO$_{2}$, and nZnO particles are colloidally stable in suspension, while nFe$_{\rm{x}}$O$_{\rm{y}}$ particles tend to aggregate and settle very rapidly. In the presence of HA, however, the colloidal stability of nFe$_{\rm{x}}$O$_{\rm{y}}$ in suspension increases significantly. Evaluation of the transport behaviour of the different metal NPs suggests that a large fraction of the nFe$_{\rm{x}}$O$_{\rm{y}}$ (C/C$_{0}$=0.01) and nZnO (C/C$_{0}$=0.09) particles is retained in the porous media. In the presence of HA, however, the transport efficiency of nFe$_{\rm{x}}$O$_{\rm{y}}$ (C/C$_{0}$=0.64) increases significantly. Very high amounts of nAg and nTiO$_{2}$ particles are transported in both the absence and presence of HA. The surface charge of the particles, and thus the interaction energy between the NPs and the sand, is the main factor controlling the deposition of NPs. Overall, there is a risk of migration through aquifer porous media for nAg and nTiO$_{2}$ particles irrespective of the presence of organic matter, and for nFe$_{\rm{x}}$O$_{\rm{y}}$ particles in the presence of organic matter. In a natural groundwater system, where different ions are present, the extent of transport of NPs is expected to be smaller, and the risk associated with the release of NPs into the groundwater would be comparatively lower than predicted under controlled de-ionized water conditions. The nTiO$_{2}$ particles, however, always carry a high risk of release into the groundwater.
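A back-of-the-envelope way to compare these breakthrough numbers (a sketch of mine, not from the paper) is to convert each reported C/C$_{0}$ into a first-order deposition rate coefficient via the simplest clean-bed colloid filtration relation C/C0 = exp(-k L / v). The column length below is an assumed placeholder, since only the flow velocity is given in the abstract:

import numpy as np

v = 0.0054   # approach velocity in cm/s (quoted in the abstract)
L = 10.0     # column length in cm (assumed placeholder)

# relative breakthrough concentrations quoted in the abstract
breakthrough = {"nFexOy": 0.01, "nZnO": 0.09, "nFexOy + HA": 0.64}

# clean-bed filtration: C/C0 = exp(-k * L / v)  =>  k = -(v / L) * ln(C/C0)
for name, c_over_c0 in breakthrough.items():
    k = -(v / L) * np.log(c_over_c0)
    print(f"{name:13s} k = {k:.2e} 1/s")

The much larger k for bare nFe$_{\rm{x}}$O$_{\rm{y}}$ than for nFe$_{\rm{x}}$O$_{\rm{y}}$ with HA quantifies the stabilizing effect of the humic acid coating.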
• # Author Affiliations
1. Indian Institute of Technology Patna, Patna 801 103, India.
• # Journal of Earth System Science
|
2021-07-28 04:42:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4353543519973755, "perplexity": 3530.225649668339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00312.warc.gz"}
|
http://math.stackexchange.com/questions/225996/recurrence-relation-for-the-integral-i-n-int-fracdx1x2n
|
# Recurrence relation for the integral, $I_n=\int\frac{dx}{(1+x^2)^n}$
Express a recurrence relation for the integral
$$I_n=\int\frac{dx}{(1+x^2)^n}$$
$$I_n = \int\frac{1+x^2}{(1+x^2)^n}dx-\int\frac{x^2}{(1+x^2)^n}dx$$
$$I_n=I_{n-1}-\int x\cdot\frac{x}{(1+x^2)^n}dx$$
Integrating by parts with $u=x$ and $dv=\frac{x\,dx}{(1+x^2)^n}$, so that $v=\frac{1}{2(1-n)(1+x^2)^{n-1}}$:
$$I_n=I_{n-1}-\frac{x}{2(1-n)(x^2+1)^{n-1}}+\frac{1}{2(1-n)}I_{n-1}$$
$$I_n=\frac{2n-3}{2(n-1)}I_{n-1}+\frac{x}{2(n-1)(x^2+1)^{n-1}} \ \ \ \ (n>1)$$
$$I_1=\arctan(x)$$
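A quick computer check of the recurrence (my own addition, using sympy; the two antiderivatives may differ by a constant, so the comparison is made on derivatives):

import sympy as sp

x = sp.symbols('x')
n = 3  # spot-check one value of n

def I(k):
    return sp.integrate(1 / (1 + x**2)**k, x)

rhs = sp.Rational(2*n - 3, 2*(n - 1)) * I(n - 1) \
      + x / (2*(n - 1) * (x**2 + 1)**(n - 1))
# antiderivatives can differ by a constant: compare derivatives instead
assert sp.simplify(sp.diff(I(n) - rhs, x)) == 0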
|
2015-05-29 05:12:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996353387832642, "perplexity": 3812.3490112322825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929869.17/warc/CC-MAIN-20150521113209-00047-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://blergh.21er.org/24/bind-with-dnssec
|
# Bind with DNSSEC
This is not a complete guide to setting up bind with dnssec - such a guide actually already exists, and it is pretty comprehensive. If you need more info about nsec3 signing, check out the rndc man page, specifically the part about signing. You really want nsec3 and not nsec, because nsec makes it easy to "walk" the zone, which is equivalent to a zone transfer.
Recently I stumbled upon a problem with my webmail - it was suddenly impossible to send mail. The webmail server is not hosted on the same machine as the mail server, thus the url to the remote server was not something like "localhost" but an actual FQDN. I discovered that there was not even any trace of the webmail client connecting to the mail server in the mail server's logs, which was baffling. I double checked the dns entry and made sure it was resolvable.
However, in the logs of the webmail server, lots of name resolution errors showed up. These sadly lacked the most important part (which name resolution had been attempted) but there were not many possibilities (this is more a note to any programmers - make your error messages descriptive!).
I found out that the webmail server was actually configured to use Google's DNS servers, while my workstation used my ISP's DNS server. Testing the Google server I found that it would not resolve the address of the smtp server, which was smtp.example.com. Googling the problem I arrived at the very useful tool https://dns.google.com/. This pointed to an expired DNSSEC signature of the smtp.example.com domain and suggested further evaluating the problem using another helpful tool, http://dnsviz.net/. Indeed, it turned out that the signature for that particular domain had expired almost 24 hours ago.
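For scripting this kind of diagnosis, here is a minimal sketch using dnspython (my own example, not from this post; the nameserver address is a placeholder) that fetches the RRSIGs for a record and prints their expiration times:

import datetime
import dns.message
import dns.query
import dns.rdatatype

# Ask with the DO bit set so the server includes RRSIG records.
query = dns.message.make_query("smtp.example.com", dns.rdatatype.A,
                               want_dnssec=True)
# Query the authoritative server directly (placeholder address); a
# validating resolver would just answer SERVFAIL for expired signatures.
response = dns.query.udp(query, "203.0.113.53", timeout=5)
for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.RRSIG:
        for sig in rrset:
            expires = datetime.datetime.fromtimestamp(sig.expiration)
            print(rrset.name, "RRSIG expires", expires)

An expiration timestamp in the past is exactly the failure mode described above.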
Checking my primary dns server's log, I stumbled upon messages of the type
named[99999]: dumping master file: tmp-fywoAON4pO: open: permission denied
I remembered having seen them for a while, and ("since everything was working", and besides, I was "really busy with other, more important stuff") ignored them. However, it seems that the DNS server was trying to "DNSSEC maintain" my zones, i.e. trying to re-sign them. This is - as far as I could deduce from the documentation - done by creating a temporary file with all the appropriate entries. Once this is done, the file is moved to the appropriate place. This is the usual way programmers avoid dirty files, and actually pretty nice.
However, my DNS server runs in paranoia mode, i.e. it has selinux enabled and uses very strict permissions, as well as chrooted bind. My zone configuration files originally looked like this:
zone "example.com" IN {
type master;
file "example.com.zone";
allow-update { "none"; };
allow-transfer { 1.1.1.1; 2.2.2.2; };
notify yes;
key-directory "/etc/pki/dnssec-keys/example.com.zone";
auto-dnssec maintain;
inline-signing yes;
};
This turned out to be part of the problem, specifically the line
file "example.com.zone";
Whether or not bind runs in a chroot does not really matter for this problem, but it can make it more confusing. The path to the zone file is relative to bind's working directory. This is given in bind's configuration file (usually /etc/named.conf or similar) in the options block, the directive "directory". On respectable linux-distributions, this working directory is not writable for the named daemon for security reasons.
If you want to resolve the issue properly, do not follow advice that you can frequently find online:
• Do not change permissions of bind's working directory to make it writeable.
• Do not change ownership of bind's working directory to make it writeable.
• Do not turn off selinux.
All of these are dirty workarounds which hurt your system's security. Besides that, they are not even good, permanent solutions:
• All respectable linux-distributions will fix permission problems when updates are performed. I.e. changing permissions or ownership will only last until the next update.
• On some systems, permission problems might even get corrected on-the-fly after minutes or seconds by special services (e.g. puppet).
The appropriate solution, in my eyes, is to handle zone files as "dynamic", even if you do not use dynamic updates. Dynamic zones are meant to be stored in a directory which exists specifically for this purpose, on my system it is found in the bind working directory and named "dynamic". This directory has the proper permissions and selinux-context to allow bind to write files. Primary zones that use DNSSEC should thus have a configuration file which looks like this:
zone "example.com" IN {
type master;
file "dynamic/example.com.zone";
allow-update { "none"; };
allow-transfer { 1.1.1.1; 2.2.2.2; };
notify yes;
key-directory "/etc/pki/dnssec-keys/example.com.zone";
auto-dnssec maintain;
inline-signing yes;
};
Of course, if you allow dynamic updates, you have to adapt that part. Secondly, slave zones should not reside in bind's working directory either, since it is not writeable. Putting slave zones there will prevent them from being transferred and written. For slave zones, there is a dedicated directory in the bind working directory, again with proper permissions and selinux-context. On my system it is called "slaves".
It is also not a solution for this problem to enable the named_write_master_zones selinux boolean. This boolean enables bind to write to zone files in its working directory - but while this might "fix" the problem of slave zone transfers, it will not help with DNSSEC-enabled master zones, particularly if they are set to auto-maintain. This configuration requires the named process to periodically generate a certain set of files for each domain (the .jnl, .signed, .jbk...) and apparently the selinux boolean does not allow creation of these files. Furthermore - and here we arrive at the beginning again - the named process creates temporary files during the signing process, which it also tries to write to the directory where the zone file is stored. Again, the selinux boolean apparently only allows write access to the actual zone files and not to auxiliary files.
## Takeaway
Several things worth memorizing:
• Use the "dynamic" directory for auto-maintained DNSSEC-enabled master-zones. This might not be an incredibly appropriate name for the directory given the usage (the zones are not actually dynamic in the original sense, but bind will add additional entries - the signatures, so they kind-of are).
• Use the "slaves" directory for slave zones.
• The named_write_master_zones selinux boolean does not fix this problem.
• Changing permissions or ownership is dangerous, and not a permanent fix.
On a side note, I am not sure since when Google has had the policy of not caching DNS records whose DNSSEC signatures are expired. It is, however, a step in the right direction by Google. Most DNS servers do not care about DNSSEC at all. Google chose to be conservative when it comes to DNSSEC-enabled zones, treating expired DNSSEC-signatures strictly. This is a strong statement in a world where security is only "top priority" in press releases, not in practice for most companies big and small.
|
2022-01-23 13:05:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44400691986083984, "perplexity": 3758.498394265277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00692.warc.gz"}
|
https://datawookie.netlify.com/blog/2013/10/plotting-times-of-discrete-events/
|
Plotting Times of Discrete Events
2013-10-19 Andrew B. Collier
I recently enjoyed reading O’Hara, R. B., & Kotze, D. J. (2010). Do not log-transform count data. Methods in Ecology and Evolution, 1(2), 118–122. doi:10.1111/j.2041-210X.2010.00021.x.
The article prompted me to think about processes involving discrete events and how these might be presented graphically. I am not talking about counts (which are well represented by a histogram) but the individual events themselves. The problems here being that
1. the data are essentially one dimensional (just a sequence of times at which events occurred) and
2. there may be a great number of events and they can be distributed over a considerable period of time.
Plotting the events as a series of points along a linear axis would therefore make a rather boring plot, possibly with a rather extreme aspect ratio. There had to be a better way! What about wrapping that axis up into an Archimedes’ spiral? Sounds reasonable. Let’s take a look.
First Iteration
Here time runs along the spiral and points indicate the times at which events occurred. In this case I have 21 events occurring at uniform intervals. Although it looks okay, there is one major flaw: the angular separation of the points is uniform but this is not consistent with the idea of a spiral axis. The points should be distributed uniformly in terms of arc length along the spiral!
Revision: Spiral Arc Length
I needed to calculate the arc length along the spiral. Since I was not concerned with the absolute length, I neglected the spiral’s pitch, giving a function which depended only on angle.
spiral.length <- function(phi) {
  phi * sqrt(1 + phi**2) + log(phi + sqrt(1 + phi**2))
}
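For reference, the underlying closed form is the standard arc length of the Archimedean spiral $r = a\varphi$, with the overall factor $a/2$ dropped since only relative lengths matter here:

$$s(\varphi) = \frac{a}{2}\left[\varphi\sqrt{1+\varphi^{2}} + \ln\!\left(\varphi+\sqrt{1+\varphi^{2}}\right)\right]$$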
Then I could interpolate to find the correct location of the events.
Now the events, which are distributed uniformly in time, appear at uniform intervals along the spiral axis. Mission accomplished.
Here is the code to generate the spiral plot:
library(ggplot2)  # required for the plotting below

spiral.plot <- function(t, nturn = 5, colour = "black") {
  npoint = nturn * 720
  #
  curve = data.frame(phi = (0:npoint) / npoint * 2 * pi * nturn,
                     r = (0:npoint) / npoint)
  curve = transform(curve,
                    arclen = spiral.length(phi),
                    x = r * cos(phi),
                    y = r * sin(phi))
  #
  points = data.frame(arclen = t * max(curve$arclen) / max(t))
  points = within(points, {
    phi = approx(curve$arclen, curve$phi, arclen, rule = 2)$y
    r = approx(curve$arclen, curve$r, arclen, rule = 2)$y
    x = r * cos(phi)
    y = r * sin(phi)
  })
  #
  ggplot(curve, aes(x = x, y = y)) +
    geom_path(colour = "grey") +
    geom_point(data = points, aes(x = x, y = y), size = 3, colour = colour) +
    coord_fixed(ratio = 1) +
    theme(axis.text = element_blank(),
          axis.ticks = element_blank(),
          axis.title = element_blank(),
          panel.background = element_blank(),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank())
}

It is unfortunate that I had to transform the data to Cartesian coordinates in order to plot it. Although ggplot2 does have the capability to generate polar plots, it does not allow polar angles exceeding a single revolution. If anybody has other ideas on how to deal with this more elegantly, I would be very happy to hear from them. The first enhancement I would apply to this plot would be to find a way of putting tick marks along the spiral. Again, any input would be appreciated.

Practical Application

What about applying it to a more realistic scenario? If we simulate a radioactive decay process using the exponential distribution to yield a series of decay intervals, then these intervals can be accumulated to find the decay times.

> Bq = 5
>
> delay = rexp(2000, Bq)
>
> decay = data.frame(delay, time = cumsum(delay))
> spiral.plot(decay$time, 20)
As discussed by O’Hara and Kotze, the distribution of events in clumps of varying sizes separated by intervals without events is readily apparent.
|
2019-02-22 04:01:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5208679437637329, "perplexity": 1630.589486401546}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513222.88/warc/CC-MAIN-20190222033812-20190222055812-00046.warc.gz"}
|
https://www.semanticscholar.org/topic/Plug-in-principle/8168693
|
# Plug-in principle
In statistics, the plug-in principle is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample. (Description adapted from Wikipedia.)
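As a tiny concrete illustration (my own toy example, not from this page): the plug-in estimate of a functional T(F) is T evaluated at the empirical distribution of the sample.

import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=500)

# Plug-in estimates: evaluate the functional at the empirical distribution.
# For the standard deviation this means dividing by n (no n-1 correction,
# unlike the usual unbiased estimator np.std(sample, ddof=1)).
plug_in_sd = np.sqrt(np.mean((sample - sample.mean()) ** 2))
plug_in_median = np.median(sample)  # empirical median estimates the median of F
print(plug_in_sd, plug_in_median)   # population values: 2.0 and 2*ln(2) = 1.386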
## Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
• 2018 (Corpus ID: 88523055): We consider the problem of estimating smooth integrated functionals of a monotone nonincreasing density $f$ on $[0,\infty)$ using…
• 2017 (Corpus ID: 88514800): This paper studies non-separable models with a continuous treatment when the dimension of the control variables is high and…
• 2017 (International Conference on Information and…, Corpus ID: 39946826): The topical information security problem of development of statistical tests for hypotheses on the discrete uniform distribution…
• 2016 (Corpus ID: 55746522): The R package quantreg.nonpar implements nonparametric quantile regression methods to estimate and make inference on partially…
• 2016 (Entropy, Corpus ID: 10589966): Several reproducibility probability (RP)-estimators for the binomial, sign, Wilcoxon signed rank and Kendall tests are studied…
• 2013 (Corpus ID: 12651401): A new Bayesian multi-chain Markov Switching GARCH model for dynamic hedging in energy futures markets is developed by…
• Review, 2011 (Corpus ID: 62763717): Psychologists estimate the precision of their statistics both to conduct hypothesis tests and to construct confidence intervals…
• 2009 (Corpus ID: 46415696): We propose a new approach to conditional quantile function estimation that combines both parametric and nonparametric techniques…
• 2005: We consider the Gaussian ARFIMA(j,d,l) model, with spectral density and an unknown mean. For this class of models, the n−1…
|
2021-06-15 07:54:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4849942922592163, "perplexity": 4614.177085773095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00569.warc.gz"}
|
https://chemistry.stackexchange.com/questions/14711/when-using-gaussian-to-calculate-nmr-whats-the-default-solvent-and-frequency
|
# When using Gaussian to calculate NMR, what's the default solvent and frequency?
If I choose GIAO HF/STO-3G, is the default solvent $\ce{CD3Cl}$? Is the default frequency for $\ce{^1H-NMR}$ 500 MHz? What about the frequency for $\ce{^13C-NMR}$?
• $\ce{CD3Cl}$ is not a solvent (or even a liquid) at ambient temperature and pressure. I have never heard of it being used in NMR. I think you meant deuterochloroform ($\ce{CDCl3}$). – Jan Dec 25 '15 at 22:36
• Much faster and in most cases reasonably accurate method is empirical prediction based on group contributions, see e.g. nmrdb.org Unless you are after some geometry dependent stuff, like Karplus parameters, you should first try the empirical methods. – ssavec May 12 '16 at 6:46
While Greg's answer describes sufficiently what the program does, I would like to extend a little on his remark. (This should have been a comment, but it was too long.)
The HF/STO-3G level of theory is about as minimal as an ab initio approach gets. It is unreliable in many ways, but it is fast. However, even geometries of well-known and/or frequently calculated molecules may come out wrong. In short, it fails more often than it is correct. The problems start with HF, since it does not describe correlation sufficiently. (It only describes Pauli correlation, but this is exact.) The next problem is the basis set: it substantially lacks basis functions. It is called the minimal basis for a reason. Using this setup for NMR calculations is worse than guessing.
The first thing to do would be to increase the basis set. Maybe move away from Pople bases altogether. A few basis set families come to mind: Dunning's augmented family, aug-cc-pVDZ and larger, Ahlrichs and Weigend's def2 family, def2-SVP and larger, and many others, see EMSL.
Calculating NMR tensors is always an accuracy versus time tradeoff, and it might be complicated to find the best (in the sense of most efficient) setup. The best way is to start a geometry optimisation at quite a low level. For reasonably large molecules a semi-empirical model like pm6 can be used to scan the configurational space.
Increase the level of theory to pure DFT. You can use density fitting here to speed up your calculation: e.g. #P BP86/Def2SVP/W06 DenFit.
Validate your geometries with a higher basis set, e.g. #P BP86/Def2TZVPP/W06 DenFit and different functionals, e.g. TPSS, PBE1PBE, M06, etc.
Validate your geometries with the dispersion correction of Grimme, e.g. BP86 EmpiricalDispersion=GD3BJ, PBE1PBE EmpiricalDispersion=GD3BJ, B2PLYPD3, etc.
Apply solvent corrections for your system, e.g. SCRF=(Solvent=Chloroform). Make sure to run single point calculations first and try optimisations later.
If you have fully converged geometries at DFT level, you might want to check basic properties with ab initio techniques. Most appropriate would be perturbation theory, e.g. MP2.
After you have done all the above you can go ahead and calculate NMR tensors. This may be combined with solvent approximations. It requires a very well converged structure, a local minimum on the potential energy hypersurface. (Check that with a frequency calculation, e.g. FREQ.) You can set it up like this sample input:
%oldchk=filenameOfPreviousOptimization.chk
%chk=newCheckpointFile.chk
#P MP2/def2TZVPP
SCRF(solvent=chloroform)
NMR(GIAO)
Be advised that computing NMR data with only one method can lead to many misinterpretations. It is very important to verify your structures and tensors with a variety of methods. If you cannot afford MP2 with a large basis, then you should at least check 4 or 5 DFT methods. It is also important to know that including solvent effects is still something where the theory is not exact. If you have a system that changes a lot with the solvent, then you should consider explicit treatment. Then you might want to dive into molecular modelling, molecular dynamics, MM/QM, ONIOM, and so on.
• Very nice, detailed answer. – Greg Jul 29 '14 at 15:45
• Thanks for your information. I've tried DFT and it runs so slow that it seems to be stuck. – OhLook Jul 31 '14 at 15:59
• @Ath How many atoms are we talking about? What kind of computer are you running these calculations on? Depending on your system these calculations can be quite demanding and may well take more than a day. You can check the logfile to see what your calculation is doing. – Martin - マーチン Jul 31 '14 at 16:43
• @Ath Linux is just a name for a range of operating systems. For the scf cycle with about 200 atoms, I would expect about 4-8 hours on a single processor (maybe more). Geometry optimizations should take around a week or two. That depends very much on your guess. It really also depends on the computer you are using, memory, number of processors, speed, ... When talking 200 atoms, one should have the right equipment... – Martin - マーチン Aug 2 '14 at 4:34
• @Ath you should buy something that has at least 8 processors with at least 8 gb of ram for each processor. Several terabytes of disk space is advisable. When it comes to computational chemistry, usually more means better. You should check benchmark tests or ask another question on this site. As you have a demanding system you should go big. Access to a supercomputer is also advisable. As an alternative, look for a cooperation with a computational group. – Martin - マーチン Aug 2 '14 at 17:52
Performing calculations with the "NMR" keyword in Gaussian gives the magnetic shielding tensor in ppm and the spin-spin coupling in Hz. These numbers are independent of instrumental parameters like the frequency of the H-NMR. It doesn't contain any correction for any kind of solvent effect.
One remark: HF/STO-3G is a terrible, terrible low level of theory for ANY kind of calculations.
• Thank you, but I think the solvent does influence the result because in different solvent, the solute have different structures. – OhLook Jul 29 '14 at 1:54
• It may be so, you know your system, you have to work that out. The actual calculation does not contain any solvent correction as far as I know (and why would). – Greg Jul 29 '14 at 3:06
If I choose GIAO HF/STO-3G, is the default solvent $\ce{CD3Cl}$?
No, the default is gas phase as usual. But you can choose whatever solvent is programmed through the scrf=(solvent=...) command. The available solvents are listed at the bottom of the related Gaussian keyword site.
Is the default frequency for $^1$H−NMR $500~\mathrm{MHz}$? What about the frequency for $^{13}$C−NMR?
There is no default frequency as there is no frequency at all. The program calculates the absolute nuclear shielding tensors, which you afterwards need to relate to some other substance's nuclear shielding tensor. As in experimental spectra one chooses TMS or the solvent signal as a reference, the same must be done here by calculating the shielding tensors for TMS with your chosen combination of method and basis set.
BUT while there certainly are default values that are chosen if one uses the "NMR" command, there is no such thing as a "default route" for NMR calculations. I mean, you can simply click "NMR" in GaussView, which will turn on NMR=GIAO calculations, but there is more to think of, as Martin already stated.
NMR Shieldings
I very much like the approach by Lodewyk, Siebert, Tantillo, Rablen and Bally called CHESHIRE - Chemical Shift Repository, because there you can find many recommended procedures for either G03 or G09.
What they did was calculate the shieldings for many known small substances with various DFT functionals, basis sets and solvents in many combinations, obtaining for each of those combinations a set of scaling factors via linear regression.
For me, I chose to optimize my structures with B3LYP/6-31+G(d,p) and afterwards calculate the NMR shieldings with mPW1PW91/6-311+G(2d,p)/PCM and nmr=giao. Then one simply has to write out the calculated shieldings and modify them by $$\delta=\frac{\sigma-\text{intercept}}{\text{slope}}$$ according to their manual with the appropriate scaling factors.
But as you mentioned that you have 200 atoms, you might consider a smaller approach. Good for you, besides their recommended scaling factors there is also a manual on how to calculate your own scaling factors and a vast table for many solvents and methods with their respective scaling factors.
Another really important point for such large systems is that they don't exist as one single configuration! What you have to do is to generate and optimize an enormous number of possible isomers, calculate the shieldings for all those isomers and Boltzmann-weight them using their energy relative to the isomer with the lowest energy. $$\delta_i = \sum_j w_j~\delta_{i,j} = \sum_j \frac{\exp\left(\frac{-E_j}{RT}\right)}{\sum_k \exp\left(\frac{-E_k}{RT}\right)}~\delta_{i,j}$$
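The two post-processing steps just described, linear scaling and Boltzmann weighting, are simple enough to sketch in a few lines of Python (all numbers below are placeholders of mine, not actual CHESHIRE scaling factors or computed energies):

import numpy as np

slope, intercept = -1.05, 31.87       # placeholder regression constants
sigma = np.array([24.5, 25.1, 23.9])  # isotropic shieldings (ppm) of one
                                      # proton in three conformers
E = np.array([0.0, 1.2, 2.5])         # relative energies in kcal/mol
RT = 0.593                            # kcal/mol at 298 K

delta = (sigma - intercept) / slope   # scaled shifts: delta = (sigma - intercept)/slope
w = np.exp(-E / RT)
w /= w.sum()                          # Boltzmann weights
print((w * delta).sum())              # conformer-averaged chemical shift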
$^1$H,$^1$H-Coupling Constants
As a $^1$H-NMR spectrum doesn't consist of single peaks but mostly of multiplets, there can also be the need to calculate the coupling constants. There have been a few approaches towards accurate J values, but one of the easiest is also stated on the CHESHIRE website.
1. Optimize the geometry with B3LYP/6-31G(d)
2. Run an NMR single-point calculation with the following route section in GAUSSIAN
At the end of the molecule specification (separated by a blank line) read in: atoms=H
A sample input file for chloroethylene could look like this:
#n B3LYP/6-31G(d,p) nmr=(fconly,readatoms) iop(3/10=1100000)
Chloroethylene CS
0,1
C,0,0.,0.,0.
C,0,0.,0.,1.32727906
Cl,0,1.462671003,0.,-0.9634208566
H,0,-0.8964932057,0.,-0.6096342696
H,0,-0.9458848703,0.,1.8609603221
H,0,0.9144771425,0.,1.9110938644
atoms=H
3. From the resulting log file, extract the desired Fermi contact J values, and scale them by a factor of 0.9117
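In practice step 3 is a one-liner; a minimal sketch (the raw values here are invented for illustration, only the 0.9117 factor comes from the recipe above):

raw_J = [2.95, 7.21, 12.40]             # Fermi-contact J values from the log (Hz)
scaled_J = [0.9117 * j for j in raw_J]  # empirically scaled coupling constants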
Example: Nitrobenzene
According to SDBS, AIST the $^1$H resp. $^{13}$C-NMR spectra have the following peaks: $$\begin{array}{ccccccc} \hline \text{Nuc.} & \text{Calc.} & \text{Exp.} & & \text{Nuc.} & \text{Calc.} & \text{Exp.} \\ & \text{ppm} & \text{ppm} & & & \text{ppm} & \text{ppm} \\ \hline & & & & \ce{C_{N}} & 146.47 & 148.30 \\ \ce{H_{A,~A'}} & 8.18 & 8.25 & & \ce{C_{A,~A'}} & 123.25 & 123.46 \\ \ce{H_{B,~B'}} & 7.37 & 7.56 & & \ce{C_{B,~B'}} & 127.75 & 129.43 \\ \ce{H_{C}} & 7.59 & 7.71 & & \ce{C_{C}} & 135.03 & 134.71 \\ \hline & \text{RMSD} & 0.23 & & & \text{RMSD} & 2.52 \\ \hline \end{array}$$ The differences are not very big. As the carbon nuclei span a much wider shielding range, the average error is considerably smaller when compared to the RMSD of the proton shifts. To get an image out of these pure numbers, the overlay of the calculated versus the experimental $^1$H-NMR spectrum can be seen below. It fits quite well, even if it could be better. Maybe the WP04 functional could have done a better job.
• Just curious, how do you translate the corrected J values to plotting them as the above vertically stacked spectra? Did you use a specific software? – user24104 Dec 25 '15 at 2:22
• Do these "solvents" make sense? Calculating the interaction of solvent and specimen is not as "straightforward" than deriving chemical shifts and couplings from the vacuum electronic stucture that Gaussian has solved. Quite another task, that. – Karl Jan 17 '17 at 19:02
• The sample input for chloroethane begins with \#n . Is the first backslash a mistake? – Kurzd May 30 '17 at 14:48
• Took some time, but I used a Mathematica-notebook. Don’t know where it is now. ^^" ... @Kurzd Yes ... thanks for highlighting. – pH13 - Yet another Philipp May 30 '17 at 14:52
|
2020-10-22 15:31:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.645096480846405, "perplexity": 1420.6228451453078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879673.14/warc/CC-MAIN-20201022141106-20201022171106-00131.warc.gz"}
|
https://pbelmans.ncag.info/blog/2022/08/19/snapshot/
|
The Snapshots of modern mathematics from Oberwolfach is a series of outreach articles. Last month I attended a workshop there and I was asked to write something for it. Long story short, if you are impatient, you can read the result all in one go. But in the spirit of magazine science fiction I will also be posting it in installments on my blog.
Mind you, this is still the preprint version technically speaking.
## The periodic tables of algebraic geometry
To get a grip on the complexity of the world around us and the objects—such as animals, or chemical elements, or stars—appearing in it, we want to classify these objects. This allows us to describe the relationships, similarities, and differences between things we might be interested in, and thus further understand our world.
An early, and somewhat cruel, effort to understand a class of living creatures led to lepidopterology: the study of butterflies and moths, most famously performed by sticking needles through them and displaying them in nice wooden cases, cf. Figure 1. From the 17th century onwards this was an important feature of humanity’s interest in biology, and it serves as a prime example of classification in biology. Another important example is Darwin’s description and classification of the beaks of the finches on the Galápagos islands, which led him to formulate the theory of evolution.
In this snapshot I want to introduce you to the idea that classification is an essential aspect of mathematics, just like it is for biology (and other sciences). The mathematical objects we will discuss are truly as pretty as the butterflies from Figure 1. And whilst in some cases it takes a bit of training as a mathematician to fully grasp their beauty, at least no living creatures need to be harmed to study them.
In §1 we will recall the periodic table of elements: an essential tool in modern chemistry, and the result of a lengthy classification effort. The organisation of elements like hydrogen, carbon, and uranium is similar to how mathematical objects are catalogued and have their properties described in a systematic way. Luckily, the study of these mathematical objects requires less interaction with dangerous chemicals.
An important feature is that these classification efforts are an ongoing process: when mathematicians complete one classification, they will move on to the next and more challenging one. That is why we will discuss periodic tables in algebraic geometry, going from the 19th to the 21st century, and from completely known settings to cutting-edge research.
### 1. The periodic table of elements
Every time one enters a chemistry classroom one is presented with a large poster, listing all the 118 known chemical elements together with their properties. This is the famous periodic table, and a very basic version is given in Figure 2. It lists elements like hydrogen, helium, and nitrogen in a specific shape which was essential for the development of chemistry in the 19th and 20th century, and continues to be used to explore and explain chemical elements.
The name periodic table refers to an experimentally observed periodicity in the chemical behavior of elements: certain elements tend to exhibit similar behavior. For example the atomic radius is periodic: it decreases from left to right along a period, and increases when going down a column. In Figure 2, the inert noble gases are listed on the very right in light green, with the halogens as main building blocks for salts next to them, and the alkali metals in the first column all being soft and reactive metals. These observations are what chemists tried to formalise into a system. In 1869 Mendeleev catalogued the then-known elements in terms of atomic mass, obtaining the periodic table we now know.
Originally there were gaps in the table: elements that were predicted to exist, but which had not yet been discovered. The periodicity of the periodic table also predicted some of the properties that these elements were required to have. For example, Mendeleev predicted the existence of an element with atomic mass approximately 72.5, a high melting point, and a gray color. This element was subsequently found in 1886 and called germanium, filling the gap which existed at position 32.
#### Invariants of elements
The periodic table in Figure 2 is a simplification of the periodic table as you usually see it. For space reasons we only list the chemical symbol and its atomic number. But usually a periodic table contains lots more data, such as the atomic weight, the melting and boiling point, the electron configuration, etc. There exists a beautiful interactive version too.
These are all examples of invariants of the objects being classified: properties of the chemical elements that do not change over time, and that do not depend on who measures them. By measuring invariants we can identify which chemical element we are looking at, and distinguish different elements. This is an important idea in mathematics too: mathematicians love to study invariants of objects, and then use them to distinguish between different objects.
#### Stars and the Hertzsprung–Russell diagram
In the first paragraph we also mentioned that one can try to classify the stars in the sky. To better understand an important aspect of classifications in mathematics we need to discuss how classifying stars is different from classifying chemical elements.
Astronomers observed that not all stars are equal: some are brighter than others (even when accounting for the distance), and some are hotter than others. Back in the early 1910s Hertzsprung and Russell made a plot of those two properties of stars, and they noticed that some types of stars are impossible. There are no super-bright cold stars, nor are there very faint hot stars. And there are many stars like our Sun: they all have roughly the same brightness and the same temperature. The interested reader is invited to read up more on the Hertzsprung–Russell diagram.
The main takeaway is that it is possible to vary the parameters of a star, subject to certain rules imposed by physics. This is a feature not present in the periodic table, but something similar will happen in mathematics, so one better keep this behavior in mind.
### 2. Classifications in algebraic geometry
We now turn to classifications in mathematics. One famous instance of such a result in mathematics is the Classification of Finite Simple Groups (CFSG). A group describes the symmetries of an object, and group theory is a fundamental subject in modern mathematics. Just like molecules are built using atoms, Jordan showed in 1870 that every finite group is built using simple groups.
The first (interesting) simple groups were already discovered by Galois in 1831, when he was studying solutions of polynomials of degree $\geq 5$. The first example contains 60 elements. The last group to be discovered (in 1981) was the Monster group, and it has approximately $8\cdot 10^{53}$ elements. Through a large effort of many mathematicians the CFSG was obtained, stating that all simple groups had been found in those 150 years. For more on this see, e.g., Searching for the Monster in the Trees and Symmetry and characters of finite groups in this very Snapshots series.
We will instead focus on classifications in algebraic geometry, because the author is an algebraic geometer and not a group theorist, and because the story of classifications in algebraic geometry is less well-known than the CFSG. Algebraic geometry is the study of shapes described by polynomial equations. The shapes we will be interested in are smooth projective varieties, defined over the complex numbers. Let us unpack what this means.
#### Smooth projective varieties
First of all, working over the complex numbers is a necessity to make things tractable, but it also makes it harder to make drawings. Usually we visualise the complex numbers as the complex plane, with one real axis and one imaginary axis. But from the point-of-view of an algebraic geometer the complex numbers are really a one-dimensional object! That is why an algebraic geometer will often draw an impression of an object when considered over the real numbers. More concretely, Figure 3a is what an algebraic geometer would draw when drawing a curve, whilst Figure 4b is what a complex geometer would think of, but they really are manifestations of the same object.
Now, what does it mean to describe a shape using polynomials? If $f\in\mathbb{C}[x]$ is a polynomial, so $f(x)=a_dx^d+a_{d-1}x^{d-1}+\dots+a_1x+a_0$, we define the variety associated to it as $\mathbb{V}(f)=\{\alpha\in\mathbb{C}\mid f(\alpha)=0\},$ the set of zeroes of $f$ in the complex plane. As long as $f$ is not the zero polynomial this set is finite, consisting of at most $d$ points; over the complex numbers it consists of exactly $d$ points when counted with multiplicities. In general we will consider a finite collection of polynomials in $n$ variables $f_1,\ldots,f_r\in\mathbb{C}[x_1,\ldots,x_n]$, and define $\mathbb{V}(f_1,\ldots,f_r)=\{(\alpha_1,\ldots,\alpha_n)\in\mathbb{C}^n\mid \forall i=1,\ldots,r\colon f_i(\alpha_1,\ldots,\alpha_n)=0\},$ the set of points (inside the affine space $\mathbb{C}^n$), or zero locus, satisfying all polynomial equations simultaneously. These subsets are called affine varieties.
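For readers who want to experiment, zero loci like these can be computed with a computer algebra system; a tiny SymPy sketch:

```python
from sympy import symbols, solve

x, y = symbols("x y")

# V(x^2 + 1): a degree-2 polynomial has exactly 2 complex roots.
print(solve(x**2 + 1, x))                                  # [-I, I]

# V(x^2 + y^2 - 1, x - y): intersect a conic with a line in C^2.
print(solve([x**2 + y**2 - 1, x - y], [x, y], dict=True))
```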
Instead of affine varieties we will be interested in projective varieties. In the affine plane two lines can be parallel, but this causes annoying situations in which we have to say that two distinct lines intersect in precisely one point unless they are parallel. That is why we extend our affine geometry: to make statements like the one on intersections of distinct lines more uniform, and get rid of the exceptions. In the projective world we have that our initial two parallel lines now intersect in a point at infinity, so any two distinct lines now always intersect.
For this we need to replace the affine space in which affine varieties live, by projective space $\mathbb{P}^n(\mathbb{C})$. It is defined by considering the set $\mathbb{C}^{n+1}\setminus\{(0,\ldots,0)\}$ of all points except the origin in the affine space one dimension up, up to the equivalence relation which says that $(\alpha_0,\ldots,\alpha_n)\sim(\beta_0,\ldots,\beta_n)$ if there exists some $\lambda\in\mathbb{C}\setminus\{0\}$ such that $\alpha_i=\lambda\beta_i$ for all $i=0,\ldots,n$.
To work with projective varieties using polynomials, we will only consider homogeneous polynomials: a polynomial in which every term has the same degree. For example $x^2+y^2+z^2$ is a homogeneous polynomial of degree 2, in 3 variables. The projective variety associated to a collection of homogeneous polynomials $f_1,\ldots,f_r\in\mathbb{C}[x_0,\ldots,x_n]$ is $\mathbb{V}(f_1,\ldots,f_r)=\{(\alpha_0,\ldots,\alpha_n)\in\mathbb{P}^n(\mathbb{C})\mid\forall i=1,\ldots,r\colon f_i(\alpha_0,\ldots,\alpha_n)=0\}.$ This is well-defined because we restricted our attention to homogeneous polynomials, so that asking whether a polynomial is (non-)zero is independent of the scaling in the equivalence relation. The example $x^2+y^2+z^2$ thus defines a curve in $\mathbb{P}^2(\mathbb{C})$—a conic—corresponding to Figure 4a.
The final ingredient in order to describe the objects we are interested in is smoothness. This is best explained through an example: consider the following degree 3 polynomials \begin{aligned} f&=-y^2+x^3-2x \\ g&=-y^2+x^3+x^2 \end{aligned} describing affine curves living in $\mathbb{C}^2$. If we draw these curves inside $\mathbb{R}^2\subset\mathbb{C}^2$ we get the pictures as in Figures 3a and 3b. We immediately see that on the right there is something funny happening at the origin: there is a singularity. For more on singularities we refer to another Snapshot: Swallowtail on the shore.
#### Classifying smooth projective varieties?
In what follows next we discuss examples of classifications of smooth projective varieties. This will illustrate how the life of an algebraic geometer can be very similar to that of someone sticking needles through unsuspecting butterflies, or that of an experimental chemist inhaling noxious fumes in order to isolate an unknown chemical element.
Before we embark on our journey we need to point out that to an algebraic geometer classification can mean different things. Certainly, we are not just classifying polynomials; rather, we are interested in classifying varieties independently of their realisation. This gives rise to the classification we will mostly be talking about: that of varieties up to isomorphism, i.e. independently of their realisation inside some projective space.
It turns out that already in dimension 2 this becomes impossible, so that we will only try to classify certain well-chosen objects. There is an entire branch of algebraic geometry, called birational geometry, devoted to understanding the precise relationships between smooth projective varieties which are different but nevertheless almost the same: we say that they are birational, but we will not discuss this further.
|
2023-01-30 08:57:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6519407629966736, "perplexity": 382.5717352567631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00269.warc.gz"}
|
http://openstudy.com/updates/4da5f3efd6938b0b269ea24d
|
## anonymous 5 years ago Express as a Monomial in terms of I ??? HOW TO DO PLEASE :O
1. anonymous
do you mean i? i = $\sqrt{-1}$
2. anonymous
i think so. it's:$8\sqrt{-36}-4\sqrt{-49}$
3. anonymous
ok, so that's $8\sqrt{36}\sqrt{-1} - 4\sqrt{49}\sqrt{-1}=(8\times6\times i) - (4\times7\times i)=48i -35i=13i$
4. anonymous
oops should be 48i - 28i = 20i
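For anyone wanting to verify the corrected value, a one-liner with SymPy:

```python
from sympy import sqrt

print(8*sqrt(-36) - 4*sqrt(-49))   # 20*I
```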
|
2016-10-27 15:06:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6523476243019104, "perplexity": 11782.118417968155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721347.98/warc/CC-MAIN-20161020183841-00251-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://jmlr.org/papers/v10/yehezkel09a.html
|
## Bayesian Network Structure Learning by Recursive Autonomy Identification
Raanan Yehezkel, Boaz Lerner; 10(53):1527−1570, 2009.
### Abstract
We propose the recursive autonomy identification (RAI) algorithm for constraint-based (CB) Bayesian network structure learning. The RAI algorithm learns the structure by sequential application of conditional independence (CI) tests, edge direction and structure decomposition into autonomous sub-structures. The sequence of operations is performed recursively for each autonomous sub-structure while simultaneously increasing the order of the CI test. While other CB algorithms d-separate structures and then direct the resulting undirected graph, the RAI algorithm combines the two processes from the outset and throughout the procedure. By this means, and due to structure decomposition, learning a structure using RAI requires a smaller number of high-order CI tests. This reduces the complexity and run-time of the algorithm and increases the accuracy by diminishing the curse of dimensionality. When the RAI algorithm learned structures from databases representing synthetic problems, known networks and natural problems, it demonstrated superiority with respect to computational complexity, run-time, structural correctness and classification accuracy over the PC, Three Phase Dependency Analysis, Optimal Reinsertion, greedy search, Greedy Equivalence Search, Sparse Candidate, and Max-Min Hill-Climbing algorithms.
|
2022-01-19 11:56:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334112167358398, "perplexity": 4253.138339649266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00436.warc.gz"}
|
http://www.jstor.org/stable/25053073
|
# Bayesian Analysis of Mortality Rates for U.S. Health Service Areas
B. Nandram, J. Sedransk and L. Pickle
Sankhyā: The Indian Journal of Statistics, Series B (1960-2002)
Vol. 61, No. 1, Sample Surveys (Apr., 1999), pp. 145-165
Stable URL: http://www.jstor.org/stable/25053073
Page Count: 21
## Abstract
This paper summarizes our research on alternative models for estimating age specific and age adjusted mortality rates for one of the disease categories, all cancer for white males, presented in the Atlas of United States Mortality, published in 1996. We use Bayesian methods, applied to four different models. Each assumes that the number of deaths, $d_{ij}$, in health service area i, age class j has a Poisson distribution with mean $n_{ij}\lambda _{ij}$ where $n_{ij}$ is the population at risk. The alternative specifications differ in their assumptions about the variation in $\ln \lambda_{ij}$ over health service areas and age classes. We use expected predictive deviances, posterior predictive p-values and a cross-validation exercise to evaluate the concordance between the models and the observed data. The models captured both the small area and regional effects sufficiently well that no remaining spatial correlation of the residuals was detectable, thus simplifying the estimation. We summarize by presenting point estimates, measures of variation and maps.
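A minimal NumPy sketch of the sampling model described in the abstract; all numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# d_ij ~ Poisson(n_ij * lambda_ij): deaths in health service area i, age class j.
n = rng.integers(10_000, 100_000, size=(5, 4))    # populations at risk (hypothetical)
lam = rng.uniform(1e-4, 1e-3, size=(5, 4))        # underlying mortality rates (hypothetical)
d = rng.poisson(n * lam)

# Crude (maximum-likelihood) rate estimates, before any Bayesian smoothing:
print(d / n)
```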
|
2017-01-20 11:15:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8132097125053406, "perplexity": 6833.1650402326295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00448-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://dynamomd.org/index.php/tutorial4
|
This tutorial uses an example study of square-well particles to introduce several topics, and amounts to a full study of the square-well fluid system.
For the purpose of the tutorial, we'll want to simulate a fluid of square-well molecules. These are essentially hard-sphere particles with a short-range attractive well:
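In the standard notation (hard-core diameter $\sigma$, well-width factor $\lambda$ as set by the --f1 option below, and well depth $\varepsilon$ as set by --f2), the square-well pair potential is $$u(r)=\begin{cases}\infty & r<\sigma\\ -\varepsilon & \sigma\le r<\lambda\,\sigma\\ 0 & r\ge\lambda\,\sigma\end{cases}$$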
We will simulate this fluid at a reduced temperature of $k_B\,T=2$ and a reduced density of $\rho=0.1$. If you want to learn more about the square-well potential, its parameters, and how it corresponds to realistic intermolecular interactions please see the reference entry. If you're uncertain of the units please see the FAQ on units, but everything here is presented in reduced units.
We will again use periodic boundary conditions to allow us to simulate an infinite fluid without the effects of walls or other containers.
## The whole tutorial in brief
We're going to create a square-well fluid with $N=4000$ particles at a low density. We'll try to control the temperature using velocity rescaling, then using thermostats. Finally, we'll collect some measurements from the system. The commands we will use are
#Create the low-density square-well system
dynamod -m 1 -C 10 -d 0.1 --i1 0 -r 2.0 -o config.start.xml
#Run the system briefly to check the temperature
dynarun config.start.xml -c 1000000
#Add a thermostat, to allow us to control the temperature
dynamod config.out.xml.bz2 -T 2.0
#Run the system using the thermostat to set the temperature and let it equilibrate
dynarun config.out.xml.bz2 -c 1000000
#Run it some more to equilibrate it further
dynarun config.out.xml.bz2 -c 1000000
#Disable the thermostat and remove any momentum, so that we might collect accurate dynamic information
dynamod -T 0 -Z config.out.xml.bz2
#Run the simulation to collect data on the system
dynarun config.out.xml.bz2 -c 1000000 -o config.final.xml -L IntEnergyHist -L MSD
#Use dynatransport to analyse the transport coefficients
dynatransport output.xml -s 2 -c 10 -v
We'll now look in detail at each of these commands.
# Setting up the configuration file
When you first start using DynamO, it is not really practical to try to create a configuration file from scratch. The dynamod tool helps by providing many pre-designed systems to start your simulations from.
Following the same steps in tutorial 2, we again query the available options of the dynamod command using the --help option:
dynamod --help
We then look for the most useful mode and we see that square-well fluids can be made using dynamod's packing mode 1. We can get some more information on this mode by adding the --help option:
dynamod -m 1 --help
A detailed description of the mode's options will be output on screen:
...
Mode 1: Mono/Multi-component square wells
Options
-C [ --NCells ] arg (=7) Set the default number of lattice unit-cells in each direction.
-x [ --xcell ] arg Number of unit-cells in the x dimension.
-y [ --ycell ] arg Number of unit-cells in the y dimension.
-z [ --zcell ] arg Number of unit-cells in the z dimension.
--rectangular-box Set the simulation box to be rectangular so that the x,y,z cells also specify the simulation aspect ratio.
-d [ --density ] arg (=0.5) System density.
--i1 arg (=FCC) Lattice type (0=FCC, 1=BCC, 2=SC)
--f1 arg (=1.5) Well width factor (also known as lambda)
--f2 arg (=1) Well Depth (negative values create square shoulders)
--s1 arg (monocomponent) Instead of f1 and f2, you can specify a multicomponent system using this option. You need to pass the the parameters for each species as follows --s1 "diameter(d),lambda(l),mass(m),welldepth(e),molefrac(x):d,l,m,e,x[:...]"
...
Here you can see many of the same options available for hard-sphere systems, as seen in tutorial 2. The only additions are the well width (--f1) and depth (--f2) options and the option for a multicomponent system (--s1).
Let's start by making a monocomponent square-well system using the following command:
dynamod -m 1 -C 10 -d 0.1 --i1 0 -r 2.0 -o config.start.xml
The options passed here are discussed in detail in tutorial 2. The only differences are that the number of particles has been increased to 4000 (-C 10), we're creating square-well molecules (-m 1) instead of hard spheres, and the density is lower (-d 0.1). An example of the configuration file is available below (it is a large XML file, so your browser may take some time to display it).
As we haven't specified the well depth and well width, they have been left at their default values of 1 and 1.5 respectively. Next, we're going to look at thermostatting the system.
# Rescaling velocities
When creating the configuration, we initially set the temperature through the rescale option -r. This option works by rescaling all of the velocities of the particles so that the instantaneous temperature is $k_B\,T=2$ (or whatever is passed as an argument to the option). For a system without rotational degrees of freedom, the temperature is given by $k_B\,T = \frac{1}{3\,N}\sum_i^N m_i\,v_i^2$ so it is clear that by scaling the velocities we can set the temperature to whatever we wish. However, rescaling only holds the temperature fixed (i.e., acts as a thermostat) in "hard" systems such as the hard-sphere/parallel-cube/hard-lines systems. This is because these systems have no finite potential energy terms between the particles, so the temperature does not change with time (except if we perform work such as compression on the system).
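Conceptually the rescaling is simple; a NumPy sketch (an illustration only, not DynamO's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, kT_target = 4000, 1.0, 2.0

v = rng.normal(size=(N, 3))                 # particle velocities (reduced units)

kT = m * (v**2).sum() / (3 * N)             # instantaneous temperature
v *= np.sqrt(kT_target / kT)                # rescale so the temperature is exactly kT_target

print(m * (v**2).sum() / (3 * N))           # 2.0
```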
For square-well systems, we can set the temperature at the start of the simulation, but it will change over time due to interactions converting energy between kinetic and potential modes. We can see this if we run a simulation on the starting configuration:
dynarun config.start.xml -c 1000000
Please note, we didn't set an output configuration file name using -o so the default config.out.xml.bz2 is used. Taking a look at the output, we can see the temperature (and excess internal energy $U$) is fluctuating over time:
...
ETA 16s, Events 100k, t 7.08388, <MFT> 0.141678, T 2.47483, U -0.71225
ETA 14s, Events 200k, t 13.6032, <MFT> 0.136032, T 2.48533, U -0.728
ETA 12s, Events 300k, t 20.1072, <MFT> 0.134048, T 2.48133, U -0.722
ETA 11s, Events 400k, t 26.6342, <MFT> 0.133171, T 2.48267, U -0.724
...
You should note that if the temperature fluctuates higher, the internal energy fluctuates lower, as the total energy is constant. You can see this if you calculate the average energy per particle, $\left\langle E\right\rangle=U + 3\,k_B\,T/2$, and see that for this system it remains constant at the starting value of $\approx3$. This is one of the nice properties of event-driven molecular dynamics: energy is exactly conserved. Unfortunately, we still need some way of setting the temperature. We could rescale again to take some energy out of the system and lower it towards a temperature of $k_B\,T=2$, but this would need to be repeated by hand until the temperature converged. Instead, we can use a thermostat to automatically add/remove energy from the system to reach a specified temperature.
To add an Andersen thermostat, again use the dynamod tool:
dynamod config.out.xml.bz2 -T 2.0
Please note that this command loads the config.out.xml.bz2 file, adds an Andersen thermostat, and the result is saved into the default output file name, which is config.out.xml.bz2. This will overwrite the initial file, if you don't want to do this, specify a new file name with the -o option.
The dynamod command above will add an Andersen thermostat to the system with a target temperature of $k_B\,T=2$ (set by the -T argument). This thermostat will eventually bring the system to the specified temperature, even with changes in the configurational energy, by randomly reassigning particle velocities.
Note: If you wish to change the thermostat temperature at a later time, you can use the dynamod on the configuration again:
dynamod config.out.xml.bz2 -T 4.0
You can even use dynamod to remove the thermostat by setting a temperature of zero (-T 0). Alternatively, you can open up the configuration file in a text editor, and edit or delete the Andersen type System event by hand:
<System Type="Andersen" Name="Thermostat" MFT="2.0" Temperature="1.0" SetPoint="0.05" SetFrequency="100">
<IDRange Type="All"/>
</System>
With the thermostat added and the temperature set to $k_B\,T=2$, we can see what the result is on the temperature of the system. Again, running the system
dynarun config.out.xml.bz2 -c 1000000
And the output should look like this:
...
ETA 16s, Events 100k, t 6.28632, <MFT> 0.129188, T 2.15641, U -0.75675
ETA 15s, Events 200k, t 12.6762, <MFT> 0.130097, T 2.05169, U -0.771
ETA 13s, Events 300k, t 19.1881, <MFT> 0.13125, T 2.03105, U -0.7735
ETA 11s, Events 400k, t 25.678, <MFT> 0.131705, T 2.01297, U -0.75525
ETA 9s, Events 500k, t 32.179, <MFT> 0.13203, T 2.06379, U -0.7915
ETA 7s, Events 600k, t 38.6795, <MFT> 0.132246, T 2.02205, U -0.77625
ETA 6s, Events 700k, t 45.1681, <MFT> 0.132363, T 2.01704, U -0.7615
ETA 4s, Events 800k, t 51.6511, <MFT> 0.132437, T 2.04454, U -0.78925
ETA 2s, Events 900k, t 58.1523, <MFT> 0.132537, T 2.01887, U -0.79125
ETA 0s, Events 1000k, t 64.6884, <MFT> 0.132689, T 1.9653, U -0.7795
...
We can see that the temperature approaches the required temperature at the end. Looking at the instantaneous $T$ and $U$ values it appears to have reached steady state after around 200k events. The average mean free time (MFT) is still changing but this is due to it accumulating samples during the equilibration. We can confirm this by running the configuration for another $10^6$ events.
dynarun config.out.xml.bz2 -c 1000000
...
ETA 16s, Events 100k, t 6.50405, <MFT> 0.13339, T 2.00641, U -0.7845
ETA 15s, Events 200k, t 13.0134, <MFT> 0.13345, T 1.98747, U -0.78325
ETA 13s, Events 300k, t 19.5232, <MFT> 0.133469, T 1.9498, U -0.7825
ETA 11s, Events 400k, t 26.1082, <MFT> 0.133857, T 1.97389, U -0.815
ETA 9s, Events 500k, t 32.6387, <MFT> 0.133873, T 2.01406, U -0.76675
ETA 7s, Events 600k, t 39.2339, <MFT> 0.134103, T 2.02729, U -0.79425
...
Here it's easy to see that the mean free time is relatively stable as well. It is very difficult to conclusively prove that we're at steady state, but previous experience with this system tells us that we're now ready to collect some data.
# Collecting Data
At this point we have a system which has been equilibrated with a thermostat. We want to collect some information on the properties of the system, namely the internal energy histograms, diffusion coefficients, viscosity, and thermal conductivity.
To find out what output plugins are available and how to load them please see the output plugin documentation. Most of what we want to collect is contained in the Misc plugin which is loaded by default, but we'll need to add the IntEnergyHist and MSD plugins to collect the energy histograms and diffusion data.
Unfortunately there is a problem with thermostats while collecting data which characterises the dynamics of the system, e.g. the transport coefficients. The Andersen thermostat changes the motion of the system when it randomly re-assigns the particle velocities. Thus, if we measure the properties of the system, they will be those of the square-well fluid AND the thermostat, not the fluid alone. Also, if we take a look at the restrictions on using the thermal conductivity, we'll notice that it is restricted to NVE/microcanonical simulations (systems without a thermostat).
We're going to have to disable the thermostat during data collection and hope (and check) that the system fluctuates close to the target temperature. We can use dynamod to disable the thermostat:
dynamod -T 0 -Z config.out.xml.bz2
Please note that we also zeroed the total momentum again using the -Z option as the Andersen thermostat causes the total momentum to fluctuate around zero. Now we're ready to collect some data! We just run dynarun while enabling the output plugins we wish to use:
dynarun config.out.xml.bz2 -c 1000000 -o config.final.xml -L IntEnergyHist -L MSD
And we're now ready to process the results!
# Processing the results
## Thermodynamic properties
In the first instance, we can start processing the collected data in the same way tutorial 2 deals with processing collected data. Expanding the output file:
bunzip2 output.xml.bz2
We can then check the file to see how close the temperature is to $k_B\,T=2$ after we disabled the thermostat. We use the Temperature tag in the output file for this:
<Temperature Mean="1.9695813386316516" MeanSqr="3.8793330745797636" Current="1.9751905078351808" Min="1.942023841168514" Max="2.0076905078351808"/>
The average value has a deviation of $\approx2\%$ from the desired value, which can be expected with this system size. Larger systems (with longer equilibration times) will lower this value if needed as the fluctuations scale with $N^{-0.5}$. For publication results, I would recommend setting the temperature more accurately, perhaps using the calculated heat capacity to estimate the required energy to add or remove from the system to more accurately set the temperature.
It is interesting to note at this point that DynamO collects "exact" time averages wherever possible (see the FAQ on averages). By exact we mean that the mean values are not discretely sampled over the trajectory, but are true time averages integrated over the length of the simulation.
We can also get some sense of the scale of the temperature fluctuations by calculating the standard deviation. We can calculate this using the mean square value, e.g. $\sigma_T=\sqrt{\left\langle T^2\right\rangle - \left\langle T\right\rangle^2} \approx0.009079$ Again, this value is system size dependent. Interestingly, in NVE simulations this value is related to the heat capacity of the system; however, we will calculate this property through the configurational internal energy below.
Taking a look at the UConfigurational tag, we have:
<UConfigurational Mean="-3161.3449847792272" MeanSqr="9997069.4161528479" Current="-3194.9999999999995" Min="-3389.9999999999995" Max="-2995.9999999999995"/>
where this is the total energy the system has through interactions (if you want the specific configurational internal energy you will need to divide by $N$). We could again calculate the standard deviation, which is related to the residual heat capacity, $C_V^{ex.}$, by the following formula: $\frac{C_v^{ex.}}{k_B}=\frac{\left\langle U_{conf.}^2\right\rangle - \left\langle U_{conf.}\right\rangle^2}{k_B^2\,T^2}$ However, for convenience we can just use the ResidualHeatCapacity tag which calculates this for us:
<ResidualHeatCapacity Value="764.91663782252635"/>
Please note that, like the UConfigurational values, this value is extensive. We'll now take a look at processing collected data which is more complex, beginning with the internal energy histogram.
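These one-line calculations are easy to script; a Python sketch using the values quoted from output.xml above:

```python
import math

# Values copied from the Temperature and UConfigurational tags above.
T_mean, T_meansqr = 1.9695813386316516, 3.8793330745797636
U_mean, U_meansqr = -3161.3449847792272, 9997069.4161528479

sigma_T = math.sqrt(T_meansqr - T_mean**2)
print(sigma_T)                                  # ~0.00908

# Residual heat capacity (extensive; k_B = 1 in reduced units).
Cv_ex = (U_meansqr - U_mean**2) / T_mean**2
print(Cv_ex)                                    # ~764.9, matching ResidualHeatCapacity
```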
## Internal Energy Histogram
The internal energy histogram is extremely interesting as it allows us to begin to calculate key thermodynamic properties such as the density of states. This also allows us to use advanced techniques such as multicanonical simulations and histogram reweighting. We enabled the internal energy histogram plugin with the -L IntEnergyHist option to dynarun and its output is under the EnergyHist tag in the output file:
<EnergyHist BinWidth="1">
<HistogramWeighted TotalWeight="68.801254983242984" Dimension="1" BinWidth="1" AverageVal="-3161.3443994965492">
-3390 2.330168873287094e-06
-3389 4.9110740851148333e-06
-3388 3.4733255439184949e-06
...
</HistogramWeighted>
</EnergyHist>
For all the details on what the above attributes mean, please see the IntEnergyHist plugin documentation, but it is essentially a list of configurational internal energy values and the fraction of simulation time spent in each (again collected using exact time averaging and not periodic sampling). If we want to process or plot this data, we need to cut it out of the output.xml file. For a full description of how to handle XML files, please take a look at the reference. Here, we'll use the xmlstarlet tool to cut it out:
xmlstarlet sel -t -v '//EnergyHist/HistogramWeighted' output.xml > histogram.dat
Now we can plot the data and we should end up with a graph like the one on the right. It appears that this data is quite rough and longer simulations are needed to accurately obtain good averages, but this is a good initial estimate.
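For example, with matplotlib (assuming the two whitespace-separated columns extracted above):

```python
import numpy as np
import matplotlib.pyplot as plt

energy, weight = np.loadtxt("histogram.dat", unpack=True)

plt.plot(energy, weight)
plt.xlabel(r"$U_{conf.}$")
plt.ylabel("Fraction of simulation time")
plt.show()
```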
We've covered some basic properties and how to extract tabulated data, now we will take a look at the transport properties.
## Transport Properties
Transport properties, such as the viscosity, thermal conductivity, and diffusivity are difficult to measure in simulation. They require the use of correlators and need long simulation times to gain good averages. You also need to be careful of their definition, especially in multicomponent systems, and over what correlation times it is valid to collect data from. This is all documented in the reference entry for the thermal conductivity but there is no substitute for experience here. Please calculate known values from the literature to validate your understanding before attempting to measure new systems.
The easiest transport property to calculate is the self-diffusion coefficient, obtained from the MSD plugin:
<MSD>
<Species Name="Bulk" val="807.82538152569123" diffusionCoeff="1.9569056352302285"/>
</MSD>
Here, each Species and Topology will have a separate entry for the calculated diffusion coefficient. This value is just calculated from the total distance travelled over the simulation by each particle, so there isn't a significant amount of work to do in processing it. We only need to be confident that the simulations have been run for sufficient time to reach the long-time behavior.
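As a consistency check (assuming the plugin uses the standard three-dimensional Einstein relation, and taking the total simulation time $t\approx 68.8$ from the TotalWeight reported by the energy histogram above): $$D = \frac{\left\langle|\Delta\mathbf{r}|^2\right\rangle}{6\,t}\approx\frac{807.825}{6\times68.8}\approx 1.957,$$ in agreement with the reported diffusionCoeff.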
The other transport property data lies within several correlator tags. E.g.
<ThermalConductivity>
<Correlator>
0 0 0 0 0
0.016637884124075412 4135 0.00080420056234304738 0.00080194081779336385 0.00079775560698344759
0.033275768248150824 4134 0.0017904547411232278 0.0017381232563064169 0.0018186474468955217
...
</Correlator>
</ThermalConductivity>
For the thermal conductivity, the first column is the time, the second is the number of samples collected at that time, and the last three columns are the correlator values in the $x$, $y$, and $z$-directions. For more information please take a look at the reference entry for the thermal conductivity. As these are the Einstein correlators, the transport coefficients are the gradients of the correlation functions. If we cut the data out of output.xml using xmlstarlet or some other tool and plot each correlator we end up with the graph to the right.
Ideally, this plot should consist of points in a straight line which we can fit to extract the slope/thermal transport coefficient, $L_{\lambda\lambda}$. Unfortunately, at short times we have short-time effects from molecular processes which dominate the correlator. We are only interested in the behaviour at long times, where a "hydrodynamic" description applies, so we need to avoid these short-time effects. Unfortunately again, at long times we have poor statistics (see the low number of samples at long times) AND we have effects from the periodic boundary conditions.
We need to extract the correlator curves from a window which has good statistics, ignores short times and avoids long time effects and poor statistics. Once we have the window of time, we need to fit a linear function to the correlator, and to calculate the gradient/transport coefficient. Luckily, there is the dynatransport tool to help us do all of this.
### dynatransport
If we run the dynatransport tool on the output file, we can get an estimate of the transport coefficients.
dynatransport output.xml
ShearViscosity L_{\eta,\eta} = 0.0628751050012 +- 0.0 <R>^2 = 0.386109814931
BulkViscosity L_{\kappa,\kappa} = 11.3811651077 +- 0.0 <R>^2 = 0.567696677048
ThermalConductivity L_{\lambda,\lambda} = 0.178111094018 +- 0.0 <R>^2 = 0.535953655856
ThermalDiffusion L_{\lambda,Bulk} = -3.03999278728e-18 +- 0.0 <R>^2 = 0.736800451496
MutualDiffusion L_{Bulk,Bulk} = 1.04605741811e-34 +- 0.0 <R>^2 = 0.893542145422
By default, dynatransport uses the full data set to calculate the correlators. You should be able to see that the $R^2$ values of the fits are significantly below $1$ which indicates that the correlators are not linear in the window selected. We can view the current fit by passing the -v option:
dynatransport output.xml -v
This will give plots, like the one presented to the right, for each transport property, including averaged correlator data, standard deviations between each dimension, and the regressed line used to calculate the coefficient.
Clearly, this linear fit is terrible and focuses too much on the long time section which has poor sampling. After examining the fit and testing a few different window sizes, it's clear that one suitable window is $\Delta t \in [2,10]$. We can set this window for dynatransport using the start (-s) and cutoff (-c) options to give:
dynatransport output.xml -s 2 -c 10 -v
ShearViscosity L_{\eta,\eta} = 0.195306335342 +- 0.0 <R>^2 = 0.999498595286
BulkViscosity L_{\kappa,\kappa} = 0.0707529418455 +- 0.0 <R>^2 = 0.0110843745332
ThermalConductivity L_{\lambda,\lambda} = 0.508343229525 +- 0.0 <R>^2 = 0.997347634149
ThermalDiffusion L_{\lambda,Bulk} = -4.22881210719e-19 +- 0.0 <R>^2 = 0.866354789486
MutualDiffusion L_{Bulk,Bulk} = 2.2536681734e-35 +- 0.0 <R>^2 = 0.968183740424
This fit is significantly better (see plot); although there is some strong variation between the dimensions, the average appears almost linear. We're only examining the fit for the viscosity here, but all fits should be checked.
Using this window gives a much better fit for the thermal conductivity $L_{\lambda\,\lambda}\approx0.5083$ and viscosity $L_{\eta\,\eta}\approx0.1953$. The bulk viscosity is relatively hard to calculate and you should notice that the thermal diffusion, $L_{\lambda\,Bulk}$, and mutual diffusion, $L_{Bulk,Bulk}$, coefficients are close to zero. These coefficients are only non-zero for systems with multiple Species.
# Conclusions
We've simulated a square well system of $N=4000$ molecules/particles with a well-width of $\lambda=1.5$ at a reduced density of $\rho=0.1$ and temperature $k_B\,T=1.97\approx2$. Using dynarun we've equilibrated and run the simulation to collect data. We've then calculated the average configurational internal energy $U_{conf.}\approx-3161$ and internal energy histograms. Using dynatransport we've calculated the self-diffusion coefficient $D\approx1.957$, thermal conductivity $L_{\lambda\,\lambda}\approx0.5083$, and viscosity $L_{\eta\,\eta}\approx0.1953$.
This is our first proper study with DynamO. Now we will look at more complex systems and how to set them up.
|
2017-10-20 06:58:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5678539276123047, "perplexity": 1878.0958901736194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083725-00596.warc.gz"}
|
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/ClipGradByGlobalNorm_en.html
|
Given a list of Tensor $$t\_list$$, calculate the global norm for the elements of all tensors in $$t\_list$$, and limit it to clip_norm.
• If the global norm is greater than clip_norm , all elements of $$t\_list$$ will be compressed by a ratio.
• If the global norm is less than or equal to clip_norm , nothing will be done.
The list of Tensor $$t\_list$$ is not passed to this class directly; it consists of the gradients of all parameters set in the optimizer. If need_clip of a specific param is False in its ParamAttr, then the gradients of this param will not be clipped.
Gradient clipping takes effect after being set in an optimizer; see the optimizer documentation (for example: SGD).
The clipping formula is:
$t\_list[i] = t\_list[i] * \frac{clip\_norm}{\max(global\_norm, clip\_norm)}$
where:
$global\_norm = \sqrt{\sum_{i=0}^{N-1}(l2norm(t\_list[i]))^2}$
Note
need_clip of ClipGradByGlobalNorm HAS BEEN DEPRECATED since 2.0. Please use need_clip in ParamAttr to specify the clip scope.
Parameters
• clip_norm (float) – The maximum norm value.
• group_name (str, optional) – The group name for this clip. Default value is default_group.
Examples
import paddle
x = paddle.uniform([10, 10], min=-1.0, max=1.0, dtype='float32')
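The example above is truncated in this extract. A minimal completion following the usual Paddle 2.x pattern (layer sizes and learning rate chosen arbitrarily):

```python
import paddle

x = paddle.uniform([10, 10], min=-1.0, max=1.0, dtype='float32')
linear = paddle.nn.Linear(in_features=10, out_features=10)
out = linear(x)
loss = paddle.mean(out)
loss.backward()

# Clip all parameter gradients so their global L2 norm is at most 1.0.
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
sdg = paddle.optimizer.SGD(learning_rate=0.1,
                           parameters=linear.parameters(),
                           grad_clip=clip)
sdg.step()
```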
|
2022-11-28 04:51:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7823485732078552, "perplexity": 3462.5095516367637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00802.warc.gz"}
|
http://events.berkeley.edu/?event_ID=110839&date=2017-09-11&tab=all_events
|
## Number Theory Seminar: Exceptional splitting of reductions of abelian surfaces with real multiplication
Seminar | September 11 | 4:10-5 p.m. | 891 Evans Hall | Canceled
Yunqing Tang, Princeton University
Department of Mathematics
Zywina showed that after passing to a suitable field extension, every abelian surface $A$ with real multiplication over some number field has geometrically simple reduction modulo $\mathfrak p$ for a density one set of primes $\mathfrak p$. One may ask whether its complement, the density zero set of primes $\mathfrak p$ such that the reduction of $A$ modulo $\mathfrak p$ is not geometrically simple, is infinite. This question is analogous to the study of exceptional mod $\mathfrak p$ isogenies between two elliptic curves in the recent work of Charles. In this talk, I will show that abelian surfaces over number fields with real multiplication have infinitely many non-geometrically-simple reductions. This is joint work with Ananth Shankar.
yxy@berkeley.edu
|
2019-03-23 10:30:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011075496673584, "perplexity": 587.0548843101927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202781.83/warc/CC-MAIN-20190323101107-20190323123107-00185.warc.gz"}
|
https://wiki.ruda.city/Algebra
|
Abstract algebra studies algebraic structures.
Algebraic operation $\omega: X^n \to X$ is a mapping from a Cartesian power of a set to the set itself, where $n$ is the arity (元数) of the algebraic operation: nullary operation, $n = 0$; unary operation, $n = 1$; binary operation, $n = 2$; finitary operation, $n \in \mathbb{N}$; infinitary operation, $n = \aleph_\alpha, \alpha \in \mathbb{N}$.
Algebraic structure $((\omega_i)_{i \in I}, (R_j)_{j \in J})$ on a set is a class of finitary operations and a class of finitary relations on the set. Algebraic system $(X, (\omega_i)_{i \in I}, (R_j)_{j \in J})$ is a set endowed with an algebraic structure. Basic or primitive operations and relations of an algebraic system are those in its algebraic structure. Universal algebra (泛代数) or algebra is an algebraic system with no basic relation. Relational system or model (in logic; 模型) is an algebraic system with no basic operation.
Homomorphism (同态) is a map between two algebraic systems that preserves their basic operations and relations; in other words, it is a morphism in a category of algebraic systems: $\phi \in \text{Hom}(A, A')$. Endomorphism (自同态) is a homomorphism from an algebraic system to itself. The set of all endomorphisms on an algebraic system is denoted as $\text{End}(A)$. The classification of endomorphisms on finite-dimensional vector spaces over an algebraically closed field is called the Jordan canonical form.
## Group
Magma (原群), semigroup (半群), monoid (幺半群), group (群), and abelian group (交换群) are algebras $(X, ∗)$ where $∗$ is a binary operation satisfying a cumulative list of properties, defined in the following table.
Table: Group-like Algebras $(X, ∗)$ by Cumulative Properties of the Operation
| Property | Definition | Algebra |
|---|---|---|
| closure | $\forall a,b \in X,\ a ∗ b \in X$ | magma |
| associativity | $\forall a,b,c \in X,\ (a ∗ b) ∗ c = a ∗ (b ∗ c)$ | semigroup |
| identity | $\exists e \in X, \forall a \in X,\ e ∗ a = a ∗ e = a$ | monoid |
| inverse | $\forall a \in X, \exists b \in X,\ b ∗ a = a ∗ b = e$ | group |
| commutativity | $\forall a,b \in X,\ a ∗ b = b ∗ a$ | abelian group |
Integer group $(\mathbb{Z}, +)$ is the group consisting of integers, with the usual addition. For the integer group, its identity is 0 and the inverse of an element $a$ is $-a$. Dihedral groups have underlying sets consisting of symmetries like rotations and flections, and composition as group operation.
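As an illustrative aside (using addition modulo $n$, a finite cousin of the integer group), the group axioms from the table above can be checked by brute force:

```python
from itertools import product

n = 5
X = range(n)
op = lambda a, b: (a + b) % n             # addition modulo n

closure = all(op(a, b) in X for a, b in product(X, X))
assoc   = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(X, X, X))
e       = 0                               # identity element
ident   = all(op(e, a) == a == op(a, e) for a in X)
inverse = all(any(op(a, b) == e for b in X) for a in X)
commut  = all(op(a, b) == op(b, a) for a, b in product(X, X))

print(closure and assoc and ident and inverse and commut)   # True: (Z/5Z, +) is abelian
```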
Symmetric group $(S(X), \circ)$ on a set $X$ is the group consisting of all bijective transformations on the set, with the composition operation. Automorphism group $(\text{Aut} X, \circ)$ on a space $(X, \dots)$ is the group consisting of all automorphisms on the space, with the composition operation. For a symmetric group or an automorphism group, its identity is the identity map $\text{id}$ and the inverse of an element $f$ is $f^{-1}$.
Left action (作用) $g \cdot x$ of a group $(G, ∗)$ on a set $X$ is a map $\phi: G \times X \mapsto X$ that satisfies identity and associativity: $e \cdot x = x$; $g_1 \cdot (g_2 \cdot x) = (g_1 ∗ g_2) \cdot x$. Right action (右作用) of a group on a set is similarly defined, only with group elements appearing on the right.
## Ring
Ring (环) $(X, (+, ×))$ is a set endowed with two binary operations called addition $+$ and multiplication $×$ such that: the set with the addition is an abelian group $(X, +)$, the set with the multiplication is a monoid $(X, ×)$, and the multiplication left and right distributes over the addition: $a × (b + c) = (a × b) + (a × c)$, $(a + b) × c = (a × c) + (b × c)$. Subtraction $-$ is the inverse operation of addition $+$. Semiring (半环) is an algebra similar to a ring, but without additive inverses. Commutative ring (交换环) is a ring whose multiplication is commutative.
Additive identity $0$ of a ring is an identity of the addition. Additive identity is unique; the additive inverse of every element is unique. Multiplication by the additive identity annihilates a ring: $0 × a = a × 0 = 0$. Multiplicative identity $1$ of a ring is an identity of the multiplication. Multiplication by the additive inverse of a multiplicative identity equals its additive inverse: $-1 × a = -a$. Zero ring or trivial ring is a ring with the same additive and multiplicative identities: $0 = 1$. Every zero ring is a singleton consisting of its additive/multiplicative identity only. Integer ring $(\mathbb{Z}, (+, ×))$ is the ring consisting of integers, with the usual addition and multiplication. For the integer ring, its additive identity is $0$, additive inverse of an element $a$ is $-a$, and the multiplicative identity is $1$. Modular arithmetic $\mathbb{Z}/n\mathbb{Z}$ is a ring. Matrix ring $M_n(R)$ or $R_n$ over a ring is the ring consisting of all n-by-n matrices over the underlying ring, endowed with matrix addition and matrix multiplication. For a matrix ring, its additive identity is $0$ and its multiplicative identity is $I$.
### Polynomial
Polynomial of one variable over a commutative ring is a function in the form of a finite sum of products of elements in the ring and power functions with non-negative integer exponents: $f: \mathcal{R} \mapsto \mathcal{R}$, $f(x) = \sum_{k=0}^n a_k x^k$ where $(a_k)_{k=0}^n \subset \mathcal{R}$. We call $(a_k)_{k=0}^n$ coefficients and $(a_k x^k)_{k=0}^n$ terms. Root of a polynomial is an element of its zero set. Fundamental Theorem of Algebra: Every non-constant complex polynomial has a root [@Girard1629; @Descartes1637]. Every real polynomial can be decomposed into a product of linear and quadratic real polynomials [MacLaurin and Euler; Gauss]. Multiplicity of a root of a polynomial is the number of times it appears in the factorization of the polynomial. Polynomial ring $\mathcal{R}[ x ]$ over a commutative ring $\mathcal{R}$ is the ring consisting of all polynomials in variable $x$ over the underlying ring, endowed with pointwise addition and multiplication.
Polynomial of n variables over a commutative ring is a function in the form of a finite sum of products of elements in the ring and power functions with non-negative integer exponents in each variable: $f: \mathcal{R}^n \mapsto \mathcal{R}$, $f(x_i)_{i=1}^n = \sum_{k=0}^m a_k \prod_{i=1}^n x_i^{k_i}$. The n-ary polynomial ring over a commutative ring is denoted as $\mathcal{R}[x_1, \dots, x_n]$. Polynomials with two or three variables are called dyadic/binary, or triadic/ternary, etc. Polynomials with one, two, or three terms are called a monomial, binomial, or trinomial, etc. Degree of a term in a polynomial is the sum of its exponents. Degree of a polynomial is the highest degree of its terms with non-zero coefficients. Homogeneous polynomial or form is a polynomial whose terms have the same degree. Homogeneous polynomials of the first, second, or third degree are called linear, quadratic, or cubic, etc. Quadratic form $q: \mathcal{R}^n \mapsto \mathcal{R}$ over a commutative ring $\mathcal{R}$ is a homogeneous quadratic n-ary polynomial over the ring: $q(x) = \sum_{i=1}^n \sum_{j=i}^n q_{ij} x_i x_j$ where $x_i, q_{ij} \in \mathcal{R}$. Kronecker matrix $A$ of a quadratic form is the symmetric matrix defined by $a_{ij} = q_{ij} + q_{ji}$.
### Field
Division ring (除环) is a ring with multiplicative inverses for all nonzero elements. Division is the inverse operation of multiplication $×$, defined for all nonzero elements of a division ring. Field (域) $(\mathbb{F}, (+, ×))$ is a commutative division ring. Examples of fields: a finite field with four elements; rational numbers $(\mathbb{Q}, (+,×))$; real numbers $(\mathbb{R}, (+,×))$; complex numbers $(\mathbb{C}, (+,×))$.
## Module
Module (模) $(V, (+, \cdot_{\mathcal{R}}))$ over a commutative ring $(\mathcal{R}, (+, ×))$ is a set $V$ endowed with a binary operation $+: V^2 \mapsto V$ and a map $\cdot_{\mathcal{R}}: \mathcal{R} \times V \mapsto V$ such that $(V, +)$ is a commutative group and scalar multiplication is associative, unital, and distributes over both additions. Module is a generalization of vector space. Submodule of an $\mathcal{R}$-module is an $\mathcal{R}$-module consisting of a subset and the same module structure. Direct product (直积) $(\prod_{\alpha \in A} V_\alpha, (+, \cdot))$ of an indexed family $(V_\alpha)_{\alpha \in A}$ of $\mathcal{R}$-modules is the $\mathcal{R}$-module consisting of the Cartesian product and addition and scalar multiplication defined as: $(v_\alpha) + (v_\alpha') = (v_\alpha + v_\alpha')$, $c (v_\alpha) = (c v_\alpha)$, where $(v_\alpha)$ is an "A-tuple". External direct sum (直和), or coproduct (余积) in the category of modules, $(\oplus_{\alpha \in A} V_\alpha, (+, \cdot))$ of an indexed family of $\mathcal{R}$-modules is the submodule of their direct product that consists of A-tuples with a finite number of nonzero components. The direct product and the direct sum of a finite family of modules are identical. We say an $\mathcal{R}$-module $W$ is the internal direct sum of an indexed family $(V_\alpha)_{\alpha \in A}$ of its submodules if every element of the module has a unique expression as a finite sum $\sum_\alpha v_\alpha$ of elements of the submodules: $W = \oplus_{\alpha \in A} V_\alpha$.
## Misc
Algebra $(A, (+, \cdot_{\mathbb{R}}, ×))$ over $\mathbb{R}$ is a real vector space $(A, (+, \cdot_{\mathbb{R}}))$ endowed with a bilinear product map $×: A^2 \mapsto A$. Real-valued function algebra $(\mathbb{R}^X, (+, \cdot_{\mathbb{R}}, ×))$ is a real-valued function space endowed with pointwise multiplication. Every polynomial ring over a ring is a commutative free algebra with an identity over the underlying ring, i.e. a free object in the category of commutative algebras.
Unary algebra.
Banach algebra.
https://www.physicsforums.com/threads/impulse-q.134341/
# Impulse Q
#### glasgowm
A cue exerts an average force of 7 N on a stationary snooker ball of mass 200 g. If the impact lasts for 45 ms, with what speed does the ball leave the cue?
---
Ft = mv - mu
F = m(v-u)/t
F = 0.2(v-0)/0.045
= 0.2v/0.045 = 4.4444v
Can't figure out what the next step is from my notes.
Cheers
#### radou
Homework Helper
glasgowm said:
A cue exerts an average force of 7 N on a stationary snooker ball of mass 200 g. If the impact lasts for 45 ms, with what speed does the ball leave the cue?
---
Ft = mv - mu
F = m(v-u)/t
F = 0.2(v-0)/0.045
= 0.2v/0.045 = 4.4444v
Can't figure out what the next step is from my notes.
Cheers
Simply stated, the impulse equals the change of linear momentum, so, you have: $$F\cdot t = mv_{2}-mv_{1}$$, where m is the mass of the ball, v2 the final velocity and v1 the initial velocity (equals zero). From this equation you can easily retrieve v2.
#### glasgowm
radou said:
Simply stated, the impulse equals the change of linear momentum, so, you have: $$F\cdot t = mv_{2}-mv_{1}$$, where m is the mass of the ball, v2 the final velocity and v1 the initial velocity (equals zero). From this equation you can easily retrieve v2.
I already did that.
#### radou
Homework Helper
glasgowm said:
I already did that.
I saw you did that and I don't see where the problem is. Just plug in the force and solve to get the speed v.
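Putting the numbers in, the only remaining step is dividing through by the coefficient of v; a quick check (my addition; note 45 ms = 0.045 s):

```python
# Impulse-momentum theorem with the ball starting at rest: F * t = m * v.
F = 7.0      # average force in newtons
t = 0.045    # 45 ms expressed in seconds
m = 0.200    # 200 g expressed in kilograms
v = F * t / m
print(v)     # 1.575 m/s
```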
http://math.stackexchange.com/questions/79586/solving-an-equation-in-several-variable-of-the-form-a-1x-1x-2a-2x-2-c
# Solving an equation in several variable of the form $a_1x_1x_2+a_2x_2=c$
Consider the following equation in two variables $$a_1x_1x_2+a_2x_2=c$$
where $x_i$ are variables and the other constants can be any real numbers. In three variables, this is $$a_1x_1x_2x_3+a_2x_2x_3+a_3x_3=c$$
In $n$ variables the equation becomes
$$\sum_{i=1}^{n}a_i \prod_{j=i}^{n}x_j=c$$
Solve the equation in $n$ variables for the $x_i$.
The equation doesn't seem too complicated, but the only way I see to solve it is to use numerical methods. The gradient and Hessian are not difficult to compute, so Newton-Raphson can be used.
Is there a way to analytically find the solution of the equation?
If numerical methods are the only way to go, since the equation doesn't seem too complicated, is it possible to guarantee that all roots have been found?
In my case $n=30$ and the $x_i$ are in $(0,2)$.
"Is there a way to analytically find the solution of the equation?" - You can try Gröbner basis methods... – J. M. Nov 6 '11 at 19:14
When you say the $x_i$ are variables, what do they range over? $\mathbb{N}$ (with or without $0$)? Otherwise you need more equations. – Ross Millikan Nov 6 '11 at 19:16
It can be solved by induction: starting to solve the first equation in two variables (by dividing by $x_2$ if $x_2\neq 0$ we get $a_1 x_1+a_2= c/x_2$) and so on. – user17090 Nov 6 '11 at 19:17
There are typically infinitely many solutions, so it is not clear what is meant by solve. One can write down a parametric solution, but that is not really helpful. – André Nicolas Nov 6 '11 at 19:20
The equation makes essentially no demands on the $x_i$. For simplicity take $n=3$. Pick $x_1$ and $x_2$ almost arbitrarily, and let $x_3=c/(a_1x_1x_2+a_2x_2+a_3)$. Then we have a solution unless the denominator is $0$. – André Nicolas Nov 6 '11 at 20:00
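As the comments suggest, once all but one variable are chosen the equation is linear in the remaining one; the left-hand side also nests Horner-style, $x_n(a_n + x_{n-1}(a_{n-1} + \dots))$. A small sketch of both facts (my addition; function names are illustrative):

```python
# Evaluate sum_{i=1}^n a_i * prod_{j=i}^n x_j via Horner-style nesting,
# and solve for the last variable given the others.

def lhs(a, x):
    acc = 0.0
    for a_i, x_i in zip(a, x):   # acc accumulates the nested sum
        acc = (acc + a_i) * x_i
    return acc

def solve_last(a, x_head, c):
    """Given x_1..x_{n-1}, solve for x_n (one of infinitely many solutions)."""
    acc = 0.0
    for a_i, x_i in zip(a[:-1], x_head):
        acc = (acc + a_i) * x_i
    denom = acc + a[-1]
    if denom == 0:
        raise ValueError("denominator vanishes; pick other x_1..x_{n-1}")
    return c / denom

a = [1.0, 2.0, 3.0]
x12 = [0.5, 1.5]
x3 = solve_last(a, x12, c=4.0)
print(lhs(a, x12 + [x3]))  # 4.0 up to rounding
```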
https://neurips.cc/Conferences/2017/ScheduleMultitrack?event=10048
Spotlight
Tri Dao · Christopher M De Sa · Christopher Ré
Tue Dec 05 05:50 PM -- 05:55 PM (PST) @ Hall C
Kernel methods have recently attracted resurgent interest, matching the performance of deep neural networks in tasks such as speech recognition. The random Fourier features map is a technique commonly used to scale up kernel machines, but employing the randomized feature map means that $O(\epsilon^{-2})$ samples are required to achieve an approximation error of at most $\epsilon$. In this paper, we investigate some alternative schemes for constructing feature maps that are deterministic, rather than random, by approximating the kernel in the frequency domain using Gaussian quadrature. We show that deterministic feature maps can be constructed, for any $\gamma > 0$, to achieve error $\epsilon$ with $O(e^{\gamma} + \epsilon^{-1/\gamma})$ samples as $\epsilon$ goes to 0. We validate our methods on datasets in different domains, such as MNIST and TIMIT, showing that deterministic features are faster to generate and achieve comparable accuracy to the state-of-the-art kernel methods based on random Fourier features.
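To make the construction concrete, here is a rough sketch (my illustration, not the paper's code) for the 1-D Gaussian kernel $k(x,y) = e^{-(x-y)^2/2}$, whose spectral density is a standard normal; Gauss-Hermite quadrature supplies deterministic frequencies and weights in place of random Fourier samples:

```python
import numpy as np

def gh_features(x, m):
    """Map scalar inputs x to 2m deterministic Fourier features."""
    nodes, weights = np.polynomial.hermite.hermgauss(m)
    z = np.sqrt(2.0) * nodes      # change of variables: exp(-t^2) -> N(0, 1)
    w = weights / np.sqrt(np.pi)  # normalized quadrature weights (sum to 1)
    xz = np.outer(x, z)
    return np.hstack([np.sqrt(w) * np.cos(xz), np.sqrt(w) * np.sin(xz)])

x = np.linspace(-2.0, 2.0, 5)
Phi = gh_features(x, m=8)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
print(np.max(np.abs(Phi @ Phi.T - K_exact)))  # quadrature error, shrinks with m
```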
https://www.physicsforums.com/threads/given-an-algebraic-alpha-be-of-degree-n-over-f-show-at-most.176364/
# Given an algebraic alpha be of degree n over F, show at most . . .
1. Jul 8, 2007
### barbiemathgurl
let alpha be algebraic over F of degree n; show that there exist at most n isomorphisms mapping F(alpha) onto a subfield of bar F (this means the algebraic closure).
thanx
2. Jul 8, 2007
### StatusX
Let phi be one of the isomorphisms, and apply phi to the minimal polynomial for alpha. This should give you a condition on phi(alpha).
3. Jul 8, 2007
### Kummer
If $$\phi:F(a) \mapsto E$$ is an isomorphism for $$E\leq \bar F$$ then $$a\mapsto b$$ where $$a\mbox{ and }b$$ are conjugates. Conversely, if $$a\mapsto b$$ and they are conjugates, then by the Conjugation Isomorphism Theorem there is exactly one such isomorphism leaving $$F$$ fixed. Let the irreducible monic polynomial for $$a$$ be $$c_0+c_1x+...+c_nx^n$$; then there are at most $$n$$ zeros, and hence at most $$n$$ conjugate elements.
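A tiny concrete instance of the bound (my addition, using sympy): $a = \sqrt{2}$ has degree $n = 2$ over $\mathbb{Q}$, hence at most 2 conjugates and at most 2 such isomorphisms.

```python
from sympy import Symbol, minpoly, roots, sqrt

x = Symbol('x')
p = minpoly(sqrt(2), x)   # monic irreducible polynomial: x**2 - 2
print(p, roots(p, x))     # two conjugates: sqrt(2) and -sqrt(2)
```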
https://hal.archives-ouvertes.fr/hal-01281179
# The Kadec–Pełczyński–Rosenthal subsequence splitting lemma for JBW$^∗$-triple preduals
Abstract : Any bounded sequence in an $L^1$-space admits a subsequence which can be written as the sum of a sequence of pairwise disjoint elements and a sequence which forms a uniformly integrable or equiintegrable (equivalently, a relatively weakly compact) set. This is known as the Kadec–Pełczyński–Rosenthal subsequence splitting lemma and has been generalized to preduals of von Neumann algebras and of JBW∗-algebras. In this note we generalize it to JBW$^∗$-triple preduals.
Document type: Journal articles
### Citation
Antonio Peralta, H. Pfitzner. The Kadec–Pełczyński–Rosenthal subsequence splitting lemma for JBW$^∗$-triple preduals. Studia Mathematica, Instytut Matematyczny - Polska Akademii Nauk, 2015, 227 (1), pp.77-95. ⟨10.4064/sm227-1-5⟩. ⟨hal-01281179⟩
https://dirkmittler.homeip.net/blog/archives/286
# A Glitch with the Chrome for Linux Package Repository
One of the software packages I have installed on the computer ‘Phoenix’ is “Google Chrome”, for Debian / Linux. And this package is eligible for upgrades via Package Manager because Google makes the binaries available. In order for the upgrades to take place, an ‘apt-get update’ command needs to succeed, with this file installed:
/etc/apt/sources.list.d/google-chrome.list
One problem I've encountered recently is that, because my computer is set up to pull both the 32-bit and the 64-bit repositories, I get an error message telling me that there is apparently no more 32-bit version on the Google servers. And so the line that is needed in this file is
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
The only problem though with this, is that as soon as my Chrome package does update, the ‘[arch=amd64]‘ vanishes, because the package overwrites whatever modifications I made to this file.
It could be that this problem has delayed my getting updates through until now. But unless either the Chrome repository starts to include a 32-bit entry again, or until the same package replaces this file with the specification in-place, this problem will eventually recur.
Dirk
https://gmatclub.com/forum/two-ants-a-and-b-start-from-a-point-p-on-a-circle-at-the-same-time-wi-320721.html
# Two ants A and B start from a point P on a circle at the same time, wi
Math Expert
Joined: 02 Sep 2009
Posts: 64196
Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 06:58
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
Math Expert
Joined: 02 Aug 2009
Posts: 8623
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 07:10
Bunuel wrote:
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
When A and B meet, A has covered 60%, so B has covered 40%, and their speeds are in the ratio A : B = 60 : 40 = 3 : 2.
Next, A completes the remaining 40% in 12 minutes (10:00 to 10:12), while B still has to cover 60% of the route.
For the same distance B needs 3/2 as much time, so B covers 40% of the track in 12*3/2 = 18 minutes,
and hence 60% in 18*60/40 = 27 minutes.
So B reaches P 27 minutes after the ants meet at 10:00, i.e. at 10:27.
C
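A quick numeric check of the arithmetic above (my addition):

```python
# Speeds in track fractions per minute.
vA = 0.4 / 12          # A covers the remaining 40% in 12 minutes
vB = vA * 2 / 3        # speeds are in ratio A : B = 3 : 2
minutes_B = 0.6 / vB   # B still has 60% of the track left at 10:00
print(minutes_B)       # 27.0 -> 10:00 am + 27 min = 10:27 am
```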
Senior Manager
Joined: 24 Oct 2015
Posts: 439
Location: India
Schools: Sloan '22, ISB, IIM
GMAT 1: 650 Q48 V31
GPA: 4
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 07:38
Bunuel wrote:
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
[Attachment: ants.png]
A and B meet at point M at 10:00 AM
A travels the rest of the distance $$\frac{2D}{5}$$ in 12 minutes, as it is given that A returns to P at 10:12 AM.
when A travels $$\frac{3D}{5}$$, B travels $$\frac{2D}{5}$$.
when A travels $$\frac{2D}{5}$$, B will travel:
$$\frac{2D}{5} \cdot \frac{2}{3}$$ = $$\frac{4D}{15}$$
B travels $$\frac{4D}{15}$$ in 12 minutes.
B will travel $$\frac{3D}{5}$$ in $$12 \cdot \frac{3D/5}{4D/15} = 27$$ minutes.
So B will take 27 minutes from point M to reach point P.
Ans: C
CEO
Joined: 03 Jun 2019
Posts: 2927
Location: India
GMAT 1: 690 Q50 V34
WE: Engineering (Transportation)
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 09:29
Bunuel wrote:
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
Given: Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track.
Asked: If A returns to P at 10:12 am, then B returns to P at
When A & B meet at 10:00 am,
A has covered 60% and B has covered the remaining 40% of the track,
so vA : vB = 60 : 40 = 3 : 2.
A takes 12 mins to cover the remaining 40% of the track.
B will take 12*3/2 = 18 mins to cover 40% of the track,
and 18*60/40 = 27 mins to cover the remaining 60% of the track.
B will reach P at 10:27 am.
IMO C
Senior Manager
Status: Student
Joined: 14 Jul 2019
Posts: 327
Location: United States
Concentration: Accounting, Finance
GPA: 3.9
WE: Education (Accounting)
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 11:23
Bunuel wrote:
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
Ratio of speed of A and B = 3: 2
To cover the remaining 40%, ant A took 12 minutes. So for the total distance ant A took 12/0.4 = 30 minutes and ant B took 45 minutes.
They started their journey (30-12) = 18 minutes before 10:00 am, i.e. at 9:42 am. So ant B will reach P at 9:42 am + 45 minutes = 10:27 am.
Manager
Joined: 30 Jun 2019
Posts: 209
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 18:09
60% covered @10am, returns at 10:12am --> goes 40% in 12min
40 = r*(1/5) (speed r in % of track per hour; 12 min = 1/5 hour)
r = 200
60:40 --> ratio of person A to person B
3:2
3/2 = 200/x
x=400/3
Person B needs to go 60%
60 = 400/3*t
t = 180/400 = 18/40 = 9/20 = 27/60 = 27min
10:00am + 27min = 10:27am
C
Manager
Joined: 18 Dec 2017
Posts: 217
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
09 Apr 2020, 20:23
A covers 40% of the distance in 12 minutes so will cover 60% of the distance in 18 minutes. Now B has to cover 60% of the distance and his speed is 2/3 of A's speed.
Therefore time taken by B =18×3/2 = 27 minutes
Posted from my mobile device
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 10645
Location: United States (CA)
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
13 Apr 2020, 03:39
Bunuel wrote:
Two ants A and B start from a point P on a circle at the same time, with A moving clock-wise and B moving anti-clockwise. They meet for the first time at 10:00 am when A has covered 60% of the track. If A returns to P at 10:12 am, then B returns to P at
A. 10:45 am
B. 10:40 am
C. 10:27 am
D. 10:25 am
E. 10:18 am
We see that ant A must cover 40% of the track in 12 minutes, which means it covers the entire track (or the circumference of the circle) in 12/0.4 = 30 minutes. This means both ants start moving from P at 10:12 am - 30 minutes = 9:42 am. Furthermore, we see that since it takes ant A 10:00 am - 9:42 am = 18 minutes to cover 60% of the track, it will take ant B the same (i.e., 18 minutes) to cover the remaining 40% of the track when they meet at 10:00 am. So it will take ant B 18/0.4 = 45 minutes to cover the entire track. Since ant B begins moving from P at 9:42 am, ant B will return to P at 9:42 am + 45 minutes = 10:27 am.
Intern
Joined: 18 Nov 2018
Posts: 5
Re: Two ants A and B start from a point P on a circle at the same time, wi [#permalink]
16 Apr 2020, 07:22
Eq1- Speed of a / speed of b = 3/2
Speed of a = 40/12
Speed of b = 60/t
Put the values in eq1 and find t which comes out to be 27.
So time taken by B is 27 mins more after A and B meet at 10:00am
I tried solving by long method but then realized that placing speed in the ratios is more efficient and easy. Hope it helps.
Posted from my mobile device
https://pypi.org/project/collective.monkeypatcher/
Support for applying monkey patches late in the startup cycle by using ZCML configuration actions
## Introduction
Sometimes, a monkey patch is a necessary evil.
This package makes it easier to apply a monkey patch during Zope startup. It uses the ZCML configuration machinery to ensure that patches are loaded “late” in the startup cycle, so that the original code has had time to be fully initialised and configured. This is similar to using the initialize() method in a product’s __init__.py, except it does not require that the package be a full-blown Zope product with a persistent Control_Panel entry.
## Installation
To install collective.monkeypatcher into the global Python environment (or a working environment), using a traditional Zope instance, you can do this:
• When you’re reading this you have probably already run pip install collective.monkeypatcher.
• Create a file called collective.monkeypatcher-configure.zcml in the /path/to/instance/etc/package-includes directory. The file should only contain this:
<include package="collective.monkeypatcher" />
Alternatively, if you are using zc.buildout and the plone.recipe.zope2instance recipe to manage your project, you can do this:
• Add collective.monkeypatcher to the list of eggs to install, e.g.:
[buildout]
...
eggs =
...
collective.monkeypatcher
• Tell the plone.recipe.zope2instance recipe to install a ZCML slug:
[instance]
recipe = plone.recipe.zope2instance
...
zcml =
collective.monkeypatcher
• Re-run buildout, e.g. with:
\$ ./bin/buildout
You can skip the ZCML slug if you are going to explicitly include the package from another package’s configure.zcml file.
## Applying a monkey patch
Here’s an example:
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:monkey="http://namespaces.plone.org/monkey"
i18n_domain="collective.monkeypatcher">
<include package="collective.monkeypatcher" />
<monkey:patch
description="This works around issue http://some.tracker.tld/ticket/123"
class="Products.CMFPlone.CatalogTool.CatalogTool"
original="searchResults"
replacement=".catalog.patchedSearchResults"
/>
</configure>
In this example, we patch Plone’s CatalogTool’s searchResults() function, replacing it with our own version in catalog.py. To patch a module level function, you can use module instead of class. The original class and function/method name and the replacement symbol will be checked to ensure that they actually exist.
If patching happens too soon (or too late), use the order attribute to specify a higher (later) or lower (earlier) number. The default is 1000.
By default, DocFinderTab and other TTW API browsers will emphasize the monkey patched methods/functions, appending the docstring with “Monkey patched with ‘my.monkeypatched.function’”. If you don’t want this, you could set the docstringWarning attribute to false.
If you want to do more than just replace one function with another, you can provide your own patcher function via the handler attribute. This should be a callable like:
def apply_patch(scope, original, replacement):
...
Here, scope is the class/module that was specified. original is the string name of the function to replace, and replacement is the replacement function.
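For instance, a custom handler that also keeps the original reachable might look like this; a minimal sketch under the three-argument contract above, not code shipped by the package (the logging is my own addition):

```python
import logging

logger = logging.getLogger(__name__)

def apply_patch(scope, original, replacement):
    # keep the original reachable, similar in spirit to preserveOriginal
    setattr(scope, '_old_%s' % original, getattr(scope, original, None))
    setattr(scope, original, replacement)
    logger.debug('patched %s on %r', original, scope)
```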
Full list of options:
• class The class being patched
• module The module being patched (see Patching module level functions)
• handler A function to perform the patching. Must take three parameters: class/module, original (string), and replacement
• original Method or function to replace
• replacement Method or function to replace with
• preservedoc Preserve docstrings?
• preserveOriginal Preserve the original function so that it is reachable via the prefix _old_. Only works for the default handler
• preconditions Preconditions (multiple, separated by space) to be satisfied before applying this patch. Example: Products.LinguaPlone-=1.4.3 or Products.TextIndexNG3+=3.3.0
• ignoreOriginal Ignore if the original function isn't present on the class/module being patched
• docstringWarning Add monkey patch warning in docstring
• order Execution order
## Handling monkey patches events
Applying a monkey patch fires an event. See the interfaces.py module. If you want to handle such an event, add this ZCML snippet:
...
<subscriber
for="collective.monkeypatcher.interfaces.IMonkeyPatchEvent"
handler="my.component.events.myHandler"
/>
...
def myHandler(event):
"""see collective.monkeypatcher.interfaces.IMonkeyPatchEvent"""
...
## Patching module level functions
If you want to patch the method do_something located in patched.package.utils, which is imported in another package like this
from patched.package.utils import do_something
then the importing package holds its own reference to the function, bound before collective.monkeypatcher patches the original method, so the patch will not be visible through that name.
### Workaround
Do the patching in __init__.py of your package:
from patched.package import utils
def do_it_different():
return 'foo'
utils.do_something = do_it_different
## Changelog
### 1.2.1 (2020-03-21)
Bug fixes:
• Minor packaging updates. [various] (#1)
### 1.2 (2018-12-10)
New features:
• Include installation instructions in the README.
• Update test infrastructure.
### 1.1.6 (2018-10-31)
Bug fixes:
• Prepare for Python 2 / 3 compatibility [frapell]
### 1.1.5 (2018-06-18)
Bug fixes:
• Fix import for Python 3 in the tests module [ale-rt]
### 1.1.4 (2018-04-08)
Bug fixes:
• Fix import for Python 3 [pbauer]
### 1.1.3 (2017-11-26)
New features:
• Document possible problems when patching module level functions [frisi]
### 1.0.1 - 2011-01-25
• Downgrade standard log message to debug level. [hannosch]
### 1.0 - 2010-07-01
• Avoid a zope.app dependency. [hannosch]
• Added new parameter preconditions that only patches if preconditions are met like version of a specific package. [spamsch]
• Added new parameter preserveOriginal. Setting this to true makes it possible to access the patched method via _old_name of patched method [spamsch]
### 1.0b2 - 2009-06-18
• Add the possibility to ignore the error if the original function isn’t present on the class/module being patched [jfroche]
• Check if the docstring exists before changing it [jfroche]
• Add buildout.cfg for test & test coverage [jfroche]
### 1.0b1 - 2009-04-17
• Fires an event when a monkey patch is applied. See interfaces.py. [glenfant]
• Added ZCML attributes “docstringWarning” and “description”. [glenfant]
### 1.0a1 - 2009-03-29
• Initial release [optilude]
https://publications.hse.ru/en/articles/195186562
## A precise measurement of the $B^0$ meson oscillation frequency
The oscillation frequency $\Delta m_d$ is measured using $pp$ collision data at TeV-scale energies. A combination of the two decay modes gives $\Delta m_d = (505.0 \pm 2.1 \pm 1.0)\ \mathrm{ns}^{-1}$, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
http://www.chegg.com/homework-help/questions-and-answers/assume-there-is-a-non-uniform-temperature-field-txyzt-in-a-fluid-moving-with-velocity-vxyz-q3483458
## Heat conservation differentiation.
Assume there is a non-uniform temperature field T(x,y,z,t) in a fluid moving with velocity v(x,y,z,t). Assume there are no sources of heat and take the following parameters of the fluid to be constant: density p; specific heat c; conductivity k. Assume that generation of heat by dissipation of mechanical energy is negligible. Because the density is constant, thermal energy (heat) and mechanical energy are then separately conserved. Making these assumptions, begin with the statement "heat is conserved" and end with the differential equation for the temperature, using the following physical model: (1) The heat per unit volume is pcT. (2) Heat radiation is negligible, so transport of heat is carried out by two mechanisms: (A) Conduction: heat flow due to conduction is proportional to the temperature gradient and directed opposite to it, i.e. it is -k(del)T. (B) Convection: heat flow due to the fluid motion is pcTv.
• hey are you the sjcc guy taking langlois's class at leland right now?
no guarantees that this is right, but for #1 we used the continuity equation on http://en.wikipedia.org/wiki/Continuity_equation - the general one.
the symbols don't transfer over when i copy paste, but the section looks like:
The general form for a continuity equation is ∂φ/∂t + ∇·f = s, where
φ is some quantity,
f is a vector function describing the flux (flows) of φ,
∇· is divergence,
and s is a function describing the generation and removal of φ. (Generation and removal are also called "sources" and "sinks" respectively, and correspond to s > 0 and s < 0 respectively.)
so the trident-looking thing (φ) is the heat per unit volume, so you take the time derivative of that,
and you replace f with (convection + conduction)
and you set s = 0 because that's like the continuity equation that he gave us in class.
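Carrying the model through (my addition; this is the standard bookkeeping under the stated assumptions): "heat is conserved" gives

$\frac{\partial (pcT)}{\partial t} + \nabla \cdot \left( pcT\mathbf{v} - k \nabla T \right) = 0$

and, since p, c, k are constant and $\nabla \cdot \mathbf{v} = 0$ (constant density), this reduces to

$\frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T = \frac{k}{pc} \nabla^2 T$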
https://calculator.academy/mortgage-service-ratio-calculator/
Enter the net operating income ($) and the debt service ($) into the Calculator. The calculator will evaluate the Mortgage Service Ratio.
## Mortgage Service Ratio Formula
MSR = NOI / DS
Variables:
• MSR is the Mortgage Service Ratio (dimensionless)
• NOI is the net operating income ($)
• DS is the debt service ($)
To calculate Mortgage Service Ratio, divide the net operating income by the debt service.
## How to Calculate Mortgage Service Ratio?
The following steps outline how to calculate the Mortgage Service Ratio.
• First, determine the net operating income ($).
• Next, determine the debt service ($).
• Next, gather the formula from above: MSR = NOI / DS.
• Finally, calculate the Mortgage Service Ratio.
• After inserting the variables and calculating the result, check your answer with the calculator above.
Example Problem :
Use the following variables as an example problem to test your knowledge.
net operating income ($) = 500
debt service ($) = 234
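A worked check of this example (my addition): MSR = 500 / 234 ≈ 2.14.

```python
noi = 500.0   # net operating income ($)
ds = 234.0    # debt service ($)
msr = noi / ds
print(round(msr, 2))  # 2.14
```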
https://physics.stackexchange.com/questions/652812/relativistic-energy-and-momentum-understanding
# Relativistic energy and Momentum understanding
After discussing with some people, it appears to me that the way to define quantities like "energy" and "momentum" in general, is to just treat them as "conserved quantities" under some special conditions. The original definitions like "product of mass and velocity" and "capacity to do work" ... work well in most cases, but it's clear that as general definitions these are inappropriate.
In that light, it made sense to me to redefine momentum. By using a simple argument. We analyze the elastic collision between two balls in two frames: the rest frame, and the frame of the two balls. It was clear that the momentum, if we just defined it as $$mv$$, would not be conserved in both the frames.
It made total sense to therefore redefine the momentum. We have: $$\mathbf{p}= \gamma(v) m \mathbf{v}$$ It can be shown that this definition makes it a conserved quantity under no force in both frames, and thus adheres to the principle of relativity.
Now, coming to energy:
It can be shown (using the newly defined momentum) that the appropriate KE should be: $$\mathrm{KE}= \gamma mc^2- mc^2$$
Now textbooks simply move $$mc^2$$ to the other side, and then claim that "$$\mathrm{KE}+mc^2$$ defines the total energy", so we get $$E=\gamma mc^2$$.
My issue with all of this is again, how should one precisely define energy? Why was $$\mathrm{KE}+mc^2$$ chosen as the definition of total energy $$E$$? Is it because, as I said before, it can be shown to be a conserved quantity under some conditions?
Also, why was there a need to consider $$mc^2$$ as an extra bit of energy in that equation? It appears to me that we can make do with just using the newly defined $$\mathrm{KE}$$ all the time in energy conservation equations, and that would work too...
$$mc^2$$ is the rest energy and KE is all the energy due to motion. Add them and you get the total energy = all energy due to motion + all energy at rest. This bears some similarity to the classic equation E = KE + PE.
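As a numeric illustration of these definitions (my addition, in units where c = 1):

```python
import math

m, v = 1.0, 0.6                       # mass and speed (v = 0.6 c)
gamma = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz factor, here 1.25
p = gamma * m * v                     # relativistic momentum, 0.75
KE = (gamma - 1.0) * m                # kinetic energy, 0.25
E = gamma * m                         # total = KE + rest energy, 1.25
print(gamma, p, KE, E)
```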
https://datacadamia.com/code/algorithm/running_time
# Algorithm - (Performance|Running Time|Fast)
## 1 - About
The performance of an algorithm can be assessed through the number of basic operations that it performs, as a function of the size of the input.
running time = # of lines of code executed.
## 3 - Debugger definition
Running the algorithm in a debugger, every time we press enter we advance one line through the program. Basically, the running time is just the number of operations executed, i.e. the number of lines of code executed.
How many times you have to hit enter in the debugger before the program finally terminates, as a function of the input size N, gives us a performance metric.
## 4 - Fast
$$\text{fast algorithm} \approx \text{worst-case running time grows slowly with input size}$$ Usually we want as close to linear ($O(n)$) as possible.
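As a toy illustration (my addition), counting executed "lines" for a linear scan reproduces the linear growth described above:

```python
def ops_linear_scan(xs, target):
    ops = 0
    for x in xs:        # one basic operation per element examined
        ops += 1
        if x == target:
            break
    return ops

for n in (10, 100, 1000):
    print(n, ops_linear_scan(list(range(n)), n - 1))  # ~n ops each
```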
http://tex.stackexchange.com/questions/99018/warning-in-you-code
# Warning in your code [closed]
Neural Network representation
I took a neural network drawing but I have an error in it:
``````\foreach \i in {1,2,3,4,5,6,7,8,9}{
\foreach \j in {1,2,3,4,5,6,7,8}{
\path[normal arrow,Cyan, draw opacity=0.2] (nx\i) -- (cl1n\j);
}
}
``````
`cl1n` seems to be not declared or defined; can you help me?
## closed as too localized by Qrrbrbirlbel, diabonas, Fran, Marco Daniel, lockstepFeb 24 '13 at 18:24
Welcome to TeX.SE. While code snippets are useful in explanations, it is always best to compose a fully compilable MWE that reproduces this problem. – Peter Grill Feb 20 '13 at 2:46
I have updated the answer with a full MWE, including `xcolor` option and required TikZ libraries. Does the example work now for you (use LuaLaTeX)? – Qrrbrbirlbel Feb 20 '13 at 3:12
http://elitistjerks.com/f77/t47907-3_1_ptr/p11/
Elitist Jerks 3.1 PTR
03/01/09, 5:33 AM #151
Promethia
Piston Honda
Blood Elf Priest
Kilrogg
Originally Posted by Thistlebee You bring up a great point with this and I would love to see some numbers on haste vs crit as far as mana regen in concerned.
Originally Posted by Havoc12 I think people should rexamine the uptime formula very carefully $1 - (1-C)^{n}$ Both n and C have diminishing returns, but the diminishing returns applies to both values.
If we redefine n to be the number of (crit capable) casts in the last 8 seconds prior to any haste, then we could use
$HC_{uptime}= 1 - (1-C)^{n(1 + H)}$
where C is the crit percentage and H is the haste percentage. That allows us to consider the effects of adding haste as well. We can then take a few (partial) derivatives to see how changing crit, haste, and n affect HC uptime. In particular:
$\displaystyle{\frac{\partial HC}{\partial C}= n(1 + H)(1-C)^{n(1 + H)-1}}$
$\displaystyle{\frac{\partial HC}{\partial H}= -n\ln(1 - C)(1-C)^{n(1 + H)}}$
$\displaystyle{\frac{\partial HC}{\partial n}= -(1 + H)\ln(1 - C)(1-C)^{n(1 + H)}}$
So the above formulas show how changing crit, haste, and n respectively affect HC uptime. Just on inspection, it is apparent that changing crit has a bigger effect on HC uptime than changing haste. The (1 - C) term is <1 so having one less of them in a product is a good thing. But more formally, if we look at the effect of haste relative to crit on HC uptime, we get:
$\displaystyle{\frac{HC_{haste}}{HC_{crit}}= -\frac{(1 - C)\ln(1 - C)}{1 + H}}$
The above expression will always be less than 1, although that's maybe not obvious:
$\displaystyle{\text{Let }R= \frac{1}{1 - C}\text{, then}}$
$\displaystyle{\frac{HC_{haste}}{HC_{crit}}= \frac{\ln(R)}{R}\frac{1}{1 + H}}$
Since ln R is strictly less than R for all R, ln(R) / R is less than 1. Similarly, 1 / (1 + H) is always less than one, so the product of the two fractions is also less than one as well. Or if that doesn't convince you, plug a bunch of numbers into a spreadsheet and test it out.
Bottom line: haste is never better than crit for improving HC uptime.
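A quick numeric spot-check of that ratio (my addition), comparing finite differences against the closed form:

```python
import math

def uptime(C, H, n):
    return 1.0 - (1.0 - C) ** (n * (1.0 + H))

C, H, n, eps = 0.25, 0.10, 4, 1e-6
dC = (uptime(C + eps, H, n) - uptime(C, H, n)) / eps
dH = (uptime(C, H + eps, n) - uptime(C, H, n)) / eps
print(dH / dC)                                     # ~0.196, always < 1
print(-(1.0 - C) * math.log(1.0 - C) / (1.0 + H))  # closed form matches
```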
Originally Posted by Havoc12 Well its been confirmed and its 100% proc on crit plus it also includes renew!, thus the new regen formula for spirit regen for priests is $R= 0.005 + k'*(1-0.5*FSR)*(1+0.5*(1-(1-C)^{n}) )*Spi*\sqrt{Int}$
I was wondering where the 0.005 came from. Anyone know?
Also, since the tooltip language is a bit ambiguous, I'm wondering if anyone has verified the effects of HC on mana regen. Specifically:
1. Does it only affect mana regen from spirit (similar to meditation)? I've heard that, but has it been proven?
2. If so, does it affect regen additively or multiplicatively? For instance, while inside the five-second rule, does HC work to increase the mana regen while casting to 100% (50% from meditation plus another 50%) or to 75% (multiplying the meditation bonus by 150% as reflected in the above formula)?
Last edited by Promethia : 03/26/09 at 6:44 AM. Reason: silly typos
03/01/09, 7:35 AM #152
Bjork
Piston Honda
Human Priest
Sylvanas (EU)
Originally Posted by typobox I mathed Serendipity out here, although I did forget to account for GCD capping on the FHx3 rotation. Short story is that FHx2->GH is definitely a large improvement over straight GH spam or FH->GH, and FHx3->GH is likely a small improvement over that. (Feel free to point out any other glaring errors.)
Thanks.
So what we're looking at is - at best (and that's 100% wrong because you didn't adjust for the GCD gap) - a 12% increase in single target HPS from pure GH-spam to 3*FH+GH. That 12% increase came at a very, very high cost as we've lost our old Serendipity. What we've gained is peak-HPS roughly every 6-7 seconds - isn't that what paladins have had with Holy Shock and druids with Swiftmend all along just that these things are instant?
So now people think holy priests are amazing tankhealers because we've got a weak version of Holy Shock; we're still miles behind on pure single target power and we also have the worst mana efficiency on single target healing. Hallelujah.
Only valid reason for having a priest on tankhealing is Inspiration and then you should be disc. Or you just bring a restoshaman doing the same job and you have Mana Tide.
Don't get me wrong, new Serendipity is amazing, it makes holy priests extremely good healers and it's a fun mechanic, but we're still gimp tankhealers compared to any other healer.
03/01/09, 8:15 AM #153
maldran
Glass Joe
Burning Legion (EU)
Originally Posted by Promethia Also, since the tooltip language is a bit ambiguous, I'm wondering if anyone has verified the effects of HC on mana regen. Specifically: 1. Does it only affect mana regen from spirit (similar to meditation)? I've heard that, but has it been proven? 2. If so, does it affect regen additively or multiplicatively? For instance, while inside the five-second rule, does HC work to increase the mana regen while casting to 100% (50% from meditation plus another 50%) or to 75% (multiplying the meditation bonus by 150% as reflected in the above formula)?
HC gives you a 1.5 multiplier to the regen formula for 8 seconds.
I've tested it out on the PTR, my regen changes from 336(oo5s)/168(i5s) to 505(oo5s)/252(i5s) (not using any mp5 pieces).
The spirit and int values stay the same, only the effective regen out of it is changed. If you crit a second time during the HC buff its duration resets to 8 seconds. Leaving the 5second rule or not doesn't change the buff either obviously.
If I buff myself with spirit while HC is up, its effects get immediately applied to my regen. I suppose trinket procs would work similar.
03/01/09, 8:29 AM #154
MavSteele
Von Kaiser
Human Priest
Turalyon
Originally Posted by Bjork Thanks. So what we're looking at is - at best (and that's 100% wrong because you didn't adjust for the GCD gap) - a 12% increase in single target HPS from pure GH-spam to 3*FH+GH. That 12% increase came at a very, very high cost as we've lost our old Serendipity. What we've gained is peak-HPS roughly every 6-7 seconds - isn't that what paladins have had with Holy Shock and druids with Swiftmend all along, just that these things are instant? So now people think holy priests are amazing tankhealers because we've got a weak version of Holy Shock, we're still miles behind on pure single target power and we also have the worst mana efficiency on single target healing. Hallelujah. Only valid reason for having a priest on tankhealing is Inspiration and then you should be disc. Or you just bring a restoshaman doing the same job and you have Mana Tide. Don't get me wrong, new Serendipity is amazing, it makes holy priests extremely good healers and it's a fun mechanic, but we're still gimp tankhealers compared to any other healer.
Fair enough, but with dual specs (and depending on your raid composition) I'm not sure that it matters much. If you're going to be healing a tank full time then a holy paladin or disc priest is better suited for the job because of mana efficiency and burst healing on a short CD. Depending on how the LB "change" works out, resto druids may still be up there slightly ahead of holy priests, but we'll see on that one.
Additionally, if BH is going to continue to give two stacks of serendipity, we have the ability to trade efficiency for burst pretty easily through BH->GH combos.
03/01/09, 8:39 AM #155
Bjork
Piston Honda
Human Priest
Sylvanas (EU)
Originally Posted by MavSteele Fair enough, but with dual specs (and depending on your raid composition) I'm not sure that it matters much. If you're going to be healing a tank full time then a holy paladin or disc priest is better suited for the job because of mana efficiency and burst healing on a short CD. Depending on how the LB "change" works out, resto druids may still be up there slightly ahead of holy priests, but we'll see on that one. Additionally, if BH is going to continue to give two stacks of serendipity, we have the ability to trade efficiency for burst pretty easily through BH->GH combos.
Our resto druid was actually slightly above the holy paladins on the Patchwerk dummy last night - the dummy that doesn't do hatefuls and doesn't ramp up damage on one tank over time. Holy priests weren't even close (neither of us had optimal specs, though).
E: There was a lot of sniping going on though, as the boss starts out hitting very weakly.
Last edited by Bjork : 03/01/09 at 8:45 AM.
03/01/09, 8:49 AM #156
Elimbras
Don Flamenco
Dwarf Priest
Eitrigg (EU)
Originally Posted by Dagma Now, this isn't a completely general model. I solved with a fixed n = 4 (2 second cast intervals), and this model assumes even cast intervals. We can't be sure that the quality of the estimate doesn't depend strongly upon n until we solve either a more general model or more values of n. Thankfully, that's not a hard task. A model of uneven cast intervals would be a little trickier, perhaps. But absent the additional modeling effort, I am at least comfortable conducting rough theorycraft using the 1-(1-c)^n approximation. It seems pretty good for getting a sense of the uptime.
Yeah, the formula $1 - (1-C)^n$ is exact in the steady state, when your time between casts divides 8 sec evenly. And the steady-state approximation is really good for any fight where mana regen is important (this excludes short fights, but there the mana pool is the main component anyway).
If you want to deal with uneven cast intervals, it's pretty easy.
Just use the probability of having the buff at time t (assuming t > 8 s):
$P_{HC}(t)= 1 - (1-C)^{N(t)}$, where N(t) is the number of spells that can trigger HC in the last 8 seconds.
Then, you can compute the mean HC uptime :
$HC_{Uptime}= \frac{1}{L} \int_0^L P_{HC}(t) dt = \sum_{i=0}^8 (1 - (1-C)^i ) P(N = i)$,
where P(N=i) is the probability of having i spells that can trigger HC in the last 8 sec
($P(N=i) = \frac{1}{L} \int_0^L 1_{N(t)=i} dt$).
So, basically, all you need is the distribution of N(t) instead of just its mean value N.
And anyway, you may get an acceptable approximation by using the mean value N directly.
The difference between the exact formula and the mean-value formula is:
$diff = (1-C)^{\sum_i p_i i} - \sum_i p_i (1-C)^i = (1-C)^{N_{mean}} \left(1 - \sum_i p_i (1-C)^{i - N_{mean}}\right)$
Note that the function x -> a^x is convex, so by Jensen's inequality the mean of the function is greater than the function taken at the mean value. So diff is negative, and the mean-value formula is an overestimate.
Worst case is when P(N=0) = 0.5, and P(N=8) = 0.5.
Then the mean value formula gives HC = 1 - (1-C)^4
The real uptime is 0.5 * (1 - (1-C) ^ 8 ).
For the following values of crit rate, we get:
Crit rate   Exact uptime   "Mean value" uptime
10%         28.5%          34.4%
15%         36.4%          47.8%
20%         41.6%          59%
25%         45%            68.4%
30%         47.1%          76%
35%         48.4%          82.1%
For low crit rates the difference is not that big. For high crit rates it's more important, but that's mostly because the real HC uptime in this worst case is capped at one half (the buff can't be up more than half of the time).
If N(t) has less variance, the approximation is much closer.
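A short numerical check of that worst case (added here, not part of the original post; it just reproduces the table above):

```python
# Exact steady-state uptime vs. the mean-value approximation, for the
# worst-case distribution P(N=0) = P(N=8) = 0.5 discussed above.
def exact_uptime(crit, dist):
    # dist maps i -> P(N(t) = i); exact uptime = sum_i P(N=i) * (1 - (1-C)^i)
    return sum(p * (1 - (1 - crit) ** i) for i, p in dist.items())

def mean_value_uptime(crit, dist):
    # plug the mean N into 1 - (1-C)^N instead of averaging over the distribution
    n_mean = sum(p * i for i, p in dist.items())
    return 1 - (1 - crit) ** n_mean

worst_case = {0: 0.5, 8: 0.5}
for c in (0.10, 0.15, 0.20, 0.25, 0.30, 0.35):
    print(f"{c:.0%}: exact {exact_uptime(c, worst_case):.1%}, "
          f"mean-value {mean_value_uptime(c, worst_case):.1%}")
```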
Last edited by Elimbras : 03/01/09 at 9:17 AM.
03/01/09, 1:55 PM #157
Safiyania
Von Kaiser
Safyania - Troll Rogue
Zul'Jin
Having spent some time trying out Empowered Renew on my priest alt, I found that my renew ticks were ticking for a lot less than I had expected, assuming the formula for determining healing per tick is as follows:
(Improved Renew) * (Twin Disciplines) * (Spiritual Healing) * (Glyph) * (Base + Spellpower * C * (Empowered Renew))
After quite a bit of jumping off of high places, I have found that the spell formula for renew ticks is actually a better fit to the following:
(Sum of Improved Renew, Twin Disciplines, Spiritual Healing and Glyph effects) * (Base + Spellpower * C * (Empowered Renew))
Having stripped my toon of all gear and talents, I have confirmed that Base = 280. By throwing gear on but having no talents and glyphs, I tentatively determined C to be approximately 0.376. Through gradual addition of the talents and the glyph of renew to the mix, I found that Improved Renew, Twin Disciplines, Spiritual Healing and the glyph of renew appear to be additive rather than multiplicative in their effects. Empowered Renew is indeed a spellpower coefficient modifier, as is clearly indicated by the tooltip.
For my priest, with 2231 spellpower, including Inner Fire but no Divine Spirit active (to compare to live), the formula derived from my empirical testing yields the following prediction for a renew tick: (1 + 0.15 + 0.05 + 0.1 + 0.25) * (280 + 2231 * 0.376 * 1.15) = 1929.26. On the PTR, observed renew ticks were 1929-1930, and the upfront healing component was on the order of 1213-1214 (more on this below). Stripping off gear down to a much lower spellpower of 963 yields a predicted tick of 1079.42; the observed range was 1079-1080, with an upfront component of 679.
With regards to the upfront healing component of an Empowered Renew, it appears that it is currently being determined as follows: Upfront Healing = 0.15 * (total amount healed by 5 unglyphed renew ticks).
Again, for 2231 spellpower: if the upfront healing were determined by the ACTUAL total healing done by my glyphed renew, it would be in the range 0.15 * 1929 * 4 <= Observed Upfront Healing <= 0.15 * 1930 * 4, i.e. 1157.4 to 1158, which it clearly was not in my testing. And what if it were calculated on 5 ticks, not 4? Then the observed range should have been on the order of 1446.75 to 1447.5. So the actual healing done by your glyphed renew ticks is clearly not how the upfront component is being calculated.
Now, what if we remove the glyph effect from the healing? Going back to my predictive model, I estimate an unglyphed renew tick at 2231 spellpower to be 1618 (which also corresponds with observed ticks for fully talented but unglyphed Empowered Renew). 5 ticks @ 1618 healing = 8090 healing done, 15% of which is 1213.5, which falls within the observed range of the upfront component of Empowered Renew. At 963 spellpower, 5 unglyphed ticks should yield ~4527 total healing, 15% of which would be 679, again corresponding to the observed upfront component.
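The additive formula above is easy to turn into a few lines of code. Here is a sketch (mine, not Safiyania's; the function name renew_tick is made up) using the empirically estimated base of 280 and coefficient of 0.376:

```python
# Predicted Empowered Renew tick with additive talent/glyph bonuses:
# Improved Renew, Twin Disciplines, Spiritual Healing, Glyph of Renew.
def renew_tick(spellpower, bonuses=(0.15, 0.05, 0.10, 0.25),
               base=280.0, coeff=0.376, empowered=1.15):
    return (1 + sum(bonuses)) * (base + spellpower * coeff * empowered)

print(round(renew_tick(2231), 2))  # 1929.26, vs. observed 1929-1930
print(round(renew_tick(963), 2))   # 1079.42, vs. observed 1079-1080
```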
03/01/09, 6:45 PM #158
Tainter
Don Flamenco
Undead Priest
Frostwhisper (EU)
There's something funny about the Renew glyph. It's meant to increase healing per tick, but with the same total healing. Somehow the way it works is screwed up though, and while it does increase the tick size, the total healing is currently less than for a non-glyphed Renew. Edit: On Live anyway.
If you can't join them? Beat them.
03/01/09, 7:48 PM #159
Sinndir
Great Tiger
Night Elf Priest
Medivh
Originally Posted by Tainter There's something funny about the Renew glyph. It's meant to increase healing per tick, but with the same total healing. Somehow the way it works is screwed up though and while it does increase the tick size the total healing is currently less than for a non-glyphed Renew. Edit: On Live anyway.
How do you figure? If your renew healed for 10,000 health (ticks of 2000) then the glyph would change it to 12,500 (ticks of 3125)
Fair enough, let's try this another way (also note that I do not currently have the glyph of renew, I'm just using a hypothetical situation).
Renew that heals for 10,000 health (ticks of 2000).
The glyph changes the ticks to 2500, but there are only four of them. Still comes to 10,000?
Last edited by Sinndir : 03/01/09 at 8:19 PM.
03/01/09, 7:48 PM #160
constantius
Soda Popinski
Nidaba - Pandaren Priest
Windrunner
Playing around with the Patchwerk test, and our tank healing (as Holy) has taken a huge hit to the regen nuts. I simply can't sustain tank healing anymore. It used to be, with careful use of IHC, I *could* spam GHeal at the same rate as paladins and their HL. On PTR (with the current build), it's not even close. The paladins can sustain mixed healing for the entire duration of 30 million HP ... I cannot. I'd try Disc, but I'm not Glyph'd.
[e] @ Sinn: no, he's right. The math doesn't work out. The glyph is supposed to increase each tick by 25% and reduce total ticks by 1, but the total healing goes down -- something's wrong with their implementation. You don't get (healing from a 5-tick renew) over (4 ticks). You get slightly less.
[e2] Alright, tried some more pulls. With 4-piece T7, two regen trinkets, and a druid keeping Rejuv (Revivify) up on me, I could get through a fight without sucking fumes. Much. When HC proc'd in that gear, it gave me the type of regen I enjoy on live. When it was down, I was hurting pretty bad. Here's hoping they avoid Patchwerk-style stand-and-nuke-heal fights for Ulduar, because Holy is pretty bad at them now. Disc seems to be ok, though, so dual-spec, go.
Last edited by constantius : 03/01/09 at 8:45 PM.
Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable subhuman who has learned to wear shoes, bathe, and not make messes in the house. - R.A. Heinlein
03/01/09, 9:44 PM #161
toth
Von Kaiser
Senres - Blood Elf Priest
Dragonmaw
I was thinking today that, especially with what we're seeing on the PTR, mp5 cloth needs to go in 3.1. It's polluting the loot tables with stuff nobody wants. Nidaba has some points in this post about mp5 vs. spirit for mana regen. My take on it is that spirit gives you almost the same static mana regen assuming 100% I5SR. In addition, holy priests gain a portion of spirit as spellpower, and mp5 gains no benefit under the new Holy Concentration while spirit regen does. This means that spirit allows our mana regen to scale with crit rating. Clearly mp5 is wasted itemization for mages, warlocks and shadow priests. I believe it is also wasted itemization for holy priests. You can make an argument that mp5 is better than spirit for disc priests, but even then it's pretty marginal. I'd personally like to see Blizzard replace mp5 with spirit on future cloth drops. For me, mp5 cloth is something I grab until I get a replacement with spirit, simply because nobody else wants it.
03/01/09, 9:45 PM #162
Sarkli
Glass Joe
Willmakucri - Blood Elf Priest
Arygos
Ok.. I want to talk about some of the things this new patch is going to bring that have either tickled the anger nerve in my brain or left me with a question. All in all I really have no complaints about what they are doing to priests. They are finally getting rid of pointless spells and talents, buffing or giving us other ones, and definitely giving us more to have fun with. Based on MMO-Champion:
With how easily fear can be interrupted, trinketed out of, or simply ignored by immune targets, I see nothing but wasted effort being put into the fear glyph. Maybe it's just me, but what I'd rather see is a cooldown reduction on this glyph instead of a longer duration. Anyone else agree? I don't ever see myself using this glyph, ESPECIALLY with an 8 sec INCREASE on its CD.
Blackout: This talent has been removed. Did anyone else catch this? I had to do a double-take.. why is this being removed? I haven't read up on much around the forums lately about possible changes, but it seems to me that this is a big mistake on Blizzard's part. IMO it's one of the biggest pains in the ass to have happen to you in a PvP setting, and it's awesome for a PvP spriest to have available. The only thing I can think of for why they are getting rid of it is diminishing returns. Is it currently affected by that? If so, why can't they just do to it what they're doing to a warrior's Charge?
PvP trinkets will now break Shackle Undead. Question on this one: what is it used on other than a pet of sorts? DK w/ Lichborne?
03/01/09, 10:17 PM #163
cs-cam
Von Kaiser
Nagrand
Originally Posted by Sarkli Blackout: This talent has been removed. Did anyone else catch this? I had to do a double-take.. why is this being removed? I haven't read up on much around the forums lately about possible changes, but it seems to me that this is a big mistake on Blizzard's part. IMO it's one of the biggest pains in the ass to have happen to you in a PvP setting, and it's awesome for a PvP spriest to have available. The only thing I can think of for why they are getting rid of it is diminishing returns. Is it currently affected by that? If so, why can't they just do to it what they're doing to a warrior's Charge?
Most RNG-based stuns have been removed due to PvP implications; it was Blackout's turn to go.
03/01/09, 10:32 PM #164
Iluminati
Piston Honda
Human Priest
Earthen Ring
That's true, however mages still have Impact, and shadow priests are much worse than mages in PvP, so there must be some other logic behind the removal. It's possible they were just shuffling talents and didn't want to put it back after the Darkness change. As for Psychic Scream: "Psychic Horror: Has been redesigned and is now a 1-pt talent. You terrify the target, causing them to tremble in horror for 3 sec and drop all weapons (disarm effect, including bows) for 10 sec. 1 minute cooldown. Instant cast. The horror effect can be dispelled, but the disarm cannot." It's up on mmo-champion but not in their talent calc. Can anyone confirm/deny this change?
03/01/09, 11:29 PM #165
Yaltus
Bald Bull
Blood Elf Priest
Mal'Ganis
Impact is almost certainly on the way out as well; it's just that Blizzard has to figure out something else to put there before they yank it. Since they were already messing with the shadow tree, they took advantage of this chance to remove Blackout.
World of Warcraft - English (NA) Forums -> Remove RNG stun = buff damage?
My center is giving way, my right is retreating; situation excellent, I am attacking.
Elitist Jerks 3.1 PTR
|
2013-06-20 05:59:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 13, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5438287854194641, "perplexity": 3841.8316907062176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710366143/warc/CC-MAIN-20130516131926-00001-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/3010753/how-to-determine-local-extrema-for-fx-x-cdot-sinx-sinx
|
# How to determine local extrema for $f(x) = x\cdot \sin(x) ^ {\sin(x)}$
I need to find the local extrema points of the following function: $$f(x) = x\cdot\sin(x) ^ {\sin(x)}$$
I was already able to derive the following: $$f'(x) = x (\ln(\sin(x))+1)\cos(x)\sin(x)^{\sin(x)}+\sin(x)^{\sin(x)}$$
• find $x$ where $f'(x)$ is equal to $0$. Then assess the sign change while crossing the roots – Makina Nov 23 '18 at 19:45
• This function is defined in $\bigcup\limits_{n\in\mathbb Z}[2n\pi,2n\pi+\pi]$. – Federico Nov 23 '18 at 19:50
• You first have to take into considerations the extreme points of the intervals – Federico Nov 23 '18 at 19:51
• Then your equation $f'=0$ inside these intervals is highly non-linear and transcendental... – Federico Nov 23 '18 at 19:52
• I don't think much can be said, except numerically – Federico Nov 23 '18 at 19:53
The domain of $$f$$ is where $$\sin x>0$$, i.e. $$\bigcup_{n\in \Bbb Z}(2n\pi ,2n\pi +\pi)$$. On this domain, setting the derivative to zero gives $$\sin x^{\sin x}=0\\\text{or}\\ x(1+\ln \sin x)\cos x+1=0$$ where $$\sin x^{\sin x}=0$$ is impossible and the second equation can only be solved numerically. [Sketch of the function omitted.]
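Added for illustration (not part of the original answer): a minimal numerical sketch for the first interval $(0,\pi)$, scanning the helper function g (my name for the second factor of $f'$) for sign changes and bisecting:

```python
# Numerically locate critical points of f on (0, pi): solve
# x*(1 + ln(sin x))*cos(x) + 1 = 0, which is f'(x) = 0 after dividing
# by the positive factor sin(x)^(sin x).
import math

def g(x):
    return x * (1 + math.log(math.sin(x))) * math.cos(x) + 1

a, b, n = 1e-6, math.pi - 1e-6, 20000
xs = [a + (b - a) * k / n for k in range(n + 1)]
roots = []
for x0, x1 in zip(xs, xs[1:]):
    if g(x0) * g(x1) < 0:            # bracketed root: bisect
        lo, hi = x0, x1
        for _ in range(60):
            mid = (lo + hi) / 2
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(round((lo + hi) / 2, 6))
print(roots)  # two critical points, near x ~ 2.18 (local max) and x ~ 2.48 (local min)
```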
|
2019-11-12 11:56:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7863858938217163, "perplexity": 263.6709044072455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665521.72/warc/CC-MAIN-20191112101343-20191112125343-00315.warc.gz"}
|
http://www.cs.columbia.edu/~rocco/papers/random09.html
|
Testing +1/-1 Weight Halfspaces.
K. Matulef and R. O'Donnell and R. Rubinfeld and R. Servedio.
13th International Workshop on Randomness and Computation (RANDOM), 2009, pp. 646--657.
Abstract:
We consider the problem of testing whether a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$ is a unate reorientation of majority (UROM), i.e., a function of the form $f(x) = \mathrm{sign}(w_1 x_1 + w_2 x_2 + \cdots + w_n x_n)$ where the weights $w_i$ take values in $\{-1,1\}$. We show that the complexity of this problem is markedly different from the problem of testing whether $f$ is a general halfspace with arbitrary weights. While the latter can be done with a number of queries that is independent of $n$ (by results of [MORS:09]), to distinguish whether $f$ is a UROM versus $\epsilon$-far from all UROMs we prove that nonadaptive algorithms must make $\Omega(\log n)$ queries. We complement this lower bound with a sublinear upper bound showing that $O(\sqrt{n}\cdot \mathrm{poly}(\frac{1}{\epsilon}))$ queries suffice.
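As an illustration of the class being tested (my own sketch, not from the paper), a UROM is just a majority vote after flipping the orientation of some coordinates:

```python
# f(x) = sign(w_1 x_1 + ... + w_n x_n) with each weight w_i in {-1, +1}.
import random

def make_urom(weights):
    assert all(w in (-1, 1) for w in weights)
    # odd n avoids ties, so the sign below is always well-defined
    return lambda x: 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1

n = 7
w = [random.choice((-1, 1)) for _ in range(n)]
f = make_urom(w)
x = [random.choice((-1, 1)) for _ in range(n)]
print(w, x, f(x))
```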
|
2018-01-18 19:57:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900742769241333, "perplexity": 1298.4614213705374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887600.12/warc/CC-MAIN-20180118190921-20180118210921-00785.warc.gz"}
|
https://www.mathway.com/examples/basic-math/data-measurement-and-statistics/finding-the-mode?id=283
|
# Basic Math Examples
Step 1
The mode is the element that occurs most in the data set. In this case, occurs times.
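Since the page's worked numbers did not survive extraction, here is a generic illustration of the rule with hypothetical data:

```python
# The mode is the element that occurs most often in the data set.
from collections import Counter

data = [3, 7, 7, 2, 9, 7, 3]            # hypothetical data set
mode, count = Counter(data).most_common(1)[0]
print(mode, count)                       # 7 occurs 3 times
```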
|
2022-06-29 15:55:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892398715019226, "perplexity": 577.1321811544886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00199.warc.gz"}
|
http://mathhelpforum.com/calculus/71558-integration-parts-what-am-i-doing-wrong-print.html
|
# Integration By Parts: What am I doing wrong?
• Feb 3rd 2009, 09:20 AM
Krooger
Integration By Parts: What am I doing wrong?
$\int 3xcos(5x)dx$
Let $u = 3x$ so, $du = 3dx$
Let $dv = cos(5x)dx$ so, $v = sin(5x)/5$
Since $\int u\,dv = uv - \int v\,du$
$\int 3xcos(5x)dx = 3xsin(5x)/5 - (1/15)\int sin(5x)dx$
$= 3xsin(5x)/5 - (1/15)[-cos(5x)/5]$
$= (3/5)xsin(5x) + (1/75)cos(5x)$
...but this is wrong.
the answer is: $(3/25)(cos(5x)+5xsin(5x))$
Where did I go wrong? Thank You
• Feb 3rd 2009, 10:14 AM
Opalg
Quote:
Originally Posted by Krooger
$\int 3xcos(5x)dx$
Let $u = 3x$ so, $du = 3dx$
Let $dv = cos(5x)dx$ so, $v = sin(5x)/5$
Since $\int u\,dv = uv - \int v\,du$
$\int 3xcos(5x)dx = 3xsin(5x)/5 - ({\color{red}3/5})\int sin(5x)dx$ (You divided by 3 instead of multiplying by 3.)
$= 3xsin(5x)/5 - (1/15)[-cos(5x)/5]$
$= (3/5)xsin(5x) + (1/75)cos(5x)$
...but this is wrong.
the answer is: $(3/25)(cos(5x)+5xsin(5x))$
Where did I go wrong? Thank You
..
• Feb 3rd 2009, 02:23 PM
Krooger
Ah I see, I was thinking you had to isolate dx in order to sub it in making my result du/3 instead of 3du.
Thank You
• Feb 3rd 2009, 03:07 PM
ahawk1
Originally Posted by Krooger $\int 3xcos(5x)dx$
Let $u = 3x$ so, $du = 3dx$
Let $dv = cos(5x)dx$ so, $v = sin(5x)/5$
alright, so you do uv - integral(v du)
so,
3xsin(5x)/5 - integral(sin(5x)/5 * 3 dx)
3xsin(5x)/5+3cos(5x)/25
do u understand how i got that?
or need me to explain further?
• Feb 3rd 2009, 03:33 PM
Krooger
Yup, got it (Rofl) I was thrown off by the fact that you can just use du; you don't have to isolate and replace dx. Because of that I was using du/3, not 3du.
Thank You again
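A quick symbolic check of the corrected result (added here, not part of the original thread):

```python
# Verify that the thread's corrected antiderivative matches the book answer.
import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(3 * x * sp.cos(5 * x), x)
book = sp.Rational(3, 25) * (sp.cos(5 * x) + 5 * x * sp.sin(5 * x))
print(sp.simplify(antideriv - book))  # 0: the two agree (up to a constant)
```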
|
2017-05-27 08:15:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525688886642456, "perplexity": 2445.720354020428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608877.60/warc/CC-MAIN-20170527075209-20170527095209-00131.warc.gz"}
|
https://orbilu.uni.lu/browse?type=author&sort_by=1&order=DESC&rpp=20&etal=3&value=Coron%2C+Jean-S%C3%A9bastien+50001378&offset=20
|
References of "Coron, Jean-Sébastien 50001378" (showing results 21 to 40 of 40):

Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. Coron, Jean-Sébastien; Roy, Arnab; Venkatesh, Srinivas Vivek. In Batina, Lejla; Robshaw, Matthew (Eds.), Cryptographic Hardware and Embedded Systems – CHES 2014 (2014). We describe a new technique for evaluating polynomials over binary finite fields. This is useful in the context of anti-DPA countermeasures when an S-box is expressed as a polynomial over a binary finite field. For $n$-bit S-boxes our new technique has heuristic complexity ${\cal O}(2^{n/2}/\sqrt{n})$ instead of the ${\cal O}(2^{n/2})$ proven complexity of the Parity-Split method. We also prove a lower bound of $\Omega(2^{n/2}/\sqrt{n})$ on the complexity of any method to evaluate $n$-bit S-boxes; this shows that our method is asymptotically optimal. Here, complexity refers to the number of non-linear multiplications required to evaluate the polynomial corresponding to an S-box. In practice we can evaluate any 8-bit S-box in 10 non-linear multiplications instead of 16 in the Roy-Vivek paper from CHES 2013, and the DES S-boxes in 4 non-linear multiplications instead of 7. We also evaluate any 4-bit S-box in 2 non-linear multiplications instead of 3. Hence our method achieves optimal complexity for the PRESENT S-box.

A Note on the Bivariate Coppersmith Theorem. Coron, Jean-Sébastien; Kirichenko, Alexey; Tibouchi, Mehdi. Journal of Cryptology (2013), 26(2), 246-250.

Practical Multilinear Maps over the Integers. Coron, Jean-Sébastien; Lepoint, Tancrède; Tibouchi, Mehdi. CRYPTO (1) (2013).

Batch Fully Homomorphic Encryption over the Integers. Cheon, Jung Hee; Coron, Jean-Sébastien; Kim, Jinsu et al. EUROCRYPT (2013).

Public Key Compression and Modulus Switching for Fully Homomorphic Encryption over the Integers. Coron, Jean-Sébastien; Naccache, David; Tibouchi, Mehdi. EUROCRYPT (2012).

Conversion of Security Proofs from One Leakage Model to Another: A New Issue. Coron, Jean-Sébastien; Giraud, Christophe; Prouff, Emmanuel et al. Proceedings of COSADE 2012. To guarantee the security of a cryptographic implementation against Side Channel Attacks, a common approach is to formally prove the security of the corresponding scheme in a model as pertinent as possible. Nowadays, security proofs for masking schemes in the literature are usually conducted for models where only the manipulated data are assumed to leak. However, in practice the leakage is better modeled encompassing the memory transitions, as e.g. in the Hamming distance model. From this observation, a natural question is to decide to what extent a countermeasure proved secure in the first model stays secure in the second. In this paper, we look at this issue and we show that it must definitely be taken into account. Indeed, we show that a countermeasure proved secure against second-order side-channel attacks in the first model becomes vulnerable to a first-order side-channel attack in the second model. Our result emphasizes the issue of porting an implementation from devices leaking only on the manipulated data to devices leaking on the memory transitions.

Fully Homomorphic Encryption over the Integers with Shorter Public Keys. Coron, Jean-Sébastien; Mandal, Avradip; Naccache, David et al. CRYPTO (2011).

Improved Generic Algorithms for Hard Knapsacks. Becker, Anja; Coron, Jean-Sébastien; Joux, Antoine. EUROCRYPT (2011).

A Domain Extender for the Ideal Cipher. Coron, Jean-Sébastien; Dodis, Yevgeniy; Mandal, Avradip et al. Proceedings of TCC 2010. We describe the first domain extender for ideal ciphers, i.e. we show a construction that is indifferentiable from a 2n-bit ideal cipher, given a n-bit ideal cipher. Our construction is based on a 3-round Feistel, and is more efficient than first building a n-bit random oracle from a n-bit ideal cipher (as in [9]) and then a 2n-bit ideal cipher from a n-bit random oracle (as in [10], using a 6-round Feistel). We also show that 2 rounds are not enough for indifferentiability by exhibiting a simple attack. We also consider our construction in the standard model: we show that 2 rounds are enough to get a 2n-bit tweakable block-cipher from a n-bit tweakable block-cipher, and we show that with 3 rounds we can get beyond the birthday security bound.

Analysis and Improvement of the Random Delay Countermeasure of CHES 2009. Coron, Jean-Sébastien; Kizhvatov, Ilya. Proceedings of CHES 2010. Random delays are often inserted in embedded software to protect against side-channel and fault attacks. At CHES 2009 a new method for generation of random delays was described that increases the attacker's uncertainty about the position of sensitive operations. In this paper we show that the CHES 2009 method is less secure than claimed. We describe an improved method for random delay generation which does not suffer from the same security weakness. We also show that the paper's criterion to measure the security of random delays can be misleading, so we introduce a new criterion for random delays which is directly connected to the number of acquisitions required to break an implementation. We mount a power analysis attack against an 8-bit implementation of the improved method, verifying its higher security in practice.

Efficient Indifferentiable Hashing into Ordinary Elliptic Curves. Brier, Eric; Coron, Jean-Sébastien; Icart, Thomas et al. CRYPTO (2010).

PSS Is Secure against Random Fault Attacks. Coron, Jean-Sébastien; Mandal, Avradip. Proceedings of Asiacrypt 2009. A fault attack consists in inducing hardware malfunctions in order to recover secrets from electronic devices. One of the most famous fault attacks is Bellcore's attack against RSA with CRT; it consists in inducing a fault modulo p but not modulo q at the signature generation step; then by taking a gcd the attacker can recover the factorization of N = pq. The Bellcore attack applies to any encoding function that is deterministic, for example FDH. Recently, the attack was extended to randomized encodings based on the ISO/IEC 9796-2 signature standard. Extending the attack to other randomized encodings remains an open problem. In this paper, we show that the Bellcore attack cannot be applied to the PSS encoding; namely we show that PSS is provably secure against random fault attacks in the random oracle model, assuming that inverting RSA is hard.

Fault Attacks on RSA Signatures with Partially Unknown Messages. Coron, Jean-Sébastien; Joux, Antoine; Kizhvatov, Ilya et al. Proceedings of CHES 2009. Fault attacks exploit hardware malfunctions to recover secrets from embedded electronic devices. In the late 90's, Boneh, DeMillo and Lipton introduced fault-based attacks on CRT-RSA. These attacks factor the signer's modulus when the message padding function is deterministic. However, the attack does not apply when the message is partially unknown, for example when messages contain some randomness which is recovered only when verifying a correct signature. In this paper we successfully extend RSA fault attacks to a large class of partially known message configurations. The new attacks rely on Coppersmith's algorithm for finding small roots of multivariate polynomial equations. We illustrate the approach by successfully attacking several randomized versions of the ISO/IEC 9796-2 encoding standard. Practical experiments show that a 2048-bit modulus can be factored in less than a minute given one faulty signature containing 160 random bits and an unknown 160-bit message digest.

Practical Cryptanalysis of ISO/IEC 9796-2 and EMV Signatures. Coron, Jean-Sébastien; Naccache, David; Tibouchi, Mehdi et al. Proceedings of CRYPTO 2009. In 1999, Coron, Naccache and Stern discovered an existential signature forgery for two popular RSA signature standards, ISO/IEC 9796-1 and 2. Following this attack, ISO/IEC 9796-1 was withdrawn and ISO/IEC 9796-2 was amended by increasing the message digest to at least 160 bits. Attacking this amended version required at least 2^{61} operations. In this paper, we exhibit algorithmic refinements allowing to attack the amended (currently valid) version of ISO/IEC 9796-2 for all modulus sizes. A practical forgery was computed in only two days using 19 servers on the Amazon EC2 grid for a total cost of about US$800. The forgery was implemented for e = 2, but attacking odd exponents will not take longer. The forgery was computed for the RSA-2048 challenge modulus, whose factorization is still unknown. The new attack blends several theoretical tools. These do not change the asymptotic complexity of Coron et al.'s technique but significantly accelerate it for parameter values previously considered beyond reach. While less efficient (US$45,000), the acceleration also extends to EMV signatures. EMV is an ISO/IEC 9796-2-compliant format with extra redundancy. Luckily, this attack does not threaten any of the 730 million EMV payment cards in circulation, for operational reasons. Costs are per modulus: after a first forgery for a given modulus, obtaining more forgeries is virtually immediate.

Analysis of the split mask countermeasure for embedded systems. Coron, Jean-Sébastien; Kizhvatov, Ilya. 4th Workshop on Embedded Systems Security (2009). We analyze a countermeasure against differential power and electromagnetic attacks that was recently introduced under the name of split mask. We show a general weakness of the split mask countermeasure that makes standard DPA attacks with a full key recovery applicable to masked AES and DES implementations. Complexity of the attacks is the same as for unmasked implementations. We implement the most efficient attack on an 8-bit AVR microcontroller. We also show that the strengthened variant of the countermeasure is susceptible to a second-order DPA attack independently of the number of mask tables used.

An Efficient Method for Random Delay Generation in Embedded Software. Coron, Jean-Sébastien; Kizhvatov, Ilya. Proceedings of CHES 2009. Random delays are a countermeasure against a range of side channel and fault attacks that is often implemented in embedded software. We propose a new method for generation of random delays and a criterion for measuring the efficiency of a random delay countermeasure. We implement this new method along with the existing ones on an 8-bit platform and mount practical side-channel attacks against the implementations. We show that the new method is significantly more secure in practice than the previously published solutions and also more lightweight.

The Random Oracle Model and the Ideal Cipher Model Are Equivalent. Coron, Jean-Sébastien; Patarin, Jacques; Seurin, Yannick. Advances in Cryptology (2008).

A New DPA Countermeasure Based on Permutation Tables. Coron, Jean-Sébastien. Advances in Cryptology (2008).

Attack and Improvement of a Secure S-Box Calculation Based on the Fourier Transform. Coron, Jean-Sébastien; Giraud, Christophe; Prouff, Emmanuel et al. Advances in Cryptology (2008).

Cryptanalysis of ISO/IEC 9796-1. Coppersmith, Don; Coron, Jean-Sébastien; Grieu, François et al. Journal of Cryptology (2008), 21(1), 27-51.
|
2023-02-08 14:02:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835378170013428, "perplexity": 8449.593683382876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00840.warc.gz"}
|
http://www.jeremy-oakley.staff.shef.ac.uk/post/elicitation-cricket/
|
# Eliciting a distribution for a cricket score
In this post I’ll illustrate the process of eliciting (my own) distribution following a SHELF approach. The true value of the uncertain quantity is now known; you’ll have to take my word for it that I haven’t cheated! But in any case, the purpose of this post is not to validate the method, or to test my own ability in making probability judgements: I want to see to the extent to which I can justify my probability values, based on the evidence I have available to me.
## The quantity of interest
I’m going to use a cricket example, but I’ll try to write this so that you can follow the elicitation without knowing anything about cricket.
It’s the end of the first day of a five-day match between Sri Lanka and England (November 6th, 2018). The England batsmen Ben Foakes has scored 87 runs not out. My quantity of interest $X$ is how many additional runs he will score in his first innings (so his total score will be $87+X$). I will treat $X$ as continuous.
## My evidence dossier
I first have to compile all the evidence on which I’ll base my judgements. There is a wealth of data available (which does make the elicitation problem somewhat easier than some others), but none of it is directly relevant, in that Foakes is playing his first international match; this situation has never occurred before. I will have to consider the relevance of each piece of evidence.
One could go a lot more in depth here, but I’ll restrict myself to the following. (If you’re not a cricket enthusiast, you can skip over this list, but one detail that’s helpful to know is that Foakes has to bat with a partner; it’s important how good his two remaining partners are.)
1. The state of the match at the end of day 1. (See the day 1 summary at the bottom of the linked page)
2. Foakes’ domestic cricket batting records (Average 40.6, eight hundreds, highest score: 141 not out)
3. The batting records for the two other remaining batsmen: Leach (domestic) and Anderson (international). (Averages of 12.55 and 9.8 respectively)
4. Previous 9th and 10th wicket partnerships for England in Sri Lanka. (Previous highest 25 and 41 respectively.)
5. Scores above 100 by players on their test match debuts. (Record for the ‘modern era’ is Jacques Rudolph’s 222 not out. Most recent English batsmen to do this was Matt Prior with 126 not out.)
### ‘Missing’ evidence
One can also think about evidence that might exist, but hasn’t been provided. For a real SHELF workshop, the organisers may try to obtain it, but simply making a note would at least flag a point for consideration. An example here would be
6. I think there is an increased risk of a batsman getting out at the start of a new session/day; the bowlers will be rested, and the batsman will need to regain his concentration. I can recall instances of this happening, but I don't have any data, and there's a risk of an availability bias.
## My plausible range
Choosing a lower plausible limit is easy: 0 is a real possibility here, so I'll set $L=0$. Choosing the upper plausible limit $U$ is harder. I could simply pick some large value, say 1000; that would be an absurdly high value (although…) But I don't think this is good practice. In general, one needs to give serious consideration to what the extremes could be, and if I just casually pick some high number, I haven't done this. Instead, I might
• search for the most comparable data I can find related to extreme values;
• make a judgement about how relevant those data are to the situation I have here.
I’ve picked out item 5 in my evidence dossier as being most relevant. Now I need to consider:
• how do the conditions relating to the observations in item 5 compare with those here? Are they more or less favourable towards large values of $X$?
I think the conditions here are considerably less favourable to large $X$, based on the match situation (item 1), the two remaining batsmen (item 3), and the point I raised in item 6. I’m going to think in terms of multiples of 50, which I think in the context of cricket makes some sense: each multiple of 50 is acknowledged as a sort of milestone. Looking at item 4, although I think $X>50$ is unlikely, I wouldn’t rule it out. I think $X>150$ is too implausible, from what I’ve said about item 5 and the conditions here, so I’ll settle on $U=100$ for my upper plausible limit.
## A comment on uniform distributions
I can’t claim that there is a ‘right’ distribution given my evidence dossier, but I think one can argue that some distributions are ‘wrong’. What about a uniform distribution over $[0, 100]$? This might seem like a conservative or ‘vaguely informative’ choice, but it’s important to think about just how much probability this distribution would give to larger values of $X$. The evidence really doesn’t support all values in this range equally; I think it’s more supportive of values closer to 0 than to 100, so I think it would be hard to justify this choice of distribution, given the evidence available.
## Eliciting tertiles
I like thinking in terms of tertiles: judging a 33rd and 66th percentile for $X$, which I’ll denote by $T_1$ and $T_2$. I get three intervals $[L, T_1]$, $[T_1, T_2]$, and $[T_2, U]$, each of which I think has a ‘reasonable chance’ (1 in 3) of containing $X$, but I would bet against a specific interval containing $X$. I wouldn’t be claiming, for example, that we should ‘expect’ $X$ to lie in $[T_1, T_2]$: I think it’s twice as likely to lie outside.
In judging $T_1$, I think are two separate plausible ‘mechanisms’ that would give low $X$: either Foakes gets out (item 6), or his two partners get out quickly (item 3). My precise choice of $T_1$ is a little arbitrary, but I think $T_1=10$ is reasonable in describing a moderate (1 in 3) chance that Foakes adds little to his score.
I find it a little harder to decide where to place $T_2$, but I will set it at 25. I think Foakes will need some support from at least one of the other batsmen to do this, and I think it’s possible one of them will stick around (I can still remember the earnest applause for Peter Such’s 50 ball duck…). But based on item 4, I still wouldn’t expect $X$ to be very large.
I also need to choose my median value, which has to be between 10 and 25. I’ll set this slightly closer to 10, choosing a median of $M=15$. One point of reference here is that $X=13$ would take him to 100 overall; he and his partners will be highly motivated to get 100, and I think it more likely than not, that he achieves it. It’s hard to be precise about this, though very precise placement of my median will probably not be important.
## Fitting and feedback
I’ll now fit distributions to these judgements, and look at the implied 5th and 95th percentiles (for three distributions only: a gamma, a log normal, and a log Student-$t_3$):
library(SHELF)
v <- c(10, 15, 25)
p <- c(0.33, 0.5, 0.66)
myfit <- fitdist(vals = v, probs = p, lower = 0, upper = 100)
feedback(myfit, quantiles = c(0.05, 0.95), values = 13)$fitted.quantiles[, 3:5]
## gamma lognormal logt
## 0.05 1.39 2.64 1.56
## 0.95 65.10 92.70 157.00
There’s nothing to choose between the fitted 5th percentiles, but the fitted 95th for the log normal and log Student-$t$ are too high for me; the gamma seems more reasonable. So this would be my final elicited distribution:
plotfit(myfit, d = "gamma", ql = 0.05, qu = 0.95)
Note that feedback is essential here: there’s nothing to choose between the distributions regarding their fit to my tertiles and median; I have to see what they do in the tail.
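As a rough cross-check of the fit (my own sketch, independent of SHELF, with all variable names made up), one can search for the gamma parameters matching the elicited quantiles directly:

```python
# Fit a gamma distribution to the elicited quantiles by least squares,
# then read off the 5th and 95th percentiles for comparison with SHELF.
import numpy as np
from scipy import stats, optimize

vals = np.array([10.0, 15.0, 25.0])     # T1, median, T2
probs = np.array([0.33, 0.50, 0.66])

def loss(log_params):
    shape, scale = np.exp(log_params)    # log-parametrize to stay positive
    return np.sum((stats.gamma.cdf(vals, shape, scale=scale) - probs) ** 2)

res = optimize.minimize(loss, x0=np.log([1.5, 10.0]))
shape, scale = np.exp(res.x)
print(stats.gamma.ppf([0.05, 0.95], shape, scale=scale))
# should land near SHELF's fitted gamma percentiles of 1.39 and 65.10
```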
## And finally…
Foakes’ was out for 107 on day two, so the true value of $X$ was 20. For this sort of problem, it’s probably not too hard to be conservative and avoid overconfidence; it’s perhaps harder to avoid underconfidence. I was fairly comfortable in using the evidence to choose $T_1$, but I found it harder to think about how large $X$ might reasonably be, and place $T_2$.
|
2021-10-22 13:04:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7535349726676941, "perplexity": 924.8083318836444}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00294.warc.gz"}
|
http://mathhelpforum.com/trigonometry/175362-need-help-using-trig-functions-verify-identities.html
|
Math Help - Need help on using trig functions to verify identities
1. Need help on using trig functions to verify identities
I'm not sure if i'm doing this problem correctly. :
cot z sin z + tan z cos z
Heres one way i did it:
cotz = (cos / sin) sin z
= cos z
= cot z + (sin/cos) cos z
This is what i'm left with:
= cot z + cos z
2. Originally Posted by MajorJohnson
I'm not sure if i'm doing this problem correctly. :
cot z sin z + tan z cos z
Heres one way i did it:
cotz = (cos / sin) sin z
= cos z
= cot z + (sin/cos) cos z
This is what i'm left with:
= cot z + cos z
Hi MajorJohnson,
I don't see an identity here, but if you just want to simplify....
$\cot z \sin z + \tan z \cos z = \dfrac{\cos z}{\sin z}\sin z+\dfrac{\sin z}{\cos z}\cos z= \boxed{\cos z + \sin z}$
3. Originally Posted by masters
Originally Posted by MajorJohnson
I'm not sure if i'm doing this problem correctly. :
cot z sin z + tan z cos z
Heres one way i did it:
cotz = (cos / sin) sin z
= cos z
= cot z + (sin/cos) cos z
This is what i'm left with:
= cot z + cos z
Hi MajorJohnson,
I don't see an identity here, but if you just want to simplify....
$\cot z \sin z + \tan z \cos z = \dfrac{\cos z}{\sin z}\sin z+\dfrac{\sin z}{\cos z}\cos z= \boxed{\cos z + \sin z}$
Sorry that cot z should be a cos z
And the method you did, was the other way i had tried to solve it and got the same answer. I just wasn't sure if it was right or not.
Thanks.
Code:
Here's another one i'm stuck on:
1 - sin^2 / csc^2 - 1 =
Heres what i've gottten so far:
1/csc ^ 2 - sin ^ 2 / 1 =
I've i try converting them using fundamental I.D. they won't cancel out, it'll just be subtracted to equal 0. Would that be the answer then?
Here's another one
sec x * sin x/tan x =
1 / cos x * sin x / sin x /cos x
(sin x / cos x ) (sin x / cos x) =
sin x /cos x ^ 2 * sin x =
sinx ^ 2 / cos x ^ 2
4. $\cos z \sin z + \dfrac{\sin z}{\cos z} \cos z = \cos z \sin z + \sin z = \sin z(\cos z + 1)$
5. Originally Posted by MajorJohnson
Code:
Here's another one i'm stuck on:
1 - sin^2 / csc^2 - 1 =
Heres what i've gottten so far:
1/csc ^ 2 - sin ^ 2 / 1 =
Hello again MajorJohnson,
$\dfrac{1-\sin^2 \theta}{\csc^2 \theta -1}$
Using your Pythagorean Identities, you can simplify this way:
$\dfrac{1-\sin^2 \theta}{\csc^2 \theta -1}=\dfrac{\cos^2 \theta}{\cot^2\theta}=\dfrac{\cos^2\theta}{\frac{\cos^2\theta}{\sin^2\theta}}=\dfrac{\cos^2\theta}{1} \times \dfrac{\sin^2\theta}{\cos^2\theta}=\sin^2\theta$
Originally Posted by MajorJohnson
Here's another one
sec x * sin x/tan x =
1 / cos x * sin x / sin x /cos x
(sin x / cos x ) (sin x / cos x) =
sin x /cos x ^ 2 * sin x =
sinx ^ 2 / cos x ^ 2
For this one...
$\dfrac{\sec x \sin x}{\tan x}=\dfrac{\frac{1}{\cos x} \sin x}{\frac{\sin x}{\cos x}}=\dfrac{\frac{\sin x}{\cos x}}{\frac{\sin x}{\cos x}}=1$
6. okay, thank you. One more question, how do you get your equations to look like that in your post?
7. Originally Posted by MajorJohnson
okay, thank you. One more question, how do you get your equations to look like that in your post?
Latex Help Forum
8. Thanks,
I might just keep using this thread to post more problems, up just to make sure i understand this. After working through these for some time, i feel like i'm almost comfortable with proving trig identities.
1) $sin x * tan x + cos x$
$sin x * sin x / cos x + cos x$
$sin x ^ 2 / cos x + cos x = Answer: sin x ^ 2$
2) $\dfrac{1}{1 + \cos x} + \dfrac{1}{1 - \cos x} =$
$\dfrac{1 - \cos x}{(1 + \cos x)(1 - \cos x)} + \dfrac{1 + \cos x}{(1 + \cos x)(1 - \cos x)} =$ Answer: 2
For this problem, does it matter if the right side's answer is the same value on the left side but flipped?
3) $sin t * tan t = 1 - cos ^ 2 t / cos t$
Doing the right side:
$sin ^ 2 t/ cos t$
$sin t / cos t (sin t)$
$tan t sec t$
9. Originally Posted by MajorJohnson
Thanks,
I might just keep using this thread to post more problems, up just to make sure i understand this. After working through these for some time, i feel like i'm almost comfortable with proving trig identities.
1) $sin x * tan x + cos x$
$sin x * sin x / cos x + cos x$
$sin x ^ 2 / cos x + cos x = Answer: sin x ^ 2$ no ... should be sec(x)
2) $\dfrac{1}{1 + \cos x} + \dfrac{1}{1 - \cos x} =$
$\dfrac{1 - \cos x}{(1 + \cos x)(1 - \cos x)} + \dfrac{1 + \cos x}{(1 + \cos x)(1 - \cos x)} =$ Answer: 2 no ... 2csc^2(x)
I'm confused with this one:
3) $sin t * tan t = 1 - cos ^ 2 t / cos t$
Doing the right side:
From this point I'm not sure what to do next. Would I factor sin^2 t / cos t or convert it into tan^2 t?
$\sin^2 t/\cos t$ ... sin^2(t)/cos(t) = sin(t) * sin(t)/cos(t) = sin(t) * tan(t)
1. start a new thread ... this one is getting worn.
2. use parentheses or learn how to form a latex fraction ... e.g. \dfrac{\sin{x}}{\cos{x}} inside of the math tags will yield $\dfrac{\sin{x}}{\cos{x}}$
https://social.technet.microsoft.com/Forums/en-US/a7c6db82-bf57-405b-8c43-222e13034230/command-to-find-out-office-bit-version-for-remote-computers?forum=winserverpowershell
# Command to find out office bit version for remote computers
• ### Question
• Hello,
Is there a cmd line that will return info on whether an office install
is 32 or 64 bit on remote computers?
I've found info on how to obtain installed apps on remote computers,
but not the bit version:
Get-WmiObject -Class Win32_Product -Computer *remote computer name*
Thanks!
Thursday, March 7, 2019 9:22 PM
### All replies
Look at the install folder. If it is Program Files (x86), it is 32-bit.
I can almost guarantee that all are 32-bit, as that is the MS recommendation.
\_(ツ)_/
Thursday, March 7, 2019 9:58 PM
• Hi,
Indeed, it will "most likely" be 32bit, however one way is to check the Windows Registry, for example:
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" | Select Platform
Best regards,
Leon
Thursday, March 7, 2019 10:02 PM
• Thanks, but I need something that will obtain this info off of a remote computer.
Is there something I could add to make that work?
Thursday, March 7, 2019 10:04 PM
• I need this info off of remote computers.
Thursday, March 7, 2019 10:05 PM
The registry method will not work for older versions of Office.
If you are using the O365 install, the Win32_Product and the registry versions will not work.
You can run the above code using remoting.
You can look in the registry to find other methods to detect Office installations and configurations.
\_(ツ)_/
Thursday, March 7, 2019 10:10 PM
• You can also query remotes like this:
reg query \\alpha\HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration /v platform
Where "alpha" is the remote system.
\_(ツ)_/
Thursday, March 7, 2019 10:16 PM
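For reference, the same query can be made from Python's standard library; this is a minimal sketch (reusing the example machine name "alpha" from above), and it carries the same Remote Registry service requirement that comes up later in this thread:
import winreg

# Connect to HKLM on the remote machine "alpha" (requires the Remote
# Registry service to be running on the target, just like reg query).
hive = winreg.ConnectRegistry(r"\\alpha", winreg.HKEY_LOCAL_MACHINE)
key = winreg.OpenKey(hive, r"SOFTWARE\Microsoft\Office\ClickToRun\Configuration")
value, _ = winreg.QueryValueEx(key, "Platform")
print(value)  # "x86" for 32-bit Office, "x64" for 64-bit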
• You can also query remotes like this:
reg query \\alpha\HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration /v platform
Where "alpha" is the remote system.
\_(ツ)_/
Thanks. I tried that using both just the computer name and also the full computer name (i.e. sfd-jkotson10 & sfd-jkotson10.ngad.local), and in both instances I get: ERROR: The network path was not found.
I've verified the computer name by pinging it, and by running the PowerShell command to obtain the serial number.
Thursday, March 7, 2019 10:33 PM
The reg query requires the Remote Registry service to be enabled and running.
Thursday, March 7, 2019 10:37 PM
The reg query requires the Remote Registry service to be enabled and running.
Ahh... gotcha. Thanks. Our computers are imaged & managed out of our NY office (I'm on the West Coast). I just tried 5 different machines and get the same "network path not found".
The service is disabled on my PC, so it appears that's the case on all of the PCs here.
Thursday, March 7, 2019 10:44 PM
• Only two choices available. Look at the folders or use remoting.
\_(ツ)_/
Thursday, March 7, 2019 10:54 PM
• Hi,
https://ask.sagemath.org/questions/27321/revisions/
# Revision history [back]
### Seeking an efficient filter for partitions.
From the docs:
sage: Partitions(4, max_part=2).list()
[[2, 2], [2, 1, 1], [1, 1, 1, 1]]
I find this parlance confusing. Obviously the partition [1, 1, 1, 1] has no max part = 2. Be that as it may, I do want to filter those partitions whose greatest part is 2, so in the example it would return
[[2, 2], [2, 1, 1]].
What is the most efficient way to implement
P(n,k) = Partitions(n, MAX_PART=k)
where MAX_PART is defined in my sense?
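One way to implement this (a sketch in Sage; the edge cases assume 1 ≤ k ≤ n): a partition of n whose greatest part is exactly k is k followed by a partition of n - k with parts at most k, so
sage: def P(n, k):
....:     return [Partition([k] + list(p)) for p in Partitions(n - k, max_part=k)]
sage: P(4, 2)
[[2, 2], [2, 1, 1]]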
https://ok-em.com/
# Finding the diagonal of a square
The formula for the diagonal of a square is d = a√2, where 'd' is the diagonal and 'a' is the side of the square. The formula is derived using the Pythagoras theorem.
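For completeness, the derivation: the diagonal is the hypotenuse of a right triangle whose legs are two sides of the square, so by the Pythagoras theorem
$d^2 = a^2 + a^2 = 2a^2 \implies d = a\sqrt{2}$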
## How to Calculate the Diagonal Distance Between Corners of a Square
To calculate the diagonal of a square, multiply the length of the side by the square root of 2: $d = a\cdot\sqrt{2}$.
## How to Find the Diagonal of a Square? (Examples)
The diagonal length of a square is proportional to its side: multiply the length of the side by the square root of 2, d = a√2. So, for example, if the side is 5, the diagonal is 5√2 ≈ 7.07.
## 3 Ways to Calculate a Diagonal of a Square
Find the side length of the square, then plug the side length into the formula d = a√2. In this worked example, the length of the diagonal comes out to about 15.5 cm.
## Diagonals of a square with calculator
The perimeter of a square is 48. What is the length of its diagonal? Explanation: Perimeter = side × 4, so 48 = side × 4 and side = 12. We can break the square into two equal right triangles, and the diagonal is their shared hypotenuse: d = 12√2 ≈ 16.97.
https://blog.h2o.ai/category/technical/
## Scalable Automatic Machine Learning: Introducing H2O’s AutoML
Prepared by: Erin LeDell, Navdeep Gill & Ray Peck
In recent years, the demand for machine learning experts has outpaced the supply, despite the surge of people entering the field. To address this gap, there have been big strides in the development of user-friendly machine learning software that can be used by non-experts and experts, alike. The first steps toward simplifying machine learning involved developing simple, unified interfaces to a variety of machine learning algorithms (e.g. H2O).
Although H2O has made it easy for non-experts to experiment with machine learning, there is still a fair bit of knowledge and background in data science required to produce high-performing machine learning models. Deep Neural Networks in particular are notoriously difficult for a non-expert to tune properly. We have designed an easy-to-use interface which automates the process of training a large, diverse selection of candidate models and training a stacked ensemble on the resulting models (which often leads to an even better model). Making its debut in the latest "Preview Release" of H2O, version 3.12.0.1 (aka "Vapnik"), we introduce H2O's AutoML for Scalable Automatic Machine Learning.
H2O's AutoML can be used for automating a large part of the machine learning workflow, which includes automatic training and tuning of many models within a user-specified time limit. The user can also use a performance-metric-based stopping criterion for the AutoML process rather than a specific time constraint. Stacked Ensembles will be automatically trained on the collection of individual models to produce a highly predictive ensemble model which, in most cases, will be the top-performing model in the AutoML Leaderboard.
### AutoML Interface
We provide a simple function that performs a process that would typically require many lines of code. This frees up users to focus on other data science pipeline tasks such as data preprocessing, feature engineering and model deployment.
R:
aml <- h2o.automl(x = x, y = y, training_frame = train,
max_runtime_secs = 3600)
Python:
aml = H2OAutoML(max_runtime_secs = 3600)
aml.train(x = x, y = y, training_frame = train)
Flow (H2O's Web GUI):
Each AutoML run returns a "Leaderboard" of models, ranked by a default performance metric. Here is an example leaderboard for a binary classification task:
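The leaderboard screenshot is omitted here, but the same table can be pulled programmatically; a minimal sketch in Python, assuming the aml object from the example above:
lb = aml.leaderboard    # an H2OFrame, ranked by the default metric
print(lb.head())        # top models with their model_ids and scores
best = aml.leader       # handle to the top-ranked model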
More information, and full R and Python code examples are available on the H2O 3.12.0.1 AutoML docs page in the H2O User Guide.
## H2O announces GPU Open Analytics Initiative with MapD & Continuum
H2O.ai, Continuum Analytics, and MapD Technologies have announced the formation of the GPU Open Analytics Initiative (GOAI) to create common data frameworks enabling developers and statistical researchers to accelerate data science on GPUs. GOAI will foster the development of a data science ecosystem on GPUs by allowing resident applications to interchange data seamlessly and efficiently. BlazingDB, Graphistry and Gunrock from UC Davis led by CUDA Fellow John Owens have joined the founding members to contribute their technical expertise.
The formation of the Initiative comes at a time when analytics and machine learning workloads are increasingly being migrated to GPUs. However, while individually powerful, these workloads have not been able to benefit from the power of end-to-end GPU computing. A common standard will enable intercommunication between the different data applications and speed up the entire workflow, removing latency and decreasing the complexity of data flows between core analytical applications.
At the GPU Technology Conference (GTC), NVIDIA’s annual GPU developers’ conference, the Initiative announced its first project: an open source GPU Data Frame with a corresponding Python API. The GPU Data Frame is a common API that enables efficient interchange of data between processes running on the GPU. End-to-end computation on the GPU avoids transfers back to the CPU or copying of in-memory data reducing compute time and cost for high-performance analytics common in artificial intelligence workloads.
Users of the MapD Core database can output the results of a SQL query into the GPU Data Frame, which then can be manipulated by the Continuum Analytics’ Anaconda NumPy-like Python API or used as input into the H2O suite of machine learning algorithms without additional data manipulation. In early internal tests, this approach exhibited order-of-magnitude improvements in processing times compared to passing the data between applications on a CPU.
“The data science and analytics communities are rapidly adopting GPU computing for machine learning and deep learning. However, CPU-based systems still handle tasks like subsetting and preprocessing training data, which creates a significant bottleneck,” said Todd Mostak, CEO and co-founder of MapD Technologies. “The GPU Data Frame makes it easy to run everything from ingestion to preprocessing to training and visualization directly on the GPU. This efficient data interchange will improve performance, encouraging development of ever more sophisticated GPU-based applications.”
“GPU Data Frame relies on the Anaconda platform as the foundational fabric that brings data science technologies together to take full advantage of GPU performance gains,” said Travis Oliphant, co-founder and chief data scientist of Continuum Analytics. “Using NVIDIA’s technology, Anaconda is mobilizing the Open Data Science movement by helping teams avoid the data transfer process between CPUs and GPUs and move nimbly toward their larger business goals. The key to producing this kind of innovation are great partners like H2O and MapD.”
“Truly diverse open source ecosystems are essential for adoption – we are excited to start GOAI for GPUs alongside leaders in data and analytics pipeline to help standardize data formats,” said Sri Ambati, CEO and co-founder of H2O.ai. “GOAI is a call for the community of data developers and researchers to join the movement to speed up analytics and GPU adoption in the enterprise.”
The GPU Open Analytics Initiative is actively welcoming participants who are committed to open source and to GPUs as a computing platform.
Details of the GPU Data Frame can be found at the Initiative’s Github repo.
## Use H2O.ai on Azure HDInsight
We’re hosting an upcoming webinar to present you how to use H2O on HDInsight and to answer your questions. Sign up for our upcoming webinar on combining H2O and Azure HDInsight.
We recently announced that H2O and Microsoft Azure HDInsight have integrated to provide Data Scientists with a Leading Combination of Engines for Machine Learning and Deep Learning. Through H2O’s AI platform and its Sparkling Water solution, users can combine the fast, scalable machine learning algorithms of H2O with the capabilities of Spark, as well as drive computation from Scala/R/Python and utilize the H2O Flow UI, providing an ideal machine learning platform for application developers.
In this blog, we will provide a detailed step-by-step guide to help you set up the first H2O on HDInsight solution.
Step 1: setting up the environment
The first step is to create an HDInsight cluster with H2O installed. You can either create an HDInsight cluster and install H2O during provision time, or you can also install H2O on an existing cluster. Please note that H2O on HDInsight only works for Spark 2.0 on HDInsight 3.5 as of today, which is the default version of HDInsight.
Please note that we've recently updated our UI to require fewer clicks, so you need to click the "custom" button to install applications on HDInsight.
Step 2: Running your first application
After installing H2O on HDInsight, you can simply use the built-in Jupyter notebooks to write your first H2O on HDInsight applications. You can simply go to (https://yourclustername.azurehdinsight.net/jupyter) to open the Jupyter Notebook. You will see a folder named “H2O-PySparkling-Examples”.
There are a few examples in the folder, but I recommend starting with the one named “Sentiment_analysis_with_Sparkling_Water.ipynb”. Most of the details on how to use the H2O PySparkling Water APIs are already covered in the Notebook itself, so here I will give some high-level overviews.
The first thing you need to do is to configure the environment. Most of the configurations are already taken care by the system, such as the FLOW UI address, Spark jar location, the Sparkling water egg file, etc.
There are three important parameters to configure: the driver memory, the executor memory, and the number of executors. The default values are optimized for the default 4-node cluster, but your cluster size might vary.
Tuning these parameters is outside the scope of this blog, as it is more of a Spark resource tuning problem. There are a few good reference articles, such as this one.
Note that all spark applications deployed using a Jupyter Notebook will have “yarn-cluster” deploy-mode. This means that the spark driver node will be allocated on any worker node of the cluster, not on the head nodes.
In this example, we simply allocate 75% of an HDInsight cluster worker nodes to the driver and executors (21 GB each), and put 3 executors, since the default HDInsight cluster size is 4 worker nodes (3 executors + 1 driver)
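As a hedged sketch (ours, not the notebook's exact cell): in an HDInsight PySpark Jupyter notebook this configuration is typically done with the %%configure magic, and the values below just restate the defaults described above:
%%configure -f
{"driverMemory": "21G", "executorMemory": "21G", "numExecutors": 3}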
Please refer to the Jupyter Notebook tutorial for more information on how to use Jupyter Notebooks on HDInsight.
The second step here is to create an H2O context. Since one default spark context is already configured in the Jupyter Notebook (called sc), in H2O, we just need to call
h2o_context = pysparkling.H2OContext.getOrCreate(sc)
so H2O can recognize the default spark context.
After executing this line of code, H2O will print out the status, as well as the YARN application it is using.
After this, you can use H2O APIs plus the Spark APIs to write your applications. To learn more about Sparkling Water APIs, refer to the H2O GitHub site here.
This sentiment analysis example has a few steps to analyze the data:
1. Load data to Spark and H2O frames
2. Data munging using H2O API
• Remove columns
• Refine Time Column into Year/Month/Day/DayOfWeek/Hour columns
3. Data munging using Spark API
• Select columns Score, Month, Day, DayOfWeek, Summary
• Define UDF to transform score (0..5) to binary positive/negative
• Use TF-IDF to vectorize summary column
4. Model building using H2O API
• Use H2O Grid Search to tune hyper parameters
• Select the best Deep Learning model
Please refer to the Jupyter Notebook for more details.
Step 3: use FLOW UI to monitor the progress and visualize the model
H2O Flow is an interactive web-based computational user interface where you can combine code execution, text, mathematics, plots and rich media into a single document, much like Jupyter Notebooks. With H2O Flow, you can capture, rerun, annotate, present, and share your workflow. H2O Flow allows you to use H2O interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work – all within Flow’s browser-based environment. In this blog, we will only focus on its visualization part.
The H2O FLOW web service lives in the Spark driver and is routed through the HDInsight gateway, so it can only be accessed when the Spark application/Notebook is running.
You can click the available link in the Jupyter Notebook, or you can directly access this URL: https://yourclustername-h2o.apps.azurehdinsight.net/flow/index.html
In this example, we will demonstrate its visualization capabilities. Simply click “Model > List Grid Search Results” (since we are trying to use Grid Search to tune hyper parameters)
Then you can access the 4 grid search results:
And you can view the details of each model. For example, you can visualize the ROC curve as below:
In Jupyter Notebooks, you can also view the performance in text format:
Summary
In this blog, we have walked you through the detailed steps on how to create your first H2O application on HDInsight for your machine learning applications. For more information on H2O, please visit the H2O site; for more information on HDInsight, please visit the HDInsight site.
This blog-post is co-authored by Pablo Marin(@pablomarin), Solution Architect in Microsoft.
## Sparkling Water on the Spark-Notebook
This is a guest post from our friends at Kensu.
In the space of Data Science development in enterprises, two outstanding scalable technologies are Spark and H2O. Spark is a generic distributed computing framework and H2O is a very performant scalable platform for AI.
Their complementarity is best exploited with the use of Sparkling Water. Sparkling Water is the solution to get the best of Spark – its elegant APIs, RDDs, multi-tenant Context and H2O’s speed, columnar-compression and fully-featured Machine Learning and Deep-Learning algorithms in an enterprise ready fashion.
Examples of Sparkling Water pipelines are readily available in the H2O github repository, we have revisited these examples using the Spark-Notebook.
The Spark-Notebook is an open source notebook (web-based environment for code edition, execution, and data visualization), focused on Scala and Spark. The Spark-Notebook is part of the Adalog suite of Kensu.io which addresses agility, maintainability and productivity for data science teams. Adalog offers to data scientists a short work cycle to deploy their work to the business reality and to managers a set of data governance giving a consistent view on the impact of data activities on the market.
This new material allows diving into Sparkling Water in an interactive and dynamic way.
Working with Sparking Water in the Spark-Notebook scaffolds an ideal platform for big data /data science agile development. Most notably, this gives the data scientist the power to:
• Write rich documentation of his work alongside the code, thus improving the capacity to index knowledge
• Experiment quickly through interactive execution of individual code cells and share the results of these experiments with his colleagues.
• Visualize the data he/she is feeding H2O through an extensive list of widgets and automatic makeup of computation results.
Most of the H2O/Sparkling water examples have been ported to the Spark-Notebook and are available in a github repository.
We are focusing here on the Chicago crime dataset example and looking at:
• How to take advantage of both H2O and Spark-Notebook technologies,
• How to install the Spark-Notebook,
• How to use it to deploy H2O jobs on a spark cluster,
• How to read, transform and join data with Spark,
• How to render data on a geospatial map,
• How to apply deep learning or Gradient Boosted Machine (GBM) models using Sparkling Water
## Installing the Spark-Notebook:
Installation is very straightforward on a local machine. Follow the steps described in the Spark-Notebook documentation and in a few minutes you will have it working. Please note that Sparkling Water currently works only with Scala 2.11 and Spark 2.0.2 and above.
For larger projects, you may also be interested to read the documentation on how to connect the notebook to an on-premise or cloud computing cluster.
The Sparkling Water notebooks repo should be cloned in the “notebooks” directory of your Spark-Notebook installation.
## Integrating H2O with the Spark-Notebook:
In order to integrate Sparkling Water with the Spark-Notebook, we need to tell the notebook to load the Sparkling Water package and specify custom Spark configuration, if required. Spark then automatically distributes the H2O libraries on each of your Spark executors. Declaring Sparkling Water dependencies pulls in some libraries by transitivity, so take care to ensure that duplication or multiple versions of some dependencies are avoided.
The notebook metadata defines custom dependencies (ai.h2o) and dependencies to not include (because they’re already available, i.e. spark, scala and jetty). The custom local repos allow us to define where dependencies are stored locally and thus avoid downloading these each time a notebook is started.
"customLocalRepo": "/tmp/spark-notebook",
"customDeps": [
"ai.h2o % sparkling-water-core_2.11 % 2.0.2",
"ai.h2o % sparkling-water-examples_2.11 % 2.0.2",
"- org.apache.spark % spark-core_2.11 % _",
"- org.apache.spark % spark-mllib_2.11 % _",
"- org.apache.spark % spark-repl_2.11 % _",
"- org.scala-lang % _ % _",
"- org.scoverage % _ % _",
"- org.eclipse.jetty.aggregate % jetty-servlet % _"
],
"customSparkConf": {
"spark.ext.h2o.repl.enabled": "false"
},
With these dependencies set, we can start using Sparkling Water and initiate an H2O context from within the notebook.
## Benchmark example – Chicago Crime Scenes:
As an example, we can revisit the Chicago Crime Sparkling Water demo. The Spark-Notebook we used for this benchmark can be seen in a read-only mode here.
Step 1: The Three datasets are loaded as spark data frames:
• Chicago weather data : Min, Max and Mean temperature per day
• Chicago Census data : Average poverty, unemployment, education level and gross income per Chicago Community Area
• Chicago historical crime data : Crime description, date, location, community area, etc. Also contains a flag telling whether the criminal has been arrested or not.
The three tables are joined using Spark into a big table with location and date as keys. A view of the first entries of the table is generated by the notebook's automatic rendering of tables (see a sample in the table below).
Geospatial charts widgets are also available in the Spark-Notebook, for example, the 100 first crimes in the table:
Step 2: We can transform the spark data frame into an H2O Frame and randomly split the H2O Frame into training and validation frames containing 80% and 20% of the rows, respectively. This is a memory to memory transformation, effectively copying and formatting data in the spark data frame into an equivalent representation in the H2O nodes (spawned by Sparkling Water into the spark executors).
We can verify that the frames are loaded into H2O by looking at the H2O Flow UI (available on port 54321 of your spark-notebook installation). We can access it by calling “openFlow” in a notebook cell.
Step 3: From the Spark-Notebook, we train two H2O machine learning models on the training H2O frame. For comparison, we are constructing a Deep Learning MLP model and a Gradient Boosting Machine (GBM) model. Both models are using all the data frame columns as features: time, weather, location, and neighborhood census data. Models are living in the H2O context and thus visible in the H2O flow UI. Sparkling Water functions allow us to access these from the SparkContext.
We compare the classification performance of the two models by looking at the area under the curve (AUC) on the validation dataset. The AUC measures the discrimination power of the model, that is the ability of the model to correctly classify crimes that lead to an arrest or not. The higher, the better.
The Deep Learning model leads to a 0.89 AUC while the GBM gets to 0.90 AUC. The two models are therefore quite comparable in terms of discrimination power.
Step 4: Finally, the trained model is used to measure the probability of arrest for two specific crimes:
• A “narcotics” related crime on 02/08/2015 11:43:58 PM in a street of community area “46” in district 4 with FBI code 18.
The probability of being arrested predicted by the deep learning model is 99.9% and by the GBM is 75.2%.
• A “deceptive practice” related crime on 02/08/2015 11:00:39 PM in a residence of community area “14” in district 9 with FBI code 11.
The probability of being arrested predicted by the deep learning model is 1.4% and by the GBM is 12%.
The Spark-Notebook allows for a quick computation and visualization of the results:
## Summary
Combining Spark and H2O within the Spark-Notebook is a very nice set-up for scalable data science. More examples are available in the online viewer. If you are interested in running them, install the Spark-Notebook and look in this repository. From that point, you are on track for enterprise-ready interactive scalable data science.
Loic Quertenmont,
Data Scientist @ Kensu.io
## Stacked Ensembles and Word2Vec now available in H2O!
Prepared by: Erin LeDell and Navdeep Gill
## Stacked Ensembles
H2O’s new Stacked Ensemble method is a supervised ensemble machine learning algorithm that finds the optimal combination of a collection of prediction algorithms using a process called stacking or “Super Learning.” This method currently supports regression and binary classification, and multiclass support is planned for a future release. A full list of the planned features for Stacked Ensemble can be viewed here.
H2O has previously supported the creation of ensembles of H2O models through a separate implementation, the h2oEnsemble R package, which is still available and will continue to be maintained; however, for new projects we'd recommend using the native H2O version. Native support for stacking in the H2O backend brings support for ensembles to all the H2O APIs.
Creating ensembles of H2O models is now dead simple. You simply pass a list of existing H2O model ids to the stacked ensemble function and you are ready to go. This list of models can be a set of manually created H2O models, a random grid of models (of GBMs, for example), or set of grids of different algorithms. Typically, the more diverse the collection of base models, the better the ensemble performance. Thus, using H2O’s Random Grid Search to generate a collection of random models is a handy way of quickly generating a set of base models for the ensemble.
R:
ensemble <- h2o.stackedEnsemble(x = x, y = y, training_frame = train, base_models = my_models)
Python:
ensemble = H2OStackedEnsembleEstimator(base_models=my_models)
ensemble.train(x=x, y=y, training_frame=train)
Full R and Python code examples are available on the Stacked Ensembles docs page. Kagglers rejoice!
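As a slightly fuller sketch (ours, under the stated assumptions that x, y and train are defined as in the snippets above): base models intended for stacking need aligned cross-validation folds and must keep their holdout predictions, e.g. in Python:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator

# Two diverse base models; fold_assignment="Modulo" keeps the folds
# aligned, and keep_cross_validation_predictions=True is required.
gbm = H2OGradientBoostingEstimator(nfolds=5, fold_assignment="Modulo",
                                   keep_cross_validation_predictions=True, seed=1)
gbm.train(x=x, y=y, training_frame=train)

drf = H2ORandomForestEstimator(nfolds=5, fold_assignment="Modulo",
                               keep_cross_validation_predictions=True, seed=1)
drf.train(x=x, y=y, training_frame=train)

# The metalearner is trained on the base models' holdout predictions.
ensemble = H2OStackedEnsembleEstimator(base_models=[gbm.model_id, drf.model_id])
ensemble.train(x=x, y=y, training_frame=train)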
## Word2Vec
H2O now has a full implementation of Word2Vec. Word2Vec is a group of related models that are used to produce word embeddings (a language modeling/feature engineering technique in natural language processing where words or phrases are mapped to vectors of real numbers). The word embeddings can subsequently be used in a machine learning model, for example, GBM. This allows users to utilize text-based data with current H2O algorithms in a very efficient manner. An R example is available here.
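A hedged Python sketch of that workflow (the Python estimator mirrors the R API; the frame df and its "text" column are illustrative assumptions, not from the post):
from h2o.estimators.word2vec import H2OWord2vecEstimator

# Tokenize a string column into words, train skip-gram embeddings,
# then average the word vectors per document for a downstream model.
words = df["text"].tokenize("\\W+")
w2v = H2OWord2vecEstimator(vec_size=100, epochs=10)
w2v.train(training_frame=words)
doc_vecs = w2v.transform(words, aggregate_method="AVERAGE")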
### Technical Details
H2O’s Word2Vec is based on the skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting its context in the same sentence. Mathematically, given a sequence of training words $w_1, w_2, \dots, w_T$, the objective of the skip-gram model is to maximize the average log-likelihood
$$\frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{j=k} \log p(w_{t+j} | w_t)$$
where $k$ is the size of the training window.
In the skip-gram model, every word w is associated with two vectors $u_w$ and $v_w$ which are vector representations of $w$ as word and context respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
$$p(w_i | w_j ) = \frac{\exp(u_{w_i}^{\top}v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top}v_{w_j})}$$
where $V$ is the vocabulary size.
The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$ is proportional to $V$, which can easily be on the order of millions. To speed up the training of Word2Vec, we used hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to $O(\log V)$.
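Concretely, hierarchical softmax arranges the vocabulary in a binary (Huffman) tree and replaces the $V$-way softmax by a product of binary decisions along the path from the root to the word:
$$p(w_i | w_j) = \prod_{m=1}^{L(w_i)-1} \sigma\left([\![n(w_i, m+1) = \mathrm{child}(n(w_i, m))]\!] \cdot u_{n(w_i, m)}^{\top} v_{w_j}\right)$$
where $n(w, m)$ is the $m$-th node on the path to $w$, $L(w)$ is the path length, $[\![x]\!]$ is $1$ if $x$ holds and $-1$ otherwise, and $\sigma$ is the logistic function; the product has only $O(\log V)$ factors.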
## Tverberg Release (H2O 3.10.3.4)
Below is a detailed list of all the items that are part of the Tverberg release.
List of New Features:
PUBDEV-2058- Implement word2vec in h2o (To use this feature in R, please visit this demo)
PUBDEV-3635- Ability to Select Columns for PDP computation in Flow (With this enhancement, users will be able to select which features/columns to render Partial Dependence Plots from Flow. (R/Python supported already). Known issue PUBDEV-3782: when nbins < categorical levels, PDP won't compute. Please visit also this post.)
PUBDEV-3881- Add PCA Estimator documentation to Python API Docs
PUBDEV-3902- Documentation: Add information about Azure support to H2O User Guide (Beta)
PUBDEV-3739- StackedEnsemble: put ensemble creation into the back end.
List of Improvements:
PUBDEV-3989- Decrease size of h2o.jar
PUBDEV-3257- Documentation: As a K-Means user, I want to be able to better understand the parameters
PUBDEV-3741- StackedEnsemble: add tests in R and Python to ensure that a StackedEnsemble performs at least as well as the base_models
PUBDEV-3857- Clean up the generated Python docs
PUBDEV-3895- Filter H2OFrame on pandas dates and time (python)
PUBDEV-3912- Provide way to specify context_path via Python/R h2o.init methods
PUBDEV-3933- Modify gen_R.py for Stacked Ensemble
PUBDEV-3972- Add Stacked Ensemble code examples to Python docstrings
List of Bugs:
PUBDEV-2464- Using asfactor() in Python client cannot allocate to a variable
PUBDEV-3111- R API's h2o.interaction() does not use destination_frame argument
PUBDEV-3694- Errors with PCA on wide data for pca_method = GramSVD which is the default
PUBDEV-3742- StackedEnsemble should work for regression
PUBDEV-3865- h2o gbm : for an unseen categorical level, discrepancy in predictions when score using h2o vs pojo/mojo
PUBDEV-3883- Negative indexing for H2OFrame is buggy in R API
PUBDEV-3894- Relational operators don't work properly with time columns.
PUBDEV-3966- java.lang.AssertionError when using h2o.makeGLMModel
PUBDEV-3835- Standard Errors in GLM: calculating and showing specifically when called
PUBDEV-3965- Importing data in python returns error - TypeError: expected string or bytes-like object
Hotfix: Remove StackedEnsemble from Flow UI. Training is only supported from Python and R interfaces. Viewing is supported in the Flow UI.
PUBDEV-3336- h2o.create_frame(): if randomize=True, value param cannot be used
PUBDEV-3740- REST: implement simple ensemble generation API
PUBDEV-3843- Modify R REST API to always return binary data
PUBDEV-3844- Safe GET calls for POJO/MOJO/genmodel
PUBDEV-3864- Import files by pattern
PUBDEV-3884- StackedEnsemble: Add to online documentation
PUBDEV-3940- Add Stacked Ensemble code examples to R docs
## Start Off 2017 with Our Stanford Advisors
We were very excited to meet with our advisors (Prof. Stephen Boyd, Prof. Rob Tibshirani and Prof. Trevor Hastie) at H2O.AI on Jan 6, 2017.
Our CEO, Sri Ambati, made two great observations at the start of the meeting:
• First was the hardware trend where hardware companies like Intel/Nvidia/AMD plan to put the various machine learning algorithms into hardware/GPUs.
Second was the data trend where more and more datasets are images/texts/audio instead of the traditional transactional datasets. To deal with these new datasets, deep learning seems to be the go-to algorithm. However, while deep learning might work very well, it was very difficult to explain to business or regulatory professionals how and why it worked.
There were several techniques to get around this problem and make machine learning solutions interpretable to our customers:
• Patrick Hall pointed out that monotonicity determines interpretability, not linearity of systems. He cited a credit scoring system using a constrained neural network, when the input variable was monotonic to the response variable, the system could automatically generate reason codes.
• One could use deep learning and simpler algorithms (like GLM, Random Forest, etc.) on datasets. When the performances were similar, we chose the simple models since they tended to be more interpretable. These meetings were great learning opportunities for us.
• Another suggestion is to use a layered approach:
• Use deep learning to extract a small number of features from a high dimension datasets.
• Next, use a simple model that used these extracted features to perform specific tasks.
This layered approach could provide a great speed-up as well. Imagine the case where you could reuse feature sets for images/text/speech derived by others on your datasets; all you would need to do is build your simple model off the feature sets to perform the functions you desire. In this case, deep learning is the equivalent of PCA for non-linear features. Prof. Boyd seemed to like GLRM (check out H2O GLRM) as well for feature extraction.
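A hedged sketch of that layered recipe with H2O's Python API (the names x, y and train are assumptions, mirroring a typical supervised setup):
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

# Train a deep net, then reuse its small last hidden layer (20 units)
# as a compact feature set for an interpretable linear model.
dl = H2ODeepLearningEstimator(hidden=[200, 200, 20], epochs=5)
dl.train(x=x, y=y, training_frame=train)

feats = dl.deepfeatures(train, 2)  # activations of hidden layer index 2
feats["response"] = train[y]       # attach the response column

glm = H2OGeneralizedLinearEstimator(family="binomial")
glm.train(x=feats.names[:-1], y="response", training_frame=feats)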
With this layered approach, there were more system parameters to tune. Our auto-ML toolbox would be perfect for this! Go team!
Subsequently the conversation turned to visualization of datasets. Patrick Hall brought up the approach of first using clustering to separate the datasets and then applying simple models to each cluster. This approach is very similar to the hierarchical mixture of experts algorithm described in their Elements of Statistical Learning book. Basically, you build decision trees from your dataset, then fit linear models at the leaf nodes to perform specific tasks.
Our very own Dr. Wilkinson had built a dataset visualization tool that can summarize a big dataset while maintaining the characteristics of the original dataset (like outliers). Totally awesome!
Arno Candel brought up the issue of overfitting and how to detect it during the training process rather than at the end of the training process using the held-out set. Prof. Boyd mentioned that we should checkout Bayesian trees/additive models.
Last Words of Wisdom from our esteemed advisors: Deep learning was powerful but other algorithms like random forest could beat deep learning depending on the datasets. Deep learning required big datasets to train. It worked best with datasets that had some kind of organization in it like spatial features (in images) and temporal trends (in speech/time series). Random forest, on the other hand, worked perfectly well with dataset with no such organization/features.
## Indexing 1 Billion Time Series with H2O and ISax
At H2O, we have recently debuted a new feature called ISax that works on time series data in an H2O Dataframe. ISax stands for Indexable Symbolic Aggregate ApproXimation, which means it can represent complex time series patterns using a symbolic notation and thereby reducing the dimensionality of your data. From there you can run H2O’s ML algos or use the index for searching or data analysis. ISax has many uses in a variety of fields including finance, biology and cybersecurity.
Today in this blog we will use H2O to create an ISax index for analytical purposes. We will generate 1 Billion time series of 256 steps on an integer U(-100,100) distribution. Once we have the index we’ll show how you can search for similar patterns using the index.
We’ll show you the steps and you can run along, assuming you have enough hardware and patience. In this example we are using a 9 machine cluster, each with 32 cores and 256GB RAM. We’ll create a 1B row synthetic data set and form random walks for more interesting time series patterns. We’ll run ISax and perform the search, the whole process takes ~30 minutes with our cluster.
Raw H2O Frame Creation
In the typical use case, H2O users would be importing time series data from disk. H2O can read from local filesystems, NFS, or distributed systems like Hadoop. H2O cluster file reads are parallelized across the nodes for speed. In our case we'll be generating a 256-column, 1B-row frame. By the way, H2O Dataframes scale better with more rows than with more columns. Each row will be an individual time series. The ISax algo assumes the time series data is row based.
rawdf = h2o.create_frame(cols=256, rows=1000000000, real_fraction=0.0, integer_fraction=1.0,missing_fraction=0.0)
Random Walk
Here we do a row wise cumulative sum to simulate random walks. The .head call triggers the execution graph so we can do a time measurement.
tsdf = rawdf.cumsum(axis=1)
print tsdf.head()
Let's take a quick peek at our time series:
tsdf[0:2,:].transpose().as_data_frame(use_pandas=True).plot()
Run ISax
Now we’re ready to run isax and generate the index. The output of this command is another H2O Frame that contains the string representation of the isax word, along with the numeric columns in case you want to run ML algos.
res = tsdf.isax(num_words=20,max_cardinality=10)
This takes 10 minutes, and H2O's MapReduce framework makes efficient use of all 288 CPU cores.
Now that we have the index done, let's search for similar time series patterns in our 1B time series data set. Let's make indexes on the isax result frame and the original time series frame.
res["idx"] =1
res["idx"] = res["idx"].cumsum(axis=0)
tsdf["idx"] = 1
tsdf["idx"] = tsdf["idx"].cumsum(axis=0)
I'm going to pick the second time series that we plotted (the green "C2" series).
myidx = res[res["iSax_index"]=="5^20_5^20_7^20_9^20_9^20_9^20_9^20_9^20_8^20_6^20_4^20_3^20_2^20_1^20_1^20_0^20_0^20_0^20_0^20_0^20"]["idx"]
There are 4342 other time series with the same index in the 1B time series dataframe. Let's just plot the first 10 and see how similar they look.
mylist = myidx.as_data_frame(use_pandas=True)["idx"][0:10].tolist()
mydf = tsdf[tsdf["idx"].isin(mylist)].as_data_frame(use_pandas=True)
mydf.ix[:,0:256].transpose().plot(figsize=(20,10))
The successful implementation of a fast in memory ISax algo can be attributed to the H2O platform having a highly efficient, easy to code, open source MapReduce framework, and the Rapids api that can deploy your distributed algos to Python or R. In my next blog, I will show how to get started with writing your own MapReduce functions in H2O on structured data by using ISax as an example.
http://cs.gmu.edu/~jessica/SAX_DAMI_preprint.pdf
## H2O GBM Tuning Tutorial for R
In this tutorial, we show how to build a well-tuned H2O GBM model for a supervised classification task. We specifically don’t focus on feature engineering and use a small dataset to allow you to reproduce these results in a few minutes on a laptop. This script can be directly transferred to datasets that are hundreds of GBs large and H2O clusters with dozens of compute nodes.
This tutorial is written in R Markdown. You can download the source from H2O’s github repository.
A port to a Python Jupyter Notebook version is available as well.
## Installation of the H2O R Package
# The following two commands remove any previously installed H2O packages for R.
if ("package:h2o" %in% search()) { detach("package:h2o", unload=TRUE) }
if ("h2o" %in% rownames(installed.packages())) { remove.packages("h2o") }
pkgs <- c("methods","statmod","stats","graphics","RCurl","jsonlite","tools","utils")
for (pkg in pkgs) {
if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }
}
# Now we download, install and initialize the H2O package for R.
install.packages("h2o", type="source", repos=(c("http://h2o-release.s3.amazonaws.com/h2o/rel-turchin/8/R")))
## Launch an H2O cluster on localhost
library(h2o)
## start a local H2O cluster using all available cores
h2o.init(nthreads = -1)
## optional: connect to a running H2O cluster
#h2o.init(ip="mycluster", port=55555)
Starting H2O JVM and connecting: . Connection successful!
R is connected to the H2O cluster:
H2O cluster uptime: 1 seconds 248 milliseconds
H2O cluster version: 3.8.2.8
H2O cluster name: H2O_started_from_R_arno_wyu958
H2O cluster total nodes: 1
H2O cluster total memory: 3.56 GB
H2O cluster total cores: 8
H2O cluster allowed cores: 8
H2O cluster healthy: TRUE
H2O Connection ip: localhost
H2O Connection port: 54321
H2O Connection proxy: NA
R Version: R version 3.2.2 (2015-08-14)
## Import the data into H2O
Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters.
Here, we use a small public dataset (Titanic), but you can use datasets that are hundreds of GBs large.
## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.
df <- h2o.importFile(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
dim(df)
tail(df)
summary(df,exact_quantiles=TRUE)
## pick a response for the supervised problem
response <- "survived"
## the response variable is an integer, we will turn it into a categorical/factor for binary classification
df[[response]] <- as.factor(df[[response]])
## use all other columns (except for the name) as predictors
predictors <- setdiff(names(df), c(response, "name"))
> summary(df,exact_quantiles=TRUE)
pclass survived name sex age sibsp parch ticket fare cabin embarked
Min. :1.000 Min. :0.000 male :843 Min. : 0.1667 Min. :0.0000 Min. :0.000 Min. : 680 Min. : 0.000 C23 C25 C27 : 6 S :914
1st Qu.:2.000 1st Qu.:0.000 female:466 1st Qu.:21.0000 1st Qu.:0.0000 1st Qu.:0.000 1st Qu.: 19950 1st Qu.: 7.896 B57 B59 B63 B66: 5 C :270
Median :3.000 Median :0.000 Median :28.0000 Median :0.0000 Median :0.000 Median : 234604 Median : 14.454 G6 : 5 Q :123
Mean :2.295 Mean :0.382 Mean :29.8811 Mean :0.4989 Mean :0.385 Mean : 249039 Mean : 33.295 B96 B98 : 4 NA: 2
3rd Qu.:3.000 3rd Qu.:1.000 3rd Qu.:39.0000 3rd Qu.:1.0000 3rd Qu.:0.000 3rd Qu.: 347468 3rd Qu.: 31.275 C22 C26 : 4
Max. :3.000 Max. :1.000 Max. :80.0000 Max. :8.0000 Max. :9.000 Max. :3101298 Max. :512.329 C78 : 4
NA's :263 NA's :352 NA's :1 NA :1014
boat body home.dest
Min. : 1.000 Min. : 1.0 New York NY : 64
1st Qu.: 5.000 1st Qu.: 72.0 London : 14
Median :10.000 Median :155.0 Montreal PQ : 10
Mean : 9.405 Mean :160.8 Cornwall / Akron OH: 9
3rd Qu.:13.000 3rd Qu.:256.0 Paris France : 9
Max. :16.000 Max. :328.0 Philadelphia PA : 8
NA's :911 NA's :1188 NA :564
From now on, everything is generic and directly applies to most datasets. We assume that all feature engineering is done at this stage and focus on model tuning. For multi-class problems, you can use h2o.logloss() or h2o.confusionMatrix() instead of h2o.auc() and for regression problems, you can use h2o.deviance() or h2o.mse().
## Split the data for Machine Learning
We split the data into three pieces: 60% for training, 20% for validation, 20% for final testing.
Here, we use random splitting, but this assumes i.i.d. data. If this is not the case (e.g., when events span across multiple rows or data has a time structure), you’ll have to sample your data non-randomly.
splits <- h2o.splitFrame(
data = df,
ratios = c(0.6,0.2), ## only need to specify 2 fractions, the 3rd is implied
destination_frames = c("train.hex", "valid.hex", "test.hex"), seed = 1234
)
train <- splits[[1]]
valid <- splits[[2]]
test <- splits[[3]]
## Establish baseline performance
As the first step, we’ll build some default models to see what accuracy we can expect. Let’s use the AUC metric for this demo, but you can use h2o.logloss and stopping_metric="logloss" as well. It ranges from 0.5 for random models to 1 for perfect models.
The first model is a default GBM, trained on the 60% training split
## We only provide the required parameters, everything else is default
gbm <- h2o.gbm(x = predictors, y = response, training_frame = train)
## Show a detailed model summary
gbm
## Get the AUC on the validation set
h2o.auc(h2o.performance(gbm, newdata = valid))
The AUC is over 94%, so this model is highly predictive!
[1] 0.9431953
The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds.
Note that cross-validation takes longer and is not usually done for really large datasets.
## h2o.rbind makes a copy here, so it's better to use splitFrame with ratios = c(0.8) instead above
gbm <- h2o.gbm(x = predictors, y = response, training_frame = h2o.rbind(train, valid), nfolds = 4, seed = 0xDECAF)
## Show a detailed summary of the cross validation metrics
## This gives you an idea of the variance between the folds
gbm@model$cross_validation_metrics_summary

## Get the cross-validated AUC by scoring the combined holdout predictions.
## (Instead of taking the average of the metrics across the folds)
h2o.auc(h2o.performance(gbm, xval = TRUE))

We see that the cross-validated performance is similar to the validation set performance:

[1] 0.9403432

Next, we train a GBM with “I feel lucky” parameters. We'll use early stopping to automatically tune the number of trees using the validation AUC. We'll use a lower learning rate (lower is always better, it just takes more trees to converge). We'll also use stochastic sampling of rows and columns to (hopefully) improve generalization.

gbm <- h2o.gbm(
  ## standard model parameters
  x = predictors,
  y = response,
  training_frame = train,
  validation_frame = valid,
  ## more trees is better if the learning rate is small enough
  ## here, use "more than enough" trees - we have early stopping
  ntrees = 10000,
  ## smaller learning rate is better (this is a good value for most datasets, but see below for annealing)
  learn_rate = 0.01,
  ## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
  stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
  ## sample 80% of rows per tree
  sample_rate = 0.8,
  ## sample 80% of columns per split
  col_sample_rate = 0.8,
  ## fix a random number generator seed for reproducibility
  seed = 1234,
  ## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
  score_tree_interval = 10
)

## Get the AUC on the validation set
h2o.auc(h2o.performance(gbm, valid = TRUE))

This model doesn't seem to be much better than the previous models:

[1] 0.939335

For this small dataset, dropping 20% of observations per tree seems too aggressive in terms of adding regularization. For larger datasets, this is usually not a bad idea. But we'll let this parameter tune freshly below, so no worries.

Note: To see what other stopping_metric parameters you can specify, simply pass an invalid option:

gbm <- h2o.gbm(x = predictors, y = response, training_frame = train, stopping_metric = "yada")

Error in .h2o.checkAndUnifyModelParameters(algo = algo, allParams = ALL_PARAMS, :
"stopping_metric" must be in "AUTO", "deviance", "logloss", "MSE", "AUC", "lift_top_group", "r2", "misclassification", but got yada

## Hyper-Parameter Search

Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 94%).

The key here is to start tuning some key parameters first (i.e., those that we expect to have the biggest impact on the results). From experience with gradient boosted trees across many datasets, we can state the following "rules":

1. Build as many trees (ntrees) as it takes until the validation set error starts increasing.
2. A lower learning rate (learn_rate) is generally better, but will require more trees. Using learn_rate=0.02 and learn_rate_annealing=0.995 (reduction of learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great for hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.
3. The optimum maximum allowed depth for the trees (max_depth) is data dependent; deeper trees take longer to train, especially at depths greater than 10.
4. Row and column sampling (sample_rate and col_sample_rate) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (col_sample_rate_per_tree) can also be tuned. Note that it is multiplicative with col_sample_rate, so setting both parameters to 0.8 results in 64% of columns being considered at any given node to split.
5. For highly imbalanced classification datasets (e.g., fewer buyers than non-buyers), stratified row sampling based on response class membership can help improve predictive accuracy. It is configured with sample_rate_per_class (array of ratios, one per response class in lexicographic order).
6. Most other options only have a small impact on the model performance, but are worth tuning with a Random hyper-parameter search nonetheless, if highest performance is critical.

First we want to know what value of max_depth to use because it has a big impact on the model training time and optimal values depend strongly on the dataset. We'll do a quick Cartesian grid search to get a rough idea of good candidate max_depth values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before. We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy.

## Depth 10 is usually plenty of depth for most datasets, but you never know
hyper_params = list( max_depth = seq(1,29,2) )
#hyper_params = list( max_depth = c(4,6,8,12,16,20) ) ##faster for larger datasets

grid <- h2o.grid(
  ## hyper parameters
  hyper_params = hyper_params,
  ## full Cartesian hyper-parameter search
  search_criteria = list(strategy = "Cartesian"),
  ## which algorithm to run
  algorithm = "gbm",
  ## identifier for the grid, to later retrieve it
  grid_id = "depth_grid",
  ## standard model parameters
  x = predictors,
  y = response,
  training_frame = train,
  validation_frame = valid,
  ## more trees is better if the learning rate is small enough
  ## here, use "more than enough" trees - we have early stopping
  ntrees = 10000,
  ## smaller learning rate is better
  ## since we have learning_rate_annealing, we can afford to start with a bigger learning rate
  learn_rate = 0.05,
  ## learning rate annealing: learning_rate shrinks by 1% after every tree
  ## (use 1.00 to disable, but then lower the learning_rate)
  learn_rate_annealing = 0.99,
  ## sample 80% of rows per tree
  sample_rate = 0.8,
  ## sample 80% of columns per split
  col_sample_rate = 0.8,
  ## fix a random number generator seed for reproducibility
  seed = 1234,
  ## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
  stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
  ## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
  score_tree_interval = 10
)

## by default, display the grid search results sorted by increasing logloss (since this is a classification task)
grid

## sort the grid models by decreasing AUC
sortedGrid <- h2o.getGrid("depth_grid", sort_by="auc", decreasing = TRUE)
sortedGrid

## find the range of max_depth for the top 5 models
topDepths = sortedGrid@summary_table$max_depth[1:5]
minDepth = min(as.numeric(topDepths))
maxDepth = max(as.numeric(topDepths))
> sortedGrid
H2O Grid Details
================
Grid ID: depth_grid
Used hyper parameters:
- max_depth
Number of models: 15
Number of failed models: 0
Hyper-Parameter Search Summary: ordered by decreasing auc
max_depth model_ids auc
1 27 depth_grid_model_13 0.95657931811778
2 25 depth_grid_model_12 0.956353902507749
3 29 depth_grid_model_14 0.956241194702733
4 21 depth_grid_model_10 0.954663285432516
5 19 depth_grid_model_9 0.954494223724993
6 13 depth_grid_model_6 0.954381515919978
7 23 depth_grid_model_11 0.954043392504931
8 11 depth_grid_model_5 0.952183713722175
9 15 depth_grid_model_7 0.951789236404621
10 17 depth_grid_model_8 0.951507466892082
11 9 depth_grid_model_4 0.950436742744435
12 7 depth_grid_model_3 0.946942800788955
13 5 depth_grid_model_2 0.939306846999155
14 3 depth_grid_model_1 0.932713440405748
15 1 depth_grid_model_0 0.92902225979149
It appears that max_depth values of 19 to 29 are best suited for this dataset, which is unusually deep!
> minDepth
[1] 19
> maxDepth
[1] 29
Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don’t know what combinations of hyper-parameters will result in the best model, we’ll use random hyper-parameter search to “let the machine get luckier than a best guess of any human”.
hyper_params = list(
## restrict the search to the range of max_depth established above
max_depth = seq(minDepth,maxDepth,1),
## search a large space of row sampling rates per tree
sample_rate = seq(0.2,1,0.01),
## search a large space of column sampling rates per split
col_sample_rate = seq(0.2,1,0.01),
## search a large space of column sampling rates per tree
col_sample_rate_per_tree = seq(0.2,1,0.01),
## search a large space of how column sampling per split should change as a function of the depth of the split
col_sample_rate_change_per_level = seq(0.9,1.1,0.01),
## search a large space of the number of min rows in a terminal node
min_rows = 2^seq(0,log2(nrow(train))-1,1),
## search a large space of the number of bins for split-finding for continuous and integer columns
nbins = 2^seq(4,10,1),
## search a large space of the number of bins for split-finding for categorical columns
nbins_cats = 2^seq(4,12,1),
## search a few minimum required relative error improvement thresholds for a split to happen
min_split_improvement = c(0,1e-8,1e-6,1e-4),
## try all histogram types (QuantilesGlobal and RoundRobin are good for numeric columns with outliers)
histogram_type = c("UniformAdaptive","QuantilesGlobal","RoundRobin")
)
search_criteria = list(
## Random grid search
strategy = "RandomDiscrete",
## limit the runtime to 60 minutes
max_runtime_secs = 3600,
## build no more than 100 models
max_models = 100,
## random number generator seed to make sampling of parameter combinations reproducible
seed = 1234,
## early stopping once the leaderboard of the top 5 models is converged to 0.1% relative difference
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-3
)
grid <- h2o.grid(
## hyper parameters
hyper_params = hyper_params,
## hyper-parameter search configuration (see above)
search_criteria = search_criteria,
## which algorithm to run
algorithm = "gbm",
## identifier for the grid, to later retrieve it
grid_id = "final_grid",
## standard model parameters
x = predictors,
y = response,
training_frame = train,
validation_frame = valid,
## more trees is better if the learning rate is small enough
## use "more than enough" trees - we have early stopping
ntrees = 10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a bigger learning rate
learn_rate = 0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## early stopping based on timeout (no model should take more than 1 hour - modify as needed)
max_runtime_secs = 3600,
## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
score_tree_interval = 10,
## base random number generator seed for each model (automatically gets incremented internally for each model)
seed = 1234
)
## Sort the grid models by AUC
sortedGrid <- h2o.getGrid("final_grid", sort_by = "auc", decreasing = TRUE)
sortedGrid
We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!
Hyper-Parameter Search Summary: ordered by decreasing auc
col_sample_rate col_sample_rate_change_per_level col_sample_rate_per_tree histogram_type max_depth
1 0.49 1.04 0.94 QuantilesGlobal 28
2 0.92 0.93 0.56 QuantilesGlobal 27
3 0.35 1.09 0.83 QuantilesGlobal 29
4 0.42 0.98 0.53 UniformAdaptive 24
5 0.7 1.02 0.56 UniformAdaptive 25
min_rows min_split_improvement nbins nbins_cats sample_rate model_ids auc
1 2 0 32 256 0.86 final_grid_model_68 0.974049027895182
2 4 0 128 128 0.93 final_grid_model_96 0.971400394477318
3 4 1e-08 64 128 0.69 final_grid_model_38 0.968864468864469
4 1 1e-04 64 16 0.69 final_grid_model_55 0.967793744716822
5 2 1e-08 32 256 0.34 final_grid_model_22 0.966553958861651
We can inspect the best 5 models from the grid search explicitly, and query their validation AUC:
for (i in 1:5) {
gbm <- h2o.getModel(sortedGrid@model_ids[[i]])
print(h2o.auc(h2o.performance(gbm, valid = TRUE)))
}
[1] 0.974049
[1] 0.9714004
[1] 0.9688645
[1] 0.9677937
[1] 0.966554
You can also see the results of the grid search in Flow.
## Model Inspection and Final Test Set Scoring
Let’s see how well the best model of the grid search (as judged by validation set AUC) does on the held out test set:
gbm <- h2o.getModel(sortedGrid@model_ids[[1]])
print(h2o.auc(h2o.performance(gbm, newdata = test)))
Good news. It does as well on the test set as on the validation set, so it looks like our best GBM model generalizes well to the unseen test set:
[1] 0.9712568
We can inspect the winning model’s parameters:
gbm@parameters

$model_id
[1] "final_grid_model_68"

$training_frame
[1] "train.hex"

$validation_frame
[1] "valid.hex"

$score_tree_interval
[1] 10

$ntrees
[1] 10000

$max_depth
[1] 28

$min_rows
[1] 2

$nbins
[1] 32

$nbins_cats
[1] 256

$stopping_rounds
[1] 5

$stopping_metric
[1] "AUC"

$stopping_tolerance
[1] 1e-04

$max_runtime_secs
[1] 3414.017

$seed
[1] 1234

$learn_rate
[1] 0.05

$learn_rate_annealing
[1] 0.99

$distribution
[1] "bernoulli"

$sample_rate
[1] 0.86

$col_sample_rate
[1] 0.49

$col_sample_rate_change_per_level
[1] 1.04

$col_sample_rate_per_tree
[1] 0.94

$histogram_type
[1] "QuantilesGlobal"

$x
 [1] "pclass"    "sex"       "age"       "sibsp"     "parch"     "ticket"    "fare"      "cabin"
 [9] "embarked"  "boat"      "body"      "home.dest"

$y
[1] "survived"
Now we can confirm that these parameters are generally sound, by building a GBM model on the whole dataset (instead of the 60% training split) and using internal 5-fold cross-validation (re-using all other parameters including the seed):
model <- do.call(h2o.gbm,
## update parameters in place
{
p <- gbm@parameters
p$model_id = NULL          ## do not overwrite the original grid model
p$training_frame = df      ## use the full dataset
p$validation_frame = NULL  ## no validation frame
p$nfolds = 5               ## cross-validation
p
}
)
model@model$cross_validation_metrics_summary
Cross-Validation Metrics Summary:
mean sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
F0point5 0.9082877 0.017469764 0.9448819 0.87398374 0.8935743 0.9034908 0.9255079
F1 0.8978795 0.008511053 0.9099526 0.8820513 0.8989899 0.9119171 0.8864865
F2 0.8886758 0.016845208 0.8775137 0.89026916 0.9044715 0.92050207 0.8506224
accuracy 0.9236877 0.004604631 0.92883897 0.9151291 0.92248064 0.93307084 0.9189189
auc 0.9606385 0.006671454 0.96647465 0.9453869 0.959375 0.97371733 0.95823866
err 0.076312296 0.004604631 0.07116105 0.084870845 0.07751938 0.06692913 0.08108108
err_count 20 1.4142135 19 23 20 17 21
lift_top_group 2.6258688 0.099894695 2.3839285 2.8229167 2.632653 2.6736841 2.6161616
logloss 0.23430987 0.019006629 0.23624699 0.26165685 0.24543843 0.18311584 0.24509121
max_per_class_error 0.11685239 0.025172591 0.14285715 0.104166664 0.091836736 0.07368421 0.17171717
mcc 0.8390522 0.011380583 0.8559271 0.81602895 0.83621955 0.8582395 0.8288459
mean_per_class_accuracy 0.91654545 0.0070778215 0.918894 0.9107738 0.91970664 0.9317114 0.9016414
mean_per_class_error 0.08345456 0.0070778215 0.08110599 0.089226194 0.080293365 0.06828865 0.09835859
mse 0.06535896 0.004872401 0.06470373 0.0717801 0.0669676 0.052562267 0.07078109
precision 0.9159663 0.02743855 0.969697 0.86868685 0.89 0.8979592 0.95348835
r2 0.7223932 0.021921812 0.7342935 0.68621415 0.7157123 0.7754977 0.70024836
recall 0.8831476 0.025172591 0.85714287 0.8958333 0.90816325 0.9263158 0.82828283
specificity 0.94994324 0.016345335 0.9806452 0.9257143 0.93125 0.9371069 0.975
Ouch! It looks like we overfit quite a bit on the validation set: the mean AUC on the 5 folds is “only” 96.06% +/- 0.67%, so we cannot always expect AUCs of 97% with these parameters on this dataset. To get a better estimate of model performance, the random hyper-parameter search could have used nfolds = 5 (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, but only the final test set). However, this would take more time, as nfolds+1 models would be built for every set of parameters.
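As a rough illustration, a cross-validated random search along those lines could look like the following minimal sketch (df80 is a hypothetical frame holding an 80% training split; hyper_params and search_criteria are the objects defined above):

## A minimal sketch, assuming df80 is an 80% training split of the full data
## (the remaining 20% stays untouched as the final test set).
grid_cv <- h2o.grid(
  hyper_params = hyper_params,
  search_criteria = search_criteria,
  algorithm = "gbm",
  grid_id = "final_grid_cv",
  x = predictors, y = response,
  training_frame = df80,
  nfolds = 5,                ## 5-fold cross-validation instead of a validation frame
  ntrees = 10000,
  learn_rate = 0.05, learn_rate_annealing = 0.99,
  stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
  score_tree_interval = 10,
  seed = 1234
)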
Instead, to save time, let’s just scan through the top 5 models and cross-validate their parameters with nfolds=5 on the entire dataset:
for (i in 1:5) {
gbm <- h2o.getModel(sortedGrid@model_ids[[i]])
cvgbm <- do.call(h2o.gbm,
## update parameters in place
{
p <- gbm@parameters
p$model_id = NULL          ## do not overwrite the original grid model
p$training_frame = df      ## use the full dataset
p$validation_frame = NULL  ## no validation frame
p$nfolds = 5               ## cross-validation
p
}
)
print(gbm@model_id)
  print(cvgbm@model$cross_validation_metrics_summary[5,]) ## Pick out the "AUC" row
}

[1] "final_grid_model_68"
Cross-Validation Metrics Summary:
         mean           sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
auc 0.9606385  0.006671454 0.96647465  0.9453869   0.959375 0.97371733 0.95823866
[1] "final_grid_model_96"
Cross-Validation Metrics Summary:
          mean            sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
auc 0.96491456  0.0052218214  0.9631913  0.9597024  0.9742985  0.9723933 0.95498735
[1] "final_grid_model_38"
Cross-Validation Metrics Summary:
         mean          sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
auc 0.9638506 0.004603204 0.96134794  0.9573512   0.971301 0.97192985 0.95732325
[1] "final_grid_model_55"
Cross-Validation Metrics Summary:
         mean           sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
auc 0.9657447 0.0062724343  0.9562212 0.95428574  0.9686862 0.97490895 0.97462124
[1] "final_grid_model_22"
Cross-Validation Metrics Summary:
         mean           sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
auc 0.9648925 0.0065437974 0.96633065 0.95285714  0.9557398  0.9736511 0.97588384

The avid reader might have noticed that we just implicitly did further parameter tuning using the “final” test set (which is part of the entire dataset df), which is not good practice – one is not supposed to use the “final” test set more than once. Hence, we’re not going to pick a different “best” model; we’re just learning about the variance in AUCs. It turns out that, for this tiny dataset, the variance is rather large, which is not surprising.

Keeping the same “best” model, we can make test set predictions as follows:

gbm <- h2o.getModel(sortedGrid@model_ids[[1]])
preds <- h2o.predict(gbm, test)
head(preds)
gbm@model$validation_metrics@metrics$max_criteria_and_metric_scores

Note that the label (survived or not) is predicted as well (in the first predict column), and it uses the threshold with the highest F1 score (here: 0.528098) to turn the survival probabilities (p1) into labels. The probability of death (p0) is given for convenience, as it is just 1-p1.

> head(preds)
  predict         p0         p1
1       0 0.98055935 0.01944065
2       0 0.98051200 0.01948800
3       0 0.81430963 0.18569037
4       1 0.02121241 0.97878759
5       1 0.02528104 0.97471896
6       0 0.92056020 0.07943980

> gbm@model$validation_metrics@metrics$max_criteria_and_metric_scores
Maximum Metrics: Maximum metrics at their respective thresholds
                        metric threshold    value idx
1                       max f1  0.528098 0.920792  96
2                       max f2  0.170853 0.926966 113
3                 max f0point5  0.767931 0.959488  90
4                 max accuracy  0.767931 0.941606  90
5                max precision  0.979449 1.000000   0
6                   max recall  0.019425 1.000000 206
7              max specificity  0.979449 1.000000   0
8             max absolute_MCC  0.767931 0.878692  90
9   max min_per_class_accuracy  0.204467 0.928994 109
10 max mean_per_class_accuracy  0.252473 0.932319 106

You can also see the “best” model in more detail in Flow.

The model and the predictions can be saved to file as follows:

h2o.saveModel(gbm, "/tmp/bestModel.csv", force = TRUE)
h2o.exportFile(preds, "/tmp/bestPreds.csv", force = TRUE)

The model can also be exported as a plain old Java object (POJO) for H2O-independent (standalone/Storm/Kafka/UDF) scoring in any Java environment.
h2o.download_pojo(gbm)

/*
  Licensed under the Apache License, Version 2.0
    http://www.apache.org/licenses/LICENSE-2.0.html

  AUTOGENERATED BY H2O at 2016-06-02T17:06:34.382-07:00
  3.9.1.99999

  Standalone prediction code with sample test data for GBMModel named final_grid_model_68

  How to download, compile and execute:
      mkdir tmpdir
      cd tmpdir
      curl http://172.16.2.75:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
      curl http://172.16.2.75:54321/3/Models.java/final_grid_model_68 > final_grid_model_68.java
      javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m final_grid_model_68.java

  (Note: Try java argument -XX:+PrintCompilation to show runtime JIT compiler behavior.)
*/
import java.util.Map;
import hex.genmodel.GenModel;
import hex.genmodel.annotations.ModelPojo;
...
class final_grid_model_68_Tree_0_class_0 {
  static final double score0(double[] data) {
    double pred = (data[9 /* boat */] <14.003472f ? (!Double.isNaN(data[9]) && data[9 /* boat */] != 12.0f ? 0.13087687f : (data[3 /* sibsp */] <7.3529413E-4f ? 0.13087687f : 0.024317414f)) : (data[5 /* ticket */] <2669.5f ? (data[5 /* ticket */] <2665.5f ? (data[10 /* body */] <287.5f ? -0.08224204f : (data[2 /* age */] <14.2421875f ? 0.13087687f : (data[4 /* parch */] <4.892368E-4f ? (data[6 /* fare */] <39.029896f ? (data[1 /* sex */] <0.5f ? (data[5 /* ticket */] <2659.5f ? 0.13087687f : -0.08224204f) : -0.08224204f) : 0.08825309f) : 0.13087687f))) : 0.13087687f) : (data[9 /* boat */] <15.5f ? 0.13087687f : (!GenModel.bitSetContains(GRPSPLIT0, 42, data[7 ...

## Ensembling Techniques

After learning above that the variance of the test set AUC of the top few models was rather large, we might be able to turn this to our advantage by using ensembling techniques. The simplest one is taking the average of the predictions (survival probabilities) of the top k grid search models (here, we use k=10):

prob = NULL
k = 10
for (i in 1:k) {
  gbm <- h2o.getModel(sortedGrid@model_ids[[i]])
  if (is.null(prob)) prob = h2o.predict(gbm, test)$p1
  else prob = prob + h2o.predict(gbm, test)$p1
}
prob <- prob/k
head(prob)

We now have a blended probability of survival for each person on the Titanic.

> head(prob)
          p1
1 0.02258923
2 0.01615957
3 0.15837298
4 0.98565663
5 0.98792208
6 0.17941366

We can bring those ensemble predictions into our R session’s memory space and use other R packages:

probInR  <- as.vector(prob)
labelInR <- as.vector(as.numeric(test[[response]]))
if (! ("cvAUC" %in% rownames(installed.packages()))) { install.packages("cvAUC") }
library(cvAUC)
cvAUC::AUC(probInR, labelInR)

[1] 0.977534

This simple blended ensemble test set prediction has an even higher AUC than the best single model, but we would need to do more validation studies, ideally using cross-validation. We leave this as an exercise for the reader: take the parameters of the top 10 models, retrain them with nfolds=5 on the full dataset, set keep_cross_validation_predictions=TRUE, average the predicted probabilities in h2o.getFrame(cvgbm[i]@model$cross_validation_holdout_predictions_frame_id), then score that with cvAUC as shown above.
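As a minimal sketch of that exercise (reusing the parameter-update idiom from above; the accumulation variable cvProb and the exact scoring line are ours, not from the original post):

## Sketch: blend the cross-validated holdout predictions of the top 10 models.
cvProb = NULL
for (i in 1:10) {
  gbm <- h2o.getModel(sortedGrid@model_ids[[i]])
  cvgbm <- do.call(h2o.gbm, {
    p <- gbm@parameters
    p$model_id = NULL
    p$training_frame = df
    p$validation_frame = NULL
    p$nfolds = 5
    p$keep_cross_validation_predictions = TRUE
    p
  })
  holdout <- h2o.getFrame(cvgbm@model$cross_validation_holdout_predictions_frame_id)
  if (is.null(cvProb)) cvProb = holdout$p1 else cvProb = cvProb + holdout$p1
}
cvProb <- cvProb/10
cvAUC::AUC(as.vector(cvProb), as.vector(as.numeric(df[[response]])))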
For more sophisticated ensembling approaches, such as stacking via a superlearner, we refer to the H2O Ensemble github page.
## Summary
We learned how to build H2O GBM models for a binary classification task on a small but realistic dataset with numerical and categorical variables, with the goal of maximizing the AUC (which ranges from 0.5 to 1). We first established a baseline with the default model, then carefully tuned the remaining hyper-parameters without “too much” human guess-work. We used both Cartesian and random hyper-parameter searches to find good models. We were able to raise the AUC on a holdout test set from the low 94% range with the default model to the mid 97% range after tuning, and to the high 97% range with a simple ensembling technique known as blending. We performed a simple cross-validation variance analysis to learn that the results were slightly “lucky” due to the specific train/valid/test set splits, and settled on expecting mid 96% AUCs instead.
Note that this script and the findings therein are directly transferable to large datasets on distributed clusters, including Spark/Hadoop environments.
## Hyperparameter Optimization in H2O: Grid Search, Random Search and the Future
“Good, better, best. Never let it rest. ‘Til your good is better and your better is best.” – St. Jerome
## tl;dr
H2O now has random hyperparameter search with time- and metric-based early stopping. Bergstra and Bengio[1] write on p. 281:
Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.
Even smarter means of searching the hyperparameter space are in the pipeline, but for most use cases random search does just as well.
## What Are Hyperparameters?
Nearly all model algorithms used in machine learning have a set of tuning “knobs” which affect how the learning algorithm fits the model to the data. Examples are the regularization settings alpha and lambda for Generalized Linear Modeling or ntrees and max_depth for Gradient Boosted Models. These knobs are called hyperparameters to distinguish them from internal model parameters, such as GLM’s beta coefficients or Deep Learning’s weights, which get learned from the data during the model training process.
## What Is Hyperparameter Optimization?
The set of all combinations of values for these knobs is called the hyperparameter space. We’d like to find a set of hyperparameter values which gives us the best model for our data in a reasonable amount of time. This process is called hyperparameter optimization.
H2O contains good default values for many datasets, but to get the best performance for your data you will want to tune at least some of these hyperparameters to maximize the predictive performance of your models. You should start with the most important hyperparameters for your algorithm of choice, for example ntrees and max_depth for the tree models or the hidden layers for Deep Learning.
H2O provides some guidance by grouping the hyperparameters by their importance in the Flow UI. You should look carefully at the values of the ones marked critical, while the secondary or expert ones are generally used for special cases or fine tuning.
Note that some hyperparameters, such as learning_rate, have a very wide dynamic range. You should choose values that reflect this for your search (e.g., powers of 10 or of 2) to ensure that you cover the most relevant parts of the hyperparameter space. (Bergstra and Bengio p. 290)
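For instance, a minimal R sketch of such log-scale candidate lists (the values are illustrative only):

## Illustrative log-scale candidate values (powers of 10 and of 2),
## so the search covers several orders of magnitude evenly.
learn_rate_opts <- 10^seq(-4, -1)   # 1e-4, 1e-3, 1e-2, 1e-1
min_rows_opts   <- 2^seq(0, 7)      # 1, 2, 4, ..., 128
hyper_params <- list(learn_rate = learn_rate_opts, min_rows = min_rows_opts)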
#### Measuring Model Quality
There are many different ways to measure model quality. If you don’t know which to use, H2O will choose a good general-purpose metric for you based on the category of your model (binomial or multinomial classification, regression, clustering, …). However, you may want to choose a metric to compare your models based on your specific goals (e.g., maximizing AUC, minimizing log loss, minimizing false negatives, minimizing mean squared error, …).
#### Overfitting
Overfitting is the phenomenon of fitting a model so thoroughly to your training data that it begins to memorize the fine details of that specific data, rather than finding general characteristics of that data which will also apply to future data on which you want to make predictions.
Overfitting not only applies to the model training process, but also to the model selection process. During the process of tuning the hyperparameters and selecting the best model you should avoid overfitting them to your training data. Otherwise, the hyperparameter values that you choose will be too highly tuned to your selection data, and will not generalize as well as they could to new data. Note that this is the same principle as, but subtly different from, overfitting during model training. Ideally you should use cross-validation or a validation set during training and then a final holdout test (validation) dataset for model selection. As Bergstra and Bengio write on p. 290,
The standard practice for evaluating a model found by cross-validation is to report [test set error] for the [hyperparameter vector] that minimizes [validation error].
You can read much more on this topic in Chapter 7 of Elements of Statistical Learning from H2O advisors and Stanford professors Trevor Hastie and Rob Tibshirani with Jerome Friedman [2].
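A minimal sketch of such a three-way split with H2O’s R API (the 60/20/20 ratios and frame names are illustrative):

## Illustrative 60/20/20 split into train/valid/test frames;
## the test frame is only touched once, for final model selection.
splits <- h2o.splitFrame(df, ratios = c(0.6, 0.2), seed = 1234)
train  <- splits[[1]]
valid  <- splits[[2]]
test   <- splits[[3]]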
## Selecting Hyperparameters Manually and With Cartesian Grid
The traditional method of selecting the values for your hyperparameters has been to individually train a number of models with different combinations of values, and then to compare the model performance to choose the best model. For example, for a tree-based model you might choose ntrees of (50, 100 and 200) and max_depth of (5, 10, 15 and 20) for a total of 3 x 4 = 12 models. This process of trying out hyperparameter sets by hand is called manual search. By looking at the models’ predictive performance, as measured by test-set, cross-validation or validation metrics, you select the best hyperparameter settings for your data and needs.
As the number of hyperparameters and the lists of desired values increase this obviously becomes quite tedious and difficult to manage.
#### A Little Help?
For several years H2O has included grid search, also known as Cartesian Hyperparameter Search or exhaustive search. Grid search builds models for every combination of hyperparameter values that you specify.
Bergstra and Bengio write on p. 283:
Grid search … typically finds a better [set of hyperparameters] than purely manual sequential optimization (in the same amount of time)
H2O keeps track of all the models resulting from the search, and allows you to sort the list based on any supported model metric (e.g., AUC or log loss). For the example above, H2O would build your 12 models and return the list sorted with the best first, either using the metric of your choice or automatically using one that’s generally appropriate for your model’s category.
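For the 12-model example above, a minimal R sketch of such a Cartesian grid might look like this (predictors, response and train are assumed from context):

## Sketch: 3 x 4 = 12 models, every combination of ntrees and max_depth.
hyper_params <- list(ntrees = c(50, 100, 200), max_depth = c(5, 10, 15, 20))
cart_grid <- h2o.grid("gbm",
                      grid_id = "cart_grid",
                      x = predictors, y = response,
                      training_frame = train,
                      hyper_params = hyper_params)
sortedCart <- h2o.getGrid("cart_grid", sort_by = "auc", decreasing = TRUE)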
H2O allows you to run multiple hyperparameter searches and to collect all the models for comparison in a single sortable result set: just name your grid and run multiple searches. You can even add models from manual searches to the result set by specifying a grid search with a single value for each interesting hyperparameter:
# Begin with a random search of a space of 6 * 5 = 30 possible models:
hyper_parameters = { 'alpha': [0.01,0.1,0.3,0.5,0.7,0.9],
'lambda': [1e-4,1e-5,1e-6,1e-7,1e-8] }
search_criteria = { 'strategy': "RandomDiscrete", 'seed': 42,
'stopping_metric': "AUTO",
'stopping_tolerance': 0.001,
'stopping_rounds': 2 }
random_plus_manual =
H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
hyper_parameters,
grid_id="random_plus_manual",
search_criteria=search_criteria)
random_plus_manual.train(x=x,y=y, training_frame=training_data)
# Now add a manual search to the results:
manual_hyper_parameters = {'alpha': [0.05], 'lambda': [1e-4]}
random_plus_manual =
H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
manual_hyper_parameters,
grid_id="random_plus_manual")
random_plus_manual.train(x=x,y=y, training_frame=training_data)
random_plus_manual.show()
print(random_plus_manual.sort_by('F1', False))
## Searching Large Hyperparameter Spaces
As the number of hyperparameters being tuned increases, and the ranges of values you would like to explore grow, you obviously get a combinatorial explosion in the number of models required for an exhaustive search. Since we always have time constraints on the model tuning process, the obvious thing to do is to narrow down our choices by doing a coarser search of the space. Given a fixed amount of time, making random choices of hyperparameter values generally gives results that are better than the best results of a Cartesian (exhaustive) search.
Bergstra and Bengio write on p. 281:
Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.

Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space.

[F]or most data sets only a few of the hyper-parameters really matter, but … different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets.

We propose random search as a substitute and baseline that is both reasonably efficient (roughly equivalent to or better than combining manual search and grid search, in our experiments) and keeping the advantages of implementation simplicity and reproducibility of pure grid search.

[R]andom search … trades a small reduction in efficiency in low-dimensional spaces for a large improvement in efficiency in high-dimensional search spaces.
After doing a random search, if desired you can then iterate by “zooming in” on the regions of the hyperparameter space which look promising. You can do this by running additional, more targeted, random or Cartesian hyperparameter searches or manual searches. For example, if you started with alpha values of [0.0, 0.25, 0.5, 0.75, 1.0] and the middle values look promising, you can follow up with a finer grid of [0.3, 0.4, 0.5, 0.6, 0.7].
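A minimal R sketch of such a follow-up search, reusing the alpha example (the binomial family and frame names are assumptions):

## After a coarse pass over alpha = c(0.0, 0.25, 0.5, 0.75, 1.0),
## follow up with a finer grid around the promising middle values.
fine_params <- list(alpha = c(0.3, 0.4, 0.5, 0.6, 0.7))
zoom_grid <- h2o.grid("glm",
                      grid_id = "alpha_zoom",
                      x = predictors, y = response,
                      training_frame = train,
                      family = "binomial",       ## assumed binary response
                      hyper_params = fine_params)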
## Random Hyperparameter Search in H2O
H2O has supported random hyperparameter search since version 3.8.1.1. To use it, specify a grid search as you would with a Cartesian search, but add search criteria parameters to control the type and extent of the search. You can specify a max runtime for the grid, a max number of models to build, or metric-based automatic early stopping. If you choose several of these, then H2O will stop when the first of the criteria is met. As an example, you might specify “stop when MSE has improved over the moving average of the best 5 models by less than 0.0001, but take no more than 12 hours”.
H2O will choose a random set of hyperparameter values from the ones that you specify, without repeats, and build the models sequentially. You can look at the incremental results while the models are being built by fetching the grid with the h2o.getGrid (R) or h2o.get_grid (Python) functions. There’s also a getGrids command in Flow that will let you click on any of the grids you’ve built. H2O’s Flow UI will soon plot the error metric as the grid is being built, to make the progress easy to visualize.
#### Choosing Search Criteria
In general, metric-based early stopping optionally combined with max runtime is the best choice. The number of models it will take to converge toward a global best can vary a lot (see below), and metric-based early stopping accounts for this automatically by stopping the search process when the error curve (learning curve[3]) flattens out.
The number of models required for convergence depends on a number of things, but mostly on the “shape” of the error function in the hyperparameter space [Bergstra and Bengio p. 291]. While most algorithms perform well in a fairly large region of the hyperparameter space on most datasets, some combinations of dataset and algorithm are very sensitive: they have a very “peaked” error function. In tuning neural networks with a large number of hyperparameters on various datasets, Bergstra and Bengio find convergence within 2-64 trials (models built), depending largely on which hyperparameters they choose to tune. In some classes of search they reach convergence in 4-8 trials, even with a very large search space:
Random experiment efficiency curves of a single-layer neural network for eight of the data sets used in Larochelle et al. (2007) … (7 hyper-parameters to optimize). … Random searches of 8 trials match or outperform grid searches of (on average) 100 trials.
Simpler algorithms such as GBM and GLM should require few trials to get close to a global minimum.
#### Examples: R
This example is clipped from GridSearch.md:
# Construct a large Cartesian hyper-parameter space
ntrees_opts = c(10000) # early stopping will stop earlier
max_depth_opts = seq(1,20)
min_rows_opts = c(1,5,10,20,50,100)
learn_rate_opts = seq(0.001,0.01,0.001)
sample_rate_opts = seq(0.3,1,0.05)
col_sample_rate_opts = seq(0.3,1,0.05)
col_sample_rate_per_tree_opts = seq(0.3,1,0.05)
#nbins_cats_opts = seq(100,10000,100) # no categorical features
# in this dataset
hyper_params = list( ntrees = ntrees_opts,
max_depth = max_depth_opts,
min_rows = min_rows_opts,
learn_rate = learn_rate_opts,
sample_rate = sample_rate_opts,
col_sample_rate = col_sample_rate_opts,
col_sample_rate_per_tree = col_sample_rate_per_tree_opts
#,nbins_cats = nbins_cats_opts
)
# Search a random subset of these hyper-parameters. Max runtime
# and max models are enforced, and the search will stop after we
# don't improve much over the best 5 random models.
search_criteria = list(strategy = "RandomDiscrete",
max_runtime_secs = 600,
max_models = 100,
stopping_metric = "AUTO",
stopping_tolerance = 0.00001,
stopping_rounds = 5,
seed = 123456)
gbm_grid <- h2o.grid("gbm",
grid_id = "mygrid",
x = predictors,
y = response,
# faster to use an 80/20 split
training_frame = trainSplit,
validation_frame = validSplit,
nfolds = 0,
# alternatively, use N-fold cross-validation:
# training_frame = train,
# nfolds = 5,
# Gaussian is best for MSE loss, but can try
# other distributions ("laplace", "quantile"):
distribution="gaussian",
# stop as soon as mse doesn't improve by
# more than 0.1% on the validation set,
# for 2 consecutive scoring events:
stopping_rounds = 2,
stopping_tolerance = 1e-3,
stopping_metric = "MSE",
# how often to score (affects early stopping):
score_tree_interval = 100,
## seed to control the sampling of the
## Cartesian hyper-parameter space:
seed = 123456,
hyper_params = hyper_params,
search_criteria = search_criteria)
gbm_sorted_grid <- h2o.getGrid(grid_id = "mygrid", sort_by = "mse")
print(gbm_sorted_grid)
best_model <- h2o.getModel(gbm_sorted_grid@model_ids[[1]])
summary(best_model)
You can find another example here.
#### Examples: Python
This example is clipped from pyunit_benign_glm_grid.py:
hyper_parameters = {'alpha': [0.01,0.3,0.5], 'lambda': [1e-5,1e-6,1e-7,1e-8]}
# test search_criteria plumbing and max_models
search_criteria = { 'strategy': "RandomDiscrete", 'max_models': 3 }
max_models_g = H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial'),
hyper_parameters, search_criteria=search_criteria)
max_models_g.train(x=x,y=y, training_frame=training_data)
max_models_g.show()
print(max_models_g.grid_id)
print(max_models_g.sort_by('F1', False))
assert len(max_models_g.models) == 3, "expected 3 models, got: {}".format(len(max_models_g.models))
print(max_models_g.sorted_metric_table())
print(max_models_g.get_grid("r2"))
# test search_criteria plumbing and asymptotic stopping
search_criteria = { 'strategy': "RandomDiscrete", 'seed': 42,
'stopping_metric': "AUTO", 'stopping_tolerance': 0.1,
'stopping_rounds': 2 }
asymp_g = H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
hyper_parameters, search_criteria=search_criteria)
asymp_g.train(x=x,y=y, training_frame=training_data)
asymp_g.show()
print(asymp_g.grid_id)
print(asymp_g.sort_by('F1', False))
assert len(asymp_g.models) == 5, "expected 5 models, got: {}".format(len(asymp_g.models))
#### Examples: Flow
Flow includes an example called GBM_GridSearch.flow which does both Cartesian and random searches.
This section covers possible improvements for hyperparameter search in H2O and lays out a roadmap.
#### Ease of Use
With the addition of random hyperparameter search, it becomes more practical for non-experts to get good, albeit not expert, results with the ML model training process:
Algorithmic approaches to hyper-parameter optimization make machine learning results easier to disseminate, reproduce, and transfer to other domains.[4] p. 8
We are looking into adding either fixed or heuristically-driven hyperparameter spaces for use with random search, essentially an “I’m Feeling Lucky” button for model building.
#### Covering the Space Better
One possibility for improving random search is choosing sets of hyperparameters which cover the space more efficiently than randomly choosing each value independently. Bergstra and Bengio cover this on pages 295-297, and find a potential improvement of only a few percentage points and only when doing searches of 100-500 models. This is because, as they state earlier, the number of hyperparameters which are important for a given dataset is quite small (1-4), and the random search process covers this low number of dimensions quite well. See the illustration of the projection of high hyperparameter space dimensions onto low on Bergstra and Bengio p. 284 and the plots of hyperparameter importance by dataset on p. 294. On p. 295 they show that the speed of convergence of the search is directly related to the number of hyperparameters which are important for the given dataset.
There is ongoing research on trying to predetermine the “variable importance” of hyperparameters for a given dataset. If this bears fruit we will be able to narrow the search so that we converge to a globally-good model more quickly.
#### Learning the Hyperparameter Space
Bergstra and Bengio and Bergstra, Bengio, Bardenet and Kegl note that random hyperparameter search works almost as well as more sophisticated methods for the types of algorithms available in H2O. For very complex algorithms like Deep Belief Networks (not available in H2O), it can be insufficient:
Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs.
1) Random search is competitive with the manual optimization of DBNs … and 2) Automatic sequential optimization outperforms both manual and random search.
Automatic sequential optimization refers here to techniques which build a model of the hyperparameter space and use it to guide the search process. The most well-known of these is the use of Gaussian Process (GP) models. Bergstra, Bengio, Bardenet and Kegl compare random search against both Gaussian Process and Tree-structured Parzen Estimator (TPE) learning techniques. They train Deep Belief Networks with 10 hyperparameters on a very tiny dataset of 506 rows and 13 columns [Bergstra, Bengio, Bardenet and Kegl p. 5], initializing the GP and TPE models with the results of a 30-model random search.
They find that for this test case the TPE method outperforms GP, and GP outperforms random search beyond the initial 30 models. However, they can’t explain whether TPE does better because it narrows in on good hyperparameters more quickly, or conversely because it searches more randomly than GP [Bergstra, Bengio, Bardenet and Kegl p. 7]. Also note that the size of the dataset is very, very small compared with the number of internal model parameters and model tuning hyperparameters. It is a bit hard to believe that these results apply to datasets of the typical sizes seen by users of H2O (hundreds of millions or billions of rows, and hundreds or thousands of columns).
Experimentation and prototyping is clearly needed here to see which of these techniques, if any, are worth adding to H2O.
1. Bergstra and Bengio. Random Search for Hyper-Parameter Optimization, 2012
2. Trevor Hastie, Rob Tibshirani and Jerome Friedman. The Elements of Statistical Learning, 2008
3. Andrew Ng. Machine Learning, 2016
4. Bergstra, Bengio, Bardenet and Kegl. Algorithms for Hyper-parameter Optimization, 2011
## Spam Detection with Sparkling Water and Spark Machine Learning Pipelines
This short post presents the “ham or spam” demo (already posted earlier by Michal Malohlava) using our new API in the latest Sparkling Water for Spark 1.6 and earlier versions, which unifies Spark and H2O Machine Learning pipelines. It shows how to create a simple Spark Machine Learning pipeline, and a model based on the fitted pipeline, which can later be used to predict whether a particular message is spam or not.
Before diving into the demo steps, we would like to provide some details about the new features in the upcoming Sparkling Water 2.0:
• Support for Apache Spark 2.0 and backwards compatibility with all previous versions.
• The ability to run Apache Spark and Scala through H2O’s Flow UI.
• H2O feature improvements and visualizations for MLlib algorithms, including the ability to score feature importance.
• Visual intelligence for Apache Spark.
• The ability to build Ensembles using H2O plus MLlib algorithms.
• The power to export MLlib models as POJOs (Plain Old Java Objects), which can be easily run on commodity hardware.
• A toolchain for ML pipelines.
• Debugging support for Spark pipelines.
• Model and data governance through Steam.
• Bringing H2O’s powerful data munging capabilities to Apache Spark.
In order to run the code below, start your Spark shell with the Sparkling Water JAR attached, or use the sparkling-shell script, which does this for you.
You can start the Spark shell with Sparkling Water as follows:
\$SPARK_HOME/bin/spark-submit \
--class water.SparklingWaterDriver \
--packages ai.h2o:sparkling-water-examples_2.10:1.6.5 \
--executor-memory=6g \
--driver-memory=6g /dev/null
The preferred versions are Spark 1.6 and Sparkling Water 1.6.x.
## Prepare the coding environment
Here we just import all required libraries.
import org.apache.spark.SparkFiles
import org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.feature._
import org.apache.spark.ml.h2o.H2OPipeline
import org.apache.spark.ml.h2o.features.{ColRemover, DatasetSplitter}
import org.apache.spark.ml.h2o.models.H2ODeepLearning
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import water.support.SparkContextSupport
import water.fvec.H2OFrame
Add our dataset to the Spark environment. The dataset consists of 2 columns: the first is the label (ham or spam) and the second is the message itself. We don’t have to explicitly ask for the Spark context, since it’s already available via the sc variable.
val smsDataFileName = "smsData.txt"
val smsDataFilePath = "examples/smalldata/" + smsDataFileName
// Register the file with Spark so that load() below can find it via SparkFiles.get
SparkContextSupport.addFiles(sc, smsDataFilePath)
Create SQL support.
implicit val sqlContext = SQLContext.getOrCreate(sc)
Start H2O services.
import org.apache.spark.h2o._
implicit val h2oContext = H2OContext.getOrCreate(sc)
Create a helper method which loads the dataset, performs some basic filtering, and finally creates a Spark DataFrame with 2 columns: label and text.
def load(dataFile: String)(implicit sqlContext: SQLContext): DataFrame = {
val smsSchema = StructType(Array(
StructField("label", StringType, nullable = false),
StructField("text", StringType, nullable = false)))
val rowRDD = sc.textFile(SparkFiles.get(dataFile)).map(_.split("\t")).filter(r => !r(0).isEmpty).map(p => Row(p(0),p(1)))
sqlContext.createDataFrame(rowRDD, smsSchema)
}
## Define the pipeline stages
In Spark, a pipeline is formed of two basic elements: transformers and estimators. Estimators usually encapsulate an algorithm for model generation, and their output are transformers. While fitting the pipeline, all transformers and estimators are executed, and estimators are converted to transformers. The model generated by the pipeline contains only transformers. More about Spark pipelines can be found in Spark’s pipeline overview.
In H2O we created a new type of pipeline stage called OneTimeTransformer. This transformer works similarly to Spark’s estimator in that it is only executed while fitting the pipeline. It does not, however, produce a transformer during this fitting stage, and the model generated by the pipeline does not contain this OneTimeTransformer.
An example of a one-time transformer is splitting the input data into training and validation datasets using H2O Frames. We don’t need this one-time transformer to be executed every time we make a prediction with the model; we just need this code to be executed when we are fitting the pipeline to the data.
This pipeline stage uses Spark’s RegexTokenizer to tokenize the messages. We just specify the input column and the output column for the tokenized messages.
val tokenizer = new RegexTokenizer().
setInputCol("text").
setOutputCol("words").
setMinTokenLength(3).
setGaps(false).
setPattern("[a-zA-Z]+")
Remove unnecessary words using Spark’s StopWordsRemover.
val stopWordsRemover = new StopWordsRemover().
setInputCol(tokenizer.getOutputCol).
setOutputCol("filtered").
setStopWords(Array("the", "a", "", "in", "on", "at", "as", "not", "for")).
setCaseSensitive(false)
Vectorize the words using Spark’s HashingTF.
val hashingTF = new HashingTF().
setNumFeatures(1 << 10).
setInputCol(tokenizer.getOutputCol).
setOutputCol("wordToIndex")
Create inverse document frequencies based on the hashed words. This creates a numerical representation of how much information a given word provides in the whole message.
val idf = new IDF().
setMinDocFreq(4).
setInputCol(hashingTF.getOutputCol).
setOutputCol("tf_idf")
This pipeline stage is a one-time transformer. If setKeep(true) is called on it, it preserves the specified columns instead of deleting them.
val colRemover = new ColRemover().
setKeep(true).
setColumns(Array[String]("label", "tf_idf"))
Split the dataset and store the splits with the specified keys in H2O’s distributed storage, called DKV. This is a one-time transformer which is executed only during the fitting stage. It determines which frame is passed on the output, in the following order:
1. If the train key is specified using the setTrainKey method and the key is also specified in the list of keys, then the frame with this key is passed on the output
2. Otherwise, if the default key “train.hex” is specified in the list of keys, then the frame with this key is passed on the output
3. Otherwise the first frame specified in the list of keys is passed on the output
val splitter = new DatasetSplitter().
setKeys(Array[String]("train.hex", "valid.hex")).
setRatios(Array[Double](0.8)).
setTrainKey("train.hex")
Create H2O’s deep learning model.
If the key specifying the training set is set using setTrainKey, then the frame with this key is used as the training frame; otherwise the frame from the previous stage is used as the training frame.
val dl = new H2ODeepLearning().
setEpochs(10).
setL1(0.001).
setL2(0.0).
setHidden(Array[Int](200, 200)).
setValidKey(splitter.getKeys(1)).
setResponseColumn("label")
## Create and fit the pipeline
Create the pipeline using the stages we defined earlier. Like a normal Spark pipeline, it can be formed of Spark’s transformers and estimators, but it may also contain H2O’s one-time transformers.
val pipeline = new H2OPipeline().
setStages(Array(tokenizer, stopWordsRemover, hashingTF, idf, colRemover, splitter, dl))
Train the pipeline model by fitting it to a Spark DataFrame:
val data = load("smsData.txt")
val model = pipeline.fit(data)
Now we can optionally save the model to disk and load it again.
model.write.overwrite().save("/tmp/hamOrSpamPipeline")
val loadedModel = PipelineModel.load("/tmp/hamOrSpamPipeline")
We can also save this unfitted pipeline to disk and load it again.
pipeline.write.overwrite().save("/tmp/unfit-hamOrSpamPipeline")
val loadedPipeline = H2OPipeline.load("/tmp/unfit-hamOrSpamPipeline")
Train the pipeline model again on the loaded pipeline, just to show that the deserialized pipeline works as it should.
val modelOfLoadedPipeline = loadedPipeline.fit(data)
Create a helper function for predictions on unlabeled data. This method uses the model generated by the pipeline. To make a prediction, we call the transform method on the generated model, with a Spark DataFrame as an argument. This call executes each transformer specified in the pipeline, one after another, producing a Spark DataFrame with predictions.
def isSpam(smsText: String,
model: PipelineModel,
h2oContext: H2OContext,
hamThreshold: Double = 0.5):Boolean = {
import h2oContext.implicits._
val smsTextDF = sc.parallelize(Seq(smsText)).toDF("text") // convert to dataframe with one column named "text"
val prediction: H2OFrame = model.transform(smsTextDF)
prediction.vecs()(1).at(0) < hamThreshold
}
## Try it!
println(isSpam("Michal, h2oworld party tonight in MV?", modelOfLoadedPipeline, h2oContext))
println(isSpam("We tried to contact you re your reply to our offer of a Video Handset? 750 anytime any networks mins? UNLIMITED TEXT?", loadedModel, h2oContext))
In this article we showed how Spark pipelines and H2O algorithms work together seamlessly in the Spark environment. We strive to be consistent with the Spark API in H2O.ai and to make the life of a developer/data scientist easier by hiding H2O internals and exposing APIs that are natural for Spark users.
https://studydaddy.com/question/how-will-u-justify-a-lone-pair-on-carbon-in-co
# How will you justify a lone pair on carbon in CO?
By formal charge separation.

A typical Lewis structure used for the carbon monoxide molecule is ${}^{-}\!:\!\mathrm{C}\equiv\mathrm{O}^{+}$. Of course this is a formalism, that is, a way to view the molecule in an abstract sense. However, such a Lewis structure does reflect the experimental fact that when carbon monoxide binds to a metal centre, it usually binds through the carbon.
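As a worked check (standard formal-charge arithmetic, not part of the original answer), with $FC = V - N - \tfrac{B}{2}$ for valence electrons $V$, non-bonding electrons $N$ and bonding electrons $B$:

$$FC(\mathrm{C}) = 4 - 2 - \tfrac{6}{2} = -1, \qquad FC(\mathrm{O}) = 6 - 2 - \tfrac{6}{2} = +1$$

The $-1$ formal charge on carbon corresponds to its lone pair.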
https://indico.desy.de/event/28202/contributions/105865/
# EPS-HEP2021 conference
26-30 July 2021
Zoom
Europe/Berlin timezone
## Search for rare electroweak decay $B^{+} \to K^{+}\nu\bar{\nu}$ in early Belle II dataset
29 Jul 2021, 10:15 (15m), Zoom
Parallel session talk, Flavour Physics and CP Violation
### Speaker
Simon Kurz (BELLE (BELLE II Experiment))
### Description
In recent years, several measurements of $B$-decays with flavor-changing neutral currents (FCNC), i.e. $b\to s \ell \ell$ transitions, have hinted at deviations from the Standard Model (SM) predictions.
A search for the flavor-changing neutral current decay $B^{+}\to K^{+}\nu\bar{\nu}$ is performed with a data sample corresponding to $63~\mathrm{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance by the Belle II experiment. A novel measurement method is developed, which exploits topological properties of the decay that differ from both generic $B$-meson decays and light-quark pair production. This inclusive tagging approach has the benefit of a higher signal efficiency compared to previous searches for this rare decay. As no significant signal is observed, an upper limit of $4.1 \times 10^{-5}$ on the branching fraction of $B^{+}\to K^{+}\nu\bar{\nu}$ is set at the 90% confidence level. We will discuss this novel analysis technique and the result.
Contact: Jim Libby (libby@iitm.ac.in), Belle II
https://stats.stackexchange.com/questions/95432/hidden-state-models-vs-stateless-models-for-time-series-regression
# Hidden state models vs. stateless models for time series regression
This is a quite generic question: assume I want to build a model to predict the next observation based on the previous $N$ observations ($N$ can be a parameter to optimize experimentally). So we basically have a sliding window of input features to predict the next observation.
I can use a Hidden Markov Model approach, i.e. Baum-Welch to estimate a model, then Viterbi to predict a current state based on the last $N$ observations, then predict the most likely next state based on the current state, and then predict the next observation using the most likely next state and the HMM parameters (or variants such as find the predictive distribution of the next observation).
Or I can use a much simpler approach, using a stateless model (which can get as input the previous $N$ observations), e.g. SVM, linear regression, splines, regression trees, nearest neighbors, etc. Such models are based on minimizing some prediction error over the training set and are therefore, conceptually, much simpler than a hidden state based model.
Can someone share her/his experience in dealing with such a modelling choice? What would speak in favour of the HMM, and what in favour of a regression approach? Intuitively one should take the simplest model possible to avoid over-fitting; this speaks in favour of a stateless approach... We also have to consider that both approaches get the same input data for training (I think this implies that if we do not incorporate additional domain knowledge in the modelling of a hidden state model, e.g. fix certain states and transition probabilities, there is no reason why a hidden state model should perform better). In the end one can of course play with both approaches and see what performs better on a validation set, but some heuristics based on practical experience might also be helpful...
Note: for me it is important to predict only certain events; I prefer a model which predicts a few "interesting/rare" events well, rather than a model which predicts "average/frequent" events well but the interesting ones not so well. Perhaps this has an implication for the modelling choice. Thanks.
• Can you clarify why you believe regression models are necessarily stateless? Dynamic linear regression models (in which previous values of the predictand are included on the right-hand side of the model equation) would seem very much to be state-conditioned. But perhaps I am missing something. – Alexis May 1 '14 at 18:12
• thanks for reading the question. I would say it is a bit of a question of semantics. I also give an example of regression models which include the n past observation values on the right-hand side of the model; such a model is of course dynamic. However, I was referring more to the concept of a hidden/latent variable, for which EM techniques are usually used to find the model, vs. models for which we do not have such hidden states (i.e. the states are observable, they are the observations). From a practical and pragmatic perspective, is it possible to tell what works better and when? – Mannaggia May 1 '14 at 20:19
• I missed the fact that you refer to past values of the prediction as inputs. Are such models the equivalent of a hidden state model (in principle they would just include more than N observations, replacing the equation for the past predictions)? For me the question is more whether we observe the state and model it, or whether we infer the state given an assumption about the model. I am more interested in the practical aspect, however, not the mathematical one, i.e. is it possible to tell under which conditions the one or the other approach works better? (I think no theorem can provide an answer to this question) – Mannaggia May 1 '14 at 20:28
• Perhaps this earlier question is one half of the question presented here. – Meadowlark Bradsher May 2 '14 at 3:37
In short, I think they work in different learning paradigms.
State-space models (hidden state models) and the stateless models you mentioned discover the underlying relationship of your time series in different learning paradigms: (1) maximum-likelihood estimation, (2) Bayes' inference, (3) empirical risk minimization.
In a state-space model,
let $x_t$ be the hidden state and $y_t$ the observables, $t>0$ (assume there is no control).
You assume the following relationships in the model:
$P(x_0)$ as a prior,
$P(x_t | x_{t-1})$ for $t \geq 1$ as how your state changes (in an HMM, it is a transition matrix),
$P(y_t | x_t)$ for $t \geq 1$ as how you observe (in an HMM, it could be normal distributions conditioned on $x_t$),
and $y_t$ only depends on $x_t$.
When you use Baum-Welch to estimate the parameters, you are in fact looking for a maximum-likelihood estimate of the HMM. If you use a Kalman filter, you are solving a special case of the Bayesian filtering problem (the update step is in fact an application of Bayes' theorem):
Prediction step:
$\displaystyle P(x_t|y_{1:t-1}) = \int P(x_t|x_{t-1})P(x_{t-1}|y_{1:t-1}) \, dx_{t-1}$
Update step:
$\displaystyle P(x_t|y_{1:t}) = \frac{P(y_t|x_t)P(x_t|y_{1:t-1})}{\int P(y_t|x_t)P(x_t|y_{1:t-1}) \, dx_t}$
In the Kalman filter, since we assume the noise statistics are Gaussian and the relationships $P(x_t|x_{t-1})$ and $P(y_t|x_t)$ are linear, the distributions $P(x_t|y_{1:t-1})$ and $P(x_t|y_{1:t})$ stay Gaussian, so the mean and variance of $x_t$ are sufficient to represent them and the algorithm works as matrix formulas.
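To make the prediction/update recursion concrete, here is a minimal one-dimensional sketch in R (a random-walk state with illustrative noise variances q and r; this example is ours, not from the original answer):

## Minimal 1-D Kalman filter: random-walk state x_t = x_{t-1} + w, w ~ N(0, q),
## observation y_t = x_t + v, v ~ N(0, r). Illustrative values only.
kalman1d <- function(y, q = 0.01, r = 1, m0 = 0, p0 = 1) {
  m <- m0; p <- p0
  means <- numeric(length(y))
  for (t in seq_along(y)) {
    p <- p + q                    # prediction step: variance grows by q
    k <- p / (p + r)              # Kalman gain
    m <- m + k * (y[t] - m)       # update step: move toward the observation
    p <- (1 - k) * p              # posterior variance shrinks
    means[t] <- m
  }
  means
}

y <- cumsum(rnorm(100, sd = 0.1)) + rnorm(100)  # noisy random walk
xhat <- kalman1d(y)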
On the other hand, the other stateless models you mentioned (SVM, splines, regression trees, nearest neighbors) try to discover the underlying relationship of $(\{y_0,y_1,...,y_{t-1}\}, y_t)$ by empirical risk minimization.
For maximum-likelihood estimation, you need to parametrize the underlying probability distribution first (as in an HMM: you have the transition matrix, and the observables are $(\mu_j,\sigma_j)$ for some $j$).
For the application of Bayes' theorem, you first need a "correct" prior $P(A)$, in the sense that $P(A) \neq 0$. If $P(A)=0$, then any inference results in $0$, since $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$.
For empirical risk minimization, universal consistency is guaranteed for any underlying probability distribution, provided the VC dimension of the learning rule does not grow too fast as the number of available data points $n \to \infty$.
https://socratic.org/questions/absolute-zero-is-what-temperature-on-the-celsius-and-fahrenheit-scales
# Absolute zero is what temperature on the Celsius and Fahrenheit scales?
Aug 3, 2017
Well, on the Celsius scale it is just additively different:
"0 K" harr ul(-273.15^@ "C")
The Fahrenheit scale has different increments to confuse us...
$\text{^@ "C" xx 9/5 + 32 = ""^@ "F}$
And thus, the temperature in $\text{^@ "F}$ would be...
-273.15^@ "C" xx 9/5 + 32 = ul(-459.67^@ "F")
Aug 3, 2017
Absolute zero is $\text{0 Kelvins}$. On the Celsius scale it is $- {273.15}^{\circ} \text{C}$, and on the Fahrenheit scale it is $- {459.67}^{\circ} \text{F}$.
#### Explanation:
There is no commonly given single-step formula for converting Kelvins directly to degrees Fahrenheit, so you convert from Kelvins to degrees Celsius, and then from degrees Celsius to degrees Fahrenheit.
Absolute Zero in Kelvins to Celsius
$\text{K"-273.15}$$=$$\text{^@"C}$
$\text{0 K} - 273.15$$=$$- {273.15}^{\circ} \text{C}$
Absolute Zero in Celsius to Fahrenheit
$\text{^@"F}$$=$$\text{^@"C} \times \frac{9}{5} + 32$
$\text{^@"F}$$=$$- {273.15}^{\circ} \text{C"xx9/5+32}$$=$$- {459.67}^{\circ} \text{F}$
You can substitute $1.8$ for the fraction $\frac{9}{5}$.
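As a quick sketch of the two-step conversion in code (the helper names are my own):

```python
def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32      # 9/5 may be replaced by 1.8

c = kelvin_to_celsius(0)       # -273.15
f = celsius_to_fahrenheit(c)   # -459.67
print(c, f)
```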
http://physics.stackexchange.com/questions/65784/origin-of-ladder-operator-methods
# Origin of Ladder Operator methods
Ladder operators are found in various contexts (such as calculating the spectra of the harmonic oscillator and angular momentum) in almost all introductory Quantum Mechanics textbooks. And every book I have consulted starts by defining the ladder operators. This makes me wonder why do these operators have their respective forms? I.e. why is the ladder operator for the harmonic oscillator
$$\hat{a}=\sqrt{\frac{m\omega}{2\hbar}} \left( \hat{x} + \frac{i}{m\omega}\hat{p} \right)$$
and not something else?
On a similar note, does anyone know the physicist/paper who/which proposed this method? Wikipedia mentions Dirac, but I have been unable to find any leads.
When you say "why do these operators have their respective forms?" do you mean "why is it useful to consider this particular combination of operators?" or perhaps "are the ladder operators of quantum mechanics a special case of a construction that has wider applicability?" or perhaps both? – joshphysics May 24 at 22:57
Perhaps something along the lines of the creation and annihilation operators arising when you try to factorize the Hamiltonian? This link has the details for the harmonic oscillator. – zkf May 25 at 7:57
Have a look to the Quantum harmonic oscillator – Trimok May 25 at 9:38
Joshphysics, my question is more along the lines of the first. Zkf, the link is pretty insightful, thanks. – Comp_Warrior May 25 at 10:52
You may recall from high school algebra that $x^2 + y^2 = (x + iy)(x - iy)$. Because of the way the adjoint operator works, you could define an operator $\hat a = x + iy$, and its adjoint becomes $\hat a^\dagger = x - iy$. The Hamiltonian for the quantum oscillator is just this relation with some constants. You have to be careful because the ladder operators don't commute; that causes the constant $\frac{1}{2}\hbar\omega$ to show up. Of all the sources that I've seen discuss the oscillator with the ladder operators, Griffiths (section 2.3.1) is the only one who actually explains the problem this way. The others just pull the ladder operators seemingly out of nowhere, then demonstrate that they work.
The ladder operators date at least to Dirac's Principles of Quantum Mechanics, first published in 1930. That's a really good example of Dirac just inventing the ladder operators and then showing that they solve the problem. Dirac had a tendency to bring in math that physicists at the time weren't familiar with. So it's possible that he saw the ladder operators in math, realized they could solve physics problems, and introduced them to physics. He doesn't provide a citation in Principles, so it's also possible that he invented them. The best citation for where Dirac got the ladder operators should be in one of his original papers.
Comment to the answer (v1): It may be worthwhile stressing that $x^2 + y^2 = (x + iy)(x - iy)$ is only true if $[x,y]=0$. – Qmechanic Dec 12 at 21:22
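A quick symbolic check of that caveat, sketched with sympy's noncommutative symbols (my addition, not part of the answer); the leftover $-i[x,y]$ term is what produces the $\frac{1}{2}\hbar\omega$ constant mentioned in the answer:

```python
from sympy import I, expand, symbols

# noncommutative placeholders standing in for the operators
x, y = symbols('x y', commutative=False)

print(expand((x + I*y)*(x - I*y)))
# -> x**2 - I*x*y + I*y*x + y**2, i.e. x**2 + y**2 - I*[x, y]
```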
Ladder operators are usually constructed to form a Lie algebra (we want them to have specific commutation relations). The mathematical basis is weight theory.
The important thing about Lie algebras is that they are a vector space and their elements, which are called generators, obey this commutation rule: $$[X_i,X_j]=f_{ijk}X_k$$ where we have used the summation convention. $f_{ijk}$ are just constants, so we call them structure constants.
In our case generators will be just matrices.
In general, we will have $n$ generators, which will form an algebra. There will be $m$ simultaneously diagonalizable generators (i.e. they commute with each other). These generators are called Cartan generators and they form the Cartan subalgebra. We will denote them by $H^i$ and the non-Cartan generators by $E^i$.
Each eigenvector associated to the Cartan generators is called a weight vector, $|t_i\rangle$. Its components $t_i$ are called weights. Weight vectors will correspond to physical states.
A Cartan generator will act on a weight vector as: $$H^i|t_j\rangle =t^i_j|t_j\rangle$$
At this point I should explain roots, but we shall just skip them.
Now, here is where ladder operators come into play. When a non-Cartan generator acts on a state (weight vector), the eigenvalue is shifted to $t_k^i \pm e_j^i$. When the value is raised we denote the generator by $E^j$, and when it is lowered, by $E^{-j}$. We take them to be the hermitian conjugates of each other.
Then it is possible to prove that $[H^i, E^j]=e^i_jE^j$ and $[E^j,E^{-j}]=e^k_jH^k$. These commutation relations are very important and they will be used in the angular momentum and harmonic oscillator cases.
So we are done; we just need to identify our Cartan and non-Cartan generators. Then the non-Cartan generators will move us around the possible states.
Angular momentum
We have that $J^1,J^2,J^3$ are the generators of SU(2). We choose one of these generators to be the diagonal one, typically $J_3$ (this is the Cartan generator). Then each state $|j,m\rangle$ is labeled by the eigenvalues of $J_3$, which we identify as the angular momentum $m$; the maximum angular momentum is $j$.
Since $J^1,J^2$ don't satisfy $[J^3,J^i]=\alpha J^i$ nor $[J^i,J^{-i}]=\alpha J^3$, we have to take linear combinations of them. We can show, by solving a linear system, that this combination is: $$N^\pm=\frac{1}{\sqrt{2}}(J_1\pm iJ_2)$$
These operators will change the value of the angular momentum. We can check that they satisfy the commutation rules: $$[J^3,J^\pm]=\pm J^\pm$$ $$[J^+,J^-]=J^3$$
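These relations can be sanity-checked numerically in the spin-1/2 representation $J_k=\sigma_k/2$ (a sketch; the representation choice and numpy check are my additions, with $N^\pm$ playing the role of $J^\pm$):

```python
import numpy as np

# Pauli matrices; J_k = sigma_k / 2 generate su(2) in the spin-1/2 representation
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
J1, J2, J3 = s1 / 2, s2 / 2, s3 / 2

Np = (J1 + 1j * J2) / np.sqrt(2)   # N^+ (raising), playing the role of J^+
Nm = (J1 - 1j * J2) / np.sqrt(2)   # N^- (lowering), playing the role of J^-

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(J3, Np), Np))    # [J3, N+] = +N+
print(np.allclose(comm(J3, Nm), -Nm))   # [J3, N-] = -N-
print(np.allclose(comm(Np, Nm), J3))    # [N+, N-] = J3
```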
Harmonic oscillator
(I'm a bit confused with SU(1,1) algebras and that stuff, so someone else should explain it)
In this case there are two Cartan generators: the identity $\mathbb{I}$ and the Hamiltonian $H$ (I think the Hamiltonian could be interchanged with the number operator $N=a^\dagger a$). We also know from QM that $[x,p]=i$ ($\hbar=1$). As in the previous case, we take linear combinations to form the ladder operators. We obtain: $$[H,\hat{a}]=-\hat{a}$$ $$[H,\hat{a}^\dagger]=\hat{a}^\dagger$$ $$[\hat{a},\hat{a}^\dagger]=\mathbb{I}$$ $$[\hat{a},\hat{a}]=0$$ $$[\hat{a}^\dagger,\hat{a}^\dagger]=0$$
The harmonic oscillator can be extended in QFT to study bosons and fermions.
If you want more information about the math of ladder operators in angular momentum, you should have a look at Georgi's book. For the harmonic oscillator there is not so much information; I like these notes: http://www.math.columbia.edu/~woit/QM/old-fermions-clifford.pdf .
Wow, this is pretty detailed, but I can't say I understand it; especially since I don't know about group theory or Lie algebras! – Comp_Warrior May 25 at 10:53
@Comp_Warrior No problem, I'll try to explain it better. – jinawee May 25 at 11:38
Can you give a link to the Georgi's book you refered? – Urukec Jun 4 at 17:43
@Urukec The book is this. It explains weight representations and almost every type of algebra in Particle Physics (SU(n), SO(n), E6, etc). – jinawee Jun 5 at 17:55
@jinawee Thanks! – Urukec Jun 6 at 6:30
Why do they have that form and not some other? I suppose one answer is "the form of the Hamiltonian".
Because of the form of the Hamiltonian for the QHO, there is a "number" basis for the states.
Suppose you don't use the ladder operator algebra to solve for the energy eigenstates of the Hamiltonian. You still find that the energy eigenvalues are of the form $(n + \frac{1}{2})\hbar \omega, \ n = 0,1,2,...$
Thus, there is a basis, the number basis, consisting of states with eigenvalue $n$ and an associated number operator, $\hat N$.
$$\hat N | n \rangle = n | n \rangle$$
Then the Hamiltonian can be written as:
$$\hat H = (\hat N + \frac{1}{2}) \hbar \omega$$
Factor $\hat N$ into the product of an operator and its Hermitian adjoint:
$$\hat N = \hat a^\dagger \, \hat a$$
Thus:
$$\hat H = ( \hat a^\dagger \, \hat a+ \frac{1}{2}) \hbar \omega$$
But, we also have:
$$\hat H = \frac{\hat P^2}{2m} + \frac{m \omega^2 \hat X^2}{2}$$
Equating these gives the form for $\hat a$ and $\hat a^\dagger$.
But what do these operators, $\hat a$ and $\hat a^\dagger$ do?
Using the commutation relations for $\hat X$ and $\hat P$, find that:
$$[\hat a, \hat a^\dagger] = 1$$
Thus:
$$[\hat N, \hat a^\dagger] = \hat a^\dagger$$
Operating on a number eigenstate with the above, find that:
$$\hat N (\hat a^\dagger | n \rangle) = (n+1)(\hat a^\dagger | n \rangle) \quad\Longrightarrow\quad \hat a^\dagger | n \rangle = \lambda |n+1\rangle$$
So, we find that $\hat a^\dagger$ is a raising operator connecting the number state $|n\rangle$ to the state $|n+1\rangle$.
By similar reasoning, we find that $\hat a$ is a lowering operator.
So, without assuming ladder operators or their form, we necessarily arrive at them.
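One concrete way to see this is to build $\hat a$ as a truncated matrix in the number basis and check the relations numerically (a sketch; the truncation size is an arbitrary assumption, and $[\hat a, \hat a^\dagger]=1$ necessarily fails at the truncation edge):

```python
import numpy as np

N = 8                                            # truncation of the number basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # lowering: a|n> = sqrt(n)|n-1>
ad = a.conj().T                                  # raising operator a-dagger

num = ad @ a                                     # number operator N = a† a
print(np.allclose(np.diag(num), np.arange(N)))   # True: eigenvalues 0, 1, ..., N-1

comm = a @ ad - ad @ a                           # [a, a†]
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # identity away from the edge
```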
I wonder, for your last equation, how you can form |n+1>? It should be an eigenvalue, not a ket. – Outrageous Nov 28 at 15:45
@Outrageous, You would contract the ket with its associated bra to get just the eigenvalue. The number operator on a number eigenket is the number eigenket scaled by the eigenvalue. $$\hat N | n \rangle = n | n \rangle$$ – Alfred Centauri Nov 29 at 20:42
http://math.stackexchange.com/questions/171071/lie-algebra-representation-induced-from-homomorphism-between-spin-group-and-son
# Lie algebra representation induced from homomorphism between spin group and SO(n,n)
Consider the spin group; we know it is a double cover via the map:
$\rho: Spin(n,n)\longrightarrow SO(n,n)$ s.t. $\rho(x)(v)= xvx^{-1}$, where $v$ is an element of the $2n$-dimensional vector space $V$ and $x$ is an element of the spin group (multiplications are Clifford multiplications). I read that this map induces a Lie algebra representation given by: $d\rho: so(n,n) \longrightarrow so(n,n)$ s.t. $d\rho_{x}(v)=xv-vx$, where $x$ is an element of $so(n,n)$ and $v$ is again an element of $V$.
I cannot understand the derivation of this lie algebra representation. Can anyone help me? :)
-
This basically comes from the product rule of differentiation. Recall that for a general Lie group homomorphism $\rho : G\to H$ you can compute its derivative $d\rho: Lie(G) \to Lie(H)$ by the formula $$d\rho(X) = \frac{d}{dt}\vert_{t=0} \rho(\exp(tX)).$$ In the case you have, this gives $$d\rho(x) (v) = \frac{d}{dt}\vert_{t=0} \exp(tx) v \exp(-tx),$$ where $x \in so(n,n)$, which is identified with the second filtration of the Clifford algebra. Now you can think of the right-hand side as taking place in the Clifford algebra, where the product rule of differentiation holds, so you get $$d\rho(x)(v) = \frac{d}{dt} \vert_{t=0} \exp(tx) v \exp(0x) + \exp(0x) v \frac{d}{dt}\vert_{t=0} \exp(-tx) = xv - vx.$$
Notice that this formula does not make sense if you think of $so(n,n)$ in terms of $2n \times 2n$ matrices since right multiplying a column vector by a matrix doesn't make sense. The $xv - vx$ is taking place in the Clifford algebra.
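For readers who want to see the product-rule computation numerically, here is a sketch of the same derivative in a plain matrix algebra (with the answer's caveat in mind: the genuine setting is the Clifford algebra, so the random matrices below are only stand-ins):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4))   # stand-in for the algebra element x
v = rng.standard_normal((4, 4))   # stand-in for the conjugated element v
eps = 1e-6

# central difference for d/dt|_{t=0} exp(t x) v exp(-t x)
lhs = (expm(eps * x) @ v @ expm(-eps * x)
       - expm(-eps * x) @ v @ expm(eps * x)) / (2 * eps)
rhs = x @ v - v @ x               # the bracket from the product rule
print(np.allclose(lhs, rhs, atol=1e-4))   # True
```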
https://rdrr.io/cran/peacots/man/significanceOfLocalPeak.html
# significanceOfLocalPeak: Statistical significance of local periodogram peaks In peacots: Periodogram Peaks in Correlated Time Series
## Description
Calculate statistical significance for a secondary periodogram peak (i.e. a non-global periodogram maximum), based on the null hypothesis of an OUSS process.
## Usage
```
significanceOfLocalPeak(power_o, lambda, power_e, time_step,
                        series_size, Nfreq, peakFreq, peakPower)
```
## Arguments
- `power_o`: Positive number. Power at zero frequency stemming from the underlying OU process.
- `lambda`: Positive number. Resilience (or relaxation rate) of the OU process, in inverse time units. This is also the inverse correlation time of the OU process.
- `power_e`: Non-negative number. Asymptotic power at large frequencies due to random measurement errors. Setting this to zero corresponds to the classical OU process.
- `time_step`: Positive number. The time step of the time series that was used to calculate the periodogram.
- `series_size`: Positive integer. The size of the time series for which the periodogram peak was calculated.
- `Nfreq`: The number of frequencies from which the local periodogram peak was picked. Typically equal to the number of frequencies in the periodogram.
- `peakFreq`: Single number. The frequency of the focal peak.
- `peakPower`: Single number. The periodogram power calculated for the focal peak.
## Details
The OUSS parameters `power_o`, `lambda` and `power_e` will typically be maximum-likelihood fitted values returned by `evaluate.pm`. The `time_step` is also returned by `evaluate.pm` and is inferred from the analysed time series. The examined periodogram peak (as defined by `peakFreq`) will typically be a secondary peak of interest, masked by other stronger peaks or a low-frequency maximum. The significance of such a peak is not defined by standard tests.
## Value
The returned P-value (referred to as “local P-value”) is the probability that an OUSS process with the specified parameters would generate a periodogram with a power-to-expectation ratio greater than `peakPower/E`, where `E` is the power spectrum of the OUSS process at frequency `peakFreq`. Hence, the significance is a measure for how much the peak power deviates from its expectation. The calculated value is an approximation. It becomes exact for long regular time series.
## Note
This statistical significance is not equivalent to the one calculated by `evaluate.pm` for the global periodogram maximum. If the investigated periodogram peak is a global maximum, then the P-value returned by `evaluate.pm` should be preferred, as it also takes into account the absolute magnitude of the peak.
## Author(s)

Stilianos Louca
## References
Louca, S., Doebeli, M. (2015) Detecting cyclicity in ecological time series, Ecology 96: 1724–1732
## See Also

`evaluate.pm`
## Examples

```
# In this example we generate a random cyclic time series, where the peak is (most likely)
# masked by a strong low-frequency maximum.
# We will use significanceOfLocalPeak() to evaluate its significance
# based on its deviation from the expected power.

# generate cyclic time series by adding a periodic signal to an OUSS process
period = 1;
times  = seq(0,20,0.2);
signal = 0.5 * cos(2*pi*times/period) +
         generate_ouss(times, mu=0, sigma=1, lambda=1, epsilon=0.5);

# calculate periodogram and fit OUSS model
report = evaluate.pm(times=times, signal=signal);
print(report)

# find which periodogram mode approximately corresponds to the frequency we are interested in
cycleMode = which(report$frequencies>=0.99/period)[1];

# calculate P-value for local peak
Pvalue = significanceOfLocalPeak(power_o     = report$power_o,
                                 lambda      = report$lambda,
                                 power_e     = report$power_e,
                                 time_step   = report$time_step,
                                 series_size = length(times),
                                 Nfreq       = length(report$frequencies),
                                 peakFreq    = report$frequencies[cycleMode],
                                 peakPower   = report$periodogram[cycleMode]);

# plot time series
old.par <- par(mfrow=c(1, 2));
plot(ts(times), ts(signal), xy.label=FALSE, type="l",
     ylab="signal", xlab="time", main="Time series (cyclic)", cex=0.8, cex.main=0.9);

# plot periodogram
title = sprintf("Periodogram OUSS analysis\nfocusing on local peak at freq=%.3g\nPlocal=%.2g",
                report$frequencies[cycleMode], Pvalue);
plot(ts(report$frequencies), ts(report$periodogram), xy.label=FALSE, type="l",
     ylab="power", xlab="frequency", main=title, col="black", cex=0.8, cex.main=0.9);

# plot fitted OUSS power spectrum
lines(report$frequencies, report$fittedPS, col="red");
par(old.par)
```
https://community.wolfram.com/groups/-/m/t/1349508
Define InputAlias shortcuts for quickly entering nicely formatted units?
Posted 8 months ago
Ref the attached notebook, I'd like to be able to define InputAlias shortcuts for quickly entering nicely formatted units. The desired format is shown at Section 3 in the attached notebook. My approach at present is to use the InputAlias values defined at Section 1. This produces input in the format shown at Section 2. I then evaluate the Quantity[] parts of these expressions in place (in the input cells) to produce the nicely formatted units shown at Section 3.

My question is whether there's a way to evaluate the Quantity[] function within the AddInputAlias automatically when the InputAlias is invoked, so that the nicely formatted form of units shown at Section 3 is displayed straight off without having to evaluate in place? This would save a lot of time.

Thanks in anticipation,
Ian

Attachments:
Posted 8 months ago
Ian, would you settle for having a palette to paste in the units in a Mathematica expression? Or being able to quickly generate a custom palette that has the units for a particular problem type?

I have an application called UnitsHelper. It's a package, with an advanced and basic palette and a custom style sheet that displays the units in black instead of grey. (It just seems to me that it's important information and should be displayed as all important information. But you don't have to use the style sheet, or could change it to your preference.) There are also quick links to Wolfram and NIST documentation. The advanced palette displays all units by group and then either alphabetically or by size. Physical constants are also on the palette. There is a facility for using reduced units such as geometric or atomic units, and also a facility to deunitize an expression to use implied input and output units. Also facilities for general decibel units.

If you're interested, send me an email from my profile page and I'll send you a Dropbox link.
Posted 8 months ago
Thanks David. I've sent you an email. I have a palette for units, but it's a bit slow to use, and I want to establish whether it's possible to achieve the same thing using just keyboard input. I've also figured out how to customise the QuantityPanel style to ditch the odd choice of a greyed-out font colour for unitized values. Look forward to seeing your UnitsHelper package.

All the best,
Ian
For your question regarding section 3, you can use an EventHandler so that the boxes are evaluated when a particular key is pressed. For example:

```
CurrentValue[SelectedNotebook[], {InputAliases, "qty"}] =
  ToBoxes@EventHandler[
    Defer@Quantity@SelectionPlaceholder[],
    {"RightArrowKeyDown" :>
      Module[{box, input, new},
        box = EvaluationBox[];
        input = First@NotebookRead@box;
        new = Quiet@Check[ToExpression[input, StandardForm, ToBoxes], input];
        NotebookWrite[box, new]
      ]}
  ]
```

With this, you can use the input alias qty to generate a Quantity template, but pressing the right arrow key will automatically evaluate the quantity in place (see attached video).

Edit: DynamicModule wasn't necessary

Attachments:
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=131&t=62029
## Determining the Change in Entropy of the Surroundings without Being Given the Reversibility
$\Delta S = \frac{q_{rev}}{T}$
Cole Woulbroun 1J
Posts: 56
Joined: Thu Jul 25, 2019 12:16 am
### Determining the Change in Entropy of the Surroundings without Being Given the Reversibility
In homework problem 4I.3, the change in the entropy of the surroundings is asked for; however, it does not specify whether the process was reversible or irreversible. Will this information be specified if asked for on the final?
ShravanPatel2B
Posts: 100
Joined: Fri Aug 30, 2019 12:18 am
### Re: Determining the Change in Entropy of the Surroundings without Being Given the Reversibility
I usually assume that the reaction is irreversible unless stated that it is reversible in the problem.
Frank He 4G
Posts: 50
Joined: Tue Nov 12, 2019 12:19 am
### Re: Determining the Change in Entropy of the Surroundings without Being Given the Reversibility
For this problem, it seems like we don't have to know the reversibility. I solved it correctly myself under the assumption that $S_{surr}$ simply equals the standard entropy of vaporization of the substance (which is given), adjusted for the amount of the substance. So I suppose the underlying assumption here is that the standard entropy of vaporization is equal to the $S_{surr}$ of vaporization, at standard conditions.
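As a numeric illustration of that assumption (the substance and values below are my own example, not from problem 4I.3): for vaporization at constant temperature, $\Delta S_{surr} = -q/T = -n\Delta H_{vap}/T$, the negative of the system's entropy of vaporization.

```python
# Illustrative example: 1.00 mol of water vaporized at its normal boiling point.
# Delta_H_vap ~ 40.7 kJ/mol is a textbook value; the homework's substance may differ.
n = 1.00           # mol
dH_vap = 40.7e3    # J/mol
T = 373.15         # K

dS_vap = n * dH_vap / T   # entropy of vaporization of the system, ~ +109 J/K
dS_surr = -dS_vap         # surroundings supply that heat: ~ -109 J/K
print(dS_vap, dS_surr)
```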
http://enehana.nohea.com/page/2/
# Continuous Deployment for ASP.NET using Git, MSBuild, MSDeploy, and TeamCity
Continuous Deployment goes a step further than Continuous Integration, but based on the same principle: the more painless the deployment process is, the more often you will do it, leading to faster development in smaller, manageable chunks.
As a C#/ASP.NET developer deploying to an IIS server, the go-to tool from Microsoft is MSDeploy (aka WebDeploy). This article primarily discusses steps in Visual Studio 2010, Web Deploy 2.0, and TeamCity 7.1. I have read numerous articles which explain using Git w/TeamCity and MSBuild, but not so much specifically with MSDeploy.
My ideal setup is to have the CI server automate all the steps which would otherwise be done manually by the developer. I am using the TeamCity 7 continuous integration server. You can mix/match your own tools, but the basic steps would be the same:
• Edit your VS web project “Package/Publish” settings
• New code changes are committed to source control branch (in my case, Git)
• TeamCity build configuration triggers builds from VCS repository (Git) when new commits are pushed up
• Build step: MSBuild builds code from .csproj, .sln or .msbuild xml file
• Build step: Run unit tests (xUnit.net or other)
• Build step: MSBuild packages code to ZIP file
• Build step: MSDeploy deploys ZIP package to remote server (development or production)
I’ll go thru the steps in detail (except test running, which is important, but a separate focus).
## Step 1: edit the Visual Studio project properties
When deploying, there are some important settings in the project which affect deployment. To see them, in your solution explorer, right-click (project name) -> Properties… , tab “Package/Publish Web” …
• Configuration: Active (Debug) – this means the ‘Debug’ config is active in VS, and you are editing it. The ‘Debug’ and ‘Release’ configurations both can be selected and independently edited.
• Web Deployment Package Settings – check “Create deployment package as zip file”. We want the ZIP file so it can be deployed separately later.
• IIS Web Site/application name – This must match the IIS Web site entry on the target server. Note i use “MyWebApp/” with no app name after the path. That is how it looks on the web server config.
Save it with your project, and make sure your changes are checked into Git (pushed to origin/master). Those settings will be pulled from version control when the CI server runs the build steps.
## Step 2: add a Build Step in the TeamCity config
I edit the Build Steps, and add a second build step, to build the MyWebApp.sln directly, using msbuild.
MSBuild
Build file path: MyWebApp/MyWebApp.sln
Targets: Build
Command line parameters: /verbosity:diagnostic
## Step 3: fix build error by installing Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package
My first build after adding the web project did fail. Here’s the error:
C:\TeamCity\buildAgent\work\be5c9bc707460fdf\MyWebApp\MyWebApp\MyWebApp.csproj(727, 3): error MSB4019: The imported project “C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets” was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
I did a little research, and found this link:
http://stackoverflow.com/questions/3980909/microsoft-webapplication-targets-was-not-found-on-the-build-server-whats-your
Basically, either we need to install VS on the build server, manually copy files, or install the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package. I’m going to try door #3.
## Step 4: Install the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package
After installing the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package on the build server, i go back in TeamCity and click the [Run…] button, which will force a new build. I have to do this because nothing changed in the Git source repository (i only installed new stuff on the server), so that won’t trigger a build.
Luckily, that satisfied the Web App build– success!
Looking in the build log, i do see it built MyWebApp.sln and MyWebApp.dll.
So build is good. Still no deployment to a server yet.
## Step 5: Install the MS Web Deployment tool
FYI, i’m following some hints from:
I get the Web Deployment Tool here and install. After reboot, the TeamCity login has a 404 error. Turns out Web Deploy has a service which listens on port 80, but so does TeamCity Tomcat server. For short term, i stop the Web Deploy web service in control panel, and start the TeamCity web service. The purpose of the Web Deployment Agent Service is to accept requests to that server from other servers. We don’t need that, because the TeamCity server will act as a client, and deploy out to other web servers.
The Web Deployment Tool also has to be installed on the target web server. I’m not going to go too far into detail here, but you have to configure the service to listen as well, so when you run the deployment command, it accepts it and installs on the server. For the development server, i set up a new account named ‘webdeploy’ with permission to install. For production web servers, i’m not enabling it yet, but i did install Web Deploy so i can do a manual run on the server using Remote Desktop (will explain later).
## Step 6: Create a MSBuild command to package the Web project
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_24.html
In that post, the example “build-it-all” command is this:
msbuild Web.csproj
/P:Configuration=Deploy-Dev
/P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=https://AutoDeploy:8172/MsDeploy.axd
/P:AllowUntrustedCertificate=True
/P:MSDeployPublishMethod=WMSvc
/P:CreatePackageOnPublish=True
/P:Password=Passw0rd
This is a package and deploy in one step. However, i opted for a different path – separate steps for packaging and deployment. This will allow cases for building a Release package but manually deploying it.
So in our case, we’ll need to do the following:
• Try using the “Debug” config. That will use our dev server web.config settings. XML transformations in Web.Debug.config get applied to Web.config during the MSBuild packaging (just as if you ran ‘Publish’ in Visual Studio).
This is the msbuild package command:
"C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
MyWebApp/MyWebApp/MyWebApp.csproj
/T:Package
/P:Configuration=Debug;PackageLocation="C:\Build\MyWebApp.Debug.zip"
Let me explain the command parts:
• MyWebApp.csproj : path to VS project file to build. There are important options in there which get set from the project Properties tabs.
• /T:Package : create a ZIP package
• /P:Configuration=Debug;PackageLocation=*** : run the Debug configuration. This is the same as Build in Visual Studio with the ‘Debug’ setting selected. The ‘Package Location’ is what it created. We will reference the package file later in the deployment command.
I tested this command running on my local PC first. When it was working, i ran the same on the CI server via Remote Desktop (for me, it’s a remote Windows 7 instance).
## Step 7: Create a Web Deploy command to deploy the project
• MsDeployServiceUrl – we’ll have to configure the development web server with Web Deploy service.
• Set up user account to connect as (deployuser)
• Have a complete working MSbuild.exe command which works on the command line
• Put the MSBuild command into a new “Deploy” step in TeamCity
After a lot of testing, i got a good command, which is here:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy.exe" -verb:sync
-source:package="C:\Build\MyWebApp.Debug.zip"
-allowUntrusted=true
This command is also worth explaining in detail:
• -verb:sync : makes the web site sync from the source to the destination
• -source:package=”C:\Build\MyWebApp.Debug.zip” : source is an MSBuild zip file package
• -dest:auto,wmsvc=devserver : use the settings in the package file to deploy to the server. The user account is an OS-level account with permission (i tried IIS users, but didn’t get it working). The hostname is specified, but not the IIS web site name (which is previously specified in the MSBuild project file in the project properties).
After deployment, i checked the IIS web server files, to make sure they had the latest DLLs and web.config file.
## Step 8: Package and Deploy from the TeamCity build steps
Since we now have 2 good commands, we have to add them to the build steps:
### MSBuild – Package step
Note – there is a special TeamCity MSBuild option, but i went with the command-line runner, just because i already had it set.
### MSDeploy – Deploy step
In this case, i had to use the command-line runner, since there is no MSDeploy option.
When you run the build with these steps, if they succeed, we finally have automatic deployment directly from git!
You can review the logs in TeamCity interface after a build/deployment, to verify everything is as expected. If there are errors, those are also in the logs.
Now every time new code gets merged and pushed to the git origin/master branch, it will automatically build and deploy to the development server. Another benefit is that the installed .NET assemblies will have version numbers which match the TeamCity build number, if you use the AssemblyInfo.cs patcher feature.
It will dramatically reduce the time needed to deploy to development – just check in your code, and it will build/deploy in a few minutes.
# ASP.NET MVC Custom Model Binder – Safe Updates for Unspecified Fields
Model Binders are one of the ASP.NET MVC framework’s celebrated features.
The typical way web apps work with a form POST is that the forms key/value pairs are iterated through and processed. In MVC, this works in the Action method’s FormCollection.
[HttpPost]
public ActionResult Edit(int id, FormCollection collection)
You create your data object and have a line per field.
dataObject.First_name = collection["first_name"].ToString();
dataObject.Age = (int)collection["age"];
This gets a little tedious, especially when you have to check values for null or other invalid values.
MVC Model Binders do some “magic” to handle the details of mapping your HTTP POST to an object. You specify the typed parameter in the ActionResult method signature…
[HttpPost]
public ActionResult Edit(int id, MyCompany.POCO.MyModel model)
… and the framework handles the mapping to the object for you.
The good part: you just saved a lot of code, which is good for efficiency and for supporting/debugging.
The bad part: what happens when we edit/update an object and the form does not include all the fields? We just overwrite those values with the default .NET values and save to the db.
For example, if the model record had a property called [phone_number], and this MVC form did not have it. Maybe the form had to hide some values from update, or else the data model changed and added a field. In an Edit/update, the steps would be:
1. creates the object from the class,
2. copy the values from the form
3. save/update to the db
… we never actually grab the current values of [phone_number], and we just set it to the .NET default value for the string type. Lost some real data. Not good.
## ActionResult method and Model Binder steps
What’s actually happening:
• framework looks at the parameter type and executes the registered IModelBinder for it. If there is none, it uses DefaultModelBinder
DefaultModelBinder will do the following: (source here)
• create a new instance of the model – default values , i.e. default(MyModel)
• read the form POST collection from HttpRequestBase
• copy all the matching fields from the Request collection to the model properties
• run it thru the MVC Validator, if any
• return it to the controller ActionResult method for further action
## Writing code in the Action method to fix the problem
My first step to deal with the issue was to fall back to the FormCollection model binder and hand-code the fix. It looks something like this:
[HttpPost]
public ActionResult Edit(int id, MyCompany.POCO.MyModel model, FormCollection collection)
{
// update
if (!ModelState.IsValid)
{
return View("Edit", model);
}
var poco = modelRepository.GetByID(id);
// map form collection to POCO
// * IMPORTANT - we only want to update entity properties which have been
// passed in on the Form POST.
// Otherwise, we could be setting fields = default when they have real data in db.
foreach (string key in collection)
{
// key = "Id", "Name", etc.
// use reflection to set the POCO property from the FormCollection
System.Reflection.PropertyInfo propertyInfo = poco.GetType().GetProperty(key);
if (propertyInfo != null)
{
// poco has the form field as a property
// convert from string to actual type
propertyInfo.SetValue(poco, Convert.ChangeType(collection[key], propertyInfo.PropertyType), null);
// InvalidCastException if failed.
}
}
modelRepository.Save(poco);
return RedirectToAction("Index");
}
In this example, modelRepository could be using NHibernate, EF, or stored procs under the hood, but it could be any data source. We loop thru each form post key and try to find a matching property on the model (using reflection). If it matches, convert the string value from the form collection and set it as the value for that propery (also using reflection).
This works and is good, until you realize you have to insert it into every Action method. We could also go traditional, and just stick it in a function call. But we want to leverage the MVC convention-over-configuration philosophy. So now we’re going to try wrapping it in a custom model binder class.
## Creating a Custom Model Binder to fix the problem
To avoid the “unspecified field” problem, we want a model binder to actually do the following on Edit:
• Get() the model from the repository by id to create a new instance of the model
• Update the fields of the persisted model which match from the FormCollection
• run it thru the MVC Validator, if any
• return it to the controller ActionResult method for further action (like Save() )
I am going to define a generic class which is good for any of my POCO types, and inherit from DefaultModelBinder:
public class PocoModelBinder<TPoco> : DefaultModelBinder
{
MyCompany.Repository.IPocoRepository<TPoco> ModelRepository;
public PocoModelBinder(MyCompany.Repository.IPocoRepository<TPoco> modelRepository)
{
this.ModelRepository = modelRepository;
}
Note, i also inject my Repository (i use IoC), so that i can retrieve the object before update.
DefaultModelBinder has the methods CreateModel() and BindModel(), and we’re going to go with that.
public object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
// http://stackoverflow.com/questions/752/get-a-new-object-instance-from-a-type-in-c
TPoco poco = (TPoco)typeof(TPoco).GetConstructor(new Type[] { }).Invoke(new object[] { });
// this is from the Route url: ~/{controller}/{action}/{id}
if (controllerContext.RouteData.Values["action"].ToString() == "Edit")
{
// for Edit(), get from Repository/database
string id = controllerContext.RouteData.Values["id"].ToString();
poco = this.ModelRepository.GetByID(Int32.Parse(id));
}
else
{
// call default CreateModel() -- for the Create method
poco = (TPoco)base.CreateModel(controllerContext, bindingContext, poco.GetType());
}
return poco;
}
As you can see, with CreateModel(), if it is an Edit call, we retrieve the model object by the id specified in the URL. This is already parsed out in the RouteData collection. If it is not an Edit, we just call the base class CreateModel(). For example, a Create() call may also use the same ModelBinder.
Now, in the BindModel() method, this is where we move our logic to iterate thru the Form key/value pairs and update the POCO. But in this version, we only update fields in the form, and leave other properties alone:
public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
object model = this.CreateModel(controllerContext, bindingContext);
// map form collection to POCO
// * IMPORTANT - we only want to update entity properties which have been
// passed in on the Form POST.
// Otherwise, we could be setting fields = default when they have real data in db.
foreach (string key in controllerContext.HttpContext.Request.Form.Keys )
{
// key = "Pub_id", "Name", etc.
// use reflection to set the POCO property from the FormCollection
// http://stackoverflow.com/questions/531025/dynamically-getting-setting-a-property-of-an-object-in-c-2005
// poco.GetType().GetProperty(key).SetValue(poco, collection[key], null);
System.Reflection.PropertyInfo propertyInfo = model.GetType().GetProperty(key);
if (propertyInfo != null)
{
// poco has the form field as a property
// convert from string to actual type
// http://stackoverflow.com/questions/1089123/c-setting-a-property-by-reflection-with-a-string-value
propertyInfo.SetValue(model, Convert.ChangeType(controllerContext.HttpContext.Request.Form[key], propertyInfo.PropertyType), null);
// InvalidCastException if failed.
}
}
return model;
}
Great. Now that we have our ModelBinder, we have to tell our MvcApplication to use it. We add the following registration to Application_Start():
// Custom Model Binders
// (the registration call was truncated in the original; this reconstruction is an
//  assumption -- the repository instance is assumed to come from your IoC container)
ModelBinders.Binders.Add(
    typeof(MyCompany.POCO.MyModel),
    new PocoModelBinder<MyCompany.POCO.MyModel>(modelRepository));
http://tex.stackexchange.com/questions/63194/my-newcommand-does-not-work-within-other-environments-align
My newcommand does not work within other environments (align*)
Minimal Example that demonstrates the problem:
\documentclass[12pt]{article}
\usepackage{amsmath}
%%% variable declaration:
\newlength{\temp}%
\newlength{\tempp}%
\newlength{\Flinewidth}%
\setlength{\Flinewidth}{0.5pt}%
\newlength{\Fraiseheight}%
\setlength{\Fraiseheight}{1ex}%
\newlength{\Fantecedentheight}%
\newlength{\Fconsequentdepth}%
\newsavebox{\Fantecedent}%
\newsavebox{\Fconsequent}%
%%% conditional stroke \Fconditional[content]{consequent}{antecedent}:
\newcommand{\Fconditional}[3][]%
{%
\unskip
\sbox{\Fantecedent}{%
\rule{0pt}{\baselineskip}% this is a strut
\ensuremath{#3}}%
\settoheight{\Fantecedentheight}{%
\rule{0pt}{\baselineskip}%
\ensuremath{#3}}%
\sbox{\Fconsequent}{%
\rule[-0.3\baselineskip]{0pt}{0.3\baselineskip}% this is a strut
\ensuremath{#2}}%
\settodepth{\Fconsequentdepth}{%
\rule[-0.3\baselineskip]{0pt}{0.3\baselineskip}%
\ensuremath{#2}}%
\setlength{\temp}{\lineskip}%
\setlength{\tempp}{\temp}%
\mbox{%
\ensuremath{#1\unskip}%
\kern-\Flinewidth%
\rule[-\tempp]{\Flinewidth}{\temp}%
\settowidth{\temp}{\usebox{\Fconsequent}\\\usebox{\Fantecedent}}%
\parbox[t]{\temp}{\usebox{\Fconsequent}\\\usebox{\Fantecedent}}}%
}%
\begin{document}
$\Fconditional[A]{B}{C}$
$\Fconditional[A]{B}{C\Fconditional{D}{E}}$
\begin{align*}
\Fconditional[A]{B}{C}
\end{align*}
\end{document}
This command does exactly what it is supposed to do as called in in-line maths mode like so:
$\Fconditional[A]{B}{C}$
Thanks to the comment about removing definitions from the macro, the command now also works nested in itself like so:
$\Fconditional[A]{B}{C\Fconditional{D}{E}}$
However if I call the same command within an align* environment like so:
\begin{align*}
\Fconditional[A]{B}{C}
\end{align*}
I get the following error message:
! Misplaced alignment tab character &.
\math@cr@@@ ->&
\omit \global \advance \row@ \@ne \ifst@rred \nonumber \fi \i...
l.51 \end{align*}
This happens regardless of whether I actually use any & characters in the align* environment. Ignoring the error produces part of the output.
Sorry I did not include a proper minimal example earlier.
-
AMS environments are executed twice so that things get measured, so all your \new... will generate errors on the second internal run. It is almost always a bad idea to have \newsavebox and \newlength inside macros, as that means you allocate new registers each time; the intended usage is that you allocate the registers you need at the start and re-use the same registers. – David Carlisle Jul 13 '12 at 15:27
OK if you need further help please edit the question so it is a complete document using ams alignment and generating the error. – David Carlisle Jul 13 '12 at 15:42
sorry, what you used, I just meant any of align or align* or alignedat etc from amsmath package. But don't just say you get an error in that case, make a document that shows the error. – David Carlisle Jul 13 '12 at 15:54
So... moving the \new... commands outside the macro has fixed half the problem. I have updated the question to reflect this, and included a minimal working example. – Psachnodaimonia Jul 13 '12 at 16:40
The use of \\ in
\settowidth{\temp}{\usebox{\Fconsequent}\\\usebox{\Fantecedent}}%
makes no sense. There is no "context" there to give meaning to a newline.
Consequently, the \\ is "captured" by the align environment, leading to the error.
I assume you want \temp to assume the maximum width of \Fconsequent and \Fantecedent. Replacing the \settowidth expression by the TeX construct
\ifdim\wd\Fconsequent>\wd\Fantecedent
\setlength{\temp}{\wd\Fconsequent}%
\else
\setlength{\temp}{\wd\Fantecedent}%
\fi
will do the job. I assume there are more elegant, more LaTeXy ways...
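One such LaTeXy way, also suggested in the comments below, is the \lengthtest test from the ifthen package; a sketch, assuming \usepackage{ifthen} in the preamble:

% drop-in replacement for the \ifdim construct above
\ifthenelse{\lengthtest{\wd\Fconsequent > \wd\Fantecedent}}%
  {\setlength{\temp}{\wd\Fconsequent}}%
  {\setlength{\temp}{\wd\Fantecedent}}%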
That makes sense. And yes, I want \temp to take on the width of the wider of \Fconsequent and \Fantecedent whichever that is. If somebody does still know how to that in a "LaTeXy" way, I would appreciate it... I will have a think about it myself. – Psachnodaimonia Jul 13 '12 at 18:20
One not so elegant way to solve the problem that does not rely on pure TeX, is to measure the width of \usebox{\Fconsequent} and \usebox{\Fantecedent} separately using \settowidth{}{} and then placing both lengths as vertical struts into a \makebox{} the height of which is determined by it's contents and can be measured with \settoheight{}{}... – Psachnodaimonia Jul 14 '12 at 0:28
You could also use \settowidth{\temp}{\shortstack{\usebox{\Fconsequent}\\\usebox{\Fantecedent}}}% to get a solution which is nearer to your original idea. Furthermore, the ifthen package provides a means to compare lengths in a LaTeXy way. – Stephan Lehmke Jul 14 '12 at 3:01
http://www.ques10.com/p/16364/digital-signal-processing-question-paper-jun-201-2/
Question Paper: Digital Signal Processing : Question Paper Jun 2015 - Electronics & Telecomm (Semester 5) | Visveswaraya Technological University (VTU)
## Digital Signal Processing - Jun 2015
### Electronics & Communication (Semester 5)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory.
(2) Attempt any four from the remaining questions.
(3) Assume data wherever required.
(4) Figures to the right indicate full marks.
1 (a) Compute the DFT of the sequence $$x(n)=\cos\left ( \dfrac{n\pi}{4} \right )$$ for N=4; plot |X(k)| and ∠X(k). (9 marks)
1 (b) Find the DFT of the sequence x(n)=0.5^n u(n) for 0 ≤ n ≤ 3 by evaluating x(n)=a^n for 0 ≤ n ≤ N−1. (7 marks)
1 (c) Find the relation between the DFT and the Z transform. (4 marks)
2 (a) State and prove the linearity property and the symmetry property of the DFT. (5 marks)
2 (b) Five samples of the 8-point DFT X(k) are given as
X(0)=0.25, X(1)=1.25 − j0.3018, X(6)=X(4)=0, X(5)=0.125 − j0.0518. (5 marks)
2 (c) For x(n)={1,−2,3,−4,5,−6}, without computing its DFT, find the following:
$$i)\ X(0)\quad ii)\ \sum_{k=0}^{5}X(k)\quad iii)\ X(3)\quad iv)\ \sum_{k=0}^{5}|X(k)|^{2}\quad v)\ \sum_{k=0}^{5}(-1)^{k}X(k)$$
(10 marks)
3 (a) Consider an FIR filter with impulse response h(n)={1,1,1}. If the input is
x(n)={1,2,0,−3,4,2,−1,1,−2,3,2,1,−3}, find the output y(n) using the overlap-add method. (12 marks)
3 (b) What is in-place computation? What is the total number of complex additions and multiplications required for N=256 points, if the DFT is computed directly and if the FFT is used? (3 marks)
3 (c) For the sequence x(n)={2,0,2,0}, determine X(2) using the Goertzel filter. Assume zero initial conditions. (5 marks)
4 (a) Find the circular convolution of x(n)={1,1,1,1} and h(n)={1,0,1,0} using the DIF-FFT algorithm. (12 marks)
4 (b) Derive the DIT-FFT algorithm for N=4. Draw the complete signal flow graph. (8 marks)
5 (a) Design an analog Chebyshev low-pass filter that has a −3 dB cut-off frequency of 100 rad/sec and a stopband attenuation of 25 dB or greater for all radian frequencies past 250 rad/sec. (14 marks)
5 (b) Compare Butterworth and Chebyshev filters. (3 marks)
5 (c) Let $$H(s)=\dfrac{1}{s^{2}+s+1}$$ represent the transfer function of a LPF with a passband of 1 rad/sec. Use frequency transformation (analog to analog) to find the transfer function of a band-pass filter with a passband of 10 rad/sec and a centre frequency of 100 rad/sec. (3 marks)
6 (a) Obtain the block diagrams of the direct form I and direct form II realizations for a digital IIR filter described by the system function
$$H(z)=\dfrac{8z^{3}-4z^{2}+11z-2}{\left ( z-\dfrac{1}{4} \right )\left ( z^{2}-z+\dfrac{1}{2} \right )}$$
(10 marks)
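As promised above for Q4(a): circular convolution in time equals pointwise multiplication of DFTs, so any FFT (DIF or DIT) yields the same result. A quick numpy verification (my own sketch):

```python
import numpy as np

x = np.array([1, 1, 1, 1])
h = np.array([1, 0, 1, 0])
# circular convolution via the DFT convolution property
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print(np.round(y))  # expected: [2. 2. 2. 2.]
```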
6 (b) Find the transfer function and difference equation of the realization shown in Fig. Q6(b).
(6 marks) 6 (c) Obtain the direct form realization of the linear phase FIR system given by
$$H(z)=1+\dfrac{2}{3}z^{-1}+\dfrac{15}{8}z^{-2}$$
(4 marks)
7 (a) "The desired frequency response of low pass filter is given by
$$H_{d}(e^{jw})=H_{d}(\infty )=\left\{\begin{matrix} e^{-j3w} &|\infty|\dfrac{3 \pi}{4} \\0 &\dfrac{3\pi}{4}<|\infty|<\pi \end{matrix}\right$$
Determine the frequency response of the FIR if Hamming window is used with N=7."
(10 marks)
7 (b) Compare IIR and FIR filters. (6 marks) 7 (c) Consider the pole-zero plot shown in Fig. Q7(c). i) Does it represent an FIR filter? ii) Is it a linear phase system?
(4 marks) 8 (a) Design a digital filter H(z) that when used in an A/D-H(z)-D/A structure gives an equivalent analog filter with the following specification:
Passband ripple : ≤ 3.01 dB
Passband edge : 500 Hz
Stopband attenuation : ≥ 15 dB
Stopband edge : 750 Hz
Sample rate : 2 kHz
Use the bilinear transformation to design the filter based on an analog system function. Use a Butterworth filter prototype. Also obtain the difference equation.
(14 marks)
8 (b) Transform the analog filter
$$H_{a}(s)=\dfrac{s+1}{s^{2}+5s+6}$$
into H(z) using the impulse invariant transformation. Take T=0.1 sec.
(6 marks)
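For Q8(b), the impulse invariant transformation maps each partial-fraction term $$\dfrac{r_i}{s-p_i}$$ to $$\dfrac{T\,r_i}{1-e^{p_iT}z^{-1}}$$ (some texts omit the factor T). A sketch of the bookkeeping with scipy (mine; the pole ordering returned by residue may vary):

```python
import numpy as np
from scipy.signal import residue

T = 0.1
b, a = [1, 1], [1, 5, 6]        # H_a(s) = (s+1)/(s^2 + 5s + 6)
r, p, _ = residue(b, a)         # partial fractions: sum of r_i/(s - p_i)
print(r, p)                     # expect residues -1, 2 at poles -2, -3

# impulse-invariant map: r_i/(s - p_i)  ->  T*r_i / (1 - e^{p_i T} z^{-1})
for ri, pi in zip(r, p):
    print(f"{T * ri.real:+.4f} / (1 - {np.exp(pi.real * T):.6f} z^-1)")
```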
http://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-11-quadratic-functions-and-equations-11-1-quadratic-equations-11-1-exercise-set-page-706/30
# Chapter 11 - Quadratic Functions and Equations - 11.1 Quadratic Equations - 11.1 Exercise Set: 30
$x=\{-7,13\}$
#### Work Step by Step
Using $a^2-2ab+b^2=(a-b)^2$ or the factoring of perfect square trinomials, the given equation, $x^2-6x+9=100 ,$ is equivalent to \begin{array}{l} (x-3)^2=100 .\end{array} Since $x^2=a$ implies $x=\pm\sqrt{a}$ (the Square Root Principle), the solutions to the equation, $(x-3)^2=100 ,$ are \begin{array}{l} x-3=\pm\sqrt{100} \\\\ x-3=\pm\sqrt{(10)^2} \\\\ x-3=\pm10 \\\\ x=3\pm10 \\\\ x=\{-7,13\} .\end{array}
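If you want to double-check the roots mechanically, a one-line sympy verification (an addition, not part of the printed solution):

```python
from sympy import solve, symbols

x = symbols('x')
print(solve(x**2 - 6*x + 9 - 100, x))  # [-7, 13]
```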
https://zbmath.org/?q=an:0856.57026
## Generalized counterexamples to the Seifert conjecture. (English) Zbl 0856.57026
Ann. Math. (2) 143, No. 3, 547-576 (1996); correction ibid. 144, No. 2, 239-268 (1996).
The famous Seifert conjecture asserts that every dynamical system on the 3-sphere $$S^3$$ with no singular points has a periodic trajectory. The main purpose of this note is to establish that a foliation of any codimension of any manifold can be modified in a real analytic or piecewise-linear fashion so that all minimal sets have codimension 1. Dynamical systems on $$S^3$$ are then investigated.
Theorem: The sphere $$S^3$$ has an analytic dynamical system such that all limit sets are 2-dimensional. In particular, it has no circular trajectories.
In the PL category, the authors prove the following theorem:
A 1-foliation of a manifold of dimension at least 3 can be modified in a PL fashion so there are no closed leaves but all minimal sets are 1-dimensional. Moreover, if the manifold is closed, then there is an aperiodic PL modification with only one minimal set, and the minimal set is 1-dimensional.
The method: all counterexamples to the Seifert conjecture described here are based on constructions of aperiodic plugs. The exposition is clear and all the fundamental concepts are defined.
### MSC:
57R30 Foliations in differential topology; geometric theory
57N12 Topology of the Euclidean $$3$$-space and the $$3$$-sphere (MSC2010)
53C12 Foliations (differential geometric aspects)
https://intelligencemission.com/free-energy-graph-labeled-electricity-free-amusement-park.html
But to make Free Energy about knowing the universe, its energy , its mass and so on is hubris and any scientist acknowledges the real possibility that our science could be proven wrong at any given point. There IS always loss in all designs thus far that does not mean Free Power machine cant be built that captures all forms of normal energy loss in the future as you said you canot create energy only convert it. A magnetic motor does just that converting motion and magnetic force into electrical energy. Ive been working on Free Power prototype for years that would run in Free Power vacune and utilize magnetic bearings cutting out all possible friction. Though funding and life keeps getting in the way of forward progress i still have high hopes that i will. Create Free Power working prototype that doesnt rip itself apart. You are really an Free Power*. I went through Free Electricity. Free Power years of pre-Vet. I went to one of the top HS. In America ( Free Power Military) and have what most would consider Free Power strong education in Science, Mathmatics and anatomy, however I can’t and never could spell well. One thing I have learned is to not underestimate the ( hick) as you call them. You know the type. They speak slow with Free Power drawl. Wear jeans with tears in them. Maybe Free Power piece of hay sticking out of their mouths. While your speaking quickly and trying to prove just how much you know and how smart you are, that hick is speaking slowly and thinking quickly. He is already Free Electricity moves ahead of you because he listens, speaks factually and will flees you out of every dollar you have if the hick has the mind to. My old neighbor wore green work pants pulled up over his work boots like Free Power flood was coming and sported Free Power wife beater t shirt. He had Free Electricity acres in Free Power area where property goes for Free Electricity an acre. Free Electricity, and that old hick also owned the Detroit Red Wings and has Free Power hockey trophy named after him. Ye’re all retards.
The song’s original score designates the duet partners as “wolf” and “mouse, ” and genders are unspecified. This is why many decades of covers have had women and men switching roles, as we saw with Lady Gaga and Joseph Gordon-Levitt’s version where Gaga plays the wolf’s role. Free Energy, even Miss Piggy of the Muppets played the wolf as she pursued ballet dancer Rudolf Nureyev.
I have had many as time went by get weak. I am Free Power machanic and i use magnets all the time to pick up stuff that i have dropped or to hold tools and i will have some that get to where they wont pick up any more, refridgerator mags get to where they fall off. Dc motors after time get so they don’t run as fast as they used to. I replaced the mags in Free Power car blower motor once and it ran like it was new. now i do not know about the neo’s but i know that mags do lose there power. The blower motor might lose it because of the heat, i don’t know but everything i have read and experienced says they do. So whats up with that? Hey Free Electricity, ok, i agree with what you are saying. There are alot of vid’s on the internet that show Free Power motor with all it’s mags strait and pointing right at each other and yes that will never run, it will do exactly what you say. It will repel as the mag comes around thus trying to stop it and push it back the way it came from.
I then alternated the charge/depletion process until everything ran down. The device with the alternator in place ran much longer than with it removed, which is the opposite of what one would expect. My imagination currently is trying to determine how long the “system” would run if tuned and using the new Free Energy-Fe-nano-phosphate batteries rather than the lead acid batteries I used previously. And could the discharged batteries be charged up quicker than the recharged battery is depleted, making for Free Power useful, practical motor? Free Energy are claiming to have invented perpetual motion MACHINES. That is my gripe. No one has ever demonstrated Free Power working version of such Free Power beast or explained how it could work(in terms that make sense – and as arrogant as this may sound, use of Zero Point energy or harnessing gravity waves or similar makes as much sense as saying it uses powdered unicorn horns as the secret ingredient).
How do you gather and retrain the RA? Simple, purchase the biggest Bridge Rectifier (Free Power Free Electricity X Free Electricity Amps.) Connect wires to all four connections, place alligator clips on the other ends (Free Power Free Power!) Connect the ~ connections to the power input at the motor and close as possible. Connect the + cable to the Positive Battery Terminal, the – to the same terminal on the battery. Connect the battery Alligator Clip AFTER the Motor is running full on. That’s it! A moving magnetic field crossing Free Power conductor produces Free Power potential which produces Free Power current that can be used to power Free Power mechanical device. Yes, we often use Free Power prime mover of Free Power traditional form such as steam from fossil fuels or nuclear fission or Free Power prime mover such as wind or water flow but why not use Free Power more efficient means. Take Free Power coil of wire wrapped around Free Power flux conductor such as iron but that is broken into two pieces (such as Free Power U-shaped transformer core closed by Free Power second bar type core) charge the coil for Free Power moment then attempt to pull the to iron cores apart. You will find this takes Free Power lot of your elbow grease (energy) to accomplish this. This is due to the breaking of the flux circuit within the iron core. An example of energy store as magnetic flux. Isn’t this what Free Power permanent magnet is? Transfering one form of energy to another.
“Ere many generations pass, our machinery will be driven by a power obtainable at any point in the universe. This idea is not novel…We find it in the delightful myth of Antheus, who derives power from the earth; we find it among subtle speculations of one of your splendid mathematicians…. Throughout space there is energy. Is this energy static, or kinetic? If static our hopes are in vain; if kinetic – and this we know it is, for certain – then it is a mere question of time when men will succeed in attaching their machinery to the very wheelwork of nature. ” – Nikola Tesla (source)
Never before has pedophilia and ritualistic child abuse been on the radar of so many people. Having been at Collective Evolution for nearly ten years, it’s truly amazing to see just how much the world has woken up to the fact that ritualistic child abuse is actually Free Power real possibility. The people who have been implicated in this type of activity over the years are powerful, from high ranking military people, all the way down to the several politicians around the world, and more.
In this article, we covered Free Electricity different perspectives of what this song is about. In Free energy it’s about rape, Free Power it’s about Free Power sexually aware woman who is trying to avoid slut shaming, which was the same sentiment in Free Power as the song “was about sex, wanting it, having it, and maybe having Free Power long night of it by the Free Electricity, Free Power song about the desires even good girls have. ”
Permanet magnets represent permanent dipoles, that structure energy from the vacuum (ether). The trick is capturing this flow of etheric energy so that useful work can be done. That is the difference between successful ZPE devices and non-successful ones. Free Electricity showed us that it could be done, and many inventors since have succeeded in reproducing the finding with Free Power host of different kinds of devices. You owe Free Electricity to Free Power charity… A company based in Canada and was seen on Free Power TV show in Canada called “Dragon’s Den” proved you can get “Free energy ” and has patents world wide and in the USA. Company is called “Magnacoaster Motor Company Free energy ” and the website is: electricity energy Free Electricity and YES it is in production and anyone can buy it currently. Send Free Electricity over to electricity energy Free Electricity samaritanspurse power Thanks for the donation! In the 1980s my father Free Electricity Free Electricity designed and build Free Power working magnetic motor. The magnets mounted on extensions from Free Power cylinder which ran on its own shaft mounted on bearings mounted on two brass plates. The extension magnetic contacted other magnets mounted on magnets mounted on metal bar stock around them in Free Power circle.
The solution to infinite energy is explained in the bible. But i will not reveal it since it could change our civilization forever. Transportation and space travel all together. My company will reveal it to thw public when its ready. My only hint to you is the basic element that was missing. Its what we experience in Free Power everyday matter. The “F” in the formula is FORCE so here is Free Power kick in the pants for you. “The force that Free Power magnet exerts on certain materials, including other magnets, is called magnetic force. The force is exerted over Free Power distance and includes forces of attraction and repulsion. Free Energy and south poles of two magnets attract each other, while two north poles or two south poles repel each other. ” What say to that? No, you don’t get more out of it than you put in. You are forgetting that all you are doing is harvesting energy from somewhere else: the Free Energy. You cannot create energy. Impossible. All you can do is convert energy. Solar panels convert energy from the Free Energy into electricity. Every second of every day, the Free Energy slowly is running out of fuel.
https://math.stackexchange.com/questions/2069532/what-is-integral-quadrature
Differential Quadrature and Its Application in Engineering
Trying to find out what "Integral Quadrature" is. Here is a snapshot of the page which is about integral quadrature:
My question is: what are w (weighting coefficient) and f (functional value), and could you please show them on Figure 1.1?
It's so close to the concept of "integration" which we learned at school, but I don't know why it makes me crazy. This is what we learned about integrating at school:
Introduction to Integration
• Probably : Numerical integration or numerical quadrature. – Mauro ALLEGRANZA Dec 23 '16 at 12:25
• Thank you. good clue. I need somebody explain it in a simple manner to me and especially by that figure. – Roh Dec 23 '16 at 12:31
• What figure ? The integral is the area... The numerical techniques are used to approximate the curve (by way of polynomials or "simple" curves in general) in order to compute the integral. – Mauro ALLEGRANZA Dec 23 '16 at 12:35
• The figure I posted in the question. it's a part of the book. e.g. could you show me w1, w2,w3.... and f1, f2, f3,... – Roh Dec 23 '16 at 12:37
I changed one of the pictures in your link a little so you can see those $x_1, x_2, \dots, x_n$ and their corresponding $y$-values $f_1=f(x_1), f_2=f(x_2),\dots, f_n=f(x_n)$:
Suppose the original interval is $[a,b]$, that is, you want to find the area under the curve $f(x)$ in the interval $[a,b].$ Now if $w_1=(b-a)/(n-1), w_2=(b-a)/(n-1), \dots, w_{n-1}=(b-a)/(n-1), w_n=0$, we get $$\frac{b-a}{n-1}(f_1+\cdots+f_{n-1}),$$ which is the sum of the areas of the rectangles using the height of the left boundaries. Notice that $(b-a)/(n-1)$ is the width of each rectangle.
Otherwise, if $w_1=0, w_2=(b-a)/(n-1), \dots, w_{n-1}=(b-a)/(n-1), w_n=(b-a)/(n-1)$, we get $$\frac{b-a}{n-1}(f_2+\cdots+f_{n}),$$ which is the sum of the area of the rectangles using the height of the right boundaries.
Integral quadrature is a generalization of this idea, with different weights on different $y$-values.
• @Roh: $n$ is the number of points when you divide the interval $[a,b]$ into $n-1$ subintervals, so that you can sum up the area of the rectangles to get an approximation of the area under the curve. Sorry $1/(n-1)$ should be $(b-a)/(n-1)$. It is the length of the subinterval. – KittyL Dec 23 '16 at 13:09
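To make the weighted-sum view of the answer concrete, here is a small sketch (mine; the integrand and interval are arbitrary choices) implementing the two weight vectors described above:

```python
import numpy as np

def quadrature(f, a, b, n, rule="left"):
    """Approximate the integral of f on [a,b] as sum_i w_i * f(x_i)."""
    x = np.linspace(a, b, n)            # n points, n-1 subintervals
    h = (b - a) / (n - 1)               # width of each subinterval
    w = np.full(n, h)
    if rule == "left":
        w[-1] = 0.0                     # w_n = 0: left-endpoint rectangles
    else:
        w[0] = 0.0                      # w_1 = 0: right-endpoint rectangles
    return np.sum(w * f(x))

f = np.sin
print(quadrature(f, 0.0, np.pi, 1001, "left"))   # both approach 2
print(quadrature(f, 0.0, np.pi, 1001, "right"))
```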
https://analytixon.com/2018/11/27/whats-new-on-arxiv-825/
Automatic voice-controlled systems have changed the way humans interact with a computer. Voice or speech recognition systems allow a user to make a hands-free request to the computer, which in turn processes the request and serves the user with appropriate responses. After years of research and developments in machine learning and artificial intelligence, today voice-controlled technologies have become more efficient and are widely applied in many domains to enable and improve human-to-human and human-to-computer interactions. The state-of-the-art e-commerce applications with the help of web technologies offer interactive and user-friendly interfaces. However, there are some instances where people, especially with visual disabilities, are not able to fully experience the serviceability of such applications. A voice-controlled system embedded in a web application can enhance user experience and can provide voice as a means to control the functionality of e-commerce websites. In this paper, we propose a taxonomy of speech recognition systems (SRS) and present a voice-controlled commodity purchase e-commerce application using IBM Watson speech-to-text to demonstrate its usability. The prototype can be extended to other application scenarios such as government service kiosks and enable analytics of the converted text data for scenarios such as medical diagnosis at the clinics.
In recent years, machine learning researchers have focused on methods to construct flexible and interpretable prediction models. However, the interpretability evaluation, the relationship between the generalization performance and the interpretability of the model and the method for improving the interpretability are very important factors to consider. In this paper, the quantitative index of the interpretability is proposed and its rationality is given, and the relationship between the interpretability and the generalization performance is analyzed. For traditional supervised kernel machine learning problem, a universal learning framework is put forward to solve the equilibrium problem between the two performances. The uniqueness of solution of the problem is proved and condition of unique solution is obtained. Probability upper bound of the sum of the two performances is analyzed.
Emotion recognition and classification is a very active area of research. In this paper, we present a first approach to emotion classification using persistent entropy and support vector machines. A topology-based model is applied to obtain a single real number from each raw signal. These data are used as input of a support vector machine to classify signals into 8 different emotions (neutral, calm, happy, sad, angry, fearful, disgust and surprised).
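Persistent entropy has a short closed form: the Shannon entropy of the normalized bar lengths of a persistence diagram. A minimal sketch (mine; the toy diagram values are made up, and in the paper's pipeline such scalars would then be fed to an SVM):

```python
import numpy as np

def persistent_entropy(diagram):
    """Shannon entropy of the normalized lifetimes of a persistence diagram.

    diagram: array of (birth, death) pairs with death > birth.
    """
    lifetimes = diagram[:, 1] - diagram[:, 0]
    p = lifetimes / lifetimes.sum()
    return -np.sum(p * np.log(p))

# toy diagram: three bars of lengths 1, 2, 3
dgm = np.array([[0.0, 1.0], [0.5, 2.5], [1.0, 4.0]])
print(persistent_entropy(dgm))
```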
It is widely recognized that deeper networks or networks with more feature maps have better performance. Existing studies mainly focus on extending the network depth and increasing the feature maps of networks. At the same time, horizontal expansion of networks (e.g. the Inception Model) as an alternative way to improve network performance has not been fully investigated. Accordingly, we propose NeuroTreeNet (NTN), a new horizontal extension network combining random forests and the Inception Model. Based on the tree structure, in which each branch represents a network and the root node features are shared with child nodes, network parameters are effectively reduced. By combining all features of leaf nodes, even fewer feature maps achieved better performance. In addition, the relationship between the tree structure and the performance of NTN was investigated in depth. Compared to other networks (e.g. VDSR\_5) with parameters of equal magnitude, our model showed preferable performance in the super resolution reconstruction task.
Class-imbalance refers to classification problems in which many more instances are available for certain classes than for others. Such imbalanced datasets require special attention because traditional classifiers generally favor the majority class which has a large number of instances. Ensemble of classifiers have been reported to yield promising results. However, the majority of ensemble methods applied to imbalanced learning are static ones. Moreover, they only deal with binary imbalanced problems. Hence, this paper presents an empirical analysis of dynamic selection techniques and data preprocessing methods for dealing with multi-class imbalanced problems. We considered five variations of preprocessing methods and fourteen dynamic selection schemes. Our experiments conducted on 26 multi-class imbalanced problems show that the dynamic ensemble improves the AUC and the G-mean as compared to the static ensemble. Moreover, data preprocessing plays an important role in such cases.
The basic goal of computer engineering is the analysis of data. Such data are often large data sets distributed according to various distribution models. In this manuscript we focus on the analysis of non-Gaussian distributed data. In the case of univariate data analysis we discuss stochastic processes with auto-correlated increments and univariate distributions derived from specific stochastic processes, i.e. Levy and Tsallis distributions. Deep investigation of multivariate non-Gaussian distributions requires the copula approach. A copula is an component of multivariate distribution that models the mutual interdependence between marginals. There are many copula families characterised by various measures of the dependence between marginals. Importantly, one of those are `tail’ dependencies that model the simultaneous appearance of extreme values in many marginals. Those extreme events may reflect a crisis given financial data, outliers in machine learning, or a traffic congestion. Next we discuss higher order multivariate cumulants that are non-zero if multivariate distribution is non-Gaussian. However, the relation between cumulants and copulas is not straight forward and rather complicated. We discuss the application of those cumulants to extract information about non-Gaussian multivariate distributions, such that information about non-Gaussian copulas. The use of higher order multivariate cumulants in computer science is inspired by financial data analysis, especially by the safe investment portfolio evaluation. There are many other applications of higher order multivariate cumulants in data engineering, especially in: signal processing, non-linear system identification, blind sources separation, and direction finding algorithms of multi-source signals.
Distributed machine learning (ML) systems today use an unsophisticated threat model: data sources must trust a central ML process. We propose a brokered learning abstraction that allows data sources to contribute towards a globally-shared model with provable privacy guarantees in an untrusted setting. We realize this abstraction by building on federated learning, the state of the art in multi-party ML, to construct TorMentor: an anonymous hidden service that supports private multi-party ML. We define a new threat model by characterizing, developing and evaluating new attacks in the brokered learning setting, along with new defenses for these attacks. We show that TorMentor effectively protects data providers against known ML attacks while providing them with a tunable trade-off between model accuracy and privacy. We evaluate TorMentor with local and geo-distributed deployments on Azure/Tor. In an experiment with 200 clients and 14 MB of data per client, our prototype trained a logistic regression model using stochastic gradient descent in 65s.
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for ‘explicable’, ‘legible’, ‘predictable’ and ‘transparent’ planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on ‘security’ and ‘privacy’ of plans which are also trying to answer the same question, but from the opposite point of view — i.e. when the agent is trying to hide instead of revealing its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.
Deep neural networks (DNNs) have become core computation components within low latency Function as a Service (FaaS) prediction pipelines: including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines. Cloud computing, as the de-facto backbone of modern computing infrastructure for both enterprise and consumer applications, has to be able to handle user-defined pipelines of diverse DNN inference workloads while maintaining isolation and latency guarantees, and minimizing resource waste. The current solution for guaranteeing isolation within FaaS is suboptimal — suffering from ‘cold start’ latency. A major cause of such inefficiency is the need to move large amount of model data within and across servers. We propose TrIMS as a novel solution to address these issues. Our proposed solution consists of a persistent model store across the GPU, CPU, local storage, and cloud storage hierarchy, an efficient resource management layer that provides isolation, and a succinct set of application APIs and container technologies for easy and transparent integration with FaaS, Deep Learning (DL) frameworks, and user code. We demonstrate our solution by interfacing TrIMS with the Apache MXNet framework and demonstrate up to 24x speedup in latency for image classification models and up to 210x speedup for large models. We achieve up to 8x system throughput improvement.
The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform frameworks, models, and system stacks but lacks standard tools to facilitate the evaluation and measurement of model. Due to the absence of such tools, the current practice for evaluating and comparing the benefits of proposed AI innovations (be it hardware or software) on end-to-end AI pipelines is both arduous and error prone — stifling the adoption of the innovations. We propose MLModelScope — a hardware/software agnostic platform to facilitate the evaluation, measurement, and introspection of ML models within AI pipelines. MLModelScope aids application developers in discovering and experimenting with models, data scientists developers in replicating and evaluating for publishing models, and system architects in understanding the performance of AI workloads. We describe the design and implementation of MLModelScope and show how it is able to give users a holistic view into the execution of models within AI pipelines. Using AlexNet as a case study, we demonstrate how MLModelScope aids in identifying deviation in accuracy, helps in pin pointing the source of system bottlenecks, and automates the evaluation and performance aggregation of models across frameworks and systems.
When labeled data is scarce for a specific target task, transfer learning often offers an effective solution by utilizing data from a related source task. However, when transferring knowledge from a less related source, it may inversely hurt the target performance, a phenomenon known as negative transfer. Despite its pervasiveness, negative transfer is usually described in an informal manner, lacking rigorous definition, careful analysis, or systematic treatment. This paper proposes a formal definition of negative transfer and analyzes three important aspects thereof. Stemming from this analysis, a novel technique is proposed to circumvent negative transfer by filtering out unrelated source data. Based on adversarial networks, the technique is highly generic and can be applied to a wide range of transfer learning algorithms. The proposed approach is evaluated on six state-of-the-art deep transfer methods via experiments on four benchmark datasets with varying levels of difficulty. Empirically, the proposed method consistently improves the performance of all baseline methods and largely avoids negative transfer, even when the source data is degenerate.
The article describes the new approach for quality improvement of automated dialogue systems for customer support service. Analysis produced in the paper demonstrates the dependency of the quality of the retrieval-based dialogue system quality on the choice of negative responses. The proposed approach implies choosing the negative samples according to the distribution of responses in the train set. In this implementation the negative samples are randomly chosen from the original response distribution and from the ‘artificial’ distribution of negative responses, such as uniform distribution or the distribution obtained by transformation of the original one. The results obtained for the implemented systems and reported in this paper confirm the significant improvement of automated dialogue systems quality in case of using the negative responses from transformed distribution.
Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components – a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture.
Many prediction tasks, especially in computer vision, are often inherently ambiguous. For example, the output of semantic segmentation may depend on the scale one is looking at, and image saliency or video summarization is often user or context dependent. Arguably, in such scenarios, exploiting instance specific evidence, such as scale or user context, can help resolve the underlying ambiguity leading to the improved predictions. While existing literature has considered incorporating such evidence in classical models such as probabilistic graphical models (PGMs), there is limited (or no) prior work looking at this problem in the context of deep neural network (DNN) models. In this paper, we present a generic multi task learning (MTL) based framework which handles the evidence as the output of one or more secondary tasks, while modeling the original problem as the primary task of interest. Our training phase is identical to the one used by standard MTL architectures. During prediction, we back-propagate the loss on secondary task(s) such that network weights are re-adjusted to match the evidence. An early stopping or two norm based regularizer ensures weights do not deviate significantly from the ones learned originally. Implementation in two specific scenarios (a) predicting semantic segmentation given the image level tags (b) predicting instance level segmentation given the text description of the image, clearly demonstrates the effectiveness of our proposed approach.
We present a collection of algorithms to filter a stream of documents in such a way that the filtered documents will cover as well as possible the interest of a person, keeping in mind that, at any given time, the offered documents should not only be relevant, but should also be diversified, in the sense not only of avoiding nearly identical documents, but also of covering as well as possible all the interests of the person. We use a modification of the WEBSOM algorithm, with limited architectural adaptation, to create a user model (which we call the ‘user context’ or simply the ‘context’) based on a network of units laid out in the word space and trained using a collection of documents representative of the context. We introduce the concepts of novelty and coverage. Novelty is related to, but not identical to, the homonymous information retrieval concept: a document is novel it it belongs to a semantic area of interest to a person for which no documents have been seen in the recent past. A group of documents has coverage to the extent to which it is a good representation of all the interests of a person. In order to increase coverage, we introduce an ‘interest’ (or ‘urgency’) factor for each unit of the user model, modulated by the scores of the incoming documents: the interest of a unit is decreased drastically when a document arrives that belongs to its semantic area and slowly recovers its initial value if no documents from that semantic area are displayed. Our tests show that these algorithms can effectively increase the coverage of the documents that are shown to the user without overly affecting precision.
This paper presents a method called One-class Classification using Length statistics of Emerging Patterns Plus (OCLEP+).
How many bits of information are revealed by a learning algorithm for a concept class of VC-dimension $d$? Previous works have shown that even for $d=1$ the amount of information may be unbounded (tend to $\infty$ with the universe size). Can it be that all concepts in the class require leaking a large amount of information? We show that typically concepts do not require leakage. There exists a proper learning algorithm that reveals $O(d)$ bits of information for most concepts in the class. This result is a special case of a more general phenomenon we explore. If there is a low information learner when the algorithm {\em knows} the underlying distribution on inputs, then there is a learner that reveals little information on an average concept {\em without knowing} the distribution on inputs.
Group fairness is an important concern for machine learning researchers, developers, and regulators. However, the strictness to which models must be constrained to be considered fair is still under debate. The focus of this work is on constraining the expected outcome of subpopulations in kernel regression and, in particular, decision tree regression, with application to random forests, boosted trees and other ensemble models. While individual constraints were previously addressed, this work addresses concerns about incorporating multiple constraints simultaneously. The proposed solution does not affect the order of computational or memory complexity of the decision trees and is easily integrated into models post training.
Recently, graph Convolutional Neural Networks (graph CNNs) have been widely used for graph data representation and semi-supervised learning tasks. However, existing graph CNNs generally use a fixed graph which may be not optimal for semi-supervised learning tasks. In this paper, we propose a novel Graph Learning-Convolutional Network (GLCN) for graph data representation and semi-supervised learning. The aim of GLCN is to learn an optimal graph structure that best serves graph CNNs for semi-supervised learning by integrating both graph learning and graph convolution together in a unified network architecture. The main advantage is that in GLCN, both given labels and the estimated labels are incorporated and thus can provide useful ‘weakly’ supervised information to refine (or learn) the graph construction and also to facilitate the graph convolution operation in GLCN for unknown label estimation. Experimental results on seven benchmarks demonstrate that GLCN significantly outperforms state-of-the-art traditional fixed structure based graph CNNs.
https://mathstats.uncg.edu/sites/pauli/112/HTML/secdisclog.html
## Section15.4Discrete Logarithm
The discrete logarithm to the base $$b$$ of $$a$$ is the answer to the question:
Given $$a$$ and $$b\text{,}$$ what is the non-negative integer $$m$$ such that $$a=\gexp{b}{m}{\star}$$ ?
We will see that such an answer does not always exist and describe an algorithm that gives an answer provided it exists. In the video in Figure 15.4.1 we give an overview of this section. Details and more examples can be found below.
We investigate in a concrete example which elements of a group are powers of the group elements.
In $$(\Z_7^\otimes,\otimes)$$ where $$a\otimes b=(a\cdot b)\fmod 7$$ we investigate the powers of all elements. Recall that $$\Z_7^\otimes=\{1,2,3,4,5,6\}$$ and $$\W=\{0,1,2,\dots\}\text{.}$$
• powers of $$1\text{:}$$.
$$\gexp{1}{0}{\otimes}=1\text{,}$$ $$\gexp{1}{1}{\otimes}=1\text{,}$$ $$\gexp{1}{2}{\otimes}=1\text{;}$$ as we obtain the $$n$$-th power of $$1$$ multiplying $$n$$ copies of $$1$$ we have for all $$n\in\W$$ that $$\gexp{1}{n}{\otimes}=1\text{.}$$
• powers of $$2\text{:}$$.
$$\gexp{2}{0}{\otimes}=1\text{,}$$ $$\gexp{2}{1}{\otimes}=2\text{,}$$ $$\gexp{2}{2}{\otimes}=4\text{,}$$ $$\gexp{2}{3}{\otimes}=1\text{,}$$ $$\gexp{2}{4}{\otimes}=2\text{;}$$ when we continue multiplying by 2 we cycle through 1, 2, and 4, see Figure 15.4.3 (b).
• powers of $$3\text{:}$$.
$$\gexp{3}{0}{\otimes}=1\text{,}$$ $$\gexp{3}{1}{\otimes}=3\text{,}$$ $$\gexp{3}{2}{\otimes}=2\text{,}$$ $$\gexp{3}{3}{\otimes}=6\text{,}$$ $$\gexp{3}{4}{\otimes}=4\text{,}$$ $$\gexp{3}{5}{\otimes}=5\text{,}$$ $$\gexp{3}{6}{\otimes}=1\text{;}$$ so all elements of $$\Z_7^\otimes$$ are powers of 3, see Figure 15.4.3 (c).
• powers of $$4\text{:}$$.
$$\gexp{4}{0}{\otimes}=1\text{,}$$ $$\gexp{4}{1}{\otimes}=4\text{,}$$ $$\gexp{4}{2}{\otimes}=2\text{,}$$ $$\gexp{4}{3}{\otimes}=1\text{,}$$ $$\gexp{4}{4}{\otimes}=4\text{;}$$ when we continue multiplying by 4 we cycle through 1, 4, and 2.
• powers of $$5\text{:}$$.
$$\gexp{5}{0}{\otimes}=1\text{,}$$ $$\gexp{5}{1}{\otimes}=5\text{,}$$ $$\gexp{5}{2}{\otimes}=4\text{,}$$ $$\gexp{5}{3}{\otimes}=6\text{,}$$ $$\gexp{5}{4}{\otimes}=2\text{,}$$ $$\gexp{5}{5}{\otimes}=3\text{,}$$ $$\gexp{5}{6}{\otimes}=1\text{;}$$ so all elements of $$\Z_7^\otimes$$ are powers of 5.
• powers of $$6\text{:}$$.
$$\gexp{6}{0}{\otimes}=1$$ and $$\gexp{6}{1}{\otimes}=6\text{;}$$ all other powers of 6 are 1 or 6, see Figure 15.4.3 (a).
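The whole table of powers in this example can be generated mechanically; a short sketch (mine, not from the text) for $$(\Z_7^\otimes,\otimes)$$:

```python
# powers b^0, b^1, ..., b^6 of each b in Z_7* under a (x) b = (a*b) mod 7
p = 7
for b in range(1, p):
    powers, x = [1], 1
    for _ in range(p - 1):
        x = (x * b) % p
        powers.append(x)
    print(b, powers)
```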
Let $$(G,\star)$$ be a group. For two elements $$a$$ and $$b$$ of $$G$$ the discrete logarithm of $$a$$ to base $$b$$ is the answer to the following question. For which $$n\in\W$$ do we have:
\begin{equation*} \gexp{b}{n}{\star}=a \end{equation*}
Before we introduce a notation for the answer to this question, we look back at Example 15.4.2 and see that the answer does not always exist.
In $$(\Z_7^\otimes,\otimes)$$ where $$a\otimes b=(a\cdot b)\fmod 7$$ there is no $$n\in\W$$ such that $$\gexp{2}{n}{\otimes}=3\text{,}$$ because the only powers of 2 in $$(\Z_7^\otimes,\otimes)$$ are $$1\text{,}$$ $$2\text{,}$$ and $$4$$ (compare Example 15.4.2 powers of $$2$$).
Thus in our definition we have to allow for the possibility, that there is no answer.
### Definition15.4.5.
Let $$(G,\star)$$ be a group and let $$b\in G$$ and $$a\in G\text{.}$$ The discrete logarithm of $$a$$ to base $$b$$ with respect to $$\star$$ is the smallest non-negative integer $$n$$ such that $$\gexp{b}{n}{\star}=a\text{.}$$ If such an $$n$$ does not exist we say that the discrete logarithm does not exist.
We denote the discrete logarithm of $$a$$ to base $$b$$ with respect to $$\star$$ by $$\glog{b}{a}{\star}\text{.}$$
Specializing Definition 15.4.5 to the group $$(\Z_p^\otimes,\otimes)$$ we have for all $$b\in\Z_p^\otimes$$ and $$a\in\Z_p^\otimes$$ that $$\glog{b}{a}{\otimes}$$ is the smallest non-negative integer such that $$\gexp{b}{n}{\otimes}=a\text{.}$$
We illustrate this in an example.
In the group $$(\Z_5^\otimes,\otimes)$$ where $$a\otimes b:= (a\cdot b) \fmod 5$$ we have:
1. $$\glog{2}{1}{\otimes}=0$$ because $$\gexp{2}{0}{\otimes}=1\text{.}$$
2. $$\glog{2}{2}{\otimes}=1$$ because $$\gexp{2}{1}{\otimes}=2\text{.}$$
3. $$\glog{2}{3}{\otimes}=3$$ because $$\gexp{2}{3}{\otimes}=(2^3)\fmod 5=8\fmod 5=3\text{.}$$
To find discrete logarithms we often have to try out several possible answers. Sometimes we cannot find an answer and we conclude that the discrete logarithm does not exist.
In the group $$(\Z_5^\otimes,\otimes)$$ where $$a\otimes b:= (a\cdot b) \fmod 5$$ find the following discrete logarithms provided they exist.
1. $$\displaystyle \glog{3}{2}{\otimes}$$
2. $$\displaystyle \glog{4}{2}{\otimes}$$
Solution.
1. We try out powers of $$3$$ until we obtain 2.
\begin{align*} \gexp{3}{0}{\otimes}\amp =1\\ \gexp{3}{1}{\otimes}\amp =3\fmod 5 =3\\ \gexp{3}{2}{\otimes}\amp =3\otimes 3=(3\cdot 3)\fmod 5=9\fmod 5 = 4\\ \gexp{3}{3}{\otimes}\amp =\gexp{3}{2}{\otimes} \otimes 3=4\otimes 3=(4\cdot 3)\fmod 5= 12\fmod 5=2 \end{align*}
Thus $$\glog{3}{2}{\otimes}=3\text{.}$$
2. We try out powers of $$4$$ until we obtain 2.
\begin{align*} \gexp{4}{0}{\otimes}\amp =1\\ \gexp{4}{1}{\otimes}\amp =4\fmod 5 =4\\ \gexp{4}{2}{\otimes}\amp =4\otimes 4=(4\cdot 4)\fmod 5=16\fmod 5 = 1\\ \gexp{4}{3}{\otimes}\amp =\gexp{4}{2}{\otimes} \otimes 4=(1\cdot 4)\fmod 5=4\fmod 5 = 4 \end{align*}
Continuing this we get $$\gexp{4}{4}{\otimes}=1\text{,}$$ $$\gexp{4}{5}{\otimes}=4\text{,}$$ $$\gexp{4}{6}{\otimes}=1\text{.}$$ As $$4\otimes 4=1$$ and $$1\otimes 4=4$$ further multiplication by $$4$$ only yields 1 or 4. So the only numbers that can be written as powers of $$4$$ in $$(\Z_5^\otimes)$$ are $$1$$ and $$4\text{.}$$ This means that there is no non-negative integer $$n$$ such that $$\gexp{4}{n}{\otimes}=2\text{.}$$ We have found that $$\glog{4}{2}{\otimes}$$ does not exist.
The following follows from the definition of exponentiation and discrete logarithm.
We give powers of elements and the corresponding discrete logarithm to base 5 for the elements of the group $$(\Z_7^\otimes,\otimes)$$ where $$a\otimes b =(a\cdot b)\fmod 7\text{.}$$ We see that all elements of $$\Z_7^\otimes$$ can be written as powers of $$5\text{.}$$
1. We have $$\gexp{5}{0}{\otimes}=1\text{.}$$ Thus $$\glog{5}{1}{\otimes}=0\text{.}$$
2. We have $$\gexp{5}{1}{\otimes}=5\text{.}$$ Thus $$\glog{5}{5}{\otimes}=1\text{.}$$
3. We have $$\gexp{5}{2}{\otimes}=5\otimes 5 = (5\cdot 5) \fmod 7=4\text{.}$$ Thus $$\glog{5}{4}{\otimes}=2\text{.}$$
4. We have $$\gexp{5}{3}{\otimes}=4\otimes 5 = (4\cdot 5) \fmod 7=6\text{.}$$ Thus $$\glog{5}{6}{\otimes}=3\text{.}$$
5. We have $$\gexp{5}{4}{\otimes}=6\otimes 5 = (6\cdot 5) \fmod 7=2\text{.}$$ Thus $$\glog{5}{2}{\otimes}=4\text{.}$$
6. We have $$\gexp{5}{5}{\otimes}=2\otimes 5 = (2\cdot 5) \fmod 7=3\text{.}$$ Thus $$\glog{5}{3}{\otimes}=5\text{.}$$
Exponentiation and discrete logarithm to the same base are inverse functions. This is illustrated in the next example.
In the group $$(\Z_5^\otimes,\otimes)$$ where $$a\otimes b =(a\cdot b)\fmod 5$$ we consider exponentiation and logarithm with base $$3\text{.}$$ Let the function $$e:\Z_4\to \Z_5^\otimes$$ be given by $$e(x)=\gexp{3}{x}{\otimes}\text{.}$$ The function $$e$$ is the exponentiation function with base $$3\text{.}$$ We have
\begin{equation*} e(0)=\gexp{3}{0}{\otimes}=1,\, e(1)=\gexp{3}{1}{\otimes}=3,\, e(2)=\gexp{3}{2}{\otimes}=4,\, e(3)=\gexp{3}{3}{\otimes}=2 \end{equation*}
The discrete logarithm $$\glog{3}{y}{\otimes}$$ of $$y\in\Z_5^\otimes$$ to base $$3$$ is the smallest non-negative integer $$n$$ such that $$\gexp{3}{n}{\otimes}=y\text{.}$$ Let the function $$l:\Z_5^\otimes\to\Z_4$$ be given by
\begin{equation*} l(1)=\glog{3}{1}{\otimes}=0,\, l(2)=\glog{3}{2}{\otimes}=3,\, l(3)=\glog{3}{3}{\otimes}=1,\, l(4)=\glog{3}{4}{\otimes}=2 \end{equation*}
As $$l(e(x))=x$$ for all $$x\in\Z_4\text{,}$$ the function $$l$$ is the inverse function of $$e\text{.}$$
Depending on the group, the effort of finding discrete logarithms varies considerably. Different approaches can be used to find discrete logarithms. For small groups, we can produce a table where we can quickly look up the values.
In the group $$(\Z_{7}^\otimes,\otimes)$$ where $$a\otimes b=(a\cdot b)\fmod 7$$ find the discrete logarithm to base 3 of 6.
Solution.
We need to find $$n\in\W$$ such that $$\gexp{3}{n}{\otimes}=6\text{.}$$ From Figure 15.4.3(c) we see that $$\gexp{3}{3}{\otimes}=3\otimes 3\otimes 3=6\text{.}$$ Thus $$\glog{3}{6}{\otimes}=3\text{.}$$
In general we try out exponents until we find the right one. We never have to try out more exponents than our group has elements, so we know when to stop in case the discrete logarithm does not exist.
In the group $$(\Z_{11}^\otimes,\otimes)$$ where $$a\otimes b=(a\cdot b)\fmod 11$$ find $$\glog{7}{9}{\otimes}\text{.}$$
Solution.
We need to find $$n\in\W$$ such that $$\gexp{7}{n}{\otimes}=9\text{.}$$ We compute
$$\gexp{7}{1}{\otimes}=7$$
$$\gexp{7}{2}{\otimes}=(7\cdot 7)\fmod 11=5$$
$$\gexp{7}{3}{\otimes}=5\otimes 7 =(5\cdot 7)\fmod 11=2$$
$$\gexp{7}{4}{\otimes}=2\otimes 7=(2\cdot 7)\fmod 11=3$$
$$\gexp{7}{5}{\otimes}=3\otimes 7 =(3\cdot 7)\fmod 11=10$$
$$\gexp{7}{6}{\otimes}=10\otimes 7=(10\cdot 7)\fmod 11=4$$
$$\gexp{7}{7}{\otimes}=4 \otimes 7 = (4\cdot 7)\fmod 11=6$$
$$\gexp{7}{8}{\otimes}=6\otimes 7=(6\cdot 7)\fmod 11=9$$
Thus $$\glog{7}{9}{\otimes}=8\text{.}$$
The method that we applied to find the discrete logarithm is called the naive method. We formulate it as an algorithm. To ensure that our algorithm terminates we assume that our group is finite. When the group is finite, the number of possible distinct powers of any element of the group is at most the number of elements in the group. We make this the termination criterion for the loop in our algorithm.
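Since the algorithm box itself does not survive in this copy, here is a sketch of the naive method as just described (my own formulation), with the loop bounded by the group order:

```python
def naive_dlog(b, a, p):
    """Smallest n with b^n = a in (Z_p*, (x)), or None if no such n exists."""
    x = 1
    for n in range(p - 1):        # Z_p* has p-1 elements
        if x == a:
            return n
        x = (x * b) % p           # one group operation per step
    return None

print(naive_dlog(7, 9, 11))   # 8, as in the example above
print(naive_dlog(4, 2, 5))    # None: log_4(2) does not exist in Z_5*
```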
Next we illustrate with an example that computing powers using fast exponentiation is considerably faster than finding discrete logarithms with the naive method.
In the group $$(\Z_{101}^\otimes,\otimes)$$ where $$\otimes:\Z_{101}^\otimes\times\Z_{101}^\otimes\to\Z^\otimes_{101}$$ is defined by $$a\otimes b=(a\cdot b)\fmod 101\text{.}$$
1. Find $$\glog{2}{5}{\otimes}\text{.}$$
2. Count the number of group operation $$\otimes$$ you need to find $$\glog{2}{5}{\otimes}\text{.}$$
3. How many group operations $$\otimes$$ are needed to compute $$\gexp{2}{24}{\otimes}$$ using fast exponentiation.
Solution.
1. We check all powers of $$2$$ until we obtain $$5\text{.}$$ We get:
$$\gexp{2}{1}{\otimes}=2$$
$$\gexp{2}{2}{\otimes}=2\otimes 2=4$$
$$\gexp{2}{3}{\otimes}=4\otimes 2=8$$
$$\gexp{2}{4}{\otimes}=8\otimes 2=16$$
$$\gexp{2}{5}{\otimes}=16\otimes 2=32$$
$$\gexp{2}{6}{\otimes}=32\otimes 2=64$$
$$\gexp{2}{7}{\otimes}=64\otimes 2=27$$
$$\gexp{2}{8}{\otimes}=27\otimes 2=54$$
$$\gexp{2}{9}{\otimes}=54\otimes 2=7$$
$$\gexp{2}{{10}}{\otimes}=7\otimes 2=14$$
$$\gexp{2}{{11}}{\otimes}=14\otimes 2=28$$
$$\gexp{2}{{12}}{\otimes}=28\otimes 2=56$$
$$\gexp{2}{{13}}{\otimes}=56\otimes 2=11$$
$$\gexp{2}{{14}}{\otimes}=11\otimes 2=22$$
$$\gexp{2}{{15}}{\otimes}=22\otimes 2=44$$
$$\gexp{2}{{16}}{\otimes}=44\otimes 2=88$$
$$\gexp{2}{{17}}{\otimes}=88\otimes 2=75$$
$$\gexp{2}{{18}}{\otimes}=75\otimes 2=49$$
$$\gexp{2}{{19}}{\otimes}=49\otimes 2=98$$
$$\gexp{2}{{20}}{\otimes}=98\otimes 2=95$$
$$\gexp{2}{{21}}{\otimes}=95\otimes 2=89$$
$$\gexp{2}{{22}}{\otimes}=89\otimes 2=77$$
$$\gexp{2}{{23}}{\otimes}=77\otimes 2=53$$
$$\gexp{2}{{24}}{\otimes}=53\otimes 2=5$$
Thus the discrete logarithm to base 2 of 5 is 24.
2. We found the solution with 23 group operations $$\otimes\text{.}$$
3. To compute $$\gexp{2}{24}{\otimes}$$ using fast exponentiation we first write 24 as a sum of powers of 2. We find $$24=16+8=2^4+2^3$$ and compute
\begin{equation*} \gexp{2}{2}{\otimes}=2\otimes 2=4,\;\gexp{2}{4}{\otimes}=4\otimes 4=16,\; \gexp{2}{8}{\otimes}=16\otimes 16=54,\;\gexp{2}{16}{\otimes}=54\otimes 54=88\text{.} \end{equation*}
These are 4 group operations $$\otimes\text{.}$$ Now one more group operation $$\otimes$$ yields
\begin{equation*} \gexp{2}{24}{\otimes}=\gexp{2}{16}{\otimes}\otimes\gexp{2}{8}{\otimes}=88\otimes 54=5\text{.} \end{equation*}
So we need $$5$$ group operations $$\otimes$$ to compute $$\gexp{2}{24}{\otimes}\text{.}$$
The previous problem illustrates that more group operations are needed to find the discrete logarithm than to compute the corresponding power with the fast exponentiation algorithm (Algorithm 15.3.5). Above, we computed $$\gexp{2}{24}{\otimes}$$ in 5 group operations $$\otimes$$ while it took $$23$$ group operations $$\otimes$$ to find $$\glog{2}{5}{\otimes}\text{.}$$ In general, computing discrete logarithms in the group $$(\Z_p^\otimes,\otimes)$$ is difficult.
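For comparison, the fast exponentiation side of this trade-off is a few lines of square-and-multiply (a sketch, mine; Algorithm 15.3.5 itself is not reproduced in this copy):

```python
def fast_exp(b, n, p):
    """Compute b^n in (Z_p*, (x)) by repeated squaring."""
    result, square = 1, b
    while n > 0:
        if n & 1:                         # this bit of n is set
            result = (result * square) % p
        square = (square * square) % p
        n >>= 1
    return result

print(fast_exp(2, 24, 101))   # 5, with only ~log2(24) squarings
```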
In Checkpoint 15.4.15, find a discrete logarithm.
In $$(\mathbb{Z}_{13}^\otimes,\otimes)$$ where $$a\otimes b=(a\cdot b)\bmod 13$$ compute:
$$8^{0 \otimes}=1$$
$$8^{1 \otimes}=8$$
$$8^{2 \otimes}=12$$
$$8^{3 \otimes}=5$$
$$8^{4 \otimes}=1$$
$$8^{5 \otimes}=8$$
$$8^{6 \otimes}=12$$
$$8^{7 \otimes}=5$$
$$8^{8 \otimes}=1$$
$$8^{9 \otimes}=8$$
$$8^{10 \otimes}=12$$
$$8^{11 \otimes}=5$$
$$8^{12 \otimes}=1$$
Now use the information above to find the following:
$$\log_{8}^\otimes(1)=0$$
There are methods for computing discrete logarithms in the group $$(\Z_p^\otimes,\otimes)$$ that are faster than checking all powers of the generator, among them the baby-step giant-step algorithm, the index calculus algorithm, and the number field sieve, all of which are outside the scope of this course.
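For a flavor of how such a method beats the naive search, here is a hedged Python sketch of the baby-step giant-step idea (our own illustration; the algorithm itself is not developed in this course). It assumes $$p$$ is prime and that the target really is a power of the base, and uses roughly $$2\sqrt{p}$$ group operations instead of up to $$p-1$$:

```python
# Baby-step giant-step: write x = i*m + j with m ~ sqrt(p), tabulate
# the "baby steps" g^j, then walk "giant steps" h * (g^-m)^i until a
# table hit gives the exponent.  Assumes p prime (Fermat inversion).
from math import isqrt

def baby_step_giant_step(g, h, p):
    m = isqrt(p - 1) + 1
    table = {}
    power = 1
    for j in range(m):                 # baby steps: g^j for j = 0..m-1
        table.setdefault(power, j)
        power = (power * g) % p
    g_inv_m = pow(g, (p - 1 - m) % (p - 1), p)   # g^(-m) mod p
    gamma = h % p
    for i in range(m):                 # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * g_inv_m) % p
    return None  # h is not a power of g

print(baby_step_giant_step(2, 5, 101))  # -> 24
```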
https://www.sarthaks.com/406807/define-i-critical-temperature-tc-ii-critical-pressure-pc-iii-critical-volume-vc
Define (i) critical temperature (Tc), (ii) critical pressure (Pc), and (iii) critical volume (Vc).
Critical temperature: It is defined as the temperature above which a gas cannot be liquefied by applying pressure.
Critical Pressure: It is defined as the minimum pressure required to cause liquefaction of a gas at critical temperature.
Critical Volume: It is the volume occupied by one mole of a gas at critical temperature and critical pressure.
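As a concrete illustration (our own addition; the van der Waals model is not mentioned in the answer above), for a van der Waals gas the three critical constants can be expressed through the parameters a and b:

$$V_c = 3b, \qquad P_c = \frac{a}{27b^2}, \qquad T_c = \frac{8a}{27Rb}$$

where R is the gas constant.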
http://koreascience.or.kr/article/JAKO201809454741252.page
The Effects of Ego Resilience, Interpersonal Relationships, and Cognitive Emotion Regulation Strategies on Nursing Students' Adaptation to College Life
• Accepted : 2018.11.02
• Published : 2018.11.30
Abstract
This study was conducted to identify the effects of ego resilience, interpersonal relationships, and cognitive emotion regulation strategies on adaptation to college life by nursing students and provide data to increase adaptation based on the results. This research was based on 304 nursing students in B and Y city. Data were collected from May 8 to May 13, 2017 and analyzed by ANOVA, Pearson's correlation coefficient, and multiple regression using SPSS/WIN 22.0. The average college life adaptation value was $3.21{\pm}0.53$. There were positive correlations between college life adaptation and ego resilience (r=0.443, p<0.001), interpersonal relationships (r=0.400, p<0.001) and cognitive emotion regulation strategies (r=0.465, p<0.001). Regression analysis revealed that 46.2% of the variance in college life adaptation by nursing students could be explained by grade, major satisfaction, ego resilience, and cognitive emotion regulation strategies. Additional studies to determine the various factors affecting adaptation of nursing students to college life and to increase college life adaptation should be conducted.
Table 1. General Characteristics of Subjects
Table 2. Degree of Ego Resiliency, Interpersonal Relation, Cognitive Emotion Regulation Strategies, College Life Adaptation
Table 3. Difference of College Life Adaptation according to General Characteristics
Table 4. Correlation between Ego Resiliency, Interpersonal Relation, Cognitive Emotion Regulation Strategies, College Life Adaptation
Table 5. Affected Factors of College Life Adaptation
Acknowledgement
Supported by: Youngsan University
https://www.physicsforums.com/threads/second-order-de.656560/
# Second order DE
Gold Member
## Homework Statement
Find the general solution to ##A(x)y''+A'(x)y'+\frac{y}{A(x)}=0## where A(x) is a known function and y(x) is the unknown one.
Hint:Eliminate the term that contains the first derivative.
## Homework Equations
Not sure.
## The Attempt at a Solution
So I don't really know how to tackle it. I guess that the hint suggests a change of variable that would get rid of the y' term. So I tried z=y'A, z=Ay and z=y/A. All failed to express the ODE as a function of z and its derivative(s) and A and its derivative(s).
I am therefore stuck. Any other hint will be welcome!
## Answers and Replies
What if you try the following substitution $y=Bz$, where B is a known function that is yet to be decided.
What do you get if you perform this substitution?
What should we take for B if we want the z' term to vanish?
Gold Member
What if you try the following substitution $y=Bz$, where B is a known function that is yet to be decided.
What do you get if you perform this substitution?
Good idea! ##z''A'B'+z'(AB+AB'+A'B)+z (A'B'+B/A)=0##.
What should we take for B if we want the z' term to vanish?
##B=Ce^{-x}/A##. I checked and indeed the term in front of ##z'## vanishes.
I'm left to solve ##z''A'Ce^{-x} \left ( \frac{1}{A}+ \frac{A'}{A^2} \right ) + z \left ( \frac{A'}{A} + \frac{A'^2}{A^2}- \frac{Ce^{-x}}{A^2} \right ) =0##. That really does not look beautiful/easy to me.
Good idea! ##z''A'B'+z'(AB+AB'+A'B)+z (A'B'+B/A)=0##.
I get something different here. I get that the term of z' is
$$A^\prime B + 2A B^\prime$$
The form of my B is also much simpler.
Gold Member
I get something different here. I get that the term of z' is
$$A^\prime B + 2A B^\prime$$
The form of my B is also much simpler.
I see, I made a mistake; I don't know why I make so many of them. I reach the same as yours, so ##B=kA^{-1/2}## where k is a constant.
Edit: I might have rushed through the math, but I reach ##z''+z \left ( 1-\frac{2}{A'} \right ) =0##, which has different solutions depending on the sign of the term in front of z. I guess I made another mistake.
Last edited:
I see, I made a mistake; I don't know why I make so many of them. I reach the same as yours, so ##B=kA^{-1/2}## where k is a constant.
OK, I get the same B. But there is no real need of the constant. You just need to find one single B that makes z' disappear, so you can take k=1.
Anyway, with that choice of B, what does your equation become then?
Gold Member
OK, I get the same B. But there is no real need of the constant. You just need to find one single B that makes z' disappear, so you can take k=1.
Anyway, with that choice of B, what does your equation become then?
I now get ##B'=-\frac{1}{2}A^{-3/2}A'##, ##B''=\frac{3}{4}A^{-5/2}A'^2-\frac{A^{-3/2}A''}{2}##.
The equation becomes ##z''AB+z(AB''+A'B'+B/A)=0##. Now I have to replace the B and its derivatives in that.
Edit: I've just done it and it's not beautiful so far.
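For anyone checking the algebra, here is a quick SymPy sketch (my own addition, assuming SymPy is available) confirming that ##B=A^{-1/2}## makes the coefficient of ##z'## vanish under the substitution ##y=Bz##:

```python
# Symbolic check that B = A**(-1/2) kills the z' term of
# A*y'' + A'*y' + y/A = 0 under the substitution y = B*z.
import sympy as sp

x = sp.symbols('x')
A = sp.Function('A')(x)
z = sp.Function('z')(x)

B = A**sp.Rational(-1, 2)
y = B * z

lhs = sp.expand(A * y.diff(x, 2) + A.diff(x) * y.diff(x) + y / A)

print(sp.simplify(lhs.coeff(z.diff(x))))     # 0: the z' term is gone
print(sp.simplify(lhs.coeff(z.diff(x, 2))))  # sqrt(A): the z'' coefficient
```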
https://www.albert.io/ie/gmat/angles-of-non-parallel-lines-4
# Angles of Non-Parallel Lines 4
If $c=100$ degrees and the triangle with interior angles $d$, $m$, and $y$ is isosceles, what is $n$?
A. $40$ degrees
B. $80$ degrees
C. $100$ degrees
D. $130$ degrees
E. Not enough information is given.
http://answerparty.com/question/answer/what-is-the-ratio-of-the-actual-yield-to-the-theoretical-yield-multiplied-by-100-percent
Question:
# What is the ratio of the actual yield to the theoretical yield multiplied by 100 percent?
## The percentage yield is the ratio of the actual yield to the theoretical yield multiplied by 100%.
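For a quick worked example (the numbers here are made up for illustration): if a reaction's theoretical yield is 12.0 g and 9.0 g of product is actually isolated, then

$$\text{percent yield} = \frac{9.0\ \text{g}}{12.0\ \text{g}} \times 100\% = 75\%$$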
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/150/11/d/a/
# Properties
Label: 150.11.d.a
Level: $150$
Weight: $11$
Character orbit: 150.d
Analytic conductor: $95.304$
Analytic rank: $0$
Dimension: $4$
CM: no
Inner twists: $2$
# Related objects
## Newspace parameters
Level: $$N = 150 = 2 \cdot 3 \cdot 5^{2}$$
Weight: $$k = 11$$
Character orbit: $$[\chi] =$$ 150.d (of order $$2$$, degree $$1$$, minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$95.3035879011$$
Analytic rank: $$0$$
Dimension: $$4$$
Coefficient field: $$\Q(\sqrt{-2}, \sqrt{85})$$
Defining polynomial: $$x^{4} - 2 x^{3} - 37 x^{2} + 38 x + 531$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$2^{11}\cdot 3^{4}$$
Twist minimal: no (minimal twist has level 6)
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q) = q + \beta_{1} q^{2} + ( -21 - 3 \beta_{1} + \beta_{2} ) q^{3} -512 q^{4} + ( 1344 - 21 \beta_{1} + 4 \beta_{3} ) q^{6} + ( 11278 - 18 \beta_{1} + 48 \beta_{2} + 3 \beta_{3} ) q^{7} -512 \beta_{1} q^{8} + ( 39753 - 1404 \beta_{1} - 42 \beta_{2} - 21 \beta_{3} ) q^{9} + ( 3696 \beta_{1} + 210 \beta_{2} - 105 \beta_{3} ) q^{11} + ( 10752 + 1536 \beta_{1} - 512 \beta_{2} ) q^{12} + ( -68810 - 360 \beta_{1} + 960 \beta_{2} + 60 \beta_{3} ) q^{13} + ( 11422 \beta_{1} - 384 \beta_{2} + 192 \beta_{3} ) q^{14} + 262144 q^{16} + ( -74304 \beta_{1} + 1416 \beta_{2} - 708 \beta_{3} ) q^{17} + ( 726912 + 38745 \beta_{1} + 2688 \beta_{2} - 168 \beta_{3} ) q^{18} + ( -392182 - 1746 \beta_{1} + 4656 \beta_{2} + 291 \beta_{3} ) q^{19} + ( 2407002 - 75144 \beta_{1} + 11278 \beta_{2} - 567 \beta_{3} ) q^{21} + ( -1932672 - 5040 \beta_{1} + 13440 \beta_{2} + 840 \beta_{3} ) q^{22} + ( -90336 \beta_{1} - 4956 \beta_{2} + 2478 \beta_{3} ) q^{23} + ( -688128 + 10752 \beta_{1} - 2048 \beta_{3} ) q^{24} + ( -65930 \beta_{1} - 7680 \beta_{2} + 3840 \beta_{3} ) q^{26} + ( -8654877 - 247779 \beta_{1} + 33579 \beta_{2} - 4797 \beta_{3} ) q^{27} + ( -5774336 + 9216 \beta_{1} - 24576 \beta_{2} - 1536 \beta_{3} ) q^{28} + ( -757128 \beta_{1} - 17682 \beta_{2} + 8841 \beta_{3} ) q^{29} + ( -5446462 + 54810 \beta_{1} - 146160 \beta_{2} - 9135 \beta_{3} ) q^{31} + 262144 \beta_{1} q^{32} + ( -6493536 - 1510236 \beta_{1} - 39690 \beta_{2} + 15099 \beta_{3} ) q^{33} + ( 37771776 - 33984 \beta_{1} + 90624 \beta_{2} + 5664 \beta_{3} ) q^{34} + ( -20353536 + 718848 \beta_{1} + 21504 \beta_{2} + 10752 \beta_{3} ) q^{36} + ( 17753542 + 102312 \beta_{1} - 272832 \beta_{2} - 17052 \beta_{3} ) q^{37} + ( -378214 \beta_{1} - 37248 \beta_{2} + 18624 \beta_{3} ) q^{38} + ( 54321810 - 619770 \beta_{1} - 68810 \beta_{2} - 11340 \beta_{3} ) q^{39} + ( 4121328 \beta_{1} - 80484 \beta_{2} + 40242 \beta_{3} ) q^{41} + ( 36308352 + 2379786 \beta_{1} + 72576 \beta_{2} + 45112 \beta_{3} ) q^{42} + ( 117672166 - 122094 \beta_{1} + 325584 \beta_{2} + 20349 \beta_{3} ) q^{43} + ( -1892352 \beta_{1} - 107520 \beta_{2} + 53760 \beta_{3} ) q^{44} + ( 47203584 + 118944 \beta_{1} - 317184 \beta_{2} - 19824 \beta_{3} ) q^{46} + ( -4185600 \beta_{1} + 86760 \beta_{2} - 43380 \beta_{3} ) q^{47} + ( -5505024 - 786432 \beta_{1} + 262144 \beta_{2} ) q^{48} + ( -12514605 - 406008 \beta_{1} + 1082688 \beta_{2} + 67668 \beta_{3} ) q^{49} + ( -177144192 - 8099568 \beta_{1} - 267624 \beta_{2} - 295092 \beta_{3} ) q^{51} + ( 35230720 + 184320 \beta_{1} - 491520 \beta_{2} - 30720 \beta_{3} ) q^{52} + ( -9081864 \beta_{1} - 356034 \beta_{2} + 178017 \beta_{3} ) q^{53} + ( 120415680 - 8885133 \beta_{1} + 614016 \beta_{2} + 134316 \beta_{3} ) q^{54} + ( -5848064 \beta_{1} + 196608 \beta_{2} - 98304 \beta_{3} ) q^{56} + ( 264688302 - 2830524 \beta_{1} - 392182 \beta_{2} - 54999 \beta_{3} ) q^{57} + ( 391044480 + 424368 \beta_{1} - 1131648 \beta_{2} - 70728 \beta_{3} ) q^{58} + ( -17198448 \beta_{1} - 139638 \beta_{2} + 69819 \beta_{3} ) q^{59} + ( -296009686 + 353592 \beta_{1} - 942912 \beta_{2} - 58932 \beta_{3} ) q^{61} + ( -5884942 \beta_{1} + 1169280 \beta_{2} - 584640 \beta_{3} ) q^{62} + ( 226251774 - 28899450 \beta_{1} + 1979652 \beta_{2} - 390171 \beta_{3} ) q^{63} - 134217728 q^{64} + ( 780861312 - 5768784 \beta_{1} - 1932672 \beta_{2} - 158760 \beta_{3} ) q^{66} + ( 74341462 + 898506 \beta_{1} - 2396016 \beta_{2} - 149751 \beta_{3} ) q^{67} + ( 38043648 \beta_{1} - 724992 \beta_{2} + 362496 \beta_{3} ) q^{68} + ( 149067072 + 35706888 \beta_{1} + 936684 \beta_{2} - 368778 \beta_{3} ) q^{69} + ( 14146272 \beta_{1} + 1281588 \beta_{2} - 640794 \beta_{3} ) q^{71} + ( -372178944 - 19837440 \beta_{1} - 1376256 \beta_{2} + 86016 \beta_{3} ) q^{72} + ( -1633567250 + 832032 \beta_{1} - 2218752 \beta_{2} - 138672 \beta_{3} ) q^{73} + ( 16935046 \beta_{1} + 2182656 \beta_{2} - 1091328 \beta_{3} ) q^{74} + ( 200797184 + 893952 \beta_{1} - 2383872 \beta_{2} - 148992 \beta_{3} ) q^{76} + ( -35848848 \beta_{1} + 918876 \beta_{2} - 459438 \beta_{3} ) q^{77} + ( 330533760 + 53777490 \beta_{1} + 1451520 \beta_{2} - 275240 \beta_{3} ) q^{78} + ( 49820642 - 2886534 \beta_{1} + 7697424 \beta_{2} + 481089 \beta_{3} ) q^{79} + ( 364741137 - 70979328 \beta_{1} - 10971828 \beta_{2} - 1192590 \beta_{3} ) q^{81} + ( -2094667008 + 1931616 \beta_{1} - 5150976 \beta_{2} - 321936 \beta_{3} ) q^{82} + ( -24958608 \beta_{1} + 2330154 \beta_{2} - 1165077 \beta_{3} ) q^{83} + ( -1232385024 + 38473728 \beta_{1} - 5774336 \beta_{2} + 290304 \beta_{3} ) q^{84} + ( 118648918 \beta_{1} - 2604672 \beta_{2} + 1302336 \beta_{3} ) q^{86} + ( -52567200 + 136526292 \beta_{1} + 3341898 \beta_{2} - 3055035 \beta_{3} ) q^{87} + ( 989528064 + 2580480 \beta_{1} - 6881280 \beta_{2} - 430080 \beta_{3} ) q^{88} + ( 118832112 \beta_{1} + 3522372 \beta_{2} - 1761186 \beta_{3} ) q^{89} + ( 2079308020 - 2821500 \beta_{1} + 7524000 \beta_{2} + 470250 \beta_{3} ) q^{91} + ( 46252032 \beta_{1} + 2537472 \beta_{2} - 1268736 \beta_{3} ) q^{92} + ( -7936117098 + 142128336 \beta_{1} - 5446462 \beta_{2} + 1726515 \beta_{3} ) q^{93} + ( 2126369280 - 2082240 \beta_{1} + 5552640 \beta_{2} + 347040 \beta_{3} ) q^{94} + ( 352321536 - 5505024 \beta_{1} + 1048576 \beta_{3} ) q^{96} + ( 9794088766 + 1435608 \beta_{1} - 3828288 \beta_{2} - 239268 \beta_{3} ) q^{97} + ( -9266541 \beta_{1} - 8661504 \beta_{2} + 4330752 \beta_{3} ) q^{98} + ( -656728128 + 271729080 \beta_{1} - 586782 \beta_{2} - 6000813 \beta_{3} ) q^{99} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 4 q - 84 q^{3} - 2048 q^{4} + 5376 q^{6} + 45112 q^{7} + 159012 q^{9} + 43008 q^{12} - 275240 q^{13} + 1048576 q^{16} + 2907648 q^{18} - 1568728 q^{19} + 9628008 q^{21} - 7730688 q^{22} - 2752512 q^{24} - 34619508 q^{27} - 23097344 q^{28} - 21785848 q^{31} - 25974144 q^{33} + 151087104 q^{34} - 81414144 q^{36} + 71014168 q^{37} + 217287240 q^{39} + 145233408 q^{42} + 470688664 q^{43} + 188814336 q^{46} - 22020096 q^{48} - 50058420 q^{49} - 708576768 q^{51} + 140922880 q^{52} + 481662720 q^{54} + 1058753208 q^{57} + 1564177920 q^{58} - 1184038744 q^{61} + 905007096 q^{63} - 536870912 q^{64} + 3123445248 q^{66} + 297365848 q^{67} + 596268288 q^{69} - 1488715776 q^{72} - 6534269000 q^{73} + 803188736 q^{76} + 1322135040 q^{78} + 199282568 q^{79} + 1458964548 q^{81} - 8378668032 q^{82} - 4929540096 q^{84} - 210268800 q^{87} + 3958112256 q^{88} + 8317232080 q^{91} - 31744468392 q^{93} + 8505477120 q^{94} + 1409286144 q^{96} + 39176355064 q^{97} - 2626912512 q^{99} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} - 2 x^{3} - 37 x^{2} + 38 x + 531$$:
$$\beta_{0} = 1$$
$$\beta_{1} = ( -32 \nu^{3} + 48 \nu^{2} + 464 \nu - 240 ) / 93$$
$$\beta_{2} = ( -36 \nu^{3} + 240 \nu^{2} + 1824 \nu - 4548 ) / 31$$
$$\beta_{3} = ( -64 \nu^{3} - 2880 \nu^{2} + 6880 \nu + 54576 ) / 31$$
$$1 = \beta_0$$
$$\nu = ( \beta_{3} + 16 \beta_{2} - 60 \beta_{1} + 432 ) / 864$$
$$\nu^{2} = ( -7 \beta_{3} + 32 \beta_{2} - 66 \beta_{1} + 16848 ) / 864$$
$$\nu^{3} = ( \beta_{3} + 70 \beta_{2} - 870 \beta_{1} + 6264 ) / 216$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/150\mathbb{Z}\right)^\times$$.
$$n$$: $$101$$, $$127$$
$$\chi(n)$$: $$-1$$, $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
101.1
−4.10977 + 1.41421i 5.10977 + 1.41421i −4.10977 − 1.41421i 5.10977 − 1.41421i
22.6274i −242.269 18.8335i −512.000 0 −426.153 + 5481.92i −670.530 11585.2i 58339.6 + 9125.53i 0
101.2 22.6274i 200.269 + 137.627i −512.000 0 3114.15 4531.57i 23226.5 11585.2i 21166.4 + 55125.0i 0
101.3 22.6274i −242.269 + 18.8335i −512.000 0 −426.153 5481.92i −670.530 11585.2i 58339.6 9125.53i 0
101.4 22.6274i 200.269 137.627i −512.000 0 3114.15 + 4531.57i 23226.5 11585.2i 21166.4 55125.0i 0
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
3.b odd 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 150.11.d.a 4
3.b odd 2 1 inner 150.11.d.a 4
5.b even 2 1 6.11.b.a 4
5.c odd 4 2 150.11.b.a 8
15.d odd 2 1 6.11.b.a 4
15.e even 4 2 150.11.b.a 8
20.d odd 2 1 48.11.e.d 4
40.e odd 2 1 192.11.e.h 4
40.f even 2 1 192.11.e.g 4
45.h odd 6 2 162.11.d.d 8
45.j even 6 2 162.11.d.d 8
60.h even 2 1 48.11.e.d 4
120.i odd 2 1 192.11.e.g 4
120.m even 2 1 192.11.e.h 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
6.11.b.a 4 5.b even 2 1
6.11.b.a 4 15.d odd 2 1
48.11.e.d 4 20.d odd 2 1
48.11.e.d 4 60.h even 2 1
150.11.b.a 8 5.c odd 4 2
150.11.b.a 8 15.e even 4 2
150.11.d.a 4 1.a even 1 1 trivial
150.11.d.a 4 3.b odd 2 1 inner
162.11.d.d 8 45.h odd 6 2
162.11.d.d 8 45.j even 6 2
192.11.e.g 4 40.f even 2 1
192.11.e.g 4 120.i odd 2 1
192.11.e.h 4 40.e odd 2 1
192.11.e.h 4 120.m even 2 1
## Hecke kernels
This newform subspace can be constructed as the kernel of the linear operator $$T_{7}^{2} - 22556 T_{7} - 15574076$$ acting on $$S_{11}^{\mathrm{new}}(150, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$( 512 + T^{2} )^{2}$$
$3$ $$3486784401 + 4960116 T - 75978 T^{2} + 84 T^{3} + T^{4}$$
$5$ $$T^{4}$$
$7$ $$( -15574076 - 22556 T + T^{2} )^{2}$$
$11$ $$21\!\cdots\!24 + 58313211264 T^{2} + T^{4}$$
$13$ $$( -52372127900 + 137620 T + T^{2} )^{2}$$
$17$ $$32\!\cdots\!84 + 7560967182336 T^{2} + T^{4}$$
$19$ $$( -1189491369116 + 784364 T + T^{2} )^{2}$$
$23$ $$61\!\cdots\!24 + 33055507478016 T^{2} + T^{4}$$
$29$ $$20\!\cdots\!00 + 907304099736960 T^{2} + T^{4}$$
$31$ $$( -1294078582786556 + 10892924 T + T^{2} )^{2}$$
$37$ $$( -4297319054834396 - 35507084 T + T^{2} )^{2}$$
$41$ $$28\!\cdots\!04 + 23561404561257984 T^{2} + T^{4}$$
$43$ $$( 7278142478596516 - 235344332 T + T^{2} )^{2}$$
$47$ $$26\!\cdots\!00 + 25124763600230400 T^{2} + T^{4}$$
$53$ $$37\!\cdots\!04 + 212636466457531776 T^{2} + T^{4}$$
$59$ $$20\!\cdots\!04 + 324064557407447424 T^{2} + T^{4}$$
$61$ $$( 32529703648081636 + 592019372 T + T^{2} )^{2}$$
$67$ $$( -350207761464045596 - 148682924 T + T^{2} )^{2}$$
$71$ $$49\!\cdots\!00 + 1847488216292328960 T^{2} + T^{4}$$
$73$ $$( 2363496913262627140 + 3267134500 T + T^{2} )^{2}$$
$79$ $$( -3668964988480567676 - 99641284 T + T^{2} )^{2}$$
$83$ $$57\!\cdots\!44 + 5977139602070968704 T^{2} + T^{4}$$
$89$ $$15\!\cdots\!84 + 27084125311735371264 T^{2} + T^{4}$$
$97$ $$( 95016028790224257796 - 19588177532 T + T^{2} )^{2}$$
https://socratic.org/questions/how-do-you-find-the-product-b-8-b-2
# How do you find the product (b+8)(b+2)?
Apr 24, 2017
By multiplying, you can get ${b}^{2} + 10 b + 16$
#### Explanation:
Multiply all terms $\left(b + 8\right) \left(b + 2\right)$
${b}^{2} + 2 b + 8 b + 16$
Arrange this as:
${b}^{2} + 10 b + 16$.
Apr 24, 2017
See the entire solution process below:
#### Explanation:
To multiply these two binomials, you multiply each individual term in the left parenthesis by each individual term in the right parenthesis.
$\left(\textcolor{red}{b} + \textcolor{red}{8}\right) \left(\textcolor{blue}{b} + \textcolor{blue}{2}\right)$ becomes:
$\left(\textcolor{red}{b} \times \textcolor{blue}{b}\right) + \left(\textcolor{red}{b} \times \textcolor{blue}{2}\right) + \left(\textcolor{red}{8} \times \textcolor{blue}{b}\right) + \left(\textcolor{red}{8} \times \textcolor{blue}{2}\right)$
${b}^{2} + 2 b + 8 b + 16$
We can now combine like terms:
${b}^{2} + \left(2 + 8\right) b + 16$
${b}^{2} + 10 b + 16$
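As a quick check of the result (our own addition, not part of either answer above): substituting $b = 1$ into the factored form gives $\left(1 + 8\right) \left(1 + 2\right) = 27$, and into the expanded form gives $1 + 10 + 16 = 27$, so the two expressions agree.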